DevOps is much more than just local improvements in the development, delivery, and operation of software: it represents a movement which touches not only development and operations (obviously!) but also organizational culture, quality assurance, security, and program management. This workshop is a compact one-day briefing for DevOps beginners and those who’d like to get a comprehensive overview of the complex, encompassing DevOps landscape. It is tailored for technical stakeholders as well as for business leaders. Anyone tasked with a DevOps transformation will be able to spend a day wrapping their head around the world of DevOps, with exercises to help them look at their specific organization's challenges through a DevOps lens. In the workshop, we'll cover: 1) DevOps history and underpinnings; 2) core DevOps principles; 3) continuous delivery and business flow; 4) an overview of technologies and tools to help implement DevOps. By joining DevOpsCon’s Transformation Day, you’ll get insight into all the aspects necessary to make a DevOps transformation in your own organization successful.
You started to use Docker to put your applications in containers. Great! But how do you deploy to production? What modifications are necessary to make your application scalable and available? How do you set up a Docker cluster? What about logging, metrics, and all the other operational requirements of production? In this workshop, we will answer those questions using tools from the Docker ecosystem, with a strong focus on the native orchestration capabilities available since Docker Engine 1.12, aka "Swarm Mode". Each attendee will get a private cluster of five nodes. We will set up this cluster with "Swarm Mode" and deploy a demo app with web frontends, web services, background workers, and stateful data stores. Then we will scale that application and deal with logging, metrics, and more. You don't have to pre-install Docker to attend; all you need is a computer with a web browser and an SSH client.
In this hands-on workshop we’ll all attack the training web app, taking on the role of a pentester one step at a time. You’ll learn how to work with professional security tools through a range of practical tasks and will also learn pentesters’ general approach to attacking web apps. Of course, we’ll also deal with defensive measures for closing the security holes found, though our focus will remain on the systematic use of professional hacking tools for carrying out (partially automated) security analyses. Once you’ve completed this workshop, you’ll have practical experience of carrying out attacks on web apps, which you can transfer into your own software development work so as to increase the security of your projects for the long term.
Machine learning is often hyped, but is it useful for DevOps? In this tutorial we will dive into this world. We will show you, hands-on, how you can do data inspection, anomaly detection, prediction, capacity planning, and so on. Using realistic datasets and partially programmed code, we will make you familiar with machine learning concepts such as regression, classification, overfitting, cross-validation, and many more. This tutorial is accessible to anyone with some basic Python knowledge who is eager to learn the core concepts of machine learning. We make use of an IPython/Jupyter notebook running on a dedicated server, so nothing but a laptop with an internet connection is required to participate.
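To give a flavor of two of the concepts the tutorial covers, regression and cross-validation, here is a minimal plain-Python sketch (no ML libraries needed); the toy dataset and the 5-fold split are illustrative assumptions, not material from the tutorial itself:

```python
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept (1-D closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def cross_val_mse(xs, ys, k=5):
    """k-fold cross-validation: mean held-out squared error across the folds."""
    idx = list(range(len(xs)))
    random.Random(0).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        slope, intercept = fit_line([xs[i] for i in train], [ys[i] for i in train])
        scores.append(statistics.mean(
            (ys[i] - (slope * xs[i] + intercept)) ** 2 for i in fold))
    return statistics.mean(scores)

# Noisy linear toy data: y = 2x + 1 + Gaussian noise
rng = random.Random(1)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + rng.gauss(0, 0.1) for x in xs]
print("cross-validated MSE:", cross_val_mse(xs, ys))
```

Held-out error like this, rather than training error, is what exposes overfitting — another of the tutorial's core concepts.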
Simon Wardley examines the issue of situational awareness and explains how it applies to the world of DevOps. Using examples from government and the commercial world, we explore how you can map your environment, identify opportunities to exploit, and learn to play the game.
Software development (e.g. in the car industry, medical devices, or the financial industry) is based on long-running, concurrently executed, and highly interdependent processes. The coordination and synchronization of these processes has become a complex task due to the increasing number of functions and systems involved, e.g. in modern cars. These systems realize advanced features through complex software and cloud-based backends and enable functionality to be distributed as required, for example by safety equipment. In this talk we elaborate fundamental requirements for release management in these industries. We will show how a professional release management process can help overcome obstacles and can be a helpful starting point for building a DevOps organization. The final goal is to build a continuous improvement system for release management in which all participants of a DevOps organization take responsibility for a successful, automated software rollout.
Presenting from the perspective of a fictitious web application penetration test, this session will provide you with a well-founded overview of the open-source tools used by security professionals and penetration testers in their daily work of detecting security vulnerabilities. Despite the high quality of the supporting tools in this field, they remain unknown territory for many development projects and therefore untapped potential. After my presentation, you will be familiar with the tools of the professionals, along with their purpose, usage scenarios with concrete examples, and pros and cons – in the hope that their use does not remain solely in the hands of the penetration tester.
With Distributed Named Pipes (see http://dnpip.es), I proposed and implemented a simple yet efficient mechanism enabling microservices to communicate, akin to the concept of Unix named pipes. The proposed spec itself has triggered some discussion in the community (https://news.ycombinator.com/item?id=13151230 and https://lobste.rs/s/ymgqr5/distributed_named_pipes, for example) and it's now time to broaden the discussion and talk about use cases and alternatives. This talk will motivate Distributed Named Pipes and show where and how they are useful, as well as what their limitations are.
How do you know what 100 million users like? Wix.com conducts hundreds of experiments every month in production to understand which features our users like and which hurt or improve our business. In this talk we’ll explain how our engineering team supports our product managers in making the right decisions and getting our product roadmap on the right path. What are the best practices for conducting A/B tests, and how does having A/B tests in your system affect your architecture and product decisions? We will also present some of the open-source tools we developed that help us run our product experiments on humans.
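The statistics behind deciding an A/B experiment can be sketched with a classic two-proportion z-test; the function below is a generic textbook illustration, not Wix's actual tooling, and the conversion numbers are invented:

```python
from math import erf, sqrt

def ab_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: did variant B convert differently from A?
    Returns (z statistic, two-sided p-value)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# 10.0% vs 11.5% conversion over 1000 visitors each: significant at 5%?
z, p = ab_z_test(100, 1000, 115, 1000)
print(f"z={z:.2f}, p={p:.3f}")
```

In practice, real experimentation platforms add guardrails (sample-size planning, sequential-testing corrections) on top of this basic test.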
Your development cycle should not suffer because you constantly have to refresh your development and test data for DevOps from the latest production data. Conventional DevOps storage solutions and the copy procedures they involve make the whole process cumbersome and cause delays of days or even weeks. We will talk about storage solutions for DevOps and the enterprise cloud that let you refresh hundreds of developer VMs from a master VM out of the production system within minutes. This saves hours or even days in every development cycle. We will also present how DevOps processes can be simplified, especially with regard to provisioning and application performance.
Gene editing in small labs, building your own iPhone from parts bought at a market, storing energy cheaply – no niche is too small or too strange not to be researched or hacked right at this moment, in the very small and in the very large. In this context, there are two kinds of companies: those opening the bills and those keeping the envelope closed. What is keeping companies from looking at the problem? And how could you move toward a culture that embraces new challenges and opportunities? This and more is what this talk is about.
Once you've created your new website or web app, you're pretty happy to finally release it into production. But do you know whether you can handle the load if your site is mentioned on Hacker News or TechCrunch? Did you forget to load test your site? Or are you in pre-production and looking to load test before going live?
This talk shows you the open-source tools for load testing your site, walks through useful scenarios, and introduces ways to generate the load from different clouds. We will also take a look at how to profile your site under load (for example with Blackfire, in the case of PHP). And last but not least, I will show you how parts of this can be automated with Jenkins.
Whether you have an OpenStack cloud at your hands (awesome, more open source \m/) or you're using the Amazon or Google cloud – all are covered.
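As a minimal illustration of what a load test actually measures, here is a self-contained sketch: a throwaway local HTTP server plus concurrent clients, reporting latency percentiles. Real tools like the ones covered in the talk do far more (ramp-up profiles, distributed load generation, reporting):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class OkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass  # keep the demo output quiet

def run_load_test(url, requests=50, concurrency=10):
    """Fire `requests` GETs with `concurrency` workers; return statuses and sorted latencies."""
    def one(_):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()
            return resp.status, time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one, range(requests)))
    return [s for s, _ in results], sorted(l for _, l in results)

# Target a throwaway local server so the sketch is fully self-contained.
server = ThreadingHTTPServer(("127.0.0.1", 0), OkHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
statuses, latencies = run_load_test(f"http://127.0.0.1:{server.server_port}/")
p50 = latencies[len(latencies) // 2]
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"p50={p50 * 1000:.1f} ms, p95={p95 * 1000:.1f} ms")
server.shutdown()
```

Tail percentiles (p95/p99) rather than averages are what typically degrade first under load, which is exactly what a tool like JMeter or Gatling reports.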
This hands-on talk takes place largely at the terminal. It covers capabilities, seccomp, AppArmor, SELinux, cgroups, and user namespaces – and with them the building blocks from which (Docker) containers are constructed and which are used to secure those containers, as well as building blocks that operations teams should use beyond containers too. In this hands-on session, the audience will learn and see how (Docker) applications and
services can be secured using standard Linux tools.
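As a small taste of these on-board Linux tools, the sketch below decodes a process's effective capability mask as it appears in /proc/&lt;pid&gt;/status; the bit-to-name table is a partial transcription of the constants in linux/capability.h:

```python
# Capability bit numbers, transcribed from <linux/capability.h> (a subset)
CAP_NAMES = {
    0: "chown", 1: "dac_override", 7: "setuid",
    10: "net_bind_service", 12: "net_admin",
    19: "sys_ptrace", 21: "sys_admin",
}

def decode_caps(hex_mask):
    """Decode a CapEff/CapPrm-style hex bitmask into capability names
    (bits outside the subset table are simply skipped)."""
    mask = int(hex_mask, 16)
    return [name for bit, name in sorted(CAP_NAMES.items()) if mask & (1 << bit)]

# On Linux, inspect the effective capabilities of the current process:
try:
    with open("/proc/self/status") as status:
        for line in status:
            if line.startswith("CapEff:"):
                print("effective caps (subset):", decode_caps(line.split()[1]))
except FileNotFoundError:
    pass  # not running on Linux
```

Inside an unprivileged Docker container this mask shrinks to Docker's default capability set, which is precisely the kind of hardening the talk demonstrates.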
To unlock the potential of DevOps you need a solid, advanced infrastructure in your organization. While building up this environment you should take care of different KPIs such as accounting, capacity management, and more. There is a wide variety of technologies and possibilities to achieve this goal. To ensure the quality of your architecture you need broad and deep technological expertise.
During the presentation, we will break the architecture down step by step into its required components. In the explanation of each step there will be advice on how to achieve the best quality at every layer. Additionally, there are very important key points which should be planned, clarified, and implemented, such as daily operations, lifecycle management, and security.
The DevOps world is young and wild: new tools can become the hottest trend while others lose momentum in a short period of time, sometimes in unpredictable ways. For a young company with little experience in the field it can be difficult to understand which instruments are the right ones to use, since everything seems to be moving at lightning speed! This is the story behind the online platform developed by Camunda for describing business processes collaboratively using BPMN. During this talk I will analyze the challenges that our team had to face, how we reacted to them, and how we managed to build a highly automated infrastructure that allowed us to develop our product faster. We will speak about some best practices that we are following and how tools like Ansible, Terraform, and Docker are used daily to deploy our product.
Gernot Pflüger has been the founding CEO of CPP Studios for more than three decades – although within a German production agency operating as a corporate democracy, that does not mean much. Within his company there are common wages, bookkeeping transparency, and a very straightforward stakeholder way of working for the employees. Although CPP is run the way socialist countries once dreamt of functioning, it is very successful economically, holds patents, grows, and is known for its innovations.
Last year rbb (Rundfunk Berlin-Brandenburg) moved the front line of their large-scale web publishing to the Open Telekom Cloud (OTC). With the move to seamless continuous delivery on elastic cloud servers, the development and test organization has to change. Hence we added tools like Ansible, the OpenStack API, Git, Docker, and Jenkins to continuously build and test server images. Furthermore, automated load and performance testing using Taurus and JMeter was included in the delivery pipeline, before rolling updates to production are performed without any downtime. Everything will be shown in a real live demo, no boring slides!
ING reorganized IT five times to increase agility and speed. After going agile/Scrum for application development six years ago, ING restructured the IT department into 180 agile DevOps teams. Then ING integrated the commercial colleagues into more than 400 self-organizing squads (BizDevOps), inspired by Spotify’s organizational model. Two years ago ING radically changed its people approach for software engineers to enable and reward engineering mastery. Last year ING restructured the infrastructure department, empowering the engineers further. The last step is to implement an engineer-centric control framework and fine-tune deployment pipelines to remove IT-risk-related manual work and wait times. ING is convinced that in order to go fast, engineers need to be in control – full stack. Such changes require a lot of learning from everybody involved.
There are a lot of continuous integration services, but Jenkins is still one of the most used across programming languages. In this talk I will share the CurrencyFair experience: how our IT team of 40 engineers manages CurrencyFair's delivery with GitHub, Jenkins, Hubot, and Slack across different environments – artifacts to guarantee the stability of your codebase, pipelines, and some Jenkins plugins that help create the most comfortable delivery flow for your projects.
The biggest value of data science lies in data-driven automation of business decisions, making it an active part of the value stream just like software development in tech companies. Because of this, the same arguments for continuous delivery apply to the data science delivery process. Furthermore, as data science needs to pull data “greedily” from many different sources, this imposes a new dimension of complexity on the continuous delivery pipeline. While the increased resource requirements can mostly be managed, the increased coupling caused by the sheer number of data sources from throughout the whole business creates great trouble for modern modular and scalable architectures like microservices. At Blue Yonder, we have more than seven years of (sometimes painful) experience delivering and operating predictive applications as a service for our customers. In this talk I will share important lessons learned: how we deploy, how we test, how we monitor, and how we “crunch the numbers” – in short, I will take you for a walk through our data science delivery pipeline.
The Docker Project delivers a complete open-source platform to "build, ship, and run" any application, anywhere, using containers. The Docker Engine and the other main components (Compose, Machine,
and the SwarmKit orchestration system) are free; but Docker Inc. (the company that started the Docker Project) also has a complete commercial offering named "Docker EE" (for Enterprise Edition) that adds an extra set of features geared at larger organizations, as well as an extended support and release cycle.
In this talk, I will explain (and show with demos) what you can do using exclusively Docker CE (community, free edition) and which features are added by Docker EE. This talk is for you if you are in the process of selecting a container platform; or if you're just curious, and want to know exactly what you can do (and cannot do) with Docker CE and EE.
Innovation and startups are big words. This panel will look at how startups get their ideas and missions and how they pull them off. How crazy must one be to compete in finance and utilities? Or does it actually help that these markets look tightly sealed? Do regulations even help startups? What are these startups' biggest obstacles in everyday life? We will look at two startups hacking the areas of insurance and utilities – two areas you wouldn't assume to be the first choice when trying to compete.
Nowadays, production IT has increasingly become a pillar of a company's success. Alongside requirements from the most diverse business areas, ever higher speeds in provisioning services and increased flexibility must be guaranteed. The new standard, "hybrid IT", combines objectives from development with the flexibility of containers and raises the requirements on production IT and IT security. In what order can all of this be sensibly standardized and automated together?
One of the largest risks and highest costs in any project is the act of deployment. By following O.C.D. principles, you can achieve new levels of efficiency and security that positively impacts your entire organization. This session also examines how OCD principles can be extended beyond deployment to upstream development processes and the velocity and efficiency improvements this can bring.
Implementing a continuous delivery (CD) pipeline is not trivial, and the introduction of container technology to the development stack can introduce additional challenges and requirements. In this talk we will look at the high-level steps that are essential for creating an effective pipeline for creating and deploying containerized applications. Topics covered include: The impact of containers on CD, Adding metadata to container images, Validating NFR changes imposed by executing Java applications within a container, Lessons learned the hard way (in production).
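The "adding metadata to container images" step can be illustrated with the pre-defined OCI image annotation keys; the helper below is a hypothetical sketch that renders a Dockerfile LABEL instruction carrying build provenance (the repository URL and version shown are placeholders):

```python
from datetime import datetime, timezone

def build_label_instruction(git_sha, source_url, version):
    """Render a Dockerfile LABEL instruction carrying build provenance,
    using the pre-defined OCI image annotation keys."""
    labels = {
        "org.opencontainers.image.revision": git_sha,
        "org.opencontainers.image.source": source_url,
        "org.opencontainers.image.version": version,
        "org.opencontainers.image.created":
            datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    body = " \\\n      ".join(f'{key}="{value}"' for key, value in labels.items())
    return "LABEL " + body

print(build_label_instruction("3f2c9ab", "https://example.com/repo.git", "1.4.2"))
```

A pipeline stage would typically inject the real commit SHA and build timestamp, so that any running container can be traced back to the exact source revision that produced it.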
In this talk we give insights into how we have set up several Kubernetes clouds for SAP. We tell the whole story, from PoC state through first developer installations and testing to production. We deployed on premises and in internal and external IaaS clouds. We used Docker and rkt, rolled out database applications, and implemented deployment pipelines. Special applications have special needs, so we tweaked parameters. In the end, we implemented several self-installing, self-hosted, self-healing, and extendable Kubernetes clusters, which allow application deployment at scale for cutting-edge cases involving high-performance databases and number-crunching applications.
Git is awesome, but sometimes it is just pure pain. With all those powerful features, screw-ups will happen from time to time. This talk is interesting for you if you are working with Git or planning to. It will also present the case for using Git on the command line, as well as showing you easy ways to handle typical situations that happen in your daily (working) life.
Employee engagement and co-creation, idea sharing, top-down management, digitalisation strategy! You name the board room buzzwords. But are they really helping to change the corporate culture of an established large company? Can you really engage 50 thousand employees and innovate a multinational corporation with some impressive powerpoints and a complex transformation program planned by external consultants? Innovation Inside-out is the approach Erste Group Bank AG embraced in 2013 to challenge the existing corporate environment. How does it work? Where are the pain points? And what are the results?
Sound familiar? At conferences there is constant talk of agile working and DevOps, and of how business, development, and operations write one success story after another together – but in your company the big success still fails to materialize? What if you could change that? Matthias Kainer will explain how a developer can change the company with certain tools and methods. He will cover how
• you can motivate a team to work in a more agile way and faster.
• you can use the trust placed in your team to initiate changes in its immediate environment. He will also discuss how your team can collaborate effectively with product management, development, and operations by adopting new tools and approaches.
• you can succeed in initiating changes across departments and even across the whole company.
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that power the business. The Nutanix Enterprise Cloud Platform blends web-scale engineering, automation, and consumer-grade design to deliver an on-premises cloud experience. Thomas Findelkind and Jens Mertes from Nutanix will describe the basics of Nutanix and give a tech demo of integrated automation tools to drive agile infrastructure and DevOps IT.
With Docker it became easy to start applications locally without installing any dependencies; even running a local cluster is not a big thing anymore. AWS, on the other hand, offers ECS, a managed container service that claims to schedule containers based on resource needs, isolation policies, and availability requirements.
Sounds good, but is it really that easy? In this talk you'll get an overview of ECS and all other services that are needed to run your containers in production. Philipp shows how an ECS cluster and your containerized applications can automatically be deployed and scaled. He also shares his experiences and explains which features are still missing.
When you are designing a production environment, security is essential. The whole Docker ecosystem, and Docker Swarm in particular, allows us to ship our containers beyond our laptop – but how can we make this process safe? During my talk, I will share tips on production environments and immutability, and show how to troubleshoot common attacks such as code injection with Docker, using static analysis of our images and content trust with Notary to keep our journey secure.
How can we set up a cluster on the main cloud providers with VPN and node labeling to expose only a portion of our cluster? I will also show what Docker provides (Content Trust, static analysis) as well as open-source alternatives such as Notary, CoreOS's Clair, and Cilium.
By the end of this talk, you will have a better idea of how to manage Docker in production.
Continuous delivery is often explained in the context of self-contained projects. But how, as the supplier of an application that must be customized for use, do you bring continuous delivery to the partner or customer? What must be considered during implementation? When is the delivery complete for the supplier? What is necessary to support the customer? When implementing a pipeline from the supplier to the customer project there is a lot to consider – internally as well as externally. Based on an experience report on switching from time-based integration and annual delivery to the continuous delivery of a complex shop software to partners and customers for customization, important aspects of continuous delivery in the context of a software supplier are examined and discussed. When advising various partners and customers, it became apparent that the principles, ideas, and advantages of continuous delivery are not always common knowledge. Good documentation and ease of use are an important aspect of mastering this challenge, and their consideration is another key aspect of this session.
Most of us still think of China as a place of copyists, cheap labor, and fake products. The truth is that places like Shenzhen are leading the innovation of IoT and the high-velocity discovery of new opportunities in connected things. This session will tell stories from the unexpected world of Shenzhen's labs and research – a world travel in innovation.
If you imagine the evolution of IT listed on a menu, I would like to go through the dessert page with you. Besides the right ambience for enjoying your dessert, I will introduce the utensils with which you can consume these desserts comfortably and with good manners. Bear in mind that, as a guest in the restaurant, you have different preferences, tolerances, and a current need for dessert, which must appear nicely lined up, at non-deterrent prices, on the dessert page of the menu. May I recommend a little predictive analytics or IoT for dessert as a fitting finish today?
Dependency hell. Two words that many software engineers know and loathe. Unfortunately, Netflix engineers are not immune to the cost of dependency hell. Library owners publish new versions of their code without a comprehensive understanding of the organizational impact. Application owners ingest new library versions that can fail in obvious or subtle ways, leading to decreased confidence and slower organizational velocity. In this talk, Mike McGarr (Manager, Developer Productivity at Netflix) will talk about the challenges of shared code, dependency hell, and some existing solutions. He will then share the approach that Netflix is moving towards to decrease the cost of dependency hell.
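The core of dependency hell — the same library pinned at different versions along different paths of the dependency graph — can be sketched with a simple graph walk; the graph and package names below are invented for illustration:

```python
def collect_versions(graph, root):
    """Walk a dependency graph {package: [(dependency, version), ...]} and
    record every version each library is required at."""
    seen, visited, stack = {}, set(), [root]
    while stack:
        pkg = stack.pop()
        if pkg in visited:
            continue
        visited.add(pkg)
        for dep, version in graph.get(pkg, []):
            seen.setdefault(dep, set()).add(version)
            stack.append(dep)
    return seen

def find_conflicts(graph, root):
    """Libraries pulled in at more than one version: the seed of dependency hell."""
    return {dep: sorted(versions)
            for dep, versions in collect_versions(graph, root).items()
            if len(versions) > 1}

# Two libraries each pin a different version of the same transitive dependency.
graph = {
    "app":  [("libA", "1.0"), ("libB", "2.0")],
    "libA": [("guava", "19.0")],
    "libB": [("guava", "23.0")],
}
print(find_conflicts(graph, "app"))  # {'guava': ['19.0', '23.0']}
```

Real build tools resolve such conflicts by picking a single winner (often silently), which is exactly how the subtle runtime failures the talk describes creep in.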
Enterprises have realized that they must transform in order to compete in the innovation-driven market of tomorrow. They want to implement DevOps to gain more agility and quality for their IT but are still struggling with entrenched silos, compliance regulations, or company culture. SUSE accepted the challenge of successfully implementing DevOps in enterprises by offering customers a wide range of DevOps-related open-source technologies like OpenStack and Kubernetes, with the enhancements needed to make them usable and maintainable in an enterprise environment.
This talk will focus on the challenges around implementing the DevOps way of working given the existing shortage of trained professionals. It will do this by discussing the need to take the existing workforce and ensure that they are engaged not only with the right skillset but also with the right mindset. As DevOps is largely being embraced in existing organizations, it is impossible to build teams from the ground up, leading to on-the-go adaptation. This exposes gaps in skills and knowledge. In order to ensure that these gaps are filled, organizations are having to assess their workforce for related competencies, while new employees need to have up-to-date certifications. All this is driven by the accelerated adoption of DevOps and the philosophy behind it, which centers on people, process, technology, and information. In light of this, it is imperative to empower the people who are the driving force behind DevOps.
The high level of automation of the container and microservice lifecycle makes monitoring Kubernetes or Swarm more challenging than in traditional, more static deployments. Any static setup to monitor specific application containers does not work, because orchestration tools like Kubernetes and Swarm make their own decisions according to the defined deployment rules. In this talk you will learn how DevOps teams can cope with the challenges of monitoring and log management on Docker Swarm and Kubernetes. We will start with the basics of container monitoring and logging, including APIs and tools, followed by an overview of the key metrics of both platforms. We will speak about cluster-wide deployments of monitoring and log management solutions, how to discover services for log collection and monitoring, and how to tag logs and metrics. Finally, we will share insights derived from monitoring a 4,700-node Swarm cluster as part of the Swarm3k project.
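As an example of the key-metric computations involved, this sketch applies essentially the formula `docker stats` uses to turn two samples of the stats API's cumulative CPU counters into a usage percentage (the plumbing that fetches the counters from the actual API is omitted):

```python
def cpu_percent(cpu_total, precpu_total, system_usage, presystem_usage, online_cpus):
    """Container CPU usage in percent, computed from two samples of the
    cumulative CPU counters (nanoseconds) as exposed by the Docker stats API."""
    cpu_delta = cpu_total - precpu_total
    system_delta = system_usage - presystem_usage
    if cpu_delta <= 0 or system_delta <= 0:
        return 0.0  # no progress between samples; avoid division by zero
    return cpu_delta / system_delta * online_cpus * 100.0

# Container used 1 ms of CPU while the host's CPUs accumulated 4 ms, on 4 cores:
print(cpu_percent(2_000_000, 1_000_000, 8_000_000, 4_000_000, 4))  # 100.0
```

Because the raw counters are cumulative, a monitoring agent must keep the previous sample per container — one reason static monitoring setups break down when the orchestrator keeps rescheduling containers.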
“Docker build” with plain Dockerfiles is currently the canonical and recommended approach for creating Docker images. But is it always the best way for every use case? This talk presents alternative and creative ways to produce Docker images. You will meet dedicated template systems like fish-pepper, which adds composability and parameterisation on top of the Dockerfile's intrinsic extension mechanism. In another recipe you will learn how to reproducibly create Docker images with Ansible. The image fabrication can also be integrated directly into your Maven-based builds. This and more will be explained and demonstrated with live demos. At the end you will have a good overview of what is out there for building your Docker images beyond a vanilla “docker build”.
If you’ve traded a stock, engaged in a financial transaction, spun up production infrastructure, or bought groceries at your local supermarket, there is a very high probability you’ve interacted with one of HashiCorp’s open source projects written in Go. This talk discusses our journey with Go from its infancy to the mature, production-ready language it has become today. This talk will discuss the decision-making process which ultimately landed on Go, the amazing benefits we’ve gotten out of the standard library, the not-so-amazing limitations we’ve hit along the way, and finally the reason Go has become the most used tool in our toolbox.
DevOps provides the ability to take time to market to a new level. The question is no longer whether we need to speed up our delivery; the challenge is to find the right "pace" for your product. Not every organization and every product needs to run at the speed of Netflix or Spotify, even if we'd like it to be that way. We need to adjust the organization, processes, and tools appropriately, and to continuously identify the real bottlenecks in the delivery pipeline. And by the way, we need to justify our investment in the DevOps mission. Are we just automating the current processes, or can we use this DevOps thing to really support our business? In this talk, I'd like to discuss with you how to find the right design for your delivery process and your organization so that it acts as a business enabler, and how you can scale DevOps within your organization without losing agility. Let's explore how we can listen carefully to the unknown customer out there and build software they really like, at the speed of your business.
DevOps has become a religion for enterprises worldwide, but adoption faces several hurdles. From a survey with more than 2,000 responses worldwide, a consistent pattern of barriers to DevOps adoption emerged. Not surprisingly, these barriers intersect the human element, the tool chain, and the technology. This session will address the top 10 barriers and discuss measures to address them and, in the process, accelerate DevOps adoption. Technologists undertaking DevOps planning for their organizations, architects responsible for rolling out DevOps processes, and practitioners will all benefit from this session.
For effective, modern, cloud-connected software systems we need to organize our teams in certain ways. Taking account of Conway's Law, we look to match the team structures to the required software architecture, enabling or restricting communication and collaboration for the best outcomes. This talk will cover the basics of organization design, exploring a selection of key team topologies and how and when to use them in order to make the development and operation of your software systems as effective as possible. The talk is based on experience helping companies around the world with the design of their teams.
Using Docker containers to develop, test and run applications is well known for cloud and datacenter environments. In this talk we're going to introduce you to the principles and specific requirements of using Docker for IoT use cases. We're looking into the details of how to optimize an application for size and easier deployment on small IoT devices.
Docker can run your applications very efficiently, even on the smallest devices; you'll see during this talk how to do that easily. An application will be created, based on a Docker-centric workflow, and deployed on real "small" hardware. By iterating on optimization steps for small devices, you will discover the value this brings even to classic container applications. This reveals the details and benefits of a container-based development and runtime environment, which helps achieve better portability, maintenance, and security by isolating the application code from the hardware.
Azure Resource Manager (ARM) is the DevOps heart of the Microsoft cloud. Whether IaaS or PaaS services, ARM takes care of provisioning the required components. In this session, Rainer Stropek, long-time Azure MVP and Microsoft Regional Director, introduces ARM with many examples. You will see how to program ARM via Bash scripts, PowerShell, and JSON templates. Rainer also shows how to create Docker hosts and Docker clusters in Azure with ARM. The session closes with an overview of automating ARM scripts with Azure Automation.
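A minimal JSON template of the kind the session programs against might look like the one generated below; this is an illustrative sketch (the apiVersion, SKU, and account name are assumptions, not material from the session):

```python
import json

def storage_account_template(default_name="devopsdemo01"):
    """A minimal ARM deployment template describing one storage account."""
    return {
        "$schema": "https://schema.management.azure.com/schemas/"
                   "2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "parameters": {
            "storageName": {"type": "string", "defaultValue": default_name},
        },
        "resources": [{
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2021-04-01",
            "name": "[parameters('storageName')]",
            "location": "[resourceGroup().location]",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }],
    }

print(json.dumps(storage_account_template(), indent=2))
```

Such a template would typically be handed to a deployment command (via the Azure CLI or PowerShell) against a resource group, which is the workflow the session demonstrates.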
Devs have IDEs with code completion and syntax highlighting, version control, unit tests, CI/CD, pre-prod environments, and – as of late – even microservice platforms like Red Hat's OpenShift, Pivotal's Cloud Foundry, or Microsoft's Service Fabric (just to name a few). Ops have logfiles, ..., charts, ..., uhm, and, ..., did I mention logfiles? This talk focuses on what DevOps does, or should do, for Operations (which includes the Operating Coder): for one, the tools we're still missing (especially in the open source space), plus the things developers should do to support the non-coding fraction as well as possible. After all, it's their task to operate apps and services that they neither planned, architected, nor developed.
You need to respond quickly to constantly evolving demands, providing services and tools faster than ever before. You're therefore probably building a true DevOps environment that includes a highly programmable, automated, scalable, and secure infrastructure to provide everything your developers need to bring applications to market faster. This session will discuss the crucial role of automation while highlighting how to take full advantage of container technology to meet these new demands.
Today, there is tremendous pressure on enterprises to accelerate the velocity of software innovation because of competition and the opportunity to exploit disruptive new technologies. This pressure is driving application development organizations to apply a range of modern practices, developed in smaller Web companies, to the software development lifecycle (SDLC). In the race to deliver software faster, small companies can afford to adopt a cultural mind shift like DevOps. However, in the quest to move faster and adopt modern development practices, larger enterprises must avoid increasing security, compliance, and performance risks in the SDLC. Managing deployment and release risk is particularly critical in sensitive, highly regulated sectors such as financial services, government, healthcare, automotive, and defense.
"Failing fast," "failing forward" and "Learning from failure" are all the rage in the tech industry right now. The tech company "unicorns" seem to talk endlessly about how they reframe failure into success. And yet, many of us are still required to design and implement backup system capabilities, redundancies, and controls into our software and operations processes. And when those fail, we cringe at the conversation with management that will ensue. So is all this talk of reframing "failure" as "success" within our organizations just that: talk? And what does that look like, anyway? We'll explore mindset, the history it's rooted in, as well as effective methods to move your organization toward it and some land mines to avoid along the way.
Runtime information of deployed software has been used by both business and operations units to make informed decisions under the umbrella term "analytics". Performance information in particular (e.g. execution times, throughput, CPU utilization) often comes in the form of time series graphs in dashboards or numbers in reports. I hypothesize that this is not the right abstraction for presenting performance information to software developers, whose daily workflow involves producing high volumes of code in an IDE. I argue runtime information needs to be specifically targeted at software developers, hence the title "Developer-targeted Performance Analytics". Performance metrics should be intertwined with source code artifacts (e.g., method calls, loops) in the IDE to provide this information in the right context and aid the software development workflow.
In this talk, I want to present outcomes of the core topic of my Ph.D., which manifests this approach as an Eclipse IDE plugin: PerformanceHat [1], an open source project that combines runtime performance information with source code and leverages machine learning to provide performance predictions for code changes. This should enable software developers to make data-driven decisions about their code changes.
[1] http://sealuzh.github.io/PerformanceHat/
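The core idea of attaching runtime measurements to source code artifacts can be sketched in a few lines of Python. This is only an illustrative analogue (PerformanceHat itself is an Eclipse plugin, and the function name below is hypothetical): a decorator records per-call execution times keyed by function, exactly the kind of data an IDE plugin could overlay on the corresponding method in the editor.

```python
import time
from collections import defaultdict

# Per-function timing records; an IDE plugin could map these back to
# source locations to annotate methods with live performance data.
timings = defaultdict(list)

def measured(func):
    """Record the wall-clock duration of every call to func."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            timings[func.__name__].append(time.perf_counter() - start)
    return wrapper

@measured
def parse_order(payload):
    # Stand-in for real application logic.
    return sorted(payload)

parse_order([3, 1, 2])
```

After a few calls, `timings["parse_order"]` holds the raw durations from which averages or trends could be computed and shown next to the code.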
The AWS Lambda release in 2014 pushed serverless into the mainstream. Other major cloud providers (Google, Microsoft, IBM) have caught up, and today there's a vast offering of serverless services available, not to mention the handful of feature-rich open source projects that enable running serverless applications on-premise. This talk will help you understand what serverless is and how it improves software development and operations. It will provide an overview of the serverless ecosystem, from Function-as-a-Service providers to microservices orchestrators that allow building a complex state machine in the cloud, without worrying about scalability, capacity planning, or security patches.
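At its core, a Function-as-a-Service unit is just a handler invoked once per event. A minimal AWS-Lambda-style Python handler might look like the sketch below (the event field is a hypothetical example, not from the talk); the platform takes care of scaling, capacity, and patching, while the code deals with a single request:

```python
import json

def handler(event, context):
    """Entry point the FaaS platform invokes for each incoming event.

    `event` carries the request payload; `context` carries runtime
    metadata and is unused in this sketch.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying such a function typically means uploading the code and wiring an event source (an HTTP gateway, a queue, a timer) to it; everything else is the provider's problem.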
As usual in the hands-on sessions, we will work on the command line and set up a cluster. Naturally, it will be the latest version of Docker (Swarm Mode) available at the time of the talk. Docker Swarm Mode is still the new kid on the block, positioning itself against Kubernetes, Mesos/Marathon, and others. We will see how easy the clustering built into Docker feels, what we can do with it, and what we cannot.
The DevOps movement is gradually changing IT organisations. Project managers are often forgotten, but they also need to change. Are project managers ready for the DevOps change? In this talk we will cover the changes project managers are experiencing, first with Agile and now with DevOps.
A fully automated deployment process is a core factor in successful software development. But this automation does not stop at the application server. If you look beyond the deployment target, you see the data center and its infrastructure, which must be included in the automation process. With Infrastructure as Code you can create complete data centers within the deployment process and roll out the software. In this talk we show what such a process can look like and what has to be done to achieve the desired automation. As a practical example, we will demonstrate this procedure with AWS and CloudFormation.
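What "data center as code" looks like with CloudFormation can be sketched in a template fragment (resource names and CIDR ranges are illustrative assumptions): the network itself becomes a versioned artifact that the deployment pipeline creates alongside the application.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative sketch of an infrastructure stack
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
```

A pipeline step would then hand this template to CloudFormation as part of the rollout, so the infrastructure is created or updated with the same automation that deploys the software.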
Financial business is traditional – huge numbers, computer systems, and a lot of legacy to carry around. At Volkswagen Financial Services, we see cloud infrastructures as one of the technologies that will help us bring order into a big landscape. Also, software developers tend to be lazy, and so are we, which is why we don't want to have to deal with a lot of infrastructure. We think we found a promising way to be lazy, happy, efficient, and aligned with our current technologies, all at once.
Adopting DevOps should lead to high-performance teams, organizations, and ecosystems. Those high-performance teams and organizations show "the right behavior" every single day. But what is this "right behavior" in the DevOps context? What is a general approach to behavioural change, and how effective has this approach proven to be in the past? Why not use science instead of common sense? What can you do to effectively change the behavior of all the stakeholders in your domain, like your teams, customers, leaders/managers, or other relevant stakeholders? Where does the importance of training and even certification on an individual level come into play, and what benefits does it provide? This highly interactive presentation will provide you with real-life examples, tips, and tricks (all based on scientific and practical evidence and experience) on how to help truly create high-performance DevOps environments.
Traditionally, application monitoring is a purely operational concern. However, it is hard to monitor complex systems without using some kind of whitebox approach, exemplified by the Prometheus monitoring system. Whitebox monitoring requires code instrumentation, and all of a sudden monitoring matters from very early on in the development process. This both encourages and benefits from a DevOps approach – and has additional side effects that can even be helpful for development itself.
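What "code instrumentation" means in practice can be sketched without any library: a counter incremented in the request path and rendered in the Prometheus text exposition format. This is a hand-rolled illustration only; a real project would use an official Prometheus client library, and the metric and path names below are hypothetical.

```python
from collections import Counter

# Request counts keyed by path; in real code this would be a
# Prometheus client library counter, not a plain dict.
REQUESTS = Counter()

def handle_request(path):
    # Instrumentation lives inside the application code itself:
    # this is what makes the monitoring "whitebox".
    REQUESTS[path] += 1
    return f"served {path}"

def metrics():
    """Render counters in the Prometheus text exposition format."""
    lines = ["# TYPE http_requests_total counter"]
    for path, count in sorted(REQUESTS.items()):
        lines.append(f'http_requests_total{{path="{path}"}} {count}')
    return "\n".join(lines)
```

A Prometheus server would periodically scrape the output of `metrics()` over HTTP; because the instrumentation sits in the code, it has to be considered from the first line written, which is exactly the DevOps effect the talk describes.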
In turn, profiling code is often thought of as something only developers do. Profiling code in production has the obvious benefit of observing a real-life scenario rather than artificial test runs. The ability to profile any production binary at any time is immensely helpful while investigating production issues. Similar to ubiquitous whitebox monitoring and distributed tracing, this ability has been available to Google engineers for quite some time. Thus, it is not surprising that profiling is exceptionally easy with the Go programming language initiated by Google.
For full self-reference, the talk will demonstrate how the Prometheus server, which is written in Go, was developed, debugged, and optimized using Prometheus-style whitebox monitoring and the “always on” Go profiling abilities.
A quick scan through Netflix's GitHub repository will inform you that Netflix has built their core cloud platform on Java. Our cloud deployment platform is built for JVM applications and continues to serve us well. But as Netflix evolves, languages and platforms like Node.js and Python have grown in numbers and are increasingly used in critical systems. We need to start thinking about building tools to support a polyglot world at Netflix. In this talk, Mike McGarr (Manager, Developer Productivity at Netflix) will talk about the various tools and approaches we are employing to provide first-class support for a variety of languages and platforms at Netflix. Mike will share some of the challenges of supporting a polyglot codebase as well as lessons learned for enterprises embarking on this journey.
For effective Cloud-connected software systems we need to organize our teams in certain ways. Taking account of Conway’s Law, we look to match the team structures to the required software architecture, enabling or restricting communication and collaboration for the best outcomes. In this workshop you will learn how to design your organization for modern, Cloud-connected software systems, covering topics such as: fundamental team topology types; how and when to use the fundamental team topologies; how to recognise other team topologies and to map these onto the fundamental types using topology fitness functions; the dynamics of team design and how team topologies should evolve; heuristics for discovering new topologies. On completion of the workshop, you should have a sound understanding of which team topologies to apply in different circumstances and why. Attendees are expected to have a basic understanding of Conway’s Law and of the DevOps Topologies patterns at http://devopstopologies.com/.
Learn how to containerize workloads, deploy them to Google Container Engine clusters, scale them to handle increased traffic, and continuously deploy to provide application updates. Objectives: container basics; how to containerize an existing application; Kubernetes concepts and principles; how to deploy applications to Kubernetes using the CLI; how to set up a continuous delivery pipeline. This workshop is intended for developers and operations professionals looking to get hands-on experience with Kubernetes and Google Container Engine. The typical audience member is comfortable running commands at a terminal and has a basic understanding of web technologies.
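The kind of workload definition deployed in the workshop can be sketched as a Kubernetes Deployment manifest (the names and image are placeholders, not from the workshop materials); scaling to handle increased traffic then amounts to changing the replica count:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3            # scale up or down by changing this value
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/hello-web:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f`, Kubernetes reconciles the cluster toward this declared state; continuous deployment typically means updating the image tag and applying again.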
In this hands-on workshop we’ll all attack the training web app to take on the role of a pentester one step at a time. You’ll learn how to work with professional security tools through a range of practical tasks and will also learn pentesters’ general approach for attacking web apps. Of course, we’ll also deal with defensive measures for protecting the security holes found, though our focus will remain on the systematic use of professional hacking tools for carrying out (partially automated) security analyses. Once you’ve completed this workshop, you’ll have practical experience of carrying out attacks on web apps, which you can transfer into your own software development work so as to increase the security of your projects for the long-term.
After your first steps with Docker, you quickly want to build your own Docker application images. That is exactly what we will do in this workshop. Along the way we will cover not only the why and how, but also best practices. Topics of the workshop include:
* Base images (building your own base images)
* Building Docker images
* Worst and useless practices in Docker image building
* Multi-stage builds and labeling
* Running your own registry
Note: Building Windows images is not covered in this workshop.
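Multi-stage builds and labeling, two of the topics listed above, can be sketched in a single Dockerfile (the Go program, image tags, and label values are illustrative assumptions). The build toolchain lives only in the first stage; the final image ships just the compiled binary:

```dockerfile
# Build stage: full toolchain, never shipped to production.
FROM golang:1.9 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# Runtime stage: only the compiled binary ends up in the final image.
FROM alpine:3.6
LABEL maintainer="team@example.com" \
      version="1.0"
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Multi-stage builds require a recent Docker release (17.05 or later); the labels attach queryable metadata to the image for later inspection.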