DevOps is much more than just local improvements in the development, delivery, and operation of software: it represents a movement that touches not only development and operations (obviously!) but also organizational culture, quality assurance, security, and program management.
This workshop is a compact one-day briefing for DevOps beginners and those who’d like to get a comprehensive overview of the complex, encompassing DevOps landscape. It is tailored for technical stakeholders as well as for business leaders. Anyone tasked with a DevOps transformation will be able to spend a day wrapping their head around the world of DevOps, with exercises to help them look at their specific organization’s challenges through a DevOps lens. In the workshop, we’ll cover:
By joining DevOpsCon’s Transformation Day, you’ll gain insight into all the aspects necessary to make a DevOps transformation in your own organization successful.
We can think of a whole computer system like a human body that consists of cells of various types. Those cells can be hardware or software units. When they are software units, the smaller they are, the easier it is for them to self-heal, recuperate from failures, multiply, or even be destroyed when that is needed. We call those small units microservices, and they can indeed exhibit behaviors similar to those observed in a human body. Microservices-based systems can be built in a way that gives them the ability to self-heal. We'll explore the practices and tools required to set up fully autonomous self-healing systems based on Docker clusters. Such systems will be capable of both reactive recuperation from failures and proactive prediction of the steps that should be taken to prevent failures from happening.
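The reactive side of such a system boils down to a reconciliation loop: observe instance health, then converge toward the desired state. A minimal sketch of that decision logic (the instance names, states, and action tuples below are illustrative, not a real Docker API):

```python
def healing_actions(desired_replicas, observed):
    """Decide what a supervisor should do to converge a service:
    restart unhealthy instances and start missing ones.

    observed maps instance name -> "healthy" or "unhealthy".
    Returns a list of (action, instance) tuples.
    """
    # Reactive recuperation: restart anything reporting unhealthy.
    actions = [("restart", name)
               for name, state in sorted(observed.items())
               if state == "unhealthy"]
    # If fewer instances exist than desired, start replacements.
    for i in range(desired_replicas - len(observed)):
        actions.append(("start", "replica-%d" % (len(observed) + i)))
    return actions
```

A real self-healing loop would feed this function from cluster health checks (e.g. Docker health checks) and execute the resulting actions; the proactive side would push predicted failures into the same action list before they occur.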
This is a basic overview of the features of Kubernetes:
The workshop is hands-on. Bring your own laptop.
If you intend to “write it and run it”, or even come close to that, then this is the course for you. Microservices promise the advantages of fine-grained scalability, effective team organisation, speed of delivery and amenability to the key stress on a software system: change. “Running Production Microservices” takes your system and deep-dives into the tools and techniques that make your distributed system operable at runtime. Applying DevOps and Site Reliability Engineering techniques, practices and tools, this course helps you fashion a management and monitoring strategy that enables a rapidly changing, distributed system rather than becoming the bottleneck. You’ll learn how to decide where to apply SRE and DevOps thinking and when to implement key concepts such as Circuit Breakers and Bulkheads using the latest tools. Techniques such as A/B and blue/green deployments, distributed logging strategies and effective deployment pipeline management are all brought together to give you the best runtime microservice operability and debugging toolbox possible.
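To make the circuit breaker concept concrete, here is a minimal sketch of the pattern: after a run of consecutive failures the breaker "opens" and fails fast, then allows a trial call after a timeout. This is an illustration only, not a production implementation (hardened versions exist in libraries such as resilience4j or Hystrix):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures, allows one trial call after `reset_timeout` seconds."""

    def __init__(self, max_failures=3, reset_timeout=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                # Open: fail fast instead of hammering a sick dependency.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

The point of the pattern is that a struggling downstream service gets breathing room to recover, while callers get an immediate, predictable error instead of slow timeouts.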
Sebastian Meyen, Program Chair, welcomes all attendees, introduces this year's program, and shares final details about DevOpsCon 2017.
Production hates you. The machines, the networks, the very users you hope to provide a service to: they all hate you. This is reality, and it makes production a hostile battleground. In this talk Russ Miles will show how to turn this pain to your advantage. Following on from his popular “Why don’t we learn?” talk, it is now time for the sequel. Through a sequence of case studies, personal stories, and code examples, Russ will discuss how sociotechnical systems like your development team improve through stress, turning pain to their advantage through learning loops, so that the question is no longer “how do we avoid the pain?” but rather “how do I embrace and thrive on more?”
In this talk, we explore five practical, tried-and-tested, real world techniques for improving operability with many kinds of software systems, including cloud, Serverless, Microservices, on-premise, and IoT. Based on our work in many industry sectors, we will share our experience of helping teams to improve the operability of their software systems through these straightforward, team-friendly techniques.
This presentation shows you how this transition from traditional Java application development to a cloud-native model with Kubernetes as the orchestration platform can take place without pain. We will learn how, and with which tools, we can easily install a local cloud-native development environment. As a first step, we will see how to migrate Java applications to Kubernetes effortlessly and without changes. However, to profit the most from the container abstraction, application architectures need to adapt, too. Microservices are a perfect fit for this new world. We will see how we can take full benefit of advanced platform features. This presentation focuses on live demos with hands-on coding. We start our journey from scratch with a small plain old Java application, which we will port to Kubernetes in minutes. Step by step we increase the complexity, so that at the end we will have a good feeling for how we can bring Java projects to Kubernetes without knowing all the bells and whistles.
Okay, microservices are cool. But like every trendy new buzzword, they are not a silver bullet, and there are several problems to manage. One is authentication: distributed authentication is hard, and there are many ways to achieve it. Configuration is the second issue to manage when dealing with a distributed micro-application strategy. This talk is a concrete report from experience on building a microservice strategy and the problems you will have to deal with along the way.
Managing container infrastructure in a production environment is challenged by problems of scale. One of the biggest problems is trust - specifically trust of the application. To put it another way, can you trust that all containers in your Kubernetes or OpenShift cluster are performing the tasks you expect of them? We know that containerization has increased the pace of deployment, but has trust kept pace? If a container becomes compromised in some fashion, how many other containers are at risk and how far has trust been broken?
Most of the trending tools in the DevOps universe are developed in the Go programming language. There are many popular tools for service discovery, container orchestration, infrastructure automation, monitoring, reverse proxies, web servers, and even databases, all written in Go. AWS Lambda currently only supports Java, Node, Python, and C#, but there are workarounds to use other languages, including one for Go. The small statically linked binaries, type safety, fast startup time, lightweight tooling, and the vast amount of bindings and libraries make Go a perfect language for writing Lambda functions that drive your cloud automation and integrate the plethora of different cloud services from AWS and other providers. We discuss options for using Go inside Lambda functions and how it compares against the officially supported languages, especially when used as super glue for your AWS infrastructure.
It's a pity that DevOps is not an easy tool. It would be so much easier to introduce it. DevOps is still difficult to grasp, as it represents a mix of processes, tools and above all mindset, to "enable more efficient collaboration between Dev, Ops and Quality Assurance (QA)". (Source: Wikipedia) Often the organization does not provide an optimal environment for this. This session will focus on best practices (processes, tools) and will show a way to introduce DevOps step-by-step and to benefit from DevOps even in environments which seem to be hard to change. Thomas and André rely on their own experience gained in their professional careers and on stories from their network.
Managing secrets securely in container environments is (still) hard these days. I want to show people how to integrate secrets with their containers with zero exposure. Instead of passing secrets to containers using environment variables, which are not concealed by the Docker ecosystem, I'll introduce you to a couple of alternatives, focusing on using Vault and envconsul/consul-template to manage secrets in containers without exposing them in templates or the environment, and even to update them dynamically/periodically. Together with the other backends Vault provides (e.g. databases, PKIs, or SSH integrations), you are able to build a much safer environment for your containerized applications.
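The core idea, fetching secrets at render time instead of baking them into the environment, can be illustrated with a tiny template renderer. Note that the `{{key}}` syntax and the `fetch` callback below are made up for illustration; consul-template uses its own Go-template syntax and a real Vault client:

```python
import re

def render_secrets(template, fetch):
    """Replace each {{key}} placeholder by fetching the secret at
    render time. `fetch` stands in for a Vault client lookup, so the
    secret never has to live in the container's environment or image."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: fetch(m.group(1)), template)

# Example: render a config fragment from an in-memory "secret store".
secret_store = {"db_password": "s3cret"}
rendered = render_secrets("password={{db_password}}", secret_store.__getitem__)
```

Run periodically against a live backend, the same mechanism is what allows dynamic secrets to be rotated without restarting the application.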
Everybody talks about Docker, Kubernetes, Rancher, Nomad, Rocket etc. - and for good reasons. But do I really have to use those tools in every project? Can't I develop and operate a stable microservice architecture without having to put everything into containers and orchestrate them? A practical example will show that more light-weight approaches still have a right to exist in the year 2017 (and beyond) and that a system based on microservices on AWS can be fun even without containers.
It is one thing to talk about DevOps practices with all the subject-matter experts in the field; it is another to explain the business case for DevOps in clear pictures, substantial arguments, and a sound line of reasoning for those who decide over budgets, investments, and projects. They ask about the why, the how, the sustainability of the often-cited buzzwords, the operational and tactical consequences, and the effects on training for their organizations. All those questions will be covered and answered during this session.
“Failing fast,” “failing forward” and “Learning from failure” are all the rage in the tech industry right now. The tech company “unicorns” seem to talk endlessly about how they reframe failure into success. And yet, many of us are still required to design and implement backup system capabilities, redundancies, and controls into our software and operations processes. And when those fail, we cringe at the conversation with management that will ensue. So is all this talk of reframing “failure” as “success” within our organizations just that: talk? And what does that look like, anyway? We’ll explore mindset, the history it’s rooted in, as well as effective methods to move your organization toward it and some land mines to avoid along the way.
Using selected examples from enterprise customers, Christian Koch, the CEO of Scandio, will show how companies can accelerate software delivery and increase software quality at the same time by using continuous integration and delivery pipelines combined with container technology and the Atlassian tools. In order to remain competitive in the future, internal processes and future technical requirements must be coordinated. The automated construction of the IT infrastructure and the use of cloud platforms are intended to create a sustainable project environment.
More and more companies are moving to develop or use cloud applications. Software-as-a-Service and web applications in particular must be developed from the ground up to profit from cloud environments. Beyond pure development, however, operations must be ensured just as much as the security of the applications in operation. Modern service architectures in containers place different demands on cloud and container infrastructures. This talk examines the possibilities of application-centric automation and its hardening, independent of the target infrastructure.
A recent McKinsey survey highlights a fundamental shift in the way enterprises use public cloud environments as their primary environment (from 10% in 2015 to an estimated 51% in 2018). We need to be able to take a hybrid-cloud, multi-site approach where we:
Setting up your network and application resources has been made simple by most software and open-source solutions these days. However, operationalizing those resources is still a nightmare for many admins and operators: ensuring the application is always available and that there are enough resources to handle erratic spikes in application traffic, while scaling the backend resources elegantly without affecting app performance.
In this session, we will discuss the need for cloud-bursting where one can spin up a “skeleton crew” of resources in one cloud and redirect application traffic from the primary site when it goes down or when one cannot scale out any further at the primary site.
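The routing decision at the heart of cloud-bursting can be sketched as a simple policy function (the site names and the utilization threshold below are illustrative assumptions, not part of any particular product):

```python
def route_traffic(primary_healthy, primary_utilization, burst_threshold=0.85):
    """Decide where new requests should go: send traffic to the burst
    site when the primary is down or cannot absorb more load,
    otherwise keep it at the primary."""
    if not primary_healthy or primary_utilization >= burst_threshold:
        return "burst-site"
    return "primary-site"
```

In practice this policy sits in a global traffic manager (DNS- or load-balancer-based), and the "skeleton crew" at the burst site scales out only once traffic actually arrives.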
The development cycle often suffers from the fact that development and test data for DevOps must constantly be refreshed from the latest production data. The result is delays of hours or days, because the established copy procedures in DevOps storage make the whole process sluggish. Yet smarter storage solutions for DevOps and the enterprise cloud already exist, with which hundreds of developer VMs can be refreshed in a few minutes from a master VM out of the production system. This session shows, among other things by means of a demo, how critical DevOps processes can be simplified, particularly regarding provisioning and application performance. This way you can save hours or even days in every development cycle.
What are the differences between containers and virtual machines? Where and why should you use Docker, runc, Rocket, KVM, Xen, VirtualBox, IncludeOS, or RancherOS? This full session provides an understanding of how these technologies work and how they compare to each other, with lots of demos to illustrate the differences and the fundamental concept of isolation. So let's look under the hood and understand how your system works (hint: it's not magic). And yes, it will be understandable even if you are not an ops person or an expert. That's precisely the point. The idea of this talk stems from the major confusion around Linux since the Docker hype, things like "a container is a light VM" and other misunderstandings of the low levels of the OS. The talk's purpose is to bring everyone some knowledge on these topics and help them choose tools and architectures or solve their scaling or security issues.
Security under Docker (as with other container implementations) is not even container-specific; it consists of techniques that admins already use, should use, and will use in their daily work. The knowledge conveyed can therefore be applied broadly. The hands-on talks are to be understood as such: the largest part of the talk takes place on the terminal. The Docker security topics covered are:
The talk offers many examples of why something is secure or insecure and how what has been learned can also be configured in Docker Swarm mode and Kubernetes. What does the session ultimately deliver? Attendees learn how existing Linux features can be used in Docker, Docker Swarm mode, and Kubernetes to shield the infrastructure from the application containers. All topics are shown and demonstrated on the terminal.
Software applications have become an essential source of business value for large organizations. That’s why it’s so important for companies to find ways to continually improve their software and deliver it to market as fast as possible. So how can organizations reduce cost and time to market while still offering quality and enabling developers to focus on innovation? The answer is Continuous Delivery: a set of processes and practices that radically remove waste from the software production process, enable faster delivery of high-quality functionality, and allow for a rapid and effective feedback loop between a business and its users. Rob will demonstrate four principles that help large companies implement Continuous Delivery and drive the creation of valuable software. These principles include:
Rob will provide real-world examples of each of these principles, illustrating common obstacles to achieving them and strategies for successful implementation of a Continuous Delivery pipeline.
Many software projects use build pipelines with tools like Jenkins, SonarQube, and Artifactory. But often those pipeline tools are installed and maintained manually. This approach carries certain risks, and in case of failure it often takes a long time to get a running pipeline again. This session shows how to automate the creation of a build pipeline. With Terraform, a Docker infrastructure is created at AWS, where Jenkins, SonarQube, and Artifactory are pre-configured and deployed. The pipeline is ready for operation in just a few minutes, as Kai will demonstrate in a live demo.
The way we design, develop, and run applications on Cloud Native platforms such as Kubernetes differs significantly from the traditional approach. In this talk, we will look at a collection of common patterns for developing Cloud-Native applications. These patterns encapsulate proven solutions to common problems and help you avoid reinventing the wheel. After a short introduction to the Kubernetes platform, we will look at several pattern categories, including foundational, behavioral, structural, and configuration patterns. In the end, you will have a solid overview of how common problems can be solved when developing Cloud-Native applications for Kubernetes.
Have you ever spent time digging through various terminals, grepping, lessing, and awking, trying to find those few log lines that may be important? Have you ever done that under time pressure, because mission-critical services were not working? Have you ever heard from your developers that they can't tell you anything because they don't have access to application logs? Have you ever considered centralized storage for logs, but time and resources were not on your side? If you said yes to any of the above questions, then this talk is for you. During the talk I'll introduce you to the world of log centralization and analysis, covering both open-source and commercial tools. We will go from top to bottom and learn how to set up log centralization and analysis for servers, virtualized environments, and containers. We will go from log shipping, through centralized buffering, to storage and analysis, to show you that having a centralized log analysis tool is not rocket science. Finally, you will see how useful it is to combine the logs from all your servers in a single place for blazingly fast correlation.
DevOps is neither a tool nor a methodology. It’s a culture, an attitude, a movement. And there are so many things happening on so many different levels at the same time. As there are no golden recipes, it’s about the experience and knowledge of every individual involved in DevOps. So let’s share that knowledge: go on stage and give your short ad-hoc presentation, raise a question, share a notable experience … be part of the DevOps movement.
The rules of the game are very simple: you grab a file card at the speaker check-in and write on it the topic you want to talk about (ca. 5 minutes). In the evening we have time for about 6 topics -- if there are more suggestions, the audience decides which topics will be presented.
Agile methods and DevOps approaches can bring enormous benefits to an organization by increasing flexibility, reducing time-to-value, all while increasing quality. However, these are not methods you simply "adopt". They require a substantial transformation of a company's values, beliefs, and processes. For example, DevOps is about removing impediments from the flow in the software delivery to the business. Likewise, agile requires changing the way the company budgets and funds projects. This session reflects on the experience of truly transforming IT inside a large organization as opposed to simply adopting DevOps.
What are the best practices for securing containerized applications? How can developers secure their containerized applications across the DevOps pipeline? This talk will share practical tips and tricks on how to secure your containerized applications and conclude with a demo of Conjur from CyberArk. Conjur is an open-source security service that integrates with popular CI/CD tools to secure secrets, provide machine-identity authorization, and more.
Docker containers as immutable, disposable units are already part of many environments, and Docker shakes up classic infrastructure planning quite thoroughly. To date this has produced, besides orchestration solutions such as Kubernetes and Docker Swarm mode, minimal operating systems such as CoreOS or Atomic. These operating systems are little more than a systemd for managing the containers (and container tools) plus a simple upgrade and rollback strategy for the OS. LinuxKit goes a step further: it is an immutable OS. LinuxKit/Moby makes it possible to build VMs/operating systems for different architectures from a simple YAML file. LinuxKit thus offers phoenix servers out of the box.
What does the session deliver? Besides an overview of the technical landscape, attendees learn how LinuxKit works. During the talk we will build a minimal (single-purpose) operating system. The difference from Atomic, CoreOS, or even classic operating systems will also be made clear, and LinuxKit will of course be assessed critically. Finally, we will consider together what the infrastructure of the future, using LinuxKit instead of classic operating systems, will look like and how it will be operated.
The DevOps culture in cloud-based companies also brings the requirement to increase the number of developers or team members who are able to trigger deployments to the production data centers. The production engineer moves from the gatekeeper role to the enabler role and is required to provide creative tools that can safely deploy to production or roll back easily if needed. In that way, the dependency on the production engineer is significantly decreased.
In this talk, we'd like to show a case study of an end-to-end solution that provides a safe and easy way for quality and development team members to run multiple, instant deployments to production on a daily basis. The DevOps process was developed internally at LivePerson and increased the accountability and commitment of team members. LivePerson is a purely cloud- and microservices-based company and a market leader in real-time intelligent customer engagement.
“It’s not the database”, “It’s not the network”, “It’s not the servers”. These comments will be familiar to many DevOps adopters. Microservices, containers, and cloud are just a few of the factors making it harder to understand where the root causes of issues lie, increasing the chances of war rooms and “mean time to innocence” appearing.
Measurement, the “M” in the CALMS model, has always been a key pillar of DevOps. But what should DevOps adopters measure and how can they track multiple metrics as environments become exponentially more complex? How can they accurately identify where the bottlenecks exist and avoid finger pointing?
Referencing real user stories and original research, this session explores why these scenarios are taking place and what is needed to create a continuous feedback loop with high velocity, high quality releases in a blame-free environment.
Linux VMs with Docker in Azure, Azure Container Services, Azure Container Instances, Docker for Windows, Windows Docker Containers – Microsoft has fallen in love with Docker. In this session, Rainer Stropek (long-time Azure MVP, MS Regional Director) speaks about recent Microsoft-related developments in the Docker universe. He shows new Docker features on Windows 10 (Linux- and Windows-based containers), speaks about Microsoft’s base images in the Docker hub (e.g. .NET, VSTS, IIS, SQL Server), demonstrates the new Docker integration in Visual Studio and finally shows Azure’s brand new Container Instances (=Docker PaaS with optional Kubernetes integration).
Nowadays, companies need to modernize their IT to fit market requirements. Fast deployment of new features can only work with the right tooling and infrastructure. Therefore, technologies like containers, orchestration, and cloud providers are becoming really popular these days. Together with Erik, check out how to automate the production-ready deployment of a Kubernetes cluster with tools like Terraform and Ansible and the European cloud provider Exoscale.
With digital a top priority, IT is forced to change: it can no longer just play a role in supporting business objectives, but must fuel the next-generation business models spawned by the digital revolution. However, in light of the current economic climate, IT organizations are also pressured to reduce costs, increase productivity, achieve faster time to market, and ensure high-value delivery, on top of intensifying the rate of innovation and staying ahead of the technology curve.
The talk will provide insight into pragmatic steps enterprises can take to expand the DevOps footprint and provide a solid foundation for next-generation digital delivery models. Patterns of success, proven approaches and cases will be presented, based on Cognizant’s extensive experience in complex enterprise transformations.
I will share how DevOps is done at Atlassian and how Atlassian thinks about the future of teams. I will then compare and contrast this with the lessons I learned in my time at Skype and leading the DevOps transformation at Lloyds Banking Group.
What’s the status quo of the serverless hype? It’s time to look at the first “lessons learned”. Not only for beginners in the serverless world, it’s important to take into account some pitfalls that no one wants to run into a second time.
“No server” doesn’t mean “no ops”! The responsibility for correctness and reliability still rests on our side and cannot be delegated away. We discuss appropriate use cases for serverless, as well as cases where serverless doesn’t fit. We also cover things to consider when it comes to topics like the programming model, container handling, caching, latency, security, monitoring, and deployment chains.
Application Insights promises monitoring without having to write lots of code or setting up complex services. Sounds too good to be true? In this session, Rainer Stropek (long-time Azure MVP and MS Regional Director) introduces Application Insights. Instead of slides and boring theory, Rainer focuses on practical aspects. He shows how his team has been using Application Insights successfully for years. Additionally, he demonstrates how Application Insights can easily be embedded in .NET and Node.js applications.
Does your application or service use a database? When that application changes because of new business requirements, you may need to make changes to the database schema. These database migrations could lead to downtime and can be an obstacle to implementing continuous delivery/deployment. How can we deal with database migrations when we don’t want our end-users to experience downtime, and want to keep releasing? In this talk we’ll discuss non-destructive changes, rollbacks, large data sets, useful tools and a few strategies to migrate our data safely, with minimum disruption to production.
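One widely used non-destructive strategy is the expand/contract pattern: add new, nullable columns first, backfill them, and only remove the old ones once no running version depends on them. A minimal sketch of the expand phase using SQLite (table and column names are made up for illustration):

```python
import sqlite3

# Set up a toy schema standing in for the pre-migration production state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada Lovelace')")

# Expand: add nullable columns. Old application versions ignore them,
# new versions write both old and new columns, so both can run at once.
conn.execute("ALTER TABLE users ADD COLUMN first_name TEXT")
conn.execute("ALTER TABLE users ADD COLUMN last_name TEXT")

# Backfill existing rows (for large data sets, in batches or via a
# background job, to avoid long locks).
conn.execute("""
    UPDATE users
    SET first_name = substr(name, 1, instr(name, ' ') - 1),
        last_name  = substr(name, instr(name, ' ') + 1)
    WHERE name LIKE '% %'
""")
row = conn.execute("SELECT first_name, last_name FROM users").fetchone()
```

The contract step (dropping `name`) is deferred to a later release, which is also what makes rollbacks safe: the previous application version still finds the schema it expects.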
Once the first steps with containers have been taken in a project, you want to bring your application into production cleanly, stably, and continuously. This should happen in an effective workflow: transparent and traceable, rollback-capable and secure. The production environment must be observable, automated, and ideally self-healing and self-scaling... It quickly becomes clear that none of this is an easy task. This hands-on session offers the audience one possible solution for implementing continuous delivery for distributed applications with a set of open-source tools (including GitLab CI, Ansible, and Kubernetes). The basics of the tools used, the reasoning behind them, and possible alternatives are also covered briefly.
Docker is a popular choice in tech today. However, containers alone are not enough to bring complex applications into production. Load balancing, fault tolerance, continuous integration and delivery, logging/monitoring, and release management are some of the other important aspects for successfully rolling out software products.
Kubernetes helps achieve these tasks by taking containers to the cloud. It makes it possible to model a single large host from many “small” hosts, which then benefits from automation. However, Kubernetes is just a piece of technology meant to simplify the release and development process.
Finally, OpenShift from Red Hat is a well-rounded approach towards DevOps that brings everything together.
Ansible is a radically simple and lightweight provisioning framework which makes your servers and applications easier to provision and deploy. By orchestrating your application deployments you gain benefits such as documentation as code, testability, continuous integration, version control, refactoring, and automation and autonomy of your deployment routines and server and application configuration. Ansible uses a language that approaches plain English, uses SSH, and has no agents to install on remote systems. It is the simplest way to automate and orchestrate application deployment, configuration management, and continuous delivery. In this tutorial you will be given an introduction to Ansible and learn how to provision Linux servers with a web proxy, a database, and some other packages. Furthermore, we will automate zero-downtime deployment of a Java application to a load-balanced environment. We will cover how to provision servers with:
* an application user
* a PostgreSQL database
* nginx with a load balanced reverse proxy
* an init script installed as a service
* zero downtime deployment of an application that uses the provisioned infrastructure
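The steps above can be sketched as a playbook. This is an illustrative outline only (the host group, user name, and template file names are made up, not the tutorial's actual material):

```yaml
# Illustrative provisioning playbook sketch
- hosts: webservers
  become: yes
  tasks:
    - name: Create the application user
      user:
        name: appuser
        state: present

    - name: Install PostgreSQL and nginx
      apt:
        name: [postgresql, nginx]
        state: present

    - name: Configure nginx as a load-balanced reverse proxy
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: restart nginx

    - name: Install the application as a service
      template:
        src: myapp.service.j2
        dest: /etc/systemd/system/myapp.service

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```

Zero-downtime deployment to the load-balanced environment is then typically achieved with Ansible's `serial` keyword, updating one host at a time while the others keep serving traffic.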