DevOps is much more than just local improvements in the development, delivery and operation of software: it represents a movement which touches not only development and operations (obviously!), but organizational culture, quality assurance, security, and program management.
Anyone tasked with a DevOps transformation will be able to spend a day wrapping their head around the world of DevOps, with exercises to help them look at their specific organization’s challenges through a DevOps lens. In the workshop, we’ll cover the following topics:
By joining DevOpsCon’s Transformation Day, you’ll get an insight into all the necessary aspects to make a DevOps transformation in your own organization successful.
Continuous Integration (CI) and Continuous Delivery (CD) are development practices of applying small code changes frequently, and they are becoming more and more essential to any agile organization. This workshop will help you understand the CI/CD concepts and mindset, and how to implement the practices that form a DevOps culture around CI/CD for software development.
Imagine… everyone able to work consistently at their best:
Training for technical diving prepares you for situations in a high pressure environment where failures are unavoidable but also highly dangerous. It means surviving in a hostile environment with limited resources where the failure of individual components can trigger deadly failure cascades. A diver who can’t understand how to do proper post-accident analysis and who hasn’t learned to repeatedly survive through failure is a liability to themselves and to others. This talk uses the framework of technical SCUBA and rebreather diving to talk about patterns for failure management and how they can be applied in an engineering context.
Whether you are starting a greenfield project in one of the public clouds or implementing a lift-and-shift project, cloud security is and will remain an important topic. Even more so with the "privacy by design" principle put in place by the GDPR. There are lots of best practices out there: multi-account strategies, the principle of least privilege, automated patching, scanning for security vulnerabilities, and enforcing encryption, just to name a few measures to harden your cloud infrastructure. Ideally, all of these are driven by CI/CD pipelines to enable confident changes and short cycle times. The session will cover AWS-based examples of proven best practices and solutions that can be used to harden your cloud infrastructure with little effort, using already available features and components.
Containers and Kubernetes are becoming the de facto standard for software distribution, management, and operations. Development teams recognize the power of these technologies to improve efficiency, save time, and focus on the unique business requirements of each project.
At the same time, the process of deploying, running, managing, and handling upgrades for Kubernetes is time-consuming and requires significant in-house expertise. InfoSec, infrastructure, and software operations teams, for example, face a myriad of challenges when managing a new set of tools and technologies as they integrate them into an existing enterprise infrastructure.
In this session, Oleg will outline the unique challenges organizations embarking on a containerization and Kubernetes journey must consider. In particular, he’ll focus on the following critical elements:
· Centralized monitoring and log collection
· Security, identity and access management, audit
· Governance and resource management
· Backup and disaster recovery
· Infrastructure management
Oleg will also provide an overview of the general architecture of a centralized Kubernetes operations layer based on open-source components such as Prometheus, Grafana, the ELK Stack, Keycloak, etc. By breaking down each of these areas, attendees will gain a good understanding of what it takes to ensure successful container and Kubernetes ‘Day 2’ operations.
This talk is about how to think strategically about using system data and operations to uncover the deeper story of your product, and to understand what users want without asking them and without guessing. It shows how to make better use of system logs and other data to inform what we build, focusing on the "learn" aspect of the build-measure-learn development cycle. Think of it as creating a deeper connection between product roadmaps and a DevOps culture.
Developers love containers: they help to package applications neatly and untangle library dependencies. Some ops teams take a more sober view of the technology, though. They are responsible for the infrastructure and its security, and a shared kernel that separates applications only by namespaces and cgroups raises eyebrows now and then. The level of isolation provided by real VMs was more credible, some say.
Kata Containers aims to close this gap: the project starts containers in extremely lightweight VMs but keeps the container interface. All calls and integrations with Docker or Kubernetes remain the same because the project implements the OCI interface.
The talk introduces the architectural design of Kata Containers, explains the easy installation in a Docker runtime environment, and shows how to use it. We present the results of simple benchmarks and discuss differences from purely namespace-based containers. Finally, we look at improved memory consumption and other features introduced in more recent releases of Kata Containers.
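As a rough sketch of how lightweight the switch can be in practice (this assumes the Kata 1.x era with Docker; the runtime path and file location vary by distribution, and the file is written locally here rather than to /etc/docker/daemon.json):

```shell
# Sketch: registering Kata as an additional Docker runtime (Kata 1.x era).
# On a real host this JSON belongs in /etc/docker/daemon.json, followed by
# a restart of the Docker daemon; we write it to the working directory here.
cat > daemon.json <<'EOF'
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
EOF

# Once registered, the CLI stays identical; only the runtime flag changes,
# and the container starts inside a lightweight VM:
#   docker run --rm --runtime kata-runtime alpine uname -a
```

Because the OCI interface is preserved, nothing else in the Docker or Kubernetes workflow needs to change.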
Unix shell scripts have been our constant companions since the seventies, and although there have been many other contenders like Perl or Python, shell scripts are still here, alive and kicking. With the rise of containers, writing shell scripts has become an essential skill again, as plain shell scripts are the least common denominator for every Linux container. Even we as developers in a DevOps world cannot neglect shell scripting. In this hands-on session, we will see how we can polish our shell-fu. We will see how the best practices we have all learned and love in our daily coding can be transferred to shell scripting. An opinionated approach to coding conventions will be demonstrated for writing idiomatic, modular, and maintainable scripts. Integration tests for non-trivial shell scripts are as essential as for our applications, and we will learn how to write them. These techniques and much more will be part of our ride through the world of Bash & Co. Come and enjoy some serious shell script coding; you won't regret it, and you will see that shell coding can be fun, too.
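To make this concrete, here is a minimal sketch of the kind of conventions such a session tends to cover (strict mode, small functions, guarded parameters); the function and variable names are illustrative, not taken from the workshop material:

```shell
#!/usr/bin/env bash
# A minimal sketch of common shell scripting conventions.

set -euo pipefail            # fail fast on errors, unset variables, broken pipes

readonly GREETING="Hello"    # constants are marked readonly

# Small, single-purpose functions with local variables keep scripts modular
# and testable, just like functions in application code.
greet() {
    local name="${1:?usage: greet NAME}"
    printf '%s, %s\n' "$GREETING" "$name"
}

main() {
    greet "${1:-world}"
}

main "$@"
```

Scripts structured this way can be exercised by integration tests that simply source the file and assert on the output of individual functions.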
Package managers are hard. Helm learned a lot of lessons from others’ mistakes, but also repeated some. For example, having a single index file per repository is not scalable. It’s the same mistake that NPM made and it causes slower CI, high memory consumption, slower searches, and more. Another example is not having private enterprise repositories in mind, leaving out authorization and authentication features. In this talk, we’ll explore several solutions to those problems, their strengths, and their weaknesses.
The consistent application of DevOps and Continuous Delivery practices requires transformation within almost all functions of a company. While this is a challenge for small and medium-sized companies, it is a giant task for large international corporations.
Heiko and Dirk will introduce SAP's cloud transformation in their talk, focusing on the transformation of SAP's quality working model: away from a strong centralized governance role, towards a supportive approach that uses coaching practices to prepare development teams for the cloud.
The digital economy makes applications the center of your world and the trust anchor of your customers. At the same time, your customers expect the highest level of convenience, which makes it necessary to keep your applications up to date with new features as quickly as possible. This results in several release cycles per day, which need to stay in sync with your security policy. To manage this, your application security infrastructure needs to be integrated into your CI/CD system via Infrastructure as Code. In this presentation, we will show a declarative approach that keeps the integration as simple as possible, together with the capability to integrate RBAC so that the domain-specific knowledge of different teams such as SecOps, DevOps, and NetOps is seamlessly incorporated.
1. Speed – rapid iterations require agile coordination
2. Relevance – integrating market feedback requires integrated communication
3. Quality and Testing (not "testing quality") are built into every step of the process
The above is well-documented and widely accepted as part of the DevOps revolution. But we have not yet won the battle!
The solution lies not in adopting new tools, which only would be little more than an improved means to an unimproved end, but in changing our concept of what a product is, how we architect it, how we build it, how we test it, and how we orchestrate its release.
We need to stretch our imagination and reassess our goals to include the larger organization. Orchestrating the release of a product is not a technical exercise: it requires coordination between many people and groups within the organization, and the right people need to be involved. No one needs to be more committed to the transformation than middle-level managers, architects, and team leads.
The purpose of this talk is to motivate teams to take a moment and reflect on their continuous delivery pipeline architecture for the sole purpose of improvement, and to encourage them to KISS ("keep it simple, maybe not stupid") their continuous delivery architecture all the way. The architecture of continuous delivery pipelines is becoming more complicated and complex every day; I will go so far as to say that sometimes the continuous delivery pipeline of a product ends up being more complicated and resource-intensive than the product itself. In this session, I intend to highlight the different reasons that may contribute to the over-complexity of continuous delivery pipeline architectures, and then discuss ways to keep the architecture simple and manageable in the long run.
In the course of digital transformation, many business processes are currently undergoing fundamental change. Management wants the cloud in order to better face cost and competitive pressure. There is hardly a company that does not start strategic cloud initiatives. According to independent market research, by 2020 more computing power will be used as IaaS than in traditional on-premise infrastructures. "No cloud" will be as rare as "no Internet". Apart from all the advantages, such as agility, cost savings, efficiency, and automation, hybrid architectures also entail various challenges:
- Multiple environments - too many tools
- Operational Challenges - lack of skills and professionals
- Advanced threats
- Compliance requirements in increasingly regulated markets
Modern infrastructures can perfectly balance cloud and on-premise parts for every requirement. Learn how to leverage the value of cloud infrastructures while addressing challenges.
How to apply DevOps in a traditional, large enterprise is a hot topic. Terms like digitization and transformation are now common buzzwords among management, so defining a technology roadmap and reinventing your business with software is a must if you want to excel, or even survive. It is clear that today no company can compete without technology; the question is: how do we use that technology efficiently?
For years, the community has given answers to growing and demanding IT requirements: agile methodologies have been around since the early 2000s, a DevOps culture has developed over the last 10 years, and there are multiple success stories that prove its value. Nonetheless, stakeholders are often very disappointed with how IT works in their companies: software often fails, misses deadlines, is expensive and unmaintainable, and tech trends change rapidly. Even if state-of-the-art technologies are applied and enable such transformations, there is still a long road ahead to achieving a DevOps culture.
At MindDoc, we have the ambitious challenge of transforming traditional healthcare services with software solutions that improve the delivery of care to our patients. At first glance, nothing differs from any other business that wants to transform. However, there are many ways to approach digitization in a traditional enterprise, often with a strong focus on outsourcing. Instead, we follow a "digital unit" approach, which puts an emphasis on organizational independence, in-house development, and fewer communication dependencies. The goal of this approach is to get the best of both a startup and a big company, with plenty of challenges and tough decisions along the way.
As a result, we present the chronology of the internal development of an online therapy platform over one year, from zero to a product that can be constantly shaped based on the needs of our patients and therapists. Challenges like recruitment, interfacing with legacy IT, continuing to solve problems with code, cross-functional communication, scaling agile teams, process compliance, security, and deciding what to outsource are still there, and a digital unit approach gives the flexibility needed to provide solutions for them.
If you want to keep up nowadays, you count on open source software (OSS) to stay flexible and avoid reinventing the wheel. Many applications contain more open source code than proprietary code.
Using components with known vulnerabilities is one of the most common OWASP risks.
In this presentation, we will look at the security, operational and legal challenges associated with the use of third-party components which are mostly open source.
Next, we discuss how these risks can be addressed using various make-it-yourself or buy-it approaches so that you can stay on top of the OSS flood.
Applications today are deployed in fully automated environments such as microservices, containers, or clouds, which allow seamless load balancing, auto-scaling, and other infrastructure-dependent services. To secure such applications, many different security policies need to be applied, such as SSL/TLS, ACLs, IP reputation, WAF, and more. This talk will look at the challenges of integrating a modern WAF into an application development pipeline and at solutions for creating processes that support both security and development requirements. We will share a reference implementation and present a demo environment where we implemented these processes.
Experience live how open source tools can be used to check the security of a web application — fully automated as part of a DevOps build pipeline. See how a dynamic and static security analysis toolset identifies vulnerabilities and gives remediation advice. Also learn how to generate vulnerability reports and consolidate findings. Enhance your security tool-belt and get prepared to check your applications afterwards...
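One way such an automated check can be wired into a build pipeline is sketched below, using OWASP ZAP's baseline scan in its official Docker image; the staging URL is a placeholder, and the exact flags may differ between ZAP versions:

```shell
# Dynamic baseline scan of a running test deployment with OWASP ZAP.
# https://staging.example.com is a placeholder for the app under test.
docker run --rm -t owasp/zap2docker-stable zap-baseline.py \
    -t https://staging.example.com \
    -r zap-report.html    # HTML vulnerability report for consolidation

# zap-baseline.py exits non-zero when warnings or failures are found,
# which a pipeline stage can use to fail the build.
```

Static analysis tools can be added as a separate stage in the same pipeline, so findings from both kinds of scans end up in consolidated reports.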
DevOps aims to increase business efficiency through IT, requiring not just automation practices but also changes in the way Dev and Ops work and collaborate. But for those who thought this is where the story ends: sorry, you’re wrong!
I’ve seen many examples where the “pulse of DevOps” has not been transferred to the entire enterprise ecosystem, and worse, where other business units and departments are not even prepared for the speed that DevOps requires for many actions and decisions.
In this talk, you will hear real-life examples and client discussions on how to enable the overall organization to fully support the agile/DevOps transformation.
This talk calls attention to the seven biggest problems encountered when building security into agile projects. Based on rules of thumb from security consulting, you will experience first-hand the expected (and unexpected) obstacles you can meet. You will hear about technical, organizational, process-oriented and even skill-related issues, with which teams find themselves confronted, when trying to improve the security of their projects in a sustainable way. You will receive, as a content-related takeaway, tailor-made solution models that are applicable and effective for both large companies and specialized service providers alike.
Puppet Pipelines simplifies continuous delivery and unifies workflows across your Dev and Ops teams. It automates the build and deployment of your applications — whether they’re traditionally packaged or container-based apps running in Kubernetes — and gives you deep visibility and audit trails for every action taken. Join us for a demonstration of the power of Puppet Pipelines to coordinate your entire software development lifecycle!
Distributed applications like microservices shift some of their complexities into the interaction of services. Such a service mesh, which can have hundreds of runtime instances, is very difficult to manage. You will be concerned with some of the following questions:
• Which services will be requested by which other services, in which version, and how often, depending on the request content?
• How can you test the interaction, and how can you replace single services with new ones?
These and other questions will be discussed in this session. Tools that make your life easier with a service mesh will also be introduced.
Microservices development environments are becoming more and more popular in cloud-based companies as a way to support better CI/CD methodologies. I would like to present a case study leading to best practices for how we manage CI/CD for 200 microservices, based on both Docker/Kubernetes and Puppet, in production environments, and how we control them all using a variety of tools, internal developments, and technologies.
The complexity of the socio-technical systems we engineer, operate, and exist within is staggering. This complexity is a fact of life in software development and operations, yet one that becomes easy to ignore due to our daily interactions with and familiarity with those systems (and, let’s face it, because ignoring it is a good coping strategy). When those systems falter or fail, we often find in the postmortems and retrospectives afterward that there were "weak signals" that portended doom, but we didn’t know they were there or how to sense them.
In this talk, we’ll look at what decades of research in the safety sciences has to say about humans interacting with and operating complex socio-technical systems, including what aircraft carriers have to do with Internet infrastructure operations, how resilience engineering can help us, and the use of heuristics in incident response. All of these provide insight into ways we can improve one of the most advanced (and most effective) monitoring tools we have available to keep those systems running: ourselves.
Over the past three years, we have built the infrastructure of the TV platform waipu.tv. Along the way, we started writing operations tools in Golang. Some of these tools became core services that easily handle the load of a football World Cup broadcast. We want to show you how we use the same tool chain (Golang & Co.) to solve operational problems and develop critical business applications. Classic DevOps, or golden hammer?
Go is not a language traditionally used in SysOps. However, as SysOps transforms into DevOps and systems complexity keeps increasing, the need for scalability is increasing as well. Scalable systems need generalized support: less scripting and more software development, ideally using a cross-platform language that supports concurrency and parallelism. This is a great time to refresh the toolbox.
Looking at recent observability and operations tools, many are written in Go: Docker, Kubernetes, Prometheus, CoreOS, Istio, Grafana, Jaeger, Moby, etc.
In this talk, we will look at the language, when its use makes sense, and which features make it a good choice, e.g. type safety, clear syntax designed for concurrency, built-in support for parallelism, and built-in cross-platform and cross-architecture support that doesn’t require dependency management.
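As a small illustration of the cross-platform support mentioned above (the package and binary names are placeholders), a single Go codebase can be built for several OS/architecture targets from one machine with nothing but environment variables:

```shell
# The standard `go build` honours the GOOS/GOARCH environment variables;
# no cross-toolchains or extra dependency management are required.
# "mytool" is a placeholder name, built from the current package.
GOOS=linux   GOARCH=amd64 go build -o mytool-linux-amd64 .
GOOS=darwin  GOARCH=amd64 go build -o mytool-darwin-amd64 .
GOOS=windows GOARCH=amd64 go build -o mytool-windows-amd64.exe .
```

Each command produces a statically linked binary for its target platform, which is one reason so many operations tools ship as single Go executables.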
Organizational change is hard; there is no blueprint. Individuals in different organizations have different experiences. Some are successful, some are not, and some feel they could achieve more. Let’s have an open discussion on experiences, successes, pitfalls, and backlash when it comes to organizational changes. Of course, the attendees of DevOpsCon are invited to share their thoughts with the panelists.
"Without data, you're just another person with opinions".
In this talk, we'll talk about data-driven DevOps and how the cross-cutting metrics from dev, QA, and ops can be integrated to provide you and the teams you support with an insight into the status of your engineering organization.
As the DevOps evangelist of your organization, you can help your teams adopt data-driven decision making, which is becoming more important due to cross-pillar influence and the shared need for success. The practical part will cover dos and don'ts and examples of metrics that you can implement to help your teams today.
You've probably heard of BDD and TDD. Well, many teams struggle with CDD: Chaos-Driven Delivery. That is, teams struggle with how to handle the constant onslaught of overwhelming amounts of work and begin to lose hope. The good news is that if you understand operating systems, you already know a great deal about how to tame the chaos!
From traditional IT operations and support to AI-driven ChatOps: having the right information at the right time often makes a difference, in life and in IT operations and “DevOps” teams. In this breakout session, you will get an overview of ChatOps and how it can speed up problem solving and make collaboration more efficient. You will hear some real-life stories and integration examples.
The first half of this talk will provide an overview of what the Cloud Native Computing Foundation (CNCF) is, its purpose, how it works, and provide an overview of CNCF-hosted projects. We will go into some detail about the process that projects go through to get accepted into the CNCF and what kinds of benefits this provides to those projects.
The second part of this talk will provide a technical overview of the projects with demos and details of some selected projects. We will particularly focus on how various CNCF-hosted projects can be deployed and used with Kubernetes.
When integrating Kubernetes into an existing environment there are special requirements to consider, especially when migrating high-load applications. This talk will highlight some practical experiences during a central infrastructure project to internally provide Kubernetes at 1&1 Mail & Media. Besides the platform topics, the talk addresses demands to software development and operations while integrating with a recently introduced continuous delivery environment. A basic understanding of Kubernetes components and object types will be helpful to get the best out of this talk.
Kubernetes and Docker have nowadays become the standard for building reliable and flexible software services, but with flexibility comes responsibility.
In this talk, we will share the 7 principles we defined for running production-ready Kubernetes, and you will learn how not to repeat the mistakes we made in past years.
We will embrace the failures and talk about solutions that work in today’s complex world of Kubernetes.
"Blameless postmortems" and "learning from failure" are very en vogue in the technology industry right now. Both fall into that less-discussed category of "CI": Continuous Improvement. But for as much as we all talk about them, in many organizations and teams, the outcome of continual organizational learning and improvement remains elusive. Why is this?
In this talk, we’ll look at five "dirty words"* that are often thrown around during postmortems, retrospectives, and other learning exercises that not only make it difficult for teams to discuss learning, but promote activities and behaviors that are actually counterproductive to continuous improvement. We’ll dig into the existing research on why this is – it turns out we’re not the only industry struggling with this! – and look at some different language we can start using that can more ably facilitate sustainable Continuous Improvement in our work environments.
*Not actually dirty words.
Development teams are looking more and more to Serverless for new projects, and for many teams that means using a commercial Functions service. But, there are options. In this talk, we’ll discuss Serverless concepts and work with the Fn Project, an open source, cloud-agnostic functions platform which can be deployed across your existing cloud estate.
Every new project in nearly every organization wants to include DevOps, and every team wants to get faster. However, putting everyone in a room together and introducing some new technologies does not guarantee even slightly improved lead times. What is the reason? Does DevOps really work for the world outside our IT bubble? Let's switch perspectives and find out how speed and flow are perceived by the end users of our systems. What are the possible brakes, and how can we release them?
Tool-supported quality assurance has been part of software development since the very beginning, and thanks to the idea of release pipelines, there is now a "natural" starting point for it. But what kind of quality control should be attached to which pipeline stage, and is the pipeline the right place for this at all?
The presentation shows which areas can currently be checked by tools, from security through license compatibility to architecture analysis. It also highlights where automated quality control has failed in the past and what we can learn from these failures, for example regarding the duration of a practicable feedback loop.
Time and time again, we see people doing incredibly stupid things. Once we start talking, it turns out things aren’t as stupid as they seem to be; from a different perspective, things suddenly make perfect sense. Perspectives that are usual in companies but can blur the view and make things look stupid include a view from higher up the ladder or further down (vertical), or from a different silo (horizontal). A horizontal change in perspective can make things appear wrong. A vertical shift may make things look too detailed (from higher up) or irrelevant (from lower down). In his talk, Markus will explain these phenomena and how they can lead to dysfunctional organizations, and he gives examples of ways that have helped him and the companies he has worked with to deal with them.
“Feeling means believing.” As a complement to our keynote, we offer a simple game that simulates how DevOps speed feels and is perceived. We use the widely known paper-mail envelope exercise to model a flow close to software development. Given this model, we can observe typical brakes and boosts in a condensed time frame, while having interesting discussions and fun.
The ever-increasing complexity of modern systems, modern DevOps and continuous delivery-centric workflows place new demands on performance and reliability testing approaches. When systems are comprised of many distributed components, each with its own performance and reliability characteristics, and when a misconfiguration that causes a cascading failure under load can be automatically deployed across environments all the way from dev to production in a matter of hours, you really need to make sure that a rigorous, well-understood, and easy-to-follow performance testing process is in place. In this talk, we will look at how an effective performance testing process can be implemented from the ground up (using the open-source Artillery.io toolkit) to be an integral part of an organization’s continuous delivery pipeline. This talk is for developers, QAs, and engineering managers who are working on a greenfield project with high performance and reliability requirements or working on production systems which are experiencing issues when under load.
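As a rough sketch of what such a pipeline stage can look like (the target URL, load rates, and file name below are made up for illustration), an Artillery test script is just a small YAML file that can live next to the code and run on every delivery:

```shell
# Generate a minimal Artillery load-test script; in a real pipeline this
# file would be checked into the repository, not generated on the fly.
cat > perf-smoke.yml <<'EOF'
config:
  target: "https://staging.example.com"   # placeholder system under test
  phases:
    - duration: 60        # sustain the load phase for one minute
      arrivalRate: 20     # 20 new virtual users arrive per second
scenarios:
  - flow:
      - get:
          url: "/health"
EOF

# Executed as a dedicated pipeline stage; a failed run should fail the
# build before a misconfiguration can propagate towards production:
#   artillery run perf-smoke.yml
```

Keeping the script small and versioned alongside the application is what makes the process easy to follow for everyone on the team.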
“Occupational roles express the relationship between a production process and the social organization of the group. In one direction, they are related to tasks, which are related to each other; in the other, to people, who are also related to each other.” --Eric Trist
When considering the flow of work through a work system, it is often advised to “follow the work, not the people.” This simple principle, tracing work through a system in order to understand handoffs, wait time and waste, is invaluable for enabling a transition from resource efficiency to flow efficiency. As Trist describes it, this is the way in which roles are related to production processes.
What is left unanswered then is… Trist’s “other direction.” How are work and roles related to people and other roles?
Whole Work is a sociotechnical theory about how to design work and work systems to decrease toil, increase quality and address the needs of humans doing the work.
Jabe will discuss the design principles of Whole Work systems and discuss how to use them to create resilient sociotechnical systems in a complex and dynamic economic environment.
While OpenShift provides a lot of prebaked components that one can install through the catalog, it’s NOT setting you up for rapid, well-integrated, and governed software development. It’s an endless open toolkit.
That is why we have started github/opendevstack, with the goal that developers can focus on features and not on getting CI/CD to work.
This session introduces the why and shows how opendevstack works, from provisioning to run in less than 3 minutes, based on Jenkins, SonarQube, and other useful CI components.
Microservices are everywhere. Everyone seems to be either going into that direction or is talking about doing so. But are they really the best choice for you?
Developers! Architects! Buckle up as we’re going to cut through the hype. Instead of going all-in on microservices or all-in on big ball of mud, we’ll introduce a third choice: the Majestic Modular Monolith! We’ll look at what it brings to the table, when it may be a good fit and how it compares to the other two approaches in terms of code organization, productivity, scalability and much more. We’ll look at how this can be designed and implemented in practice. Get ready. We won’t shy away from the hard questions.
In this hands-on workshop we’ll all attack the training web app to take on the role of a pentester one step at a time. You’ll learn how to work with professional security tools through a range of practical tasks and will also learn pentesters’ general approach for attacking web apps. Of course, we’ll also deal with defensive measures for protecting the security holes found, though our focus will remain on the systematic use of professional hacking tools for carrying out security analyses.
As a second objective of this workshop you will learn what type of security checks can be automated and how this DevOps-style automation of security checks within build chains is best done.
Once you’ve completed this workshop, you’ll have practical experience of carrying out manual and automated attacks on web apps, which you can transfer into your own software development work so as to increase the security of your projects for the long-term.
DevOps can have a huge impact in scaling organizations and making them more adaptive to change and thus resilient. However, the transition from traditional organizational structures to an agile, DevOps-oriented way of working is often hard.
To be successful, we need the ability to pick the right problems, view them from different angles and then collaboratively solve them. No solution thought out by a single person or department will help and stay or will even be close to ideal.
Therefore, we need to find ways to understand and solve problems in groups of diverse people. Here, with a limited number of participants, you can name particularly relevant problems and then work on them under the guidance of experienced moderators who have accompanied numerous organizations starting out and discovering the path of their own DevOps transformations.
A workshop day full of exciting insights, lively discussions with peers in the same situation, and concrete solutions for your organization.
In this Docker/Kubernetes workshop, every participant gets a say. At the beginning of the workshop, technical problems and questions are collected and then addressed and answered individually in small groups. Whether you have questions about microservices or CI/CD, want to understand the fundamentals, or would like to discuss production steps and planning, you will find help here. It is explicitly encouraged that not only the experts present share their knowledge, but that, in the spirit of a "think tank", creative and sensible solutions are worked out together with the participants.