
KISS of Death by Complexity?


Aug 24, 2021

KISS stands for "Keep it simple, stupid!" In other words: don't make things unnecessarily complicated. That sounds good and desirable. But don't we tend to do the opposite in practice? If you look around, we are up to our necks in complexity. Instead of doing something about it, we busily pile on new concepts, tools, and technologies every day, driven by hype and the hope for silver bullets. Time for a critical look.

Playing with the fire of complexity

You don’t believe me about complexity (See box: “Complexity”)? Let’s take a look at the list of ingredients for a relatively manageable internet-enabled application. It often looks something like this: Java/Kotlin, JavaScript, SPA, Angular/React/Vue, npm/Yarn, webpack, Grunt, React Native/Swift/Kotlin, PWA, microservices, Maven/Gradle, Spring Boot/Micronaut/Quarkus, BFF, API Gateway, REST/GraphQL, IAM (Identity and Access Management), OAuth 2.0, JWT, Keycloak, Docker, Kubernetes, Rancher, Helm, Operators, service mesh (Istio/Linkerd), Consul/ZooKeeper/etcd, Prometheus/Graphite, ELK, RDBMS, NoSQL/NewSQL (MongoDB/Cassandra), event sourcing, CQRS, Kafka/RabbitMQ, Jenkins/CircleCI, unit testing (JUnit/MochaJS), integration/contract testing (FitNesse/Jasmine/Pact), security testing (ZAP), load testing (JMeter/Gatling/Locust), chaos engineering, user-acceptance testing, Git, Artifactory/Nexus, IaC (Terraform/AWS CloudFormation/Puppet/Ansible), OpenAPI,…

This list is still missing the three-digit number of libraries and frameworks that are typically used (no exaggeration!), plus many more tools and technologies I haven’t even mentioned here. On top of that, there are hundreds of further alternatives, often found at larger companies, because every project and every software engineer has different preferences.

 


With IAM and security testing, the highly critical and complex topic of security is only hinted at. Last but not least, all the other concepts, tools, and technologies needed to build and operate a reliably functioning operational infrastructure are missing – which would easily add another list like this.
And all this for a manageable internet-enabled application – let’s say a customer portal that is not too complex. In case you think I’m exaggerating: I’ve seen more than one project of this scale where the buzzword-bingo list looked something like this. To be fair, testing was often less pronounced than listed here. Other than that, though, the list fits.

Now, a company usually runs not just one application, but a whole lot of them – often hundreds or thousands of applications. Of course, these applications are not all based on the same concepts, tools, and technologies as the application outlined above. Accordingly, you can find quite a few different concepts, tools, and technologies in a wild mix and without clear distinction, often going back to the 1970s.

Additionally, you will find a lot of standard software. This is not limited to the inevitable SAP with all its modules in different generations and expansion stages. You will find standard products of various types, tasks, and sizes, integrated with each other and the self-developed systems in many different ways.

In line with this, you usually find a wide variety of incarnations of integration solutions – here an EAI solution, there an ESB and an API management solution, all of which are nowhere near as widely and uniformly used as promised at the time of introduction.

All in all, a great deal of complexity is piling up in IT. Since most of those involved no longer have any idea how to get a grip on this uncontrolled growth, they instead search for the next silver bullet that will magically solve their problems all at once – and thus increase complexity by adding yet another layer.

 

Indispensable IT

Depending on your mood, you could smile or shake your head at this and go back to business as usual – if IT hadn’t become indispensable in the meantime.

In the past, most everyday products and services were free from IT. Some of them contained a customized hardware solution that, once designed and tested, was used for many years.

Today, however, most products and services are largely software-based or purely software-based (i.e., implemented as an app or application). This enables much more sophisticated solutions than in the past, which can also be improved and adapted much more quickly. Not only cell phones and tablets, but also cars, washing machines, and amplifiers increasingly require regular software updates today.

But that is only the visible part of products and services; most of them rely on backend services hosted in a (cloud) data center, where large parts of the intelligence are implemented – yet another long list of additional software.

A large part of the so-called B2C (business-to-consumer) interaction today takes place via software or is at least supported by software – and the trend is rising: online retail, online banking, online insurance, online media, and so on, right up to online city management.

If we look at B2B (business-to-business) interaction, we face even more software-based communication. Internally (B2E, business to employee), most companies are also highly dependent on software. This is not limited to offices. Software is increasingly creeping into the production lines and workshops where products are built and maintained. In summary, software surrounds us these days. Software is everywhere.

But this also means that more and more aspects of our daily business and private lives depend on functioning software. Software has become indispensable. If our software doesn’t work, or if the corresponding applications fail, we have a problem.

 

 

A self-reinforcing vicious circle

So we can observe two developments:

 

  • IT is becoming more and more complex.
  • IT is becoming more and more indispensable.

 

When I put these two developments side by side, worry lines form on my forehead. IT must become more reliable and robust since it is becoming ever more indispensable. Instead, its complexity is growing unceasingly.

This is also in line with the observations I make time and again. Many IT landscapes are already on the verge of becoming unmaintainable. There are critical applications that are no longer touched because no one knows what is happening inside them. Attempts are being made to implement urgently needed integrations into the core systems using RPA (Robotic Process Automation) because everyone is afraid of what would happen if they touched the old code. Could the system even still be built from its source code at all? There are ancient operating system versions running that have been unpatched for years (to the great delight of all the hackers out there) because urgently needed software is no longer available for newer OS versions. And much more.

The oft-quoted “never touch a running system” takes on a completely different, bitter aftertaste. This is not only risky for the companies, but the affected employees also suffer. While they try to somehow maintain the shaky construct with a lot of energy, business departments become increasingly dissatisfied because changes take longer and longer, and accordingly continuously increase pressure to change.

In doing so, they deprive the affected employees of urgently needed time to shore up the worst weak points. This often results in a self-reinforcing vicious circle:

 

  • IT is becoming more and more complex, which is why changes are taking longer and longer.
  • Business departments are becoming increasingly dissatisfied, which is why they are increasing the pressure to deliver.
  • Changes have to be hastily cobbled together more and more frequently in order to meet the delivery pressure, which further increases complexity.

 

Well-meaning colleagues may come around the corner with more and more new concepts, tools, and technologies, because they heard whispers from a well-oiled hype sales industry that with technology X everything is guaranteed to be “finally really good”. Well, most of the time nothing will be good at all and the complexity tower will only increase by another floor.

For many, this feels like a Kafkaesque nightmare, which at some point can also affect your health.

 

Quite a few driving forces

But nobody actually wants that. Nobody wants more complexity. Actually, everybody prefers it to be simple: KISS. It’s cheaper, faster, easier to change, and more fun. But if nobody ever wanted it, how did we get here?

So let’s take a few steps back and take a sober look at the situation. What driving forces have led to this complexity, and what can we do about it? If we do that, we find – not surprisingly – that the drivers are also complex and influence one another:

 

  • It starts with the fact that market conditions have changed, but people at the corporate level mistakenly react with “more of the old tried and true”.
  • This is massively reinforced by the fact that most people – not limited to, but especially decision-makers (even within IT) – do not really understand IT per se or software’s special features compared to other “materials”.
  • Nor do they understand what role IT currently has in value creation (keyword: digital transformation) and thus, they try to save every last cent in pointless places since IT is regarded as mere cost centers.
  • It continues with misunderstood and dutifully but incorrectly copied “Agile” or “DevOps” cargo cults that increase complexity without delivering added value.
  • Architectures from hyperscalers are copied senselessly, even if the requirements and challenges are actually completely different from those of the hyperscalers.
  • Large new architecture initiatives are regularly launched to “finally bring IT completely in line”, but in reality they always fail halfway through.
  • Ideological OSS debates coupled with obscure vendor lock-in discussions with obviously misunderstood TCO models do not make it easier either, but they mostly lead to more unnecessary complexity. Entire articles could be written on the nonsense of statements such as “OSS costs nothing” or “With OSS, I have no lock-in” – that would go far beyond the scope of this text.
  • Nor is the situation improved by the escapist quality theater frequently observed in developer circles. This arises, among other things, from the fact that most companies have systematically robbed developers of all intrinsic motivational factors. Therefore, they (understandably) reach for substitute sources of affirmation to maintain their self-esteem – unfortunately often with almost religious zeal. This may feel good for the person in question, but it usually overshoots the mark. In the end, the complexity has grown much more than the added benefit.
  • Last but not least, we compound our problems in IT with our almost pathological obsession with youth. Unfortunately, we regularly apply “new is good, old is bad” to knowledge that would be worth preserving as an IT body of knowledge – just as real engineering disciplines do. Instead, we regularly throw away our painstakingly acquired knowledge in order to chase after the next silver bullet on the horizon – because it is newer and therefore, it must be better. Welcome to the next layer of complexity!

 

Each of these drivers (and a few more not mentioned here) is worth at least a whole article. For space reasons, I will limit myself to outlining only one of the driving forces.

 


 

Example: Copying Hyperscaler Architectures

Various hyperscalers (Amazon, Netflix and co.) encountered difficulties that were very specific to them. They had problems regarding delivery speed and scaling that no one before them had in this form. For this reason, they had to completely rethink and redesign many parts of their IT. One of the things that came out of this was the concept of microservices because, in combination with a whole range of other measures, it helped them test new ideas much more quickly and address their scaling problems more effectively.

But they also learned that microservices come at a high price. They had to deal with all the complexity of distributed systems at the application level. That basically meant they had to learn to design, implement, test, deploy, and operate applications very differently. For reliable operations in particular, they typically invested hundreds or thousands of person-years to build and operate the infrastructure required to do so. Nevertheless, it was worth the effort for them because, in their particular situation, the benefits outweigh the drawbacks.
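To make "the complexity of distributed systems at the application level" concrete, here is a minimal sketch (all names and parameters are invented for illustration): a lookup that is a plain function call in a monolith becomes, in a microservice architecture, a remote call whose caller must handle timeouts, retries, and fallbacks itself.

```python
import time

# In a monolith, fetching a customer is an in-process call:
# it either returns or raises; no network is involved.
def get_customer_monolith(customers, customer_id):
    return customers[customer_id]

# Across a service boundary, the same lookup becomes a remote call.
# Now the caller must handle timeouts, retries with backoff, and a
# fallback for when the service stays down -- concerns that simply
# do not exist inside a monolith.
def get_customer_remote(fetch, customer_id, retries=3, backoff=0.01):
    for attempt in range(retries):
        try:
            return fetch(customer_id)  # network call: may time out
        except TimeoutError:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    # Fallback: degrade gracefully instead of crashing the caller.
    return {"id": customer_id, "name": "<unavailable>"}
```

This is only the tip of the iceberg: real systems additionally need circuit breakers, idempotency, distributed tracing, and much more – which is exactly the hidden price the hyperscalers knowingly paid.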

Then microservices became popular. Hyperscaler engineers started talking about their concepts; they were justifiably a bit proud of what they had created. This is what IT staff at traditional companies heard: faster delivery, scalable, and cool. That’s what they wanted, too. Speed was a sore point anyway – business departments were constantly breathing down their necks. And services that are just small enough for a single person to understand? Considering the accumulated complexity in their own company, this sounded like the long-awaited silver bullet.
When someone said “reusable” (the promise made again and again for decades in comparable contexts and never kept, that all necessary investments would pay off in the shortest possible time), there was no stopping it. From then on, new projects without a microservice architecture practically became a thing of the past.

Unfortunately, nobody stopped to ask whether microservices even solve the problems in question. In the end, you pay a high price for them (see above). So you’d expect there to be careful consideration of whether it’s worth it.

Unfortunately, in my experience, there never was. There was a lot of talk about scalability, even though practically no company has scaling requirements that cannot be met with much simpler architectures. There was a lot of talk about “simpler” without considering that such architectures are significantly more complex overall. There was a lot of talk about easier technology migration, without considering that the existing migration problems are only to a small extent due to the architecture styles used. And so on.

What remains, if you are completely honest? For most companies, almost nothing except a great deal of additional complexity due to unthinkingly copying the hyperscalers’ concepts. Most problems could have been solved using much simpler means.
By the way, this does not mean that microservices are fundamentally bad. Quite the opposite: they are a great architectural style for a specific class of problems. Applied outside that class, however, all that remains in the end is a great deal of additional complexity that doesn’t really solve any problems.

 

Breaking the circle

That was one driving force briefly outlined. As described above, there are many other drivers in addition to this that influence and reinforce each other, and it is not uncommon for us – with the best of intentions – to pile on more complexity. What can we do to make it better?

I don’t think there is a simple answer to this question. If there were, we would have (hopefully) found and implemented it long ago. What I’d like to offer instead are a few recommendations that are not a panacea, but from my experience offer a good starting point:

  • Ask “Why?”: When someone comes around the corner with a complicated-looking solution, ask “Why?”. Why this solution? What are we trying to accomplish? How does the solution help solve the problem? Could it be done differently? Could it be simpler? The question “Why?” creates focus. Far too often we do things more or less reflexively without asking why we do them. A critical “Why?” at the right time can prevent many unnecessarily complex aberrations.
  • Better advice: A great deal of complexity arises from the fact that decision-makers are unable to properly assess the consequences of their decisions, either due to a lack of knowledge regarding various facets of IT or due to recommendations from questionable “experts”. Here it is our task to offer these decision-makers alternatives and to work out and communicate their advantages and disadvantages clearly and objectively – in the language of the decision-makers and not in cryptic IT-speak. In my experience, this works wonders.
  • Think holistically: An IT solution always affects many stakeholders, and what is an advantage for one group may be a disadvantage for another. It is therefore important to evaluate solution ideas not only from the developer’s point of view, but to also consider any other groups affected, e.g., operations, end users, business, and so on. Often, such a holistic view reveals that a solution that appears attractive locally has serious disadvantages and often involves a great deal of additional complexity.
  • Know your options: In order to weigh different solution ideas against each other, you first have to know more than one option. If I always stay in the same familiar corner, then I will always have only one solution idea. The tools may vary in detail, but it’s actually always the same solution. That’s why it’s important to know different options without succumbing to the urge of wanting to use them all. For example, you can use a Java/OSS stack – probably all of us know this. But what about low-code solutions, for example? When would they be a viable option? When not? What about using a managed service instead of building the solution yourself? I can only include this in my considerations if I have also taken an unbiased look at the options.
  • Critically question hype: That should be obvious. However, we are fighting against a marketing industry that is well lubricated with millions of euros and always wants to sell us the latest miracle cure. Promises abound on polished websites, each courting our favor with a simple “Get started”. The developer advocate tells us about a perfectly packaged, wonderful future. An entire industry wants us to jump on the hype train. It is often difficult to maintain the necessary critical distance in order to soberly work out the benefits and the price in relation to the concrete problem – but we have to do this if we don’t want things to become ever more complex.
  • Apply Occam’s razor: If we have several options for solving a problem and the simplest of the solutions has no obvious disadvantages, we should choose it. Humans have a so-called “complexity bias”, i.e., we tend to prefer more complex solutions to simple ones. In IT, this bias seems to be very pronounced and often creates additional complexity. So the next time you come up with an idea for a solution, maybe you should take a razor to the problem and ask whether there might be a simpler solution.

 

As I said, these are not panaceas and they are often not as easy to implement as they may seem at first glance. However, when applied with a sense of proportion (please, no dogmatism, that always causes more harm than good), these recommendations can definitely help avoid a lot of unnecessary complexity.

In fairness, however, a brief warning: This kind of thinking is not popular. It leads to unspectacular solutions. You can’t build monuments with it. It goes against the widespread understanding of “good” (read: complex) solutions – not to mention the hype industry, which thrives on ever-new complexity. Therefore, despite all superficial approval, don’t expect to be pushing at open doors. Expect a lot of resistance, both hidden and open.

 

 

Conclusion

Let’s summarize briefly: We can observe that IT is becoming increasingly complex and increasingly indispensable at the same time. This leads to more and more problems in the development and operation of applications. At the same time, the pressure to deliver from business departments is getting stronger and stronger, leading to a self-reinforcing vicious circle that has negative consequences for the people involved, up to and including health problems.

The reasons for this ever-increasing complexity are also complex and range from incorrect reactions to changing market requirements to unrealistic compensatory actions at the developer level. Often, a lack of understanding of the consequences of decisions leads to the accumulation of more and more complexity, despite our good intentions.

I have made a few recommendations that, when applied with a sense of proportion, can help you better recognize and avoid unnecessary complexity. However, they are not a panacea and, given the low esteem in which simple solutions are held in the IT industry, they will probably meet with resistance.

Nevertheless, the following applies: Too much complexity is no fun. That’s why we should all work on doing something about too much complexity. Nobody really wants to work without fun!
If you are looking for more information on the statements from this article, you can read the blog series I wrote about it [1]. The series goes much deeper into the various aspects of excessive complexity and also provides references to further material.

 

Links & Literature

[1] Friedrichsen, Uwe: “Simplify! – Part 1”: https://www.ufried.com/blog/simplify_1/
