Grown platforms produce grown problems: dependencies nobody fully understands anymore, processes that contradict each other, friction that slows developers down and eventually wears them out. Russell Miles has seen this in dozens of organizations – and he has a name for it: the Franken-platform. With his keynote “How Developer Platforms Fail (And How Yours Won’t)” he brings that diagnosis to DevOpsCon Berlin (June 15-19, 2026).
Platform Engineering is the answer: taking that grown bundle of tools and shaping it deliberately, with concrete goals for Developer Experience – for the people who work with it every day, not for the architecture itself. Many organizations have gone down this path – and genuinely reduced a lot of friction in the process.
But a platform alone doesn’t solve every complexity problem. Even well-intentioned platform initiatives may come under pressure – from growth, from organizational friction, from new technologies such as AI, from new requirements that weren’t foreseeable when the platform was first built. Four of those pressure points are worth looking at closely.
Scaling Without Structure Produces the Wrong Platform
In small teams, a lot works implicitly. Alignment happens in conversation, shared assumptions never get written down because everyone already knows them. Platforms can stay lean under these conditions – and that’s not a weakness, that’s efficiency.
As teams grow, that dynamic reverses. What was never made explicit becomes a source of friction. And the first place that shows up is cognitive load. When developers start building their own shadow solutions instead of using the platform, that’s not an adoption problem. It’s the clear indicator that the platform has failed at its actual job.
Platform architecture and team structure are not independent decisions. Conway’s Law describes this as a rule, not a tendency – systems mirror the communication structure of the organization that built them. Fragmented teams produce fragmented platforms. Organizational silos become architectural silos. Ignore that, and growth gives you a platform that just reflects your organizational problems instead of solving them.
The answer is the Inverse Conway Maneuver: don’t accept team structure as a given, but design it deliberately to produce the platform architecture you actually want. Shared infrastructure belongs at the center; ownership of specific components stays distributed. This is not a technology question. It’s an organizational decision – and it needs to be made before growth makes it for you.
What that looks like under real growth pressure is what Siddharth Vijay covers in his DevOpsCon Berlin session “From 30 to 120 Engineers: How Platform Engineering Scales Teams & Technology”. Not a success story – a report on the decisions that made the difference.
Good Governance Doesn’t Need a Gatekeeper
Most platform teams build governance the way they know it: as a review checkpoint. A team wants to deploy something, integrate a new service, change a data structure – and waits for approval. That works as long as the organization is small enough for the checkpoint to stay on top of things. As the number of teams, systems, and decisions grows, the same structure becomes a bottleneck.
Anna Lavrova calls this a control reflex – not malice, but a systemic response to fear. Fear of outages, compliance breaches, blame. The governance structure that emerges isn’t poorly designed. It was designed for a different organizational size – and never updated.
The alternative she describes: architecture not as a control point, but as a system of constraints and interfaces that guides teams without micromanaging them. When a team starts something new, the first question should be – do we already have a process for this? A data model? A scaffold? And the answer should be findable before the question is asked. Not sitting in an architect’s inbox.
Governance that delivers this doesn’t need a central approval body. Teams move through it on their own.
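A minimal sketch of what that self-service shape could look like in practice – a request is checked against a catalog of existing scaffolds and a set of explicit constraints instead of waiting in an architect’s inbox. All names here (the catalog, the rule fields) are illustrative, not a real platform API:

```python
# Illustrative sketch: governance as findable constraints, not a review queue.
# The scaffold catalog and constraint rules below are hypothetical examples.

SCAFFOLDS = {
    "new-service": "templates/service-skeleton",
    "new-dataset": "templates/data-model",
}

CONSTRAINTS = [
    # Each rule is a name plus a predicate over the request.
    ("owner", lambda req: bool(req.get("owner"))),
    ("tier", lambda req: req.get("tier") in {"internal", "public"}),
]

def check_request(kind, request):
    """Answer 'do we already have a process for this?' without a gatekeeper."""
    scaffold = SCAFFOLDS.get(kind)
    if scaffold is None:
        return {"allowed": False, "reason": f"no scaffold for {kind!r}"}
    violations = [name for name, rule in CONSTRAINTS if not rule(request)]
    if violations:
        return {"allowed": False, "reason": f"constraints failed: {violations}"}
    return {"allowed": True, "scaffold": scaffold}
```

The point of the sketch is the shape, not the code: the answer is computable before the question is asked, and a rejection names the missing constraint rather than deferring to a person.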
Lavrova brings this argument as a keynote to DevOpsCon Berlin – “From Gatekeepers to Enablers: Rethinking Architecture in Agile & DevOps”. Not a case for less control, but an argument for better structure.
A Tool Collection Is Not a Platform
Developers make dozens of small decisions every day that the platform should be making for them. Which tool, which pipeline, which pattern? That’s not a sign of developers being overwhelmed – it’s a sign of a platform that isn’t doing its job of providing orientation.
Tool collections don’t emerge from bad decisions. They emerge from incremental, locally sensible decisions that nobody ever evaluated for their combined effect. Eventually teams notice they’re spending more time understanding the platform than using it – and the path of least resistance leads around it.
Mark Boyd, whom we recently interviewed about this on the podcast, reduces it to a single question: imagine the newest developer on your team starts today. Can they hit the ground running – as fast as they could with an external SaaS tool? If not, the platform isn’t a product yet. It’s a system that demands onboarding instead of eliminating the need for it.
The anti-pattern he describes is concrete: an organization sets a goal to expose 98 percent of its capabilities as APIs. It becomes a KPI. Teams work toward it. Then a customer wants to use one of those APIs – and gets the answer: available Q4 2027. The platform was treated as an internal modernization project, not as an offering for real users. The KPI beat the value.
The way out runs through a clear product perspective: who are the users, what do they need, and how do we measure whether the platform actually delivers that? At DevOpsCon Berlin, Eduardo Piairo addresses this transition directly in “Building a Platform, Not a Tool Collection” – drawing on the experience of a company that is right in the middle of it.
AI Agents Need Platforms That Make Context Machine-Readable
The conversation about AI in platforms focuses too much on models. The real shift is elsewhere: agents are not artifacts you deploy and forget. They are autonomous actors that access services, make decisions, and intervene in live systems – with minimal human involvement. What that means when the platform sets no boundaries is what DevOps legend John Willis shows in “When AI Agents Go Rogue” – through real incidents, not hypothetical scenarios (also at DevOpsCon Berlin).
This confronts platforms with a requirement that classical platform engineering never faced: context must be machine-readable. What agents need to act safely has until now been written exclusively for humans – documentation, permission concepts, system boundaries. Platforms that can’t provide this either force manual intervention or produce uncontrolled behavior.
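What “machine-readable context” could mean in the simplest case: the boundaries that today live in prose – documentation, permission concepts, system scope – expressed as data the platform can check before an agent acts. A deliberately tiny sketch, with a hypothetical manifest format and action names:

```python
# Illustrative sketch, not a real agent framework: a machine-readable
# manifest encodes what one agent may touch; the platform checks it before
# every action. Field names and values are hypothetical.

AGENT_MANIFEST = {
    "agent": "docs-bot",
    "allowed_actions": {"read", "summarize"},
    "allowed_systems": {"wiki", "runbooks"},
}

def authorize(manifest, action, system):
    """Deny by default: the platform, not the agent, sets the boundary."""
    return (action in manifest["allowed_actions"]
            and system in manifest["allowed_systems"])
```

The design choice that matters is deny-by-default: an action outside the manifest fails closed instead of falling back to human intervention after the fact.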
The right entry point is where failure is still contained. Internal documentation workflows are a sensible starting place. Onboarding and incident triage follow once confidence in controllability has grown. Not because the technology isn’t applicable elsewhere – but because iterative adoption is the only way to build governance and trust before agents start influencing critical decisions. What this means strategically for platform engineering as a discipline is what Engin Diri addresses in “What is AI Platform Engineering and Why Should You Care?”
Platform as Product – Not as Infrastructure Project
The four stress factors share a common root. Growth, governance, tool chaos, AI agents – they all escalate in the same place: where the platform was built as an internal system, not as a product for real users.
A platform that wasn’t built as a product has no user in mind – it manages technical components. Who deploys it, who works with it every day, what those people actually need: none of that was ever made a design question. That’s what produces the Franken-platform.
Russell Miles has asked that question in dozens of organizations. With his keynote he brings those findings to DevOpsCon Berlin. For those who want to go deeper, there’s the option to work with him in the two-day workshop “Mastering Platform Engineering – From Design to Value” – a working session on the decisions that actually make the difference – with someone who has made them.
These are the questions at the center of DevOpsCon Berlin – with Russell Miles, Anna Lavrova, John Willis as keynote speakers, and many others who don’t theorize about platform engineering but practice it in real organizations every day. I’m looking forward to continuing this conversation with you there.
🔍 FAQ
1. What exactly is a "Franken-platform" and how do I know if I have one?
A "Franken-platform" is a collection of tools and processes that grew organically over time without deliberate design. You likely have one if your developers face high cognitive load, deal with contradictory processes, or frequently build "shadow solutions" because the official platform is too high-friction to use.
2. How can we scale our engineering team without the platform falling apart?
The key is the "Inverse Conway Maneuver." Instead of letting your platform mirror your current organizational silos, you must deliberately design your team structures to promote the architecture you want. By centralizing shared infrastructure while keeping component ownership distributed, the platform becomes a scalable foundation rather than a reflection of organizational friction.
3. Does platform governance always mean adding more "gatekeepers"?
No. Effective modern governance shifts away from manual review checkpoints, which inevitably become bottlenecks. Instead, it uses architecture as a system of "constraints and interfaces." By providing pre-approved scaffolds and data models, teams can move through the process autonomously, finding answers within the platform rather than waiting for an architect's approval.
4. How does the rise of AI agents change the requirements for platform engineering?
AI agents act as autonomous users that intervene in live systems. To manage them safely, platforms must make context—such as documentation, permissions, and system boundaries—machine-readable. Without this structured metadata, AI agents cannot act reliably, leading to uncontrolled behavior or "rogue" incidents.