Why Agile Transformations Fail (And It Is Rarely Because of Agile)
At some point in the last decade, the Agile transformation consultancy became one of the more reliable growth industries on record. Not because transformation was succeeding, precisely, but because it kept needing to be done again. Depending on which survey you read, somewhere around 97% of organisations now use Agile methods. The other 3% are presumably still waiting for a business case to be approved. Yet despite this near-universal adoption, the gap between investment and claimed impact has, by most honest accounts, got wider rather than narrower. The transformation market is now worth tens of billions globally and still appears to fail more often than it succeeds, which is either a remarkable indictment of the methodology or a remarkable indictment of the organisations applying it. The obvious question is which, and the less obvious question is whether the failure is quite the kind of failure most people think it is.
The theatre problem
The explanation that circulates most often is agile theatre: ceremonies without principles, and leadership that approves the initiative without changing anything about its own behaviour. It is not wrong. We have run enough training days to confirm that the recognition in the room, when you describe a standup that is really a status report, or a retrospective that has been producing the same three action items every fortnight for a year and a half, is immediate and slightly pained. Everyone knows, but nobody has said it out loud.
The "operating system" metaphor that transformation literature reaches for is slightly corporate in register, but it captures something genuine. Many organisations layer Agile ceremonies onto power structures that have not shifted at all: decision latency, approval chains, incentive models, and demand management remain fundamentally unchanged. Jira fills up, standups happen, velocity gets measured with some rigour, and the organisation declares itself Agile and waits for something to improve.
What tends to improve, mostly, is the reporting.
The incentive problem nobody wants to discuss
Here is the thing the standard transformation narrative consistently underplays: most of what looks like cultural resistance is actually rational behaviour responding quite sensibly to the environment it is in. If leaders are penalised for forecast variance, they will demand certainty. If middle managers survive on local optimisation and utilisation metrics, they will resist flow-based thinking. If procurement cycles take nine months, no volume of retrospectives will create genuine responsiveness.
Transformation conversations frequently reach a point labelled "we need a mindset shift," at which they tend to stall indefinitely, not because mindset is unimportant but because mindset language has a habit of psychologising structural problems. The manager who insists on fixed-scope commitments is not, in most cases, a person who needs re-educating in empiricism; they are a person responding quite logically to an environment that still punishes them for volatility. Changing the mindset without changing the environment is approximately as effective as asking someone to think warm thoughts in a cold room.
Middle management is where most Agile transformations go to quietly expire, and it is usually not because middle managers are unusually resistant to change. It is because they are left in a vacuum. Executives approve the initiative, teams get trained, and the layer of people who actually control budgets, performance reviews, recruitment, and day-to-day prioritisation find themselves with a new vocabulary and no clearer understanding of what their role now is. In our experience, the resulting uncertainty produces something that looks like resistance from the outside but is really just self-preservation in an ambiguous environment. Give people clarity and a reason to back the change, and most of them will. Leave them guessing, and they will default to whatever got them promoted.
The failure mode nobody discusses at all
There is a pattern that gets even less airtime than the incentive problem, and it is arguably the more interesting one.
A great many "failed Agile transformations" are not failures of commitment or execution but of categorisation. Organisations apply a single operating model across radically different types of work: product discovery, infrastructure support, regulated delivery, vendor coordination, operational response, platform engineering, long-cycle science or engineering, procurement-heavy change programmes. SAFe or Scrum gets rolled out uniformly, reporting symmetry is achieved, and the organisation develops a coherent story about how it works.
What it frequently loses in the process is contextual intelligence. Teams become Agile-shaped without necessarily becoming more adaptive to the specific demands of their actual work. The framework imposes a common vocabulary over fundamentally incompatible delivery realities, and what looked like alignment turns out to be homogenisation, with some of those teams needing to change in almost the opposite direction from others. Applying the same model to all of them produces a tidy org chart and a confused delivery organisation.
This complicates the standard transformation narrative rather badly, because that narrative tends to assume there is a coherent "successful Agile state" that every organisation is failing, through insufficient courage or commitment, to reach. Sometimes the failure is not insufficient commitment to the destination but the wrong destination for that part of the organisation, which is a harder conversation to have with a consultancy that has already sold you the framework.
And then there is AI
A significant number of organisations are currently in the process of placing generative AI infrastructure on top of delivery systems that were not working especially well beforehand, and the promise is acceleration. The risk, which is not discussed with quite the frequency it deserves, is that they are about to accelerate the production of things nobody needed, from processes that were already broken, at lower unit cost and higher volume than before.
Fragmented governance does not become coherent when it is generated by a language model at scale. An overloaded portfolio does not become focused because each item now has a better-written description. The organisations that struggled to maintain meaningful product thinking in an Agile context are not obviously going to find it easier when the feature factory runs faster. There is a phrase for this that seems about right: the AI mirage. It looks, from a certain angle, like progress.
The uncomfortable conclusion
The most honest observation in this whole debate is probably this: a lot of organisations never actually wanted agility. What they wanted was the appearance of adaptability while preserving managerial predictability and control. Agile became a language layer over essentially unchanged systems, with standups giving the impression of responsiveness, retrospectives the impression of continuous improvement, and the Jira board the impression of transparency. And because appearances are not nothing (they are genuinely useful to certain audiences, including boards, investors, and recruitment candidates), the transformation was, in a limited sense, successful. It just was not the transformation that was advertised.
Transformation fatigue, that increasingly familiar phenomenon in which employees simply stop believing the next initiative will be meaningfully different from the last one, is the natural consequence. People are quite perceptive, and they notice, over time, the distance between the stated values and the actual incentives.
It has been said many times, and it remains true each time it is said: the hardest part of any Agile transformation is not teaching teams Scrum. Scrum is, relatively speaking, quite teachable. The hard part is persuading an organisation to genuinely decentralise judgement, expose its weak decisions earlier, and reduce the coordination overhead that exists largely to preserve the appearance of control. That requires changing what gets rewarded, what gets funded, and who gets listened to when something is not working.
Those are not technical problems but, to a considerable degree, political ones, and political problems have always been rather harder to resolve in a retrospective.