What We Learned by Evaluating Enterprise Architecture Across 30 Government Ministries
Sunil Dutt Jha


Over the last two decades, ICMG has evaluated Enterprise Architecture across more than 30 government ministries and public-sector bodies, spanning investment promotion, finance, industry, environment, transport, energy, health, education, and large national digital programs. These evaluations were not conducted as compliance audits, nor as reviews of EA documentation.
They were undertaken to answer one question consistently: how much real enterprise structure exists, and how reliably does it survive scale, policy change, and execution pressure?
Observation 1: Government EA Cannot Be Evaluated the Way Corporate EA Is
What became clear very early is that government EA cannot be evaluated the way corporate EA is. Ministries operate under continuous policy churn, overlapping mandates, regulatory interdependence, and political timelines.
Any EA maturity assessment that does not account for these forces quickly becomes theoretical. As a result, ICMG’s evaluations focused on observable anatomical behavior—how initiatives were approved, how logic was embedded in systems, how exceptions propagated, and how operational breakdowns were explained (or not explained) anatomically.
Observation 2: The Highest Consistent Maturity Observed Was EA Operating Inside IT
Across these 30 ministries, the highest consistent maturity observed was EA operating inside IT.
Governments had digital platforms, portals, shared services, interoperability layers, and EA units. IT Architecture boards existed. Standards were published.
In many cases, there were dozens—sometimes hundreds—of concurrent digital initiatives. Yet when these initiatives were examined anatomically, they rarely followed a single internal IT anatomy.
Approval logic was duplicated across systems, workflow sequencing differed by program, and data definitions diverged silently.
In maturity terms, even where EA existed formally, real IT-level anatomical consistency was only about 20–30%.
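To make the pattern concrete, here is a deliberately simplified sketch of what we kept finding. The systems, field names, and thresholds are hypothetical, but the shape of the failure is real: two systems in the same ministry each encode their own definition of the same concept, and the divergence stays invisible until the same applicant receives two different answers.

```python
# Hypothetical sketch: two systems in the same ministry, each encoding its
# own definition of "SME" for the same grant application. All names and
# thresholds here are invented for illustration.

application = {"turnover": 950_000, "employees": 48}

def system_a_approves(app):
    # System A (built first): an SME is any firm with turnover under 1,000,000
    return app["turnover"] < 1_000_000

def system_b_approves(app):
    # System B (built two years later): an SME has fewer than 40 employees
    return app["employees"] < 40

# Same applicant, same ministry, two different answers.
print(system_a_approves(application))  # True
print(system_b_approves(application))  # False
```

Multiply this by dozens of concurrent initiatives and the 20–30% figure stops being surprising.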
Observation 3: Each New IT Project Behaves as If It Is the First One
Across ministries, we repeatedly observed that each new IT project behaves as if no prior project exists. Workflows are redesigned, approval steps are reordered, rules are reinterpreted, and data definitions are recreated—often even when the same platforms and vendors are used.
In practice, what is called architecture compliance usually means only one thing: the project is using an approved vendor, platform, or technology stack. Architecture boards review whether the right tools are selected, not whether the project behaves the same way as existing systems.
This creates a dangerous illusion of consistency. Projects may run on the same technology, yet operate differently in how approvals work, how exceptions are handled, and how decisions are enforced. Over time, governments accumulate multiple digital systems that look aligned from the outside but behave differently in operations.
This is why, even where EA formally exists, IT-level consistency remains limited.
Architecture is treated as procurement control, not as a mechanism to ensure that new projects behave like extensions of what already exists.
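A second simplified sketch, again with invented step names and routing rules, shows how two programs can share a platform and still behave differently in operations. Both run the same three checks, but one fails fast while the other collects failures into a manual exception queue, so the citizen-facing outcome diverges even though the technology stack is identical.

```python
# Hypothetical sketch: two programs on the same platform run the same three
# checks, but with different sequencing and exception handling. The step
# names and routing rules are invented for illustration.

CHECKS = ("eligibility", "document_check", "budget_check")

def program_one(app):
    # Fails fast: the applicant is rejected at the first failing step
    for step in CHECKS:
        if not app.get(step, False):
            return f"rejected at {step}"
    return "approved"

def program_two(app):
    # Collects every failure and escalates the case to a manual exception queue
    failures = [step for step in CHECKS if not app.get(step, False)]
    return "approved" if not failures else f"escalated for review: {failures}"

app = {"eligibility": True, "document_check": False, "budget_check": False}
print(program_one(app))  # rejected at document_check
print(program_two(app))  # escalated for review: ['document_check', 'budget_check']
```

Both programs would pass a typical architecture-board review, because both run on the approved stack.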
Observation 4: When Evaluation Extended Beyond IT, the Gap Widened Sharply
When we extended evaluation beyond IT into ministries themselves, the gap widened.
Ministries ran multiple schemes, reforms, and programs within the same functional domain, yet each initiative often introduced its own interpretation of process, approvals, data, and exceptions.
From an EA perspective, this means department-level architecture did not exist as a stable entity. There was no "one ministry, one anatomy."
Instead, there were multiple, parallel mini-anatomies competing inside the same mandate. This is why Level 2 EA—department-centric architecture—is largely absent as an operating reality in governments today.
Observation 5: Most Government Failures Occur at Intersections, Not Within Ministries
Attempts to unify ministries anatomically at the enterprise level exposed an even deeper truth.
Government failures were rarely due to poor execution within a single ministry. They emerged at intersections—where policy intent met regulation, where incentives met compliance, where approvals crossed jurisdictions, and where delivery depended on capacity that existed elsewhere.
Without a unified government-level anatomy, these intersections became negotiation zones rather than anatomical pathways.
Decisions were approved first and reconciled later, often publicly and expensively.
Why the ICMG EA Maturity Model Was Formalised
This is the context in which the ICMG EA Maturity Model was formalised. It was not designed as a theoretical ladder, but as a way to distinguish between what actually exists today and what requires deliberate elevation.
Level 1—EA inside IT—is the ceiling most governments currently operate under, and even there, structural completeness is partial.
Levels 2 and 3 are not “higher scores” to be achieved; they represent fundamentally different operating conditions that require governments to stop allowing scheme-by-scheme and ministry-by-ministry reinvention.
Observation 6: Scale Amplifies the Absence of Anatomy
One consistent insight from evaluating 30 ministries is that scale does not improve EA maturity; it amplifies the absence of anatomy.
Larger budgets, more schemes, and more digital programs increase the cost of anatomy inconsistency. Without a shared anatomy, governments accumulate approval delays, regulatory rework, incentive leakage, and post-launch correction cycles.
These are not governance problems. They are anatomical ones.
Why Rating and Elevation Must Be Separated
For this reason, ICMG separates EA maturity rating from enterprise anatomy elevation.
Rating answers what exists today—primarily inside IT, and only partially.
EA Elevation addresses what must be built if governments want policy, regulation, incentives, and delivery to operate as one system.
This distinction is essential, because treating maturity levels that barely exist in practice as if they were commonplace only obscures the real work required.



