The conversation about AI (Artificial Intelligence) and enterprise architecture has developed a predictable shape. Vendors describe capabilities that sound transformative. Practitioners try to evaluate claims against the actual mechanics of their daily work. The gap between the two is usually larger than either party acknowledges, and organizations that try to close it too quickly end up with AI implementations that require more human correction than the work they replaced.
This is an honest account of where AI agents add genuine, near-term value in enterprise architecture — and where the claims exceed the reality.
What AI Agents Are Actually Good At in EA
Drawing on research NovoCircle published in the Intelligent Automation for Enterprise Architecture whitepaper, four areas of EA (Enterprise Architecture) work are well suited to AI augmentation. They vary in maturity and in how many organizational prerequisites they require.
Extracting Architecture Content From Unstructured Data
A significant portion of what architects do every week is transcription: taking information gathered from interviews, system documentation, technical specifications, or operational data sources and entering it into the repository. This is necessary work and it does not require architectural judgment. It requires time, attention, and familiarity with the repository’s element types and naming conventions.
AI agents are genuinely good at this. Retrieval-based agents grounded on company documents can extract architecture content — applications named, integrations described, infrastructure referenced — and translate it into structured repository entries. Deterministic agents connected to source databases can query operational systems directly and generate element candidates from the data they return. Inference agents can discover relationships within data sets that would take a human analyst days to identify manually.
The output of these agents still requires architect review. Correctness of the content — whether an extracted application is correctly typed, whether an inferred relationship accurately reflects the actual dependency — is a judgment call that automation does not reliably make. But the reduction in time spent on initial population is real and significant.
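A minimal sketch of the deterministic pattern described above: an agent queries an operational data source and proposes repository element candidates for architect review. The table name, column names, and element fields here are illustrative assumptions, not the schema of any particular EA tool.

```python
import sqlite3

# Hypothetical CMDB-style table of deployed services (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE deployed_services (name TEXT, owner TEXT, depends_on TEXT)"
)
conn.executemany(
    "INSERT INTO deployed_services VALUES (?, ?, ?)",
    [
        ("billing-api", "finance-it", "customer-db"),
        ("customer-db", "data-platform", None),
    ],
)

def element_candidates(conn):
    """Translate operational records into repository element candidates.

    The agent only proposes: every candidate is flagged pending-review so
    an architect confirms the element type and any inferred relationship.
    """
    candidates = []
    for name, owner, depends_on in conn.execute(
        "SELECT name, owner, depends_on FROM deployed_services"
    ):
        candidate = {
            "type": "Application",  # proposed type -- an architect must confirm
            "name": name,
            "owner": owner,
            "relationships": [],
            "status": "pending-review",
        }
        if depends_on:
            candidate["relationships"].append(
                {"type": "depends-on", "target": depends_on}
            )
        candidates.append(candidate)
    return candidates

for c in element_candidates(conn):
    print(c["name"], c["status"])
```

The design point is the `pending-review` status: the agent reduces the time spent on initial population, but correctness of typing and inferred dependencies stays a human judgment call.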
Scaling Architecture Analysis
Architecture analysis is the practice of explaining how the various facets of an organization, system, or ecosystem relate to each other and assessing the impact of proposed changes on different components. Business and IT (Information Technology) leaders look to architects for help in this area because of their skill at breaking complex situations into parts that can be understood.
This is also the area where AI agents offer the most dramatic scale improvement, and the comparison is direct: manual analysis by an architect is constrained to three to five facets of an ecosystem at a time and takes weeks to months. AI-enhanced analysis covers enterprise-wide data and takes minutes.
What makes this possible is not that AI understands architecture better than architects do. It is that AI can process vastly larger datasets than a human can hold in working memory at once. For analysis tasks that are primarily about identifying patterns across large datasets — application portfolio health, redundancy identification, dependency mapping at scale, technology landscape reporting — AI agents are better suited than human analysts by a wide margin.
The boundary is the same as in modeling: AI can process and pattern-match at scale. AI cannot determine whether the patterns it identifies are significant, whether the impact it identifies is acceptable, or what the organization should actually do about what it found. Those are judgment calls.
Moving Governance Completeness Checks Upstream
As described more fully in the governance post in this series, AI can be used to automate the completeness checking that currently consumes early portions of architecture review sessions. Whether an element has all required properties populated, whether relationship types conform to established standards, whether a submission meets the structural requirements for review — these are mechanical questions with objectively correct answers that do not require senior architect time.
Automating these checks at the point of model creation — rather than at the point of review — means that the review session can focus on content and reasoning rather than mechanics. This is where the productivity impact on senior architects is most direct.
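The mechanical nature of these checks is what makes them automatable. A sketch under assumed standards — the required-property sets and allowed relationship types below are placeholders; real standards live in the organization's metamodel:

```python
# Illustrative standards -- a real deployment reads these from the metamodel.
REQUIRED_PROPERTIES = {
    "Application": {"name", "owner", "lifecycle_status"},
    "Integration": {"name", "source", "target", "protocol"},
}
ALLOWED_RELATIONSHIP_TYPES = {"depends-on", "realizes", "serves"}

def completeness_issues(element):
    """Return mechanical completeness findings for one repository element.

    These are the objectively checkable questions -- missing required
    properties, non-standard relationship types -- not judgments about
    whether the architecture described is sound.
    """
    issues = []
    required = REQUIRED_PROPERTIES.get(element.get("type"), set())
    for prop in sorted(required - element.keys()):
        issues.append(f"missing required property: {prop}")
    for rel in element.get("relationships", []):
        if rel.get("type") not in ALLOWED_RELATIONSHIP_TYPES:
            issues.append(f"non-standard relationship type: {rel.get('type')}")
    return issues

draft = {
    "type": "Application",
    "name": "billing-api",
    "relationships": [{"type": "talks-to", "target": "customer-db"}],
}
print(completeness_issues(draft))
```

Run at the point of model creation, checks like these surface the mechanical defects before the review session ever convenes — which is exactly the shift from review-time to creation-time checking described above.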
Generating Supporting Documentation
Architecture review boards require documentation: summaries of current state and target state, identification of proposed changes and their affected elements, impact analysis across dependent components. Generating this documentation manually is time-consuming and produces inconsistent output depending on who wrote it.
AI can generate this documentation from repository data. Given a well-maintained repository, an AI agent can produce a review packet that describes what exists, what is proposed to change, what the affected elements and dependencies are, and what the impact analysis reveals — consistently, and in a fraction of the time a human would spend assembling the same information.
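The assembly step is simple enough to sketch. The field names and packet layout below are hypothetical; the real shape comes from the repository schema and the review board's template:

```python
def review_packet(current, proposed, dependents):
    """Assemble a plain-text review-board packet from repository data.

    `current` and `proposed` are element dicts describing current state and
    target state; `dependents` lists elements that reference the one being
    changed. All field names are illustrative.
    """
    changed = sorted(
        key
        for key in set(current) | set(proposed)
        if current.get(key) != proposed.get(key)
    )
    lines = [f"Review packet: {current['name']}", "Proposed changes:"]
    for key in changed:
        lines.append(f"  {key}: {current.get(key)!r} -> {proposed.get(key)!r}")
    lines.append("Impacted dependents:")
    for dep in dependents:
        lines.append(f"  {dep['name']} ({dep['relationship']})")
    return "\n".join(lines)

current = {"name": "billing-api", "protocol": "SOAP", "owner": "finance-it"}
proposed = {"name": "billing-api", "protocol": "REST", "owner": "finance-it"}
deps = [{"name": "invoice-portal", "relationship": "depends-on"}]
print(review_packet(current, proposed, deps))
```

Because the packet is derived mechanically from repository state, its consistency is a property of the code, not of whoever wrote it — and its accuracy is a property of the repository, which is the caveat that follows.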
The caveat, as always: the quality of the output depends entirely on the quality of the repository. AI-generated documentation from a poorly maintained repository is wrong quickly and confidently. This is one of the reasons repository quality is a prerequisite, not an afterthought.
What AI Agents Are Not Good At in EA
The boundary between what AI augments and what it cannot replace in enterprise architecture is not about technical capability — it is about the nature of judgment.
Architecture Is Not Just Analysis — It’s Translation
The most valuable thing architects do is not modeling or analysis. It is translating between the business world and the technology world: facilitating the interactive conversations that produce architectural decisions, helping stakeholders understand the implications of choices they are considering, and eliciting from those stakeholders the requirements and constraints that make good architecture possible.
This is the human-to-human work where the real value of architecture comes from. Generative AI can certainly support architects in preparing for these conversations — generating summaries, producing background materials, drafting questions. But it is not a substitute for the conversation itself. The iterative back-and-forth of a working session, where an architect’s understanding of a business problem deepens through dialogue and an executive’s understanding of the technology implications sharpens through questions, cannot be replaced by a language model generating a transcript of what the conversation might have contained.
Architecture teams that attempt to replace stakeholder engagement with AI-generated content find that the content is technically accurate and organizationally ignored — because stakeholders engage with architecture when an architect engages with them, not when they receive a document.
Determining Whether Architecture Is Correct
Automation can check whether a model is complete. It cannot determine whether the model is correct — whether the architectural decision is sound, whether the tradeoffs were appropriately evaluated, whether the approach will create problems three years from now that nobody is anticipating.
This distinction matters practically. An AI agent that checks whether all required properties of an element are populated is doing completeness checking. An AI agent that assesses whether the integration approach described in those properties is architecturally appropriate is making a correctness judgment that it is not qualified to make. The architectural review board exists for the second type of question, not the first.
Organizations that confuse these two categories — either by expecting automation to do correctness checking, or by allowing automation to produce outputs that stakeholders treat as correctness assessments — make expensive mistakes.
Governance Without Data Quality
AI-powered governance and stakeholder engagement capabilities depend entirely on the quality and consistency of the repository data they operate on. An AI agent that queries a repository full of inconsistently named elements, incomplete relationships, and outdated models produces outputs that are wrong in ways that are harder to detect than the original manual outputs were.
This is not a reason to avoid AI augmentation. It is a reason to sequence it correctly. The organizations that get the most value from AI in EA are the ones that invested in repository quality first — that built Stage 3 and Stage 4 capability, maintained governance, and produced a repository that is trustworthy as a data source before they tried to use it as an AI substrate.
Organizations that try to use AI to compensate for poor repository quality discover that AI cannot fix a data quality problem. It can only propagate it faster.
How to Think About Sequencing
The most practical framework for sequencing AI augmentation in EA — drawn from the NovoCircle Intelligent Automation for Enterprise Architecture whitepaper — is to start where the highest concentration of manual effort exists and where the data quality prerequisite is lowest.
Current state modeling (AI-assisted extraction from documents and operational systems) is the right starting point for most organizations. It addresses the area where architects spend the most time on manual tasks, and it begins to build the data quality foundation that makes later automation more valuable.
Architecture analysis is the second area, not because it is less important, but because analysis quality depends on model quality. A well-maintained application inventory unlocks meaningful AI-enhanced portfolio analysis. A partial inventory does not.
Governance automation follows modeling stabilization. The completeness checks are only as good as the standards they check against, and those standards need to be consistently defined and applied before automation is worth deploying.
Stakeholder engagement augmentation — connecting the architecture repository to the organization’s broader AI ecosystem — is last, not because it is unimportant, but because it surfaces data quality issues to the widest audience. It should be deployed on a repository that is ready to answer questions accurately.
What This Means for Your Practice
AI augmentation of EA workflows is real, available now, and meaningfully different from the AI hype that surrounds most technology discussions. The productivity gains are not speculative — they are measurable in the reduction of hours architects spend on transcription, the acceleration of analysis tasks from weeks to minutes, and the recovery of senior architect time from completeness checking.
The work required to capture those gains is also real. Repository quality is a prerequisite. Governance is a prerequisite. The modeling standards that make automation meaningful have to exist before automation can be deployed.
If your organization is evaluating where AI augmentation could have the most impact on your EA practice, book a discovery call. The assessment starts by locating your practice on the maturity arc and identifying which AI investments your current foundation actually supports.
The full framework is available in the Intelligent Automation for Enterprise Architecture whitepaper and The Long Arc: Six Stages of EA Evolution.
Ryan Schmierer is the founder of NovoCircle, a technology advisory practice specializing in Modern Enterprise Architecture and Intelligent Automation.