The AI Architecture Series – Part 1
Walk into any industry conference right now, and you’ll hear the same words in every session: AI assistant, copilot, agent, MCP, skill, connector, model. Every vendor promises their version is the most powerful, most intuitive, most production-ready. The result is noise, not clarity.
The challenge for anyone making technology decisions today isn’t finding AI tools. There’s no shortage of those. The challenge is making sense of what different tools actually do, how they relate to each other, and which combination makes sense for a specific problem.
Here’s a plain-language guide to the building blocks behind modern AI systems: what they are, how they connect, and what they make possible when they work together. It will be published in four parts, one each week. By the end, you’ll be ready to ask much sharper questions of any AI vendor you talk to.
The hidden cost of choosing features over infrastructure
Some technology choices are low-stakes. You try something, it doesn’t work, and you move on with minimal damage. But in regulated workflows, data-sensitive processes, and systems that need to be explainable to regulators or clients, the architecture choices made early tend to compound over time.
This isn’t unique to any one industry. A hospital system that deploys an AI tool without considering how its outputs will be traced and verified faces different risks than a hospital that builds that traceability in from the start. A law firm that automates document review without encoding its own methodology into the system will get generic outputs instead of firm-grade analysis. A financial institution that chooses an AI feature over an AI infrastructure will eventually face fragmented systems and sunk costs.
This is a shared pattern: an AI feature that works impressively in a demo but can’t explain its outputs, can’t be traced back to a source, and can’t integrate into governed workflows ends up as a liability waiting to be discovered, dressed up as a productivity tool.
The more useful question to ask any AI vendor: Is this built to last, and can it actually fit into a system I can govern?
A first look at how to evaluate AI tools in regulated environments
The AI landscape is genuinely exciting, and the pace of change is real. But the organisations that get lasting value from these technologies will be those that look past the feature announcements and ask harder questions about what’s underneath.
General language models fall short here by design: foundation models are trained on broadly available information, not on a firm’s proprietary data, years of accumulated methodology, or the nuanced judgment that comes from operating in a specific regulatory environment.
Some will push back: even if general language models fall short today, won’t they just get there eventually? As capabilities improve at the pace they have been, the gap might close on its own. It’s a fair question, but it assumes the problem is one of raw intelligence, and it isn’t. Think of it like a brilliant generalist doctor. No matter how knowledgeable they become, you still want a cardiac surgeon operating on your heart, not because the generalist isn’t capable, but because the surgeon has spent years developing the specific expertise, tools, and protocols for that exact situation. General models will keep getting more powerful. But the need for domain-specific data, methodology, and governed workflows is a structural requirement of high-stakes work, not a temporary gap to be closed.
6 questions to ask any AI vendor
Before you commit to a tool or platform
Architecture and governance
1. What data sources does this system connect to, and how?
2. Can it apply our methodology consistently, or is every output a guess?
3. Can the workflow be audited?
4. Can it scale into production without fragmenting into a collection of disconnected tools?
Quality and reliability
5. What metrics are used to measure system quality?
6. How and how often is the system tested for quality issues?
Those are the questions that separate infrastructure from features, and systems that hold up under scrutiny from ones that don’t.
Answering them starts with understanding the building blocks: the actual layers that make a modern AI system work, and what each one is responsible for.
The next three parts of this series go deeper into the building blocks of modern AI systems, real financial services use cases, and where the technology is heading.
Follow us on LinkedIn or subscribe to our newsletter so you don’t miss them.