Investing in the Age of AI

What AI Systems Are Actually Made Of: The Architecture Explained

Published: April 20, 2026
Modified: April 20, 2026
Key Takeaways
  • Modern AI systems are built from four distinct layers — LLM, MCP, skills, and agents — each solving a different problem. Buying a tool without understanding which layers it covers is how organisations end up with impressive demos and unreliable production systems.
  • Access and intelligence are not the same thing. An AI that can reach your data still needs methodology and domain expertise to do anything useful with it.
  • The major AI labs have converged on the same architecture. The terminology differs, but the underlying layers are consistent — which means the integrations and skills you build today are less likely to be locked to a single vendor.

The AI Architecture Series – Part 2

In the first part of this series, we established why choosing the right AI architecture matters, especially in regulated environments where the wrong decision compounds over time. This time, we get into the building blocks themselves.

Modern AI systems aren’t monolithic. They’re built from distinct layers, each doing a different job. Understanding what each layer is for makes it much easier to evaluate tools, ask the right questions, and avoid the trap of buying features instead of infrastructure.

The examples in this piece are intentionally simple. The goal here is to show how the layers connect, not to model real workflows. In the next article, we will get into what this architecture looks like in practice for financial services specifically.

Layer 1: The LLM, the reasoning engine

A large language model, or LLM, is the core of most modern AI systems. It’s the component that reads, writes, summarises, analyses, and generates responses. When people talk about GPT-4, Claude, or Gemini, they’re talking about LLMs.

Think of the LLM as the brain. It can reason and generate across an enormous range of topics and formats, but on its own, it doesn’t know your internal data, your organisation’s specific context, or the methodology your team uses. It’s general-purpose intelligence: powerful, but raw.

The rest of the layers exist to connect that intelligence to the right information and direct it toward specific, consistent outcomes.

LAYER 1 – LLM ONLY

The model knows a lot

But only what it learned during training. No live data. No company context. No connection to the outside world.

AI Assistant
What’s the weather like in our New York Office right now?
I don’t have access to real-time weather data or your company’s office locations. Generally speaking, New York in March can range from cold to mild – typically 35-55°F.

⚠️ What’s still missing

The model has no way to reach anything beyond its training data. It can’t check live information, query your internal systems, or know anything specific to your organization. It’s general-purpose intelligence with no connection to your world.

Layer 2: MCPs, the connection layer

MCP stands for Model Context Protocol. It’s an open standard that defines how AI systems connect to external data sources and tools. Originally introduced by Anthropic and increasingly adopted across the industry, MCP replaces a tangle of one-off integrations with a common protocol, much like how USB-C replaced a drawer full of incompatible cables.

In practice, an MCP server exposes capabilities (querying a database, reading from a document store, checking a live data feed, running a piece of code) in a way that AI systems can discover and use consistently. Instead of building a custom integration for every tool an AI needs to access, you connect them via MCP, and any AI system that supports the standard can use them.
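The heart of the protocol is a discover-then-call pattern: a server advertises its tools with machine-readable schemas, and a client looks them up and invokes them by name. The sketch below is a toy, standard-library-only illustration of that pattern, not the real MCP SDK; the weather tool and its canned data are hypothetical stand-ins for a live service:

```python
# A toy "MCP-style" server: it advertises its tools with machine-readable
# schemas so that any client speaking the protocol can discover them.
TOOLS = {
    "get_forecast": {
        "description": "Return the current forecast for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
}

# Canned data standing in for a call to a live weather API.
FAKE_FORECASTS = {"New York": "44°F, partly cloudy"}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC-style request the way an MCP server would."""
    if request["method"] == "tools/list":        # discovery
        return {"tools": TOOLS}
    if request["method"] == "tools/call":        # invocation
        name = request["params"]["name"]
        args = request["params"]["arguments"]
        if name == "get_forecast":
            return {"result": FAKE_FORECASTS.get(args["city"], "unknown")}
    return {"error": "unsupported method"}

# The AI client first discovers what the server offers...
available = handle({"method": "tools/list"})
print(list(available["tools"]))                  # ['get_forecast']

# ...then calls a tool by name, with schema-conformant arguments.
reply = handle({"method": "tools/call",
                "params": {"name": "get_forecast",
                           "arguments": {"city": "New York"}}})
print(reply["result"])                           # 44°F, partly cloudy
```

The point of the standard is the first call: because every server answers the same discovery request in the same shape, the client needs no bespoke code per integration.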

One thing worth emphasizing: MCP handles access, not intelligence. It determines where the AI can reach; what it does with what it finds is still up to the model and the layers above.

LAYER 2 – MCP

MCP gives the model a standard way to reach outside itself

To live data, APIs, and internal tools – without building a bespoke connection for each use.

AI Assistant
What’s the weather like in our New York Office right now?

Under the hood

MCP 🌤 Weather Service – Got 30 day forecast
Right now it’s 44°F and partly cloudy in New York. Note that this is the general city forecast – I don’t have any details about your company’s specific office locations.

⚠️ What’s still missing

The model can now access external data — but it has no sense of how your organization works. It doesn’t know your preferred output format, the methodology your team follows, or the domain expertise that makes your analysis distinctive. Accessible data is not the same as useful data.

Layer 3: Skills, the know-how layer

If MCP answers the question “what can the AI access?”, skills answer the question “how should the AI do this particular thing?”

A “skill” is a reusable, portable package that encodes a specific workflow, methodology, or domain expertise. It might bundle instructions, examples, templates, and structured logic into something an AI system can load and apply consistently. And like MCP, skills are built as an open standard — meaning a well-designed skill can be shared and reused across different AI systems that support the specification, without being tied to any one platform.

A good metaphor comes from The Matrix: when Neo needs to learn kung fu, the knowledge isn’t acquired through years of practice; it’s installed. Skills work similarly. They give an AI system a reusable capability that encodes expertise in a form it can apply immediately and consistently.

Where this becomes especially powerful is in organizations that have “a specific way of doing things.” For example, analysts’ reports in the financial industry follow a particular structure. Compliance memos have strict format and tone requirements. Procurement decisions follow a predefined approval methodology. Without skills, you’d need to re-explain those requirements every time you do a task. With skills, the methodology is encoded once and applied consistently, regardless of who runs it or which underlying model powers it.

In practical terms, a skill is just a structured package of documents: instructions, examples, templates, and reference material that the model reads and follows, much like the methodology guides or standard operating procedures your team already maintains. The difference is that it’s formatted so an AI system can apply it consistently and automatically, rather than relying on someone to remember the right steps.
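To make that concrete, here is what the entry point of a minimal skill might look like. The shape – a folder containing a SKILL.md with name and description frontmatter, plus supporting files – follows the pattern used by Anthropic’s open Skills specification; the office-weather-report skill itself, along with its file names, is a hypothetical example:

```markdown
---
name: office-weather-report
description: Produce the company-standard weather summary for every office.
---

# Office weather report

1. Load the office list from reference/office-locations.md.
2. Fetch current conditions for each city via the weather MCP tool.
3. Format the output using templates/report-format.md: city, temperature
   in °F, and a one-word condition, in the standard office order.
```

The model reads this the way a new analyst would read a standard operating procedure: the files do not change the model itself, they just make the methodology explicit and repeatable.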

Not every organization is ready to build full skills from day one. Most major platforms offer a lighter version of the same idea through projects: a way of organizing and persisting context, instructions, and files across conversations so the model carries your preferences and knowledge forward without needing to be re-briefed each time. Projects are less powerful and less portable than skills, but they are a practical starting point for teams that want consistency without the overhead of creating and distributing skills across the organization.

The distinction to hold onto: MCP = access. Skills = method.

LAYER 3 – SKILLS

The skill tells the model what to know. The MCP tells it where to look.

Together, a question that would have stumped the model alone gets a precise, structured answer.

AI Assistant
What’s the weather like in each of our offices today?

Under the hood

Skill 🏢 Company Offices – 5 locations
MCP 🌤 Weather Service – Got 30 day forecast

Here is the weather in all the office locations:

New York 44°F Partly Cloudy   London 54°F Overcast   SĂŁo Paulo 75°F Showers

Singapore 88°F Humid     Madrid 61°F Clear

⚠️ What’s still missing

The model can access the right data and apply your methodology — but complex, multi-step workflows still need someone to manually coordinate each stage. The more steps a task involves, the more human effort is required to string them together.

Layer 4: Agents, the orchestration layer

An agent is a system that combines an LLM with tools, instructions, and logic to carry out multi-step tasks with a degree of autonomy. Where a simple assistant answers a question and stops, an agent plans, retrieves, executes, checks its own outputs, and adapts when something doesn’t go as expected.

If the LLM is the brain, MCP is the connective tissue that links it to the world, and skills are the learned capabilities it can draw on, then the agent is the operator that puts everything together to actually get something done.

This is where “AI assistant” becomes “AI system.” An assistant just answers based on its fixed knowledge. An agent can explore information and take autonomous action toward a goal.
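That plan–retrieve–execute–check loop can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: a real agent would ask the LLM to produce the plan, call MCP tools rather than the canned functions below, and use far richer self-checking than a single retry:

```python
# A toy agent loop: plan the steps, execute each one with the available
# tools, check the result, and retry once if a step comes back empty.
TOOLS = {
    "offices": lambda _: ["New York", "London"],           # skill-provided data
    "weather": lambda city: {"New York": "44°F", "London": "54°F"}[city],
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in planner: a real agent would ask the LLM for this."""
    return [("offices", ""), ("weather", "New York"), ("weather", "London")]

def run_agent(goal: str) -> dict:
    results, log = {}, []
    for tool, arg in plan(goal):
        out = TOOLS[tool](arg)
        if not out:                     # self-check: retry a failed step once
            out = TOOLS[tool](arg)
        results[f"{tool}:{arg}"] = out
        log.append(f"{tool}({arg}) -> {out}")
    return {"results": results, "log": log}

report = run_agent("weather in each office")
print(report["results"]["weather:New York"])   # 44°F
```

The log is worth noting: because the agent coordinates discrete tool calls rather than producing one opaque answer, every step it took can be recorded and audited afterwards.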

LAYER 4 – AGENT

Agents plan, decide, and execute

The LLM reasons, MCPs connect, skills provide method, and the agent coordinates all of it toward an outcome.

AI Assistant
Which weeks next month are best for visiting each of our offices?

Under the hood

Skill 🏢 Company Offices – 5 locations found
MCP 🌤 Weather Service – Got 30 day forecast
Agent 👤 Best travel windows identified
MCP đź“… Calendar – Open weeks found
Agent 👤 Week-by-week itinerary ready

The best weeks for visiting each office are the following:

Apr 7-11 New York – SĂŁo Paulo

Apr 14-18 London – Madrid

Apr 22-24 Singapore

Across all four layers, one principle holds: each layer solves a different problem. The LLM provides reasoning. MCP provides access. Skills provide method. Agents provide orchestration. A system that is strong in one layer but weak in another will show its gaps quickly in production: impressive in a demo, unreliable at scale.

How the major AI labs are implementing these layers

The leading AI labs have converged on remarkably similar architectures, even though they approached the problem from different starting points.

The clearest sign of this convergence is MCP itself. What began as Anthropic’s open-source protocol has been adopted by every major platform. OpenAI, Google, and Microsoft all support MCP, making it the shared standard for connecting AI systems to external tools and data. For organizations building integrations, this means the connections you invest in today are far less likely to be locked to a single vendor.

Beyond the connection layer, each lab is filling in the stack in its own way:

  • Anthropic has applied the same “open standard” mindset when launching Skills — a portable way to encode methodology and domain expertise that works across platforms. Claude’s Plugins bundle MCP connections and skills together into installable packages, making it straightforward to give an AI system both access and know-how in one step.
  • OpenAI has built its connection ecosystem under the umbrella of Apps, which give ChatGPT access to external tools and data sources.
  • Microsoft has made Copilot Studio its agent-building platform, with MCP-powered connectors and a new Notebooks feature for persistent project context.
  • Google supports MCP across its developer and cloud platforms, and recently introduced Projects as a way to persist context and instructions across conversations.

The terminology differs across platforms, but the underlying concepts map to the same layers:


The same architecture, four different vocabularies


                     Anthropic       OpenAI       Microsoft        Google
Main GenAI app       Claude          ChatGPT      Copilot          Gemini
Persistent context   Projects        Projects     Notebooks        Projects
Connection layer     Connectors      Apps         Connectors       Extensions; MCP (dev/cloud only)
Agent capabilities   Claude Cowork   Agent Mode   Copilot Studio   Gemini Agent

For anyone building AI-powered workflows in financial services, this convergence is good news — but it doesn’t eliminate the governance question. How each platform handles data connections, methodology, and multi-step orchestration determines whether the system you build today will hold up under scrutiny tomorrow.

What’s next: agents that act and agents that collaborate

Many of the tools people already use have agents running under the hood. When ChatGPT writes code, searches the web, and synthesises a response in a single session, that’s an agent at work. The same is true of Claude, Microsoft Copilot, and a growing number of enterprise products. But today, these agents still operate within the boundaries of a chat window and a set of predefined tool connections. That’s starting to change.

The first frontier is agents that can operate a full computer environment on your behalf: browsing, clicking, navigating across applications, rather than being limited to chat. Anthropic’s Cowork and OpenAI’s Agent mode give the AI its own sandboxed workspace where it can carry out tasks across whatever tools are available, while open-source projects like OpenClaw take a different approach, letting an AI agent run directly on your local machine with access to your browser, files, and messaging apps.

The second frontier is agents that coordinate with each other. Today, if you want an AI system to hand off a task, say, from a research agent to a compliance-checking agent built on a different platform, someone has to wire that up manually. Google’s A2A protocol is designed to change that, giving agents a common language for delegating tasks across platforms and vendors.

Both frontiers follow the same pattern: more autonomy, broader access, higher stakes. An agent that can browse the web and operate applications on your behalf is far more powerful than one confined to a chat window — but it also carries more risk. The industry is still working out how to make these systems safe, auditable, and controllable enough for regulated environments. For financial services teams, that’s a space worth watching closely but approaching with care.

In the next issue of this series, we will show what these four layers look like in practice for financial services workflows, where each layer does specific, traceable work.

Follow us on LinkedIn or subscribe to our newsletter so you don’t miss them.


Yago González

Senior Product Manager, GenAI Initiatives, Clarity AI

Yago González leads the strategy behind the Clarity AI platform's generative AI capabilities. Previously, he pioneered the integration of generative AI at Iberia, Spain's flag carrier, as part of International Airlines Group.
