Spoiler: Everything you know about Bedrock Agents is actually built on this
AWS just announced AgentCore at the New York Summit yesterday, and I immediately dove in to understand what this means for anyone building AI agents on AWS.
Here’s the thing that nobody tells you upfront about AWS’s agent services: Bedrock Agents is just a pre-configured wrapper around AgentCore.
I know, I know—I should have figured this out earlier. But once I understood this relationship, AWS’s entire agent strategy clicked into place. It’s not two competing services—it’s a foundation platform (AgentCore) and a beginner-friendly abstraction layer (Bedrock Agents) built on top of it.
Think of it like this:
- AgentCore = The engine, the foundation, the infrastructure layer
- Bedrock Agents = The pre-configured car built on that engine
This changes everything about how you should think about building AI agents on AWS.
What AgentCore Actually Is (In Plain English)
After working with it hands-on, here’s how I’d explain AgentCore to another developer:
AgentCore is AWS’s infrastructure platform for running AI agents at scale, no matter what framework you built them with.
That last part is the game-changer. Want to use LangGraph? Go ahead. Prefer CrewAI? Cool. Built something custom? No problem. AgentCore doesn’t care. It just provides the plumbing—memory management, session handling, tool connections, identity management, observability—so you can focus on your agent’s actual logic.
The Modular Services
AgentCore isn’t one monolithic thing. It’s a collection of services you can mix and match:
- Runtime – Where your agent code actually runs. Serverless, scales automatically, supports sessions up to 8 hours (way longer than Lambda’s 15 minutes)
- Memory – Handles both short-term conversation memory and long-term knowledge. The key thing: this is a managed service. You don’t set up a database for it.
- Gateway – Tool discovery and routing. Makes it easy for your agent to find and use the tools it needs. More on this in a minute.
- Identity – Manages who your agents are and what they can access, with OAuth support for external services.
- Browser & Code Interpreter – Pre-built tools your agents can use out of the box.
- Observability – Built-in tracing so you can actually see what your agent is thinking and doing.
How I Actually Tested This (The Lazy Smart Way)
Look, I didn’t build some elaborate production system for my first AgentCore exploration. That’s the trap everyone falls into—they think they need a full-blown application to understand a platform.
Instead, I did what any automation engineer would do: I spun up a basic customer support agent and used n8n webhooks as the tools. This let me test AgentCore’s orchestration, memory handling, and Gateway functionality without spending days building Lambda functions.
The revelation: AgentCore genuinely doesn’t care where your tools live. n8n webhook? Cool. Lambda function? Great. Random API on the internet? Go for it. Once you register it with Gateway, AgentCore treats them all the same.
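To make that concrete, here’s a minimal sketch of what describing an n8n webhook as a Gateway tool might look like. The field names and structure below are my assumptions for illustration, not the exact Gateway API shape, and the webhook URL is a placeholder:

```python
# Hedged sketch: describing an arbitrary HTTP tool (here, an n8n webhook) so
# Gateway can expose it as an MCP tool. Field names are illustrative
# assumptions -- check the Gateway target documentation for the real schema.
def build_webhook_tool(name: str, webhook_url: str, description: str) -> dict:
    """Build a tool definition for any HTTP-accessible endpoint."""
    return {
        "name": name,
        "description": description,  # agents use this text for tool selection
        "endpoint": webhook_url,     # Gateway doesn't care where this lives
        "inputSchema": {             # JSON Schema describing the tool's input
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }

tool = build_webhook_tool(
    "lookup_order",
    "https://example.com/webhook/lookup-order",  # placeholder n8n webhook URL
    "Look up a customer order by ID",
)
```

The point of the sketch: the definition says nothing about Lambda, n8n, or any hosting detail beyond a URL, which is exactly why Gateway can treat them all the same.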
This is actually huge for rapid prototyping. You can test the agent platform without building production-grade tooling first.
The Terraform Question (And Why I Almost Wasted Time)
Here’s a decision I faced early: should I Terraform everything?
My initial instinct was yes—I’m pursuing my AWS Solutions Architect certification, I should be thinking infrastructure-as-code, right? But after actually working with AgentCore, I realized that’s not the right question.
The real question is: what SHOULD you Terraform?
Things that work great with Terraform:
- Lambda functions that serve as your agent’s tools
- IAM roles and policies (because managing these via console is madness)
- Supporting infrastructure: DynamoDB tables, S3 buckets, etc.
- Anything you need to replicate across environments or multiple agents
Things that DON’T fit the Terraform model:
- AgentCore Memory (it’s an API-created managed service, not infrastructure)
- The AgentCore Runtime itself
- Gateway configuration and tool registration
Here’s what I landed on: hybrid approach wins.
Use Terraform for the traditional AWS resources that make sense. Use the AgentCore Python SDK (bedrock-agentcore-starter-toolkit) for the agent-specific stuff that’s designed to be created via API.
The SDK is actually really elegant for development. It’s not that Terraform CAN’T do AgentCore stuff—it’s that the SDK is purpose-built for it and makes iteration way faster. Save Terraform for when you’re templating infrastructure across multiple agents or environments.
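For a feel of why the SDK iterates faster, here’s a rough sketch of the entrypoint pattern. The module path and class name are assumptions based on the starter toolkit; the handler itself is plain Python you can run and test locally before deploying:

```python
# Hedged sketch: the AgentCore SDK wraps a plain handler function as a runtime
# entrypoint. Module and class names below are assumptions -- verify against
# the bedrock-agentcore-starter-toolkit documentation.
def handle(payload: dict) -> dict:
    """Framework-agnostic agent logic: takes a request payload, returns a result."""
    prompt = payload.get("prompt", "")
    return {"result": f"echo: {prompt}"}

try:
    from bedrock_agentcore.runtime import BedrockAgentCoreApp  # assumed import path
    app = BedrockAgentCoreApp()
    app.entrypoint(handle)  # equivalent to using @app.entrypoint as a decorator
except ImportError:
    pass  # SDK not installed locally; the handler is still testable on its own
```

Because the business logic lives in an ordinary function, you can unit-test it with zero AWS setup, which is the iteration speed the Terraform workflow can’t match.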
This is one of those cases where the “best practice” of “Terraform all the things” isn’t actually the best practice.
What I Wish the Documentation Told Me First
The official AWS docs for AgentCore are… fine. They’re technically accurate. But they don’t tell you the stuff that actually matters when you’re trying to figure out what this thing is.
So here’s what I wish someone had told me on day one:
AgentCore is AWS’s answer to: “How do we let developers use ANY framework while giving them enterprise infrastructure?”
That’s it. That’s the entire point.
AWS realized that forcing everyone into Bedrock Agents’ opinionated patterns was limiting. Developers want to use LangGraph, or CrewAI, or their own custom framework. But they still need the boring-but-critical stuff: memory management, security, identity, observability, scalability.
AgentCore provides those infrastructure services without caring what framework you built your agent with.
- Bedrock Agents = “We’ll handle everything, just configure it”
- AgentCore = “Here’s the infrastructure, build what you want”
Neither is better—they serve different needs. But understanding this distinction is the key to knowing which one you actually need.
Critical Architecture Concepts
1. Control Plane vs. Data Plane
This separation is fundamental to understanding AgentCore, especially for automation and IAM configuration:
Control Plane (bedrock-agentcore-control.{region}.amazonaws.com)
- For creating and managing your agent infrastructure
- Operations: CreateAgentRuntime, CreateGateway, CreateMemory
- Used during setup and configuration changes
Data Plane (bedrock-agentcore.{region}.amazonaws.com)
- For actually running your agent
- Operations: InvokeAgentRuntime, CreateEvent, RetrieveMemoryRecords
- Used during production runtime
Why this matters:
- You need different IAM permissions for each plane
- In n8n workflows or automation, you’ll mostly call the Data Plane
- Your CI/CD pipeline interacts with the Control Plane
- This separation enables proper security boundaries and least-privilege access
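A tiny helper makes the two-plane split concrete, using the endpoint patterns above; the hostnames come straight from the section, only the helper function itself is mine:

```python
def agentcore_endpoint(plane: str, region: str) -> str:
    """Return the AgentCore service hostname for the given plane and region."""
    host = {
        "control": "bedrock-agentcore-control",  # setup: CreateAgentRuntime, CreateGateway, CreateMemory
        "data": "bedrock-agentcore",             # runtime: InvokeAgentRuntime, CreateEvent
    }[plane]
    return f"{host}.{region}.amazonaws.com"

# CI/CD pipelines talk to the control plane; n8n workflows talk to the data plane.
print(agentcore_endpoint("control", "us-east-1"))
print(agentcore_endpoint("data", "us-east-1"))
```

Scoping IAM policies to one plane’s endpoint and operations is what gives you the least-privilege boundary described above.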
2. Memory is Managed, Not Provisioned
AgentCore Memory is fundamentally different from traditional database-backed memory solutions. You don’t provision infrastructure for it—you create it with an API call, and AWS manages everything behind the scenes. This is actually one of AgentCore’s biggest advantages: zero infrastructure overhead for memory management.
Think of it like S3: you don’t provision storage servers, you just create a bucket and use it.
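In practice that means memory creation is one API call against the Control Plane, not a Terraform resource. Here’s a hedged sketch; the parameter names are assumptions, so check the actual CreateMemory API reference before relying on them:

```python
# Hedged sketch: Memory is created with a single control-plane API call,
# much like creating an S3 bucket. Field names are assumptions.
def build_create_memory_request(name: str, retention_days: int = 30) -> dict:
    return {
        "name": name,
        "eventExpiryDuration": retention_days,  # assumed field: how long raw events are kept
    }

req = build_create_memory_request("support-agent-memory")
# Sketch of the actual call -- note there are no tables or clusters to provision:
# boto3.client("bedrock-agentcore-control").create_memory(**req)
```

No DynamoDB table, no vector database cluster; the request above is the entire "infrastructure" footprint.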
3. Gateway is AWS’s MCP Implementation
If you’re familiar with Model Context Protocol (MCP), here’s the key insight: AgentCore Gateway is essentially AWS’s managed MCP server.
What this means in practice:
- Gateway automatically converts your APIs and Lambda functions into MCP-compatible tools
- It handles the MCP protocol implementation—you don’t write MCP server code
- Any MCP-compatible client can discover and use your Gateway tools
- You get authentication, authorization, and request routing built-in
The practical workflow:
- Point Gateway at your Lambda function or API
- Gateway exposes it as an MCP tool with auto-generated schemas
- Agents can discover it using MCP tool discovery
- Gateway includes semantic search (x_amz_bedrock_agentcore_search) for finding tools by natural language
When You Actually Need Gateway:
- Single tool? You can skip Gateway and call Lambda directly
- 2-3+ tools? Gateway becomes valuable for MCP-style discovery and management
- Building for production? Gateway provides the security and observability layers you’d otherwise build yourself
- Want ecosystem compatibility? Gateway’s MCP implementation means your tools can work with other MCP clients, not just AgentCore
Think of it as: instead of building your own MCP server and handling all the protocol details, Gateway does it for you as a managed service. It’s the difference between running your own web server versus using API Gateway.
AgentCore vs. Bedrock Agents: When to Use What
Now that I’ve worked with both, here’s my honest take:
Use Bedrock Agents when:
- You’re prototyping and need something fast
- Your use case fits the templates (Q&A bot, document search, etc.)
- You want AWS to handle literally everything
- Your team isn’t heavily code-focused
Think of it as: The guided tour with guardrails
Use AgentCore when:
- You need to use a specific framework (LangGraph, CrewAI, etc.)
- You’re building something custom that doesn’t fit templates
- You need sessions longer than Lambda-style timeouts allow (AgentCore supports up to 8 hours)
- You want control over the agent’s architecture
- You’re going to production and need real observability
- You want MCP compatibility with external tools
Think of it as: The developer’s toolkit
The progression I recommend:
- Start with Bedrock Agents for your POC
- Validate the concept works
- Rebuild on AgentCore when you need more control
- Since Bedrock Agents runs on AgentCore anyway, you’re just peeling back the abstraction layer
Comparing AgentCore to Azure AI Foundry
I’ve been watching Azure AI Foundry closely since it became publicly available, and I got hands-on experience with it earlier this year (you can read about that in this post). I was genuinely waiting to see if and when AWS would make moves to catch up in certain areas where Azure had the advantage.
The big one? MCP support. Azure announced native Model Context Protocol integration earlier this year, which sounded great on paper. But here’s the reality: Azure’s MCP implementation is actually pretty janky too.
In Azure, you connect MCP servers outside the UI through configuration, and then in the UI itself you just see “mcp” listed as a knowledge attachment—not even clearly labeled as a tool. You get zero visibility into what tools are actually available or being used. It’s oddly opaque for something they were promoting as a major feature.
AWS did it better. With AgentCore Gateway, you can actually view the tools associated with your Gateway right in the UI. You know what’s connected, what’s available, and how it’s being used. The MCP implementation feels more complete and production-ready.
Azure AI Foundry is great if:
- Your company lives in Microsoft 365
- You want agents that work directly in Teams and Outlook
- Visual development tools are important to you
- You’re primarily using OpenAI models
- Automated red teaming and evaluations are critical (Azure’s PyRIT integration is solid)
AgentCore is better if:
- You’re already in the AWS ecosystem
- You want framework flexibility (use ANY framework)
- You need those 8-hour long-running sessions
- You prefer infrastructure-as-code approaches
- You want MCP connectivity that’s actually transparent and manageable
Neither is “better”—they’re optimized for different ecosystems. If you’re Microsoft-heavy, Azure makes sense. If you’re AWS-heavy (like most of my clients), AgentCore is now a legitimate production platform with better MCP tooling.
My Minimal Production Architecture
After all the experimentation, here’s what I landed on for a production-ready agent:
Core Components:
├── AgentCore Memory (API-created, managed service)
├── AgentCore Runtime (containerized agent code)
├── IAM Role (with appropriate permissions)
└── Gateway (MCP-compatible tool server)
Tools Layer:
├── Lambda Functions (your custom business logic)
└── n8n Webhooks (or any HTTP-accessible tools)
Optional Traditional Infrastructure (via Terraform):
├── DynamoDB Tables (if needed for tool data)
├── S3 Buckets (if needed for file storage)
└── Additional IAM Policies (for tool-specific access)
This gives you everything you need without the complexity of a full sample application.
For My Fellow n8n Users
If you’re building workflows in n8n (like I am), here’s how AgentCore fits:
- Use HTTP Request nodes to call the Data Plane endpoints
- Store your AWS credentials as n8n credentials
- You can trigger agent sessions from n8n workflows
- n8n can serve as your tool layer—expose workflows as webhooks and register them in Gateway
- The 8-hour session limit means you can handle truly async workflows
One of my early tests used n8n to provide the tools while AgentCore handled the agent runtime and orchestration. The Gateway makes it trivial to register n8n webhooks as MCP-compatible tools.
The Bottom Line: This Is Where AWS Is Betting
After spending 2 days hands-on with AgentCore, here’s my take as someone pursuing Solutions Architect certification and building automation solutions for clients:
AgentCore is AWS’s long-term agent platform. Bedrock Agents will continue to exist as the easy entry point, but the innovation is happening at the AgentCore level.
The clues are everywhere:
- Bedrock Agents is literally built on AgentCore infrastructure
- AgentCore supports MCP (Model Context Protocol) natively
- The 8-hour session runtime is designed for real production workloads
- Framework-agnostic architecture means it won’t become obsolete when the “hot framework” changes next year
Is it perfect? Hell no. It’s in preview, the documentation has gaps, and I spent way too much time figuring out the Control Plane vs Data Plane distinction. But the architectural vision is sound.
My recommendation based on actual usage:
If you’re prototyping → Start with Bedrock Agents. Get something working fast.
If you’re going to production → Seriously evaluate AgentCore. The control and flexibility are worth the learning curve.
If you’re already outgrowing Lambda → AgentCore solves the session timeout and memory management problems you’re probably hacking around right now.
If you’re building with n8n (like me) → AgentCore integrates beautifully. Let n8n handle the tools, AgentCore handles the agent runtime.
For those building with AI and AWS: If you’re doing any serious AI work—whether for an AWS client, partner, or your own projects—understanding AgentCore’s architecture is probably your next step. The Control/Data plane separation, the IAM model, the managed services vs self-hosted decision framework—this is classic AWS patterns applied to AI infrastructure, and it’s where the platform is heading.
Your Turn: What Agent Platform Are You Using?
Seriously, I want to know—are you building on AWS, Azure, or something else entirely? Have you tried AgentCore yet, or are you sticking with Bedrock Agents?
And if you’re working with n8n and AWS agents, I’d love to compare notes. The integration patterns are still being figured out, and I’m definitely learning as I go.
Drop a comment or reach out—I’m always down to talk agent architecture and AWS infrastructure.