I’ve completed a decent amount of AI training over the past two years: eighteen online courses in the first three months of 2024 alone, plus Connecticut’s free AI Academy earlier this year, which I took to see if I could recommend it here. When something new pops up from Anthropic, I want to know more about it.
On March 12, Anthropic announced the Claude Certified Architect – Foundations (CCA). It’s their first technical certification, and it arrived alongside a $100 million investment in a new Claude Partner Network. I asked Claude to pull the 40-page exam guide. Here’s what we found.
What It Is
The CCA is a proctored, multiple-choice exam with 60 questions. No Claude, no external tools, no documentation during the test. It validates that you can design and build production-grade applications using Claude’s core technologies: the Claude API, the Claude Agent SDK, Claude Code, and the Model Context Protocol (MCP).
Scores are reported on a scale of 100 to 1,000. The minimum passing score is 720. There’s no penalty for guessing — unanswered questions are simply scored as incorrect.
What It Covers
The exam is organized into five domains:
| Domain | Weight |
|---|---|
| Agentic Architecture & Orchestration | 27% |
| Claude Code Configuration & Workflows | 20% |
| Prompt Engineering & Structured Output | 20% |
| Tool Design & MCP Integration | 18% |
| Context Management & Reliability | 15% |
Nearly half the exam focuses on agentic systems and Claude Code workflows. This isn’t an AI fundamentals exam. The exam guide describes scenario-based questions drawn from real production use cases — building customer support agents, designing multi-agent research systems, integrating Claude Code into CI/CD pipelines, and extracting structured data from unstructured documents. Four of six possible scenarios are selected at random for each exam.
The sample questions in the guide confirm the depth. One asks how to fix an agent that skips customer verification before processing refunds. Another asks where to store a team-wide custom slash command. The correct answers require knowing specific implementation details — programmatic hooks versus prompt-based enforcement, project-scoped `.claude/commands/` versus user-scoped `~/.claude/commands/`. This is a systems design exam, not a vocabulary test.
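For readers who haven’t used custom slash commands, that second sample question comes down to where the command’s markdown file lives. A sketch (the command name here is hypothetical):

```
.claude/commands/deploy-check.md     # project-scoped: committed to the repo,
                                     # shared with everyone on the team
~/.claude/commands/deploy-check.md   # user-scoped: personal, available in
                                     # every project on your machine
```

Same file format, different audience — which is exactly the kind of detail the exam expects you to know cold.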
Who It’s For
The exam guide describes the ideal candidate as a “solution architect who designs and implements production applications with Claude” with at least six months of hands-on experience with the Claude API, Agent SDK, Claude Code, and MCP.
If you’ve been building with Claude professionally — writing agentic loops, configuring `CLAUDE.md` hierarchies, designing MCP tool interfaces — this is aimed at you. If you’re still in the learning phase, this isn’t the place to start. Anthropic says additional certifications for developers and sellers are coming later in 2026.
Who Can Take It
Right now, only members of Anthropic’s Claude Partner Network can access the exam. The Partner Network is free to join — but it’s designed for organizations that are bringing Claude to market. Think consulting firms, AI solution providers, and systems integrators. The first 5,000 partner company employees get early access at no cost. After that, it’s $99 per attempt.
If you’re an individual practitioner — someone who uses Claude Code every day but isn’t reselling Claude services — there is no individual signup path at this time. You can’t register as a solo user. Your employer would need to join the Partner Network, and that only makes sense if the organization is actively implementing Claude for clients or building Claude into its products.
That leaves a lot of heavy Claude users — myself included — on the outside looking in.
How to Prepare
The exam guide includes four hands-on preparation exercises and eight study recommendations. The exercises are substantial — building a multi-tool agent with escalation logic, configuring Claude Code for a team development workflow, constructing a structured data extraction pipeline, and designing a multi-agent research system. Each exercise maps to specific exam domains.
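The exam guide doesn’t include code for these exercises, so here’s a minimal sketch of the escalation pattern the first exercise describes — a tool that refuses to act until verification has happened, with a human handoff as the fallback. The tool names, data shapes, and stubbed functions are all my assumptions; a real implementation would wire these tools into the Claude API’s tool-use loop.

```python
# Sketch of a multi-tool agent with escalation logic (illustrative only).

def lookup_order(order_id: str) -> dict:
    # Stub: a real tool would query an order-management system.
    return {"order_id": order_id, "amount": 42.00, "verified": False}

def issue_refund(order: dict) -> str:
    # Programmatic guardrail: the tool itself enforces verification,
    # rather than trusting a prompt instruction to the model.
    if not order.get("verified"):
        raise PermissionError("customer not verified")
    return f"refunded ${order['amount']:.2f}"

def handle_request(order_id: str) -> str:
    order = lookup_order(order_id)
    try:
        return issue_refund(order)
    except PermissionError:
        # Escalation path: hand off to a human instead of failing silently.
        return f"escalated order {order_id} to a human agent"

print(handle_request("A-1001"))  # escalates, because verification never ran
```

The point of the exercise, as the guide frames it, is that the refusal lives in code rather than in the prompt — the model can’t talk its way past it.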
Anthropic also publishes a practice exam that mirrors the real exam’s format and scenarios, with explanations after each answer. The guide recommends completing it before sitting for the real thing.
The most practical preparation advice: build things. The questions test production architecture decisions, not textbook knowledge. If you’ve been using Claude Code to ship real projects, you’ve already been studying.
My Take
I’ve watched the AI certification landscape closely. Most of what’s out there tests general knowledge — what is a large language model, what is prompt engineering, what is responsible AI. Those courses have value for people getting started. But they don’t test whether you can actually build and deploy something.
This exam is different. It tests specific, opinionated decisions about how to architect systems with Claude. When should you use programmatic hooks instead of prompt instructions? How do you split a 14-file code review across multiple passes to avoid attention dilution? When does the Batch API make sense versus real-time calls? These are the kinds of questions that only come from experience.
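The file-splitting question is the most mechanical of the three, and it reduces to a few lines of code. The pass size of five files below is my assumption for illustration, not a figure from the exam guide:

```python
# Split a list of files into fixed-size review passes so each pass stays
# small enough to keep the model's attention on every file it contains.

def split_into_passes(files: list[str], per_pass: int = 5) -> list[list[str]]:
    return [files[i:i + per_pass] for i in range(0, len(files), per_pass)]

files = [f"src/module_{n}.py" for n in range(14)]
passes = split_into_passes(files)
print(len(passes))  # 3 passes: 5 + 5 + 4 files
```

The hard part the exam is actually probing — how to pick the pass size, and how to carry findings from one pass into the next — is judgment, not code, which is why experience matters here.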
Whether I’ll take it depends on whether access opens beyond the Partner Network. At $99 an attempt, the price isn’t the barrier. The partner requirement is.
I’ll be watching this one.
