Stack
Anthropic's public-facing stack is mostly inferred from their hiring posts, GitHub org, and the rare engineering write-up [1][2]. The Claude.ai web app is a React frontend with a Tailwind-flavoured component system; the Console (developer surface) uses the same framework and shares the visual language [3].
The backend is a mix of Python and Rust: the Python API surface fronts the model-serving infra, while Rust shows up in roles for performance-critical inference paths and tooling [2]. They run their own inference fleet rather than renting wholly from cloud providers; the volume of GPU/TPU and "inference platform" job postings on their careers page through 2024-2025 is hard to read any other way [2].
The reference servers for MCP (Model Context Protocol) — the open standard they shipped in late 2024 — are TypeScript and live in the public anthropics GitHub org alongside SDKs in Python, Node, Go, Java, and Ruby [4]. Models are exposed via the Anthropic API directly, plus AWS Bedrock and Google Vertex AI as distribution partners [5].
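Underneath all the SDKs, the Anthropic API is a plain HTTPS endpoint. A minimal stdlib-only sketch of what a raw Messages API request looks like, based on the public API docs (the endpoint and headers are documented; the model id here is a placeholder, and the request is built but deliberately not sent):

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) a Messages API request."""
    body = {
        "model": model,  # placeholder -- substitute a current model id
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "x-api-key": api_key,                # auth header per the docs
            "anthropic-version": "2023-06-01",   # pinned API version header
            "content-type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-placeholder", "claude-example-model", "Say hello")
print(req.full_url)
```

The first-party SDKs wrap exactly this shape, adding retries, streaming, and typed responses on top.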
What's not public: the specifics of their RL training infrastructure, their internal evaluation harness, and the fine-grained data pipeline beyond the broad "curated and filtered web data" framing in their model cards [5]. None of this is hidden adversarially — it's just the kind of detail a frontier-model lab doesn't put in a blog post.
- [1] Anthropic engineering blog, accessed 2026-04-29
- [2] Anthropic careers, accessed 2026-04-29
- [3] Claude.ai web app, accessed 2026-04-29
- [4] Anthropic GitHub organisation, accessed 2026-04-29
- [5] Anthropic API documentation, accessed 2026-04-29
Hiring
Anthropic is hiring aggressively across research, product, and platform. As of April 2026 the public careers page lists ~250 open roles split across San Francisco (HQ), London, Zurich, and remote [1]. The shape of the roster: roughly half research and research engineering, the other half product, GTM, and platform.
Notable concentrations: applied AI engineers (the "hands on a keyboard" agent-building roles), RL engineers, and a steady stream of trust & safety / policy hires [1][2]. They're also hiring inference engineers and "model performance" specialists, which lines up with the self-hosted inference signal in the stack section.
LinkedIn shows headcount roughly tripled across 2023-2025 and continues to grow at a measured pace; they aren't doing the public-spectacle hiring waves you see at OpenAI [3]. Compensation tracks frontier-lab norms — the leaked 2024 ranges put senior research scientists at $300k-$700k base plus equity [3].
If you're a solo founder studying who to hire — or who to compete with for talent — the salient pattern is that Anthropic's roles are unusually specific. "RL engineer focused on reward modelling" rather than "ML engineer". That specificity is itself a hiring signal: the team is large enough to specialise.
- [1] Anthropic careers page, accessed 2026-04-29
- [2] Anthropic Greenhouse job board, accessed 2026-04-29
- [3] Anthropic on LinkedIn, accessed 2026-04-29
AI tools
Anthropic ships the models. The Claude family — Haiku, Sonnet, Opus — covers the price/intelligence curve from $0.80/M input tokens (Haiku) to $15/M (Opus) as of April 2026 [1]. The 4.7 generation (current at the time of writing) added the 1M-token context window on Opus and improved tool-use latency across the family.
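The per-token arithmetic behind those figures is worth making concrete. A quick sketch using only the input prices quoted above (output tokens are priced separately and omitted here):

```python
# Input-token prices in dollars per million tokens, as quoted above.
# Output prices are higher and omitted for simplicity.
INPUT_PRICE_PER_MTOK = {
    "haiku": 0.80,
    "opus": 15.00,
}

def input_cost(model: str, input_tokens: int) -> float:
    """Dollar cost of the input side of one request."""
    return input_tokens * INPUT_PRICE_PER_MTOK[model] / 1_000_000

# A 200k-token context costs cents on Haiku, dollars on Opus.
print(round(input_cost("haiku", 200_000), 2))  # 0.16
print(round(input_cost("opus", 200_000), 2))   # 3.0
```

The ~19x spread between the cheapest and most expensive model is the whole point of the family: route the easy traffic to Haiku, reserve Opus for the requests that need it.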
What they ship around the models matters as much as the models themselves. The Anthropic SDKs (Python, Node, Go, Java, Ruby) are first-party and widely used — the Node SDK alone has ~5k weekly npm installs [2]. Prompt caching, batch processing, and the Files API are exposed through every SDK [3].
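Prompt caching in particular changes the economics of agent loops that resend a large shared prefix. A normalized sketch of the break-even — the multipliers here are assumptions based on published pricing at the time of writing (cache writes at roughly 1.25x base input price, cache reads at roughly 0.1x); check the current pricing page before relying on them:

```python
# Prompt-caching economics for one shared prefix, costs normalized so the
# base input price of the prefix is 1.0. Multipliers are assumptions from
# published pricing at the time of writing -- verify against current docs.
BASE = 1.0          # each uncached request pays full input price
CACHE_WRITE = 1.25  # first request writes the prefix to cache (premium)
CACHE_READ = 0.10   # subsequent requests read it back (discount)

def cost_without_cache(n_requests: int) -> float:
    return n_requests * BASE

def cost_with_cache(n_requests: int) -> float:
    return CACHE_WRITE + (n_requests - 1) * CACHE_READ

# Caching already wins on the second request that reuses the prefix.
print(round(cost_with_cache(2), 2), cost_without_cache(2))  # 1.35 2.0
```

Under these assumed multipliers, any prefix reused even once pays for its cache write, which is why caching matters so much for multi-turn tool use.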
MCP — the Model Context Protocol — is the bigger structural bet. It's an open standard for connecting LLMs to tools, data, and other systems; the spec is on GitHub, the reference servers are TypeScript, and adoption has spread to OpenAI's Responses API, Google, and a long tail of indie tools [4]. For an AI lab, shipping a protocol other labs adopt is a different kind of distribution than shipping a model.
Internally, Anthropic uses Claude extensively to build Claude. Their own engineering blog posts describe Claude-driven code review, eval generation, and red-teaming in their training pipeline [5]. Less publicly: their Slack and internal documentation are reportedly Claude-augmented end-to-end. You can't verify this from outside; it shows up in offhand mentions in podcast interviews with the engineering team.
- [1] Anthropic API pricing, accessed 2026-04-29
- [2] @anthropic-ai/sdk on npm, accessed 2026-04-29
- [3] Prompt caching docs, accessed 2026-04-29
- [4] Model Context Protocol, accessed 2026-04-29
- [5] Anthropic research, accessed 2026-04-29
Recent news
In the last six months, three things are worth tracking. First, the Claude 4.7 release in early 2026 added the 1M-token context window on Opus and a meaningful step in coding ability (their published SWE-bench and Aider numbers, with the usual caveats about benchmark contamination) [1].
Second, the Series F in early 2026 closed at a $61.5B valuation according to multiple outlets [2]. The notable thing isn't the headline number — it's that Anthropic kept Google and Amazon as strategic anchors while broadening the institutional base; both prior backers re-upped [2][3].
Third, MCP adoption crossed a real threshold: OpenAI's Responses API and Google's agent SDK both speak MCP as of Q1 2026 [4]. For a lab that didn't invent the model + agent framing, owning the connection layer between models and the rest of an engineer's stack is a quietly strong position.
Worth watching but not yet news: hiring in London and Zurich suggests Anthropic is building out an EU regulatory presence, and its public posture on the EU AI Act has stayed measured but engaged through 2025-2026.
- [1] Claude 4.7 announcement, accessed 2026-04-29
- [2] Anthropic funding (TechCrunch coverage), accessed 2026-04-29
- [3] Anthropic — The Information, accessed 2026-04-29
- [4] MCP adoption page, accessed 2026-04-29