Stop Reaching for Python: Strands Agents TypeScript SDK Just Hit 1.0
A lot of production codebases are TypeScript. A lot of agent frameworks are Python. If yours is both, you either rewrite your stack or build a bridge between two languages. Strands Agents just shipped 1.0 of the TypeScript SDK, so now you don't have to. It's the full framework, native TypeScript types and all. And it does things Python can't, like running agents in the browser.
The Python SDK has been in production since May 2025. This is the same model-driven approach, now with full TypeScript types and Zod-validated tools.
Full disclosure, I'm a Developer Advocate for AWS. Strands is an open source project from AWS. I've been building with the Python SDK for months, so I was curious how the TypeScript version compares. So far, it's been great.
In this post we'll cover:
- Getting a basic agent running
- Defining type-safe tools with Zod
- Connecting MCP servers
- Streaming responses
- Running agents in the browser
- Multi-agent patterns: agent-as-tool, Graph, and Swarm
Get Started
Install the SDK:
npm install @strands-agents/sdk
Here's how it works.
Two Lines to a Working Agent
The API is small on purpose. Create an agent, then invoke.
import { Agent } from '@strands-agents/sdk'
const agent = new Agent({ systemPrompt: 'You are a helpful assistant.' })
const result = await agent.invoke('What makes TypeScript great for building agents?')
console.log(result.lastMessage)
Bedrock is the default model provider. If you want OpenAI, Anthropic, Google, or anything that works with the Vercel AI SDK, you swap one import:
import { Agent } from '@strands-agents/sdk'
import { OpenAIModel } from '@strands-agents/sdk/models/openai'
const model = new OpenAIModel({ api: 'chat', modelId: 'gpt-5.4' })
const agent = new Agent({ model, systemPrompt: 'You are a helpful assistant.' })
You just swap the model import. The agent, tools, and invocation pattern don't change at all.
Zod Tools
TypeScript is a good fit here because you're defining the contract between your agent and your tools at the type level. You define a tool with a Zod schema and get runtime validation plus full type inference at compile time. The model can't pass garbage to your tool without Zod catching it, and your editor knows the exact shape of every input before you even run anything.
Here's a GitHub lookup tool in about 20 lines:
import { Agent, tool } from '@strands-agents/sdk'
import { z } from 'zod'
const githubRepo = tool({
name: 'get_github_repo',
description: 'Get info about a GitHub repository.',
inputSchema: z.object({
owner: z.string().describe('Repository owner'),
repo: z.string().describe('Repository name'),
}),
callback: async (input) => {
const res = await fetch(`https://api.github.com/repos/${input.owner}/${input.repo}`)
if (!res.ok) return `Could not fetch ${input.owner}/${input.repo} (HTTP ${res.status})`
const data = await res.json()
return `${data.full_name} — ⭐ ${data.stargazers_count} stars`
},
})
const agent = new Agent({
tools: [githubRepo],
systemPrompt: 'You are a developer assistant.',
})
The input parameter in that callback is fully typed. Your editor knows input.owner is a string. No any, no type casting.
The SDK also ships with built-in tools for bash, file editing, HTTP requests, and notebooks. Your agent can read files, hit APIs, and run shell commands without writing any tool code:
import { Agent } from '@strands-agents/sdk'
import { bash } from '@strands-agents/sdk/vended-tools/bash'
import { fileEditor } from '@strands-agents/sdk/vended-tools/file-editor'
import { httpRequest } from '@strands-agents/sdk/vended-tools/http-request'
import { notebook } from '@strands-agents/sdk/vended-tools/notebook'
const agent = new Agent({
tools: [bash, fileEditor, httpRequest, notebook],
systemPrompt: 'You are a helpful coding assistant.',
})
MCP
If you're already using MCP servers, they plug right in. I tested this with the filesystem MCP server and it worked well.
import { Agent, McpClient } from '@strands-agents/sdk'
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js'
const mcp = new McpClient({
transport: new StdioClientTransport({
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-filesystem', process.cwd()],
}),
})
const agent = new Agent({ tools: [mcp] })
Any MCP-compatible server works the same way. One McpClient import and you're done. I've used this with the filesystem server, a Postgres server, and a few custom ones, and the setup is identical every time.
Streaming
The stream() method returns an async iterator of events, so you can forward text to the user as it's generated.
const agent = new Agent({ systemPrompt: 'You are a creative storyteller.', printer: false })
for await (const event of agent.stream('Tell me a short story about a brave toaster.')) {
if (
event.type === 'modelStreamUpdateEvent' &&
event.event.type === 'modelContentBlockDeltaEvent' &&
event.event.delta.type === 'textDelta'
) {
process.stdout.write(event.event.delta.text)
}
}
The event type checking is verbose. You're narrowing through three nested types to get to the actual text delta. That's just how TypeScript streaming works when the event union is this wide. Once you have the pattern, you copy it.
It Runs in the Browser
You can run agents in the browser. Most people don't think about this because the Python SDK can't do it natively.
There's a demo in the strands-agents/sdk-typescript repo where you chat with an agent and it builds a live HTML canvas in real time. The agent runs entirely client-side.
That opens up things that weren't possible before: local-first tools where the agent runs on the user's machine, interactive assistants embedded in your app without a server round-trip, or even pairing the agent with a locally hosted model so nothing leaves the device.
I cloned the demo and had it running in a few minutes.
git clone https://github.com/strands-agents/sdk-typescript.git
cd sdk-typescript/strands-ts/examples/browser-agent
npm install && npm run dev
Multi-Agent Patterns
The SDK ships with three ways to combine agents: agent-as-tool, Graph, and Swarm. Each one fits a different problem.
Agent-as-tool
This is the one you'll reach for most. Pass a sub-agent directly into another agent's tools array and it gets wrapped automatically. The outer agent decides when to call it, just like any other tool.
Think of a writer agent that has a researcher agent as a tool. The writer decides when it needs facts, calls the researcher, gets the results back, and uses them to write. The code below is exactly that.
import { Agent } from '@strands-agents/sdk'
const researcher = new Agent({
id: 'researcher',
name: 'Researcher',
description: 'Finds factual information and returns concise bullet-point findings.',
systemPrompt: 'You are a fact-finder. Return 3-5 concise bullet points. Be brief.',
printer: false,
})
const writer = new Agent({
systemPrompt: 'You are a prose writer. Use the Researcher tool to gather facts, then write one short paragraph.',
tools: [researcher], // sub-agent auto-wrapped via asTool()
printer: false,
})
const result = await writer.invoke('Write a paragraph about the deepest point in the ocean.')
Graph
Graph is for deterministic pipelines. You define agents as nodes and wire them together with edges. A node runs once all its upstream dependencies complete, so execution order is guaranteed. Nodes with no dependency between them can run in parallel.
This pattern works well when you know the steps upfront and need them to always run in the same order. A research pipeline is a good example. Plan the questions, research each one, then synthesize the results. You wouldn't want the synthesizer running before the researcher finishes, and you don't want the model deciding to skip steps.
import { Agent, Graph } from '@strands-agents/sdk'
const planner = new Agent({
id: 'planner',
printer: false,
systemPrompt: 'Break the request into 2 short research questions.',
})
const researcher = new Agent({
id: 'researcher',
printer: false,
systemPrompt: 'Answer each research question in 1-2 sentences. Be concise.',
})
const synthesiser = new Agent({
id: 'synthesiser',
printer: false,
systemPrompt: 'Write a 2-3 sentence summary of the research.',
})
const graph = new Graph({
nodes: [planner, researcher, synthesiser],
edges: [
['planner', 'researcher'],
['researcher', 'synthesiser'],
],
})
// MultiAgentResult.content holds the combined output from terminus nodes
const result = await graph.invoke('What are the main impacts of lithium mining?')
console.log(result.content)
The edges array defines the DAG (directed acyclic graph, essentially a dependency map). planner runs first, feeds into researcher, which feeds into synthesiser. The final output comes from the terminus node via result.content.
Swarm
Swarm is for when the path isn't known upfront. You define a set of agents with descriptions, pick a starting agent, and the model decides at runtime which agent handles the next step. Each agent either produces a final answer or hands off to another agent.
I'd reach for this when building something like a customer support bot where you don't know which specialist is needed until you see what the user actually wrote. The routing logic lives in the model, not in your code. You're not writing if request.includes('billing') anywhere.
import { Agent, Swarm } from '@strands-agents/sdk'
const triage = new Agent({
id: 'triage',
printer: false,
description: 'Classifies the user request and routes it to the right specialist.',
systemPrompt: 'Route to "billing" or "technical". Do not answer yourself.',
})
const billing = new Agent({
id: 'billing',
printer: false,
description: 'Handles billing and payment questions.',
systemPrompt: 'Resolve billing queries. Do not hand off further.',
})
const technical = new Agent({
id: 'technical',
printer: false,
description: 'Handles technical and product questions.',
systemPrompt: 'Resolve technical queries. Do not hand off further.',
})
const swarm = new Swarm({
nodes: [triage, billing, technical],
start: 'triage',
maxSteps: 6,
})
const result = await swarm.invoke('My invoice shows the wrong amount.')
console.log(result.content)
Heads up if you're on Bedrock: Swarm uses structured output to drive handoffs. It forces a tool call to get the routing decision, and the current Claude Sonnet model on Bedrock throws a "does not support assistant message prefill" error when that happens. Swarm works fine with the Anthropic direct API or other providers that support forced tool choice. If you're on Bedrock and need dynamic routing, agent-as-tool is the safer option for now.
I haven't tested the plugin system, session persistence, or OpenTelemetry tracing yet. The plugin system looks simple to use and I'll probably dig into it next.
The SDK is open source. If you build something with it, I want to see it. The browser runtime especially. I'm curious where people take it.
What We Covered
The Strands Agents TypeScript SDK gives you a full agent framework without leaving your existing stack. Here's what we went through:
- A basic agent runs in two lines. Bedrock is the default, but swapping to OpenAI or Anthropic is one import.
- Zod tools give you runtime validation and compile-time type inference. The model can't pass bad input without Zod catching it.
- MCP servers connect with a single McpClient import. Any MCP-compatible server works.
- Streaming uses async iterators. Verbose event types, but that's TypeScript being TypeScript.
- The browser runtime is real and working. Clone the demo and see it for yourself.
- For multi-agent work: agent-as-tool for orchestrator/worker setups, Graph for fixed pipelines, Swarm for dynamic routing (just not on Bedrock yet).
If you want to go deeper, the TypeScript quickstart, the GitHub repo, and the API docs are the best next stops.
Watch the Full Video
If you prefer video format, here's a quick walkthrough of the TypeScript SDK and its key features: