Build AI agents that seamlessly combine LLM reasoning with real-world actions via MCP tools, in just a few lines of TypeScript.
https://github.com/Kong/volcano-sdk.git
The TypeScript SDK for Multi-Provider AI Agents
Build agents that chain LLM reasoning with MCP tools. Mix OpenAI, Claude, Mistral in one workflow. Parallel execution, branching, loops. Native retries, streaming, and typed errors.
Read the full documentation at volcano.dev →
- **Automatic Tool Selection**: The LLM automatically picks which MCP tools to call based on your prompt. No manual routing needed.
- **Multi-Agent Crews**: Define specialized agents and let the coordinator autonomously delegate tasks. Like automatic tool selection, but for agents.
- **Conversational Results**: Ask questions about what your agent did. Use `.summary()` or `.ask()` instead of parsing JSON.
- **100s of Models**: OpenAI, Anthropic, Mistral, Bedrock, Vertex, Azure. Switch providers per step or globally.
- **Advanced Patterns**: Parallel execution, branching, loops, sub-agent composition. Enterprise-grade workflow control.
- **Streaming**: Stream tokens in real time as LLMs generate them. Perfect for chat UIs and SSE endpoints.
- **TypeScript-First**: Full type safety with IntelliSense. Catch errors before runtime.
- **Observability**: OpenTelemetry traces and metrics. Export to Jaeger, Prometheus, DataDog, or any OTLP backend.
- **Production-Ready**: Built-in retries, timeouts, error handling, and connection pooling. Battle-tested at scale.
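The built-in retries follow the familiar exponential-backoff pattern. As a rough illustration of that idea only (a standalone sketch, not the SDK's actual implementation or API):

```typescript
// Sketch of retry-with-exponential-backoff, the pattern behind
// "built-in retries". Illustrative only; not Volcano SDK internals.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 10,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait longer after each failure: 10ms, 20ms, 40ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Example: a flaky call that fails twice, then succeeds on the third try.
let calls = 0;
const result = await withRetries(async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
});
```

In the SDK itself this is handled for you; the sketch only shows why a transient failure doesn't surface as an error until all attempts are exhausted.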
```bash
npm install volcano-sdk
```
That's it! Includes MCP support and all common LLM providers (OpenAI, Anthropic, Mistral, Llama, Vertex).
```typescript
import { agent, llmOpenAI, mcp } from "volcano-sdk";

const llm = llmOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini"
});

const weather = mcp("http://localhost:8001/mcp");
const tasks = mcp("http://localhost:8002/mcp");

// The agent automatically picks the right tools
const results = await agent({ llm })
  .then({
    prompt: "What's the weather in Seattle? If it will rain, create a task to bring an umbrella",
    mcps: [weather, tasks] // The LLM chooses which tools to call
  })
  .run();

// Ask questions about what happened
const summary = await results.summary(llm);
console.log(summary);
```
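Conceptually, automatic tool selection means the LLM returns a tool name plus arguments, and the agent dispatches to the matching MCP tool. A simplified stand-in for that dispatch step (the real flow uses MCP tool schemas and the provider's function-calling API, handled by the SDK):

```typescript
// Toy model of tool dispatch: the "LLM choice" is hard-coded here,
// but in the SDK it comes from the model's function-calling output.
type Tool = { name: string; run: (args: Record<string, string>) => string };

const toolRegistry: Tool[] = [
  { name: "get_weather", run: (a) => `Rainy in ${a.city}` },
  { name: "create_task", run: (a) => `Task created: ${a.title}` },
];

// Pretend the LLM chose a tool and produced arguments for it.
const llmChoice = { tool: "get_weather", args: { city: "Seattle" } };

const selected = toolRegistry.find((t) => t.name === llmChoice.tool);
const toolResult = selected ? selected.run(llmChoice.args) : "no tool matched";
```

This is why no manual routing is needed: the prompt alone determines which of the registered tools gets invoked.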
```typescript
import { agent, llmOpenAI } from "volcano-sdk";

const llm = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// Define specialized agents
const researcher = agent({ llm, name: 'researcher', description: 'Finds facts and data' })
  .then({ prompt: "Research the topic." })
  .then({ prompt: "Summarize the research." });

const writer = agent({ llm, name: 'writer', description: 'Creates content' })
  .then({ prompt: "Write content." });

// The coordinator autonomously delegates to specialists
const results = await agent({ llm })
  .then({
    prompt: "Write a blog post about quantum computing",
    agents: [researcher, writer] // The coordinator decides when it's done
  })
  .run();

// Ask what happened
const post = await results.ask(llm, "Show me the final blog post");
console.log(post);
```
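Delegation works because each agent carries a `name` and `description` the coordinator can reason over. A minimal sketch of the matching idea, with a keyword lookup standing in for the LLM's judgment (illustrative only, not the SDK's internals):

```typescript
// Each specialist advertises what it is good at; the coordinator
// routes sub-tasks by matching the need against those descriptions.
// In the SDK the LLM itself makes this choice.
type Specialist = {
  name: string;
  description: string;
  handle: (task: string) => string;
};

const specialists: Specialist[] = [
  { name: "researcher", description: "finds facts and data", handle: (t) => `facts about ${t}` },
  { name: "writer", description: "creates content", handle: (t) => `draft post on ${t}` },
];

function delegate(task: string, need: string): string {
  const pick = specialists.find((s) => s.description.includes(need)) ?? specialists[0];
  return pick.handle(task);
}

const research = delegate("quantum computing", "facts");
const draft = delegate("quantum computing", "content");
```

The coordinator in the real SDK loops like this until it judges the overall task complete, then the result object lets you query the outcome with `.ask()`.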
We welcome contributions! Please see our Contributing Guide for details.
Apache 2.0 - see LICENSE file for details.