---
title: "Volcano SDK - Build MCP-powered AI Agents"
---
# Volcano SDK 🌋
A TypeScript SDK for building AI agents that combine LLM reasoning with real-world actions via MCP tools.
**Design Philosophy:** Volcano provides a fluent, chainable API that scales from simple single-step agents to complex multi-provider workflows with parallel execution, retries, and streaming.
## Key Features
### Automatic Tool Selection
LLM automatically picks which MCP tools to call based on your prompt. No manual routing needed.
### Multi-Agent Crews
Define specialized agents and let the coordinator autonomously delegate tasks. Like automatic tool selection, but for agents.
### Conversational Results
Ask questions about what your agent did. Use `.summary()` or `.ask()` instead of parsing JSON.
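A minimal sketch, assuming `summary()` and `ask()` are exposed on the results returned by `run()` (see the [API Reference](api) for the exact shape; the `crm` handle is hypothetical):
```typescript
const crm = mcp("http://localhost:3211/mcp"); // hypothetical CRM MCP server

const results = await agent({ llm })
  .then({ prompt: "Fetch this quarter's sales numbers.", mcps: [crm] })
  .run();

// Query the run in natural language instead of parsing JSON
console.log(await results.summary());
console.log(await results.ask("Which tools were called, and why?"));
```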
### 100s of Models
OpenAI, Anthropic, Mistral, Bedrock, Vertex, Azure. Switch providers per-step or globally.
### Advanced Patterns
Parallel execution, branching, loops, sub-agent composition. Enterprise-grade workflow control.
### Streaming
Stream tokens in real-time as LLMs generate them. Perfect for chat UIs and SSE endpoints.
### TypeScript-First
Full type safety with IntelliSense. Catch errors before runtime.
### Observability
OpenTelemetry traces and metrics. Export to Jaeger, Prometheus, DataDog, or any OTLP backend.
### Production-Ready
Built-in retries, timeouts, error handling, and connection pooling. Battle-tested at scale.
## Installation
Install Volcano SDK - everything you need for OpenAI, Anthropic, Mistral, Llama, and Google Vertex providers:
```bash
npm install volcano-sdk
```
That's it! The core SDK includes MCP support and all common LLM providers.
### Optional Provider Dependencies
For provider-specific features, install additional packages:
```bash
# For AWS Bedrock
npm install @aws-sdk/client-bedrock-runtime @aws-sdk/credential-providers
# For Azure AI (if using credential chain)
npm install @azure/identity
```
### Optional Observability
For production observability with OpenTelemetry:
```bash
# For distributed tracing and metrics
npm install @opentelemetry/api
npm install @opentelemetry/sdk-node
npm install @opentelemetry/exporter-trace-otlp-http
```
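As a minimal bootstrap sketch using the standard OpenTelemetry Node SDK (nothing here is Volcano-specific; point the exporter at whatever OTLP endpoint your backend exposes):
```typescript
// tracing.ts -- start OpenTelemetry before importing application code
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    // Falls back to http://localhost:4318/v1/traces when omitted
    url: process.env.OTEL_EXPORTER_OTLP_TRACES_ENDPOINT,
  }),
});

sdk.start();
```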
See the [Observability guide](observability) for complete setup and backend configuration.
**Requirements:** Node.js 18.17 or later
## Quick Start
### Hello World
The simplest possible agent with a single LLM step:
```typescript
import { agent, llmOpenAI } from "volcano-sdk";

const llm = llmOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const results = await agent({ llm })
  .then({ prompt: "Say hello to Marco in one short sentence." })
  .run();

console.log(results[0].llmOutput);
// Output: "Hello Marco! Hope you're having a great day!"
```
### Two-Step Agent with MCP Tools
Automatically discover and use MCP tools across multiple steps:
```typescript
import { agent, llmOpenAI, mcp } from "volcano-sdk";

const llm = llmOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

const astro = mcp("http://localhost:3211/mcp");

const steps = await agent({ llm })
  .then({
    prompt: "Determine the astrological sign for 1993-07-11.",
    mcps: [astro], // Automatic tool selection
  })
  .then({
    prompt: "Write a one-line fortune for that sign.",
    // Context from the previous step is included automatically
  })
  .run();

console.log(steps[0].toolCalls); // Tools that were called
console.log(steps[1].llmOutput); // Fortune based on the sign
```
### Multi-Provider Workflow
Use different LLM providers for different steps in the same workflow:
```typescript
import { agent, llmOpenAI, llmAnthropic, llmMistral } from "volcano-sdk";

const openai = llmOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});
const claude = llmAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: "claude-3-5-haiku-20241022",
});
const mistral = llmMistral({
  apiKey: process.env.MISTRAL_API_KEY!,
  model: "mistral-small-latest",
});

const results = await agent()
  .then({ llm: openai, prompt: "Extract key data from this report..." })
  .then({ llm: claude, prompt: "Analyze the data for patterns" })
  .then({ llm: mistral, prompt: "Write a creative summary" })
  .run();
// Each step uses a different provider. Context flows automatically.
```
### Next Steps
- Explore [LLM Providers](providers) for configuration options
- Learn [Advanced Patterns](patterns) for complex workflows
- Discover [Features](features) like streaming and error handling
- Check the [API Reference](api) for complete documentation
## Use Cases
Volcano SDK excels in scenarios that require LLM reasoning combined with real-world actions:
### Customer Support Automation
Build agents that analyze support tickets, look up customer data via MCP tools, and generate personalized responses, using multiple LLMs for quality.
```typescript
await agent({ llm: gpt4 })
  .then({ prompt: "Classify ticket urgency", mcps: [ticketing] })
  .branch((h) => h[0].llmOutput?.includes("URGENT"), {
    true: (a) => a.then({ mcp: pagerduty, tool: "create_incident" }),
    false: (a) => a.then({ llm: claude, prompt: "Draft response" }),
  })
  .run();
```
### Content Moderation Pipeline
Analyze content in parallel (sentiment, topics, violations), then route based on results with conditional branching.
```typescript
await agent({ llm })
  .parallel({
    sentiment: { prompt: "Analyze sentiment" },
    topics: { prompt: "Extract topics" },
    violations: { prompt: "Check policy violations" },
  })
  .switch(
    (h) =>
      h[0].parallel?.violations.llmOutput?.includes("VIOLATION")
        ? "flag"
        : "approve",
    {
      flag: (a) => a.then({ mcp: moderation, tool: "flag_content" }),
      approve: (a) => a.then({ mcp: cms, tool: "publish" }),
    }
  )
  .run();
```
### Data Processing Workflows
Process large datasets with loops: use a local Llama model for preprocessing, then GPT-4 for the final analysis.
```typescript
await agent()
  .forEach(documents, (doc, a) =>
    a.then({ llm: llama, prompt: `Summarize: ${doc}` })
  )
  .then({ llm: gpt4, prompt: "Aggregate all summaries into final report" })
  .run();
```
### Multi-Step Research Agent
Use MCP tools to gather data, multiple LLMs for analysis, and callbacks for real-time updates.
```typescript
await agent({ llm: gpt4 })
  .then({ prompt: "Research topic X", mcps: [web, database] })
  .then({ llm: claude, prompt: "Analyze findings" })
  .then({ llm: mistral, prompt: "Write comprehensive report" })
  .run({
    onStep: (step) => updateUI(step), // Real-time progress
    onToken: (token) => streamToClient(token), // Token streaming
  });
```
## Volcano vs Others
A factual comparison with other AI frameworks:
| Feature | Volcano SDK | AI SDK | LangChain |
| ---------------------------- | -------------------------------------------------------- | --------------------- | --------------------- |
| **API Style** | Fluent, chainable `.then()` | Function-based | Class-based chains |
| **MCP Integration** | Native, first-class | Via custom tools | Via custom tools |
| **Multi-Provider Workflows** | Built-in, per-step LLM switching | Manual configuration | Supported via chains |
| **Advanced Patterns** | Parallel, branching, loops, sub-agents | Manual implementation | Via agents framework |
| **Retry Strategies** | 3 modes: immediate, delayed, exponential | Basic retry | Via callbacks |
| **Streaming** | `run()` with `onStep`/`onToken` callbacks | `streamText()` | Via callbacks |
| **Type Safety** | Full TypeScript with inference | Full TypeScript | TypeScript available |
| **Error Handling** | Typed errors with metadata (stepId, provider, retryable) | Standard errors | Custom error handlers |
| **Context Management** | Automatic with tunable limits | Manual memory | Built-in memory |
| **Step Hooks** | Pre/post hooks per step | Not available | Via callbacks |
| **Connection Pooling** | Automatic for MCP | N/A | Manual |
| **Bundle Size** | Minimal (peer deps only) | Minimal | Large (~500KB+) |
### When to Choose Volcano
- **MCP-first applications:** Native support for Model Context Protocol
- **Multi-provider workflows:** Need to mix different LLMs in one workflow
- **Complex control flow:** Parallel execution, branching, loops required
- **Production reliability:** Need retries, timeouts, and error metadata
- **TypeScript projects:** Want excellent type inference and IntelliSense
## Core Concepts
### Agents
An agent is a workflow builder that chains together steps. Each step can call an LLM, execute MCP tools, or both. Create an agent with `agent()`:
```typescript
const myAgent = agent({
  llm, // Default LLM provider (OpenAI, Claude, etc.)
  instructions: "...", // Global system instructions for ALL steps
  timeout: 60, // Default timeout in seconds (prevents hanging)
  retry: { retries: 3 }, // Retry configuration (automatic failure recovery)
  contextMaxChars: 20480, // Context size limit (prevents token overflow)
  contextMaxToolResults: 8, // How many tool results to keep in context
});
```
**Note:** All options are optional. The minimal configuration is `agent({ llm })`.
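The comparison table above lists three retry modes (immediate, delayed, exponential). Only the `retries` count is shown in this guide, so the following is a sketch with a hypothetical `strategy` field; check the [API Reference](api) for the real shape:
```typescript
// `strategy` is an assumed field name, shown for illustration only
const resilient = agent({
  llm,
  retry: {
    retries: 3,
    strategy: "exponential", // hypothetical: "immediate" | "delayed" | "exponential"
  },
});
```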
### Steps
Each step in a workflow can perform one or more of the following:
- **Generate text** using any supported LLM provider
- **Call MCP tools** either explicitly or via automatic selection
- **Override configuration** such as timeout, LLM provider, or retry strategy
- **Access context** from previous steps automatically
- **Execute hooks** before or after execution
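For example, a single step can combine automatic tool selection with a provider override and a per-step `timeout` (the field name mirrors the agent-level option; the MCP endpoint is a placeholder):
```typescript
import { agent, llmOpenAI, llmAnthropic, mcp } from "volcano-sdk";

const gpt = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY!, model: "gpt-4o-mini" });
const claude = llmAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY!, model: "claude-3-5-haiku-20241022" });
const billing = mcp("http://localhost:3211/mcp"); // placeholder MCP server

await agent({ llm: gpt })
  // Step 1: generate text and let the LLM pick MCP tools automatically
  .then({ prompt: "Look up the customer's latest invoice.", mcps: [billing] })
  // Step 2: override the provider and timeout for this step only
  .then({ llm: claude, prompt: "Summarize the invoice in two sentences.", timeout: 30 })
  .run();
```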
### Context & History
Each step automatically receives context from previous steps. This includes:
- **Previous LLM responses** from earlier steps
- **Tool results** from MCP tool calls
- **Automatic compaction** to stay within token limits
Use `resetHistory()` to clear context:
```typescript
await agent({ llm })
  .then({ prompt: "Analyze this document..." })
  .then({ prompt: "What were the key points?" }) // Has context from step 1
  .resetHistory() // 🧹 Clear all context
  .then({ prompt: "Now analyze this OTHER document..." }) // Fresh start
  .run();
```
**Note:** Use `resetHistory()` in long workflows to prevent context from growing too large. Each agent instance maintains its own history.
### Multi-Provider Workflows
Different LLM providers can be used for different steps in the same workflow:
```typescript
import { agent, llmOpenAI, llmAnthropic, llmMistral } from "volcano-sdk";

const openai = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const claude = llmAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });
const mistral = llmMistral({ apiKey: process.env.MISTRAL_API_KEY! });

await agent()
  .then({ llm: openai, prompt: "Get astrological sign for 1993-07-11" })
  .then({ llm: claude, prompt: "Analyze personality traits" })
  .then({ llm: mistral, prompt: "Write a creative horoscope" })
  .run();
// Each step uses a different LLM. Context flows between steps.
```
**Example use cases:**
- GPT-4 for data extraction, Claude for analysis, Mistral for summaries
- Local Llama for preprocessing, GPT-4 for final processing
- Test different models to optimize cost and quality
## Questions or Feature Requests?
We'd love to hear from you!
### GitHub Repository
Visit our GitHub repository to:
- [Report bugs or issues](https://github.com/Kong/volcano-sdk/issues)
- [Request features or ask questions](https://github.com/Kong/volcano-sdk/discussions)
- [Star the project](https://github.com/Kong/volcano-sdk) if you find it useful
- [Contribute code or documentation](https://github.com/Kong/volcano-sdk/blob/main/CONTRIBUTING.md)
### ⭐ View on GitHub
- [View on GitHub](https://github.com/Kong/volcano-sdk) - Source code, issues, and discussions
- [NPM Package](https://www.npmjs.com/package/volcano-sdk) - Install via npm and view package details