Chain LLM calls with tools and branching logic

Build complex AI pipelines that chain multiple LLM calls, tools, and conditional branches. Built-in error recovery, parallel execution, and full observability.

import { Stack0 } from '@stack0/sdk'

const stack0 = new Stack0({ apiKey: process.env.STACK0_API_KEY })

// Build a multi-step AI pipeline
const result = await stack0.workflows.run({
  steps: [
    {
      id: 'extract',
      type: 'llm',
      model: 'gpt-4o',
      prompt: 'Extract key entities from this text: {{input.text}}',
      outputSchema: {
        people: 'string[]',
        companies: 'string[]',
        topics: 'string[]',
      },
    },
    {
      id: 'analyze',
      type: 'llm',
      model: 'gpt-4o',
      prompt: 'Analyze sentiment for each entity: {{steps.extract.output}}',
      dependsOn: ['extract'],
    },
    {
      id: 'summarize',
      type: 'llm',
      model: 'gpt-4o-mini',
      prompt: 'Write a brief summary combining: {{steps.extract.output}} and {{steps.analyze.output}}',
      dependsOn: ['extract', 'analyze'],
    },
  ],
  input: {
    text: articleContent,
  },
  onError: 'retry', // retry, skip, or abort
  maxRetries: 3,
})

console.log(result.steps.summarize.output)
console.log(result.usage) // { totalTokens, totalCost, durationMs }

What's included

Step Chaining

Define dependencies between steps. The engine resolves execution order and passes outputs downstream.

Branching Logic

Conditionally execute steps based on outputs from previous steps. Build decision trees within your pipeline.
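
For example, a minimal sketch of a conditional branch. The `when` condition field is an assumption for illustration; only the fields shown in the example above are confirmed API.

const triage = await stack0.workflows.run({
  steps: [
    {
      id: 'classify',
      type: 'llm',
      model: 'gpt-4o-mini',
      prompt: 'Classify urgency (low or high) of: {{input.ticket}}',
      outputSchema: { urgency: 'string' },
    },
    {
      id: 'escalate',
      type: 'llm',
      model: 'gpt-4o',
      prompt: 'Draft an escalation note for: {{steps.classify.output}}',
      dependsOn: ['classify'],
      when: '{{steps.classify.output.urgency}} == "high"', // hypothetical field: step is skipped when false
    },
  ],
  input: { ticket: 'My server is down and customers are affected!' },
})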

Parallel Execution

Independent steps run concurrently. Fan-out patterns reduce total pipeline latency significantly.
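
For example, two steps with no shared dependencies fan out concurrently, and a third converges on both. This sketch uses only the run API shown above:

const review = await stack0.workflows.run({
  steps: [
    // 'pros' and 'cons' declare no dependencies, so the engine runs them in parallel
    {
      id: 'pros',
      type: 'llm',
      model: 'gpt-4o-mini',
      prompt: 'List strengths of: {{input.product}}',
    },
    {
      id: 'cons',
      type: 'llm',
      model: 'gpt-4o-mini',
      prompt: 'List weaknesses of: {{input.product}}',
    },
    // 'verdict' waits for both branches, then merges their outputs
    {
      id: 'verdict',
      type: 'llm',
      model: 'gpt-4o',
      prompt: 'Weigh {{steps.pros.output}} against {{steps.cons.output}}',
      dependsOn: ['pros', 'cons'],
    },
  ],
  input: { product: 'An open-source task queue for Node.js' },
})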

Error Recovery

Per-step retry, skip, or abort strategies. Exponential backoff for transient failures.
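
A step-definition sketch to drop into the steps array above, assuming steps accept the same onError and maxRetries options as the top-level run config (an assumption; only top-level error options appear in the example):

{
  id: 'enrich',
  type: 'llm',
  model: 'gpt-4o-mini',
  prompt: 'Enrich each entity: {{steps.extract.output}}',
  dependsOn: ['extract'],
  onError: 'skip', // assumed per-step override: continue with a null output on failure
  maxRetries: 5,   // retries back off exponentially
},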

Observability

Full traces for every pipeline run. Inputs, outputs, latency, and token usage per step.
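
A sketch of inspecting a finished run. The top-level usage shape is documented above; the per-step usage fields are an assumption modeled on it:

for (const [id, step] of Object.entries(result.steps)) {
  console.log(id, step.output)
  // assumed per-step fields, mirroring the top-level usage object
  console.log(id, step.usage?.totalTokens, step.usage?.durationMs)
}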

Cost Tracking

Real-time token and cost tracking across all steps. Set budget limits per pipeline or per organization.
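
A hypothetical sketch of a per-pipeline budget. Budget limits are a documented feature, but the budget field and its shape here are assumptions:

const capped = await stack0.workflows.run({
  steps: [
    {
      id: 'summarize',
      type: 'llm',
      model: 'gpt-4o-mini',
      prompt: 'Summarize: {{input.text}}',
    },
  ],
  input: { text: articleContent },
  budget: { maxCostUsd: 0.5 }, // hypothetical field: stop the run once cost exceeds it
})

console.log(capped.usage.totalCost)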


Built for production

Declarative step chaining

Define steps with dependencies. The engine resolves the execution order and runs independent steps in parallel automatically.

Branching and conditionals

Route pipeline execution based on step outputs. Skip branches, fan out, or converge results with simple dependency rules.

Automatic error recovery

Configure retry, skip, or abort per step. Retries use exponential backoff so transient LLM failures don't kill your pipeline.

Full observability

Every step logs its inputs, outputs, latency, and token usage. Trace entire pipeline runs in your dashboard.

TypeScript SDK

Full type safety for step definitions and outputs. Your IDE catches schema mismatches before runtime.
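
A sketch of what that looks like, assuming the run method takes a generic parameter mapping step ids to output types (the exact generics are an assumption):

interface ExtractOutput {
  people: string[]
  companies: string[]
  topics: string[]
}

const typed = await stack0.workflows.run<{ extract: ExtractOutput }>({
  steps: [
    {
      id: 'extract',
      type: 'llm',
      model: 'gpt-4o',
      prompt: 'Extract key entities from: {{input.text}}',
      outputSchema: { people: 'string[]', companies: 'string[]', topics: 'string[]' },
    },
  ],
  input: { text: articleContent },
})

typed.steps.extract.output.people // string[], checked at compile time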

Simple pricing

$0.001 per step execution. LLM token costs passed through at provider rates. No markup, no minimums.


Common implementations

Content Generation

Research a topic, generate an outline, write sections in parallel, then combine and edit the final draft.

Customer Support Triage

Classify incoming tickets, extract intent and urgency, route to the correct team, and draft a response.

Code Review Automation

Parse diffs, analyze each file for issues, aggregate findings, and generate a review summary.

Market Research

Scrape competitor pages, extract pricing and features, compare against your product, and generate a report.


Frequently asked questions

How many steps can a pipeline have?

Pipelines support up to 50 steps per execution. Steps run sequentially or in parallel based on their dependency graph. For longer workflows, chain multiple pipeline runs together, as in the sketch below.
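
Chaining runs uses only the API shown above; the step arrays here (firstFiftySteps, remainingSteps) are hypothetical placeholders:

// The first run's final output becomes the second run's input.
const first = await stack0.workflows.run({
  steps: firstFiftySteps, // hypothetical: your first 50 step definitions
  input: { text: articleContent },
})

const second = await stack0.workflows.run({
  steps: remainingSteps, // hypothetical: the rest of the workflow
  input: { text: first.steps.summarize.output }, // assumes the first run ends with a 'summarize' step
})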

Which models are supported?

We support OpenAI (GPT-4o, GPT-4o-mini), Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku), and open-source models via our inference endpoints. You can mix models across steps to optimize for cost and quality.

What happens when a step fails?

Each step can be configured to retry, skip, or abort on failure. Retries use exponential backoff. The skip strategy continues the pipeline with a null output for the failed step; abort stops the entire pipeline and returns partial results.

Can pipelines call external tools?

Yes. Define tool steps that call external APIs, run code, or query databases. Tools receive the output of previous steps as input and return structured data for downstream steps to consume. A sketch follows below.
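
A sketch of a tool step. The type: 'tool' and run handler shape are assumptions, since only llm steps appear in the documented example, and api.example.com is a placeholder:

const enriched = await stack0.workflows.run({
  steps: [
    {
      id: 'extract',
      type: 'llm',
      model: 'gpt-4o-mini',
      prompt: 'Extract company names from: {{input.text}}',
      outputSchema: { companies: 'string[]' },
    },
    {
      id: 'lookup',
      type: 'tool', // assumed step type
      dependsOn: ['extract'],
      run: async ({ steps }) => {
        // call an external API with the previous step's output
        const q = encodeURIComponent(steps.extract.output.companies.join(','))
        const res = await fetch(`https://api.example.com/companies?q=${q}`)
        return res.json() // structured data for downstream steps
      },
    },
  ],
  input: { text: articleContent },
})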

How does pricing work?

You pay $0.001 per step execution plus the underlying LLM token costs. Parallel steps each count as one step, and retried steps count as additional executions. LLM costs are passed through at provider rates with no markup. For example, a five-step pipeline in which one step retries twice bills seven executions ($0.007) plus tokens.


Ready to build?

Get started in minutes.

Get Started