# Advanced patterns
You’ve shipped your first Function. Here is what holds up at scale.

## Persistent state
Lambda is stateless; cold starts wipe in-process variables. Pick one:

### Convex tables (recommended for Zavu users)
Tightly integrated. The auto-provisioned `ZAVU_API_KEY` doesn’t grant arbitrary table access; for that, create a project API key with broader scopes and inject it as a secret. You can also use Convex deployments outside our managed scope.
### Postgres (managed: PlanetScale, Neon, Supabase)
The module-scope `let sql = null` pattern lets the same connection survive across warm invocations, saving the connection handshake (~50 ms) on subsequent calls.
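A minimal sketch of that pattern, with the driver abstracted behind a `connect()` factory (the names and the fake client are illustrative; substitute your real Postgres driver):

```typescript
// Module scope survives across warm Lambda invocations, so cache the
// client here instead of reconnecting on every call.
let sql: { query: (q: string) => Promise<unknown> } | null = null;

// Stand-in for your real driver's connect call (e.g. postgres.js or pg);
// in real code this is where the ~50 ms handshake happens.
async function connect() {
  return { query: async (q: string) => `ran: ${q}` };
}

export async function getSql() {
  if (!sql) sql = await connect(); // cold start: pay the handshake once
  return sql;                      // warm invocations reuse the cached client
}
```

Every handler goes through `getSql()` instead of holding its own connection, so a warm container never reconnects.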
### Redis (Upstash is serverless-friendly)
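A common serverless use is a fixed-window rate limiter. Here is a sketch with the two Redis commands it needs (`INCR` + `EXPIRE`) behind a minimal interface, so the logic is testable without a server; the in-memory client is a stand-in, and real code would use a client such as `@upstash/redis`:

```typescript
interface Counter {
  incr(key: string): Promise<number>;                   // Redis INCR
  expire(key: string, seconds: number): Promise<void>;  // Redis EXPIRE
}

// Allow `limit` hits per `windowSeconds` per key (e.g. per phone number).
async function allow(
  redis: Counter,
  key: string,
  limit: number,
  windowSeconds: number,
): Promise<boolean> {
  const hits = await redis.incr(key);
  if (hits === 1) await redis.expire(key, windowSeconds); // first hit opens the window
  return hits <= limit;
}

// In-memory stand-in for tests; TTL expiry is omitted.
function memoryCounter(): Counter {
  const counts = new Map<string, number>();
  return {
    async incr(key) {
      const n = (counts.get(key) ?? 0) + 1;
      counts.set(key, n);
      return n;
    },
    async expire() {},
  };
}
```

The same `incr`-then-`expire` shape also works for deduplication: `incr` a message ID and only process when the count is 1.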
Use it for rate limiting, deduplication, and short-lived state.

## Composing multiple functions
One project can have many functions; use this for separation of concerns.

| Function | Job |
|---|---|
| `support-agent` | Reactive: handles WhatsApp inbound, has agent + tools. |
| `daily-digest` | Scheduled: sends the daily metric digest, triggered by cron. |
| `cart-recovery` | Triggered: fires on the custom `cart.abandoned` event. |
| `dlq-watcher` | Triggered: fires on `message.failed`, retries with a fallback. |
## Observability

### Structured logs

Use the framework’s `ctx.log` so the dashboard’s logs panel can highlight your output among Lambda’s ceremony lines.
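Structured logging boils down to one JSON object per line, which is what makes entries machine-filterable. A self-contained sketch of that shape (the `logLine` helper is illustrative; inside a function you’d call `ctx.log` instead):

```typescript
// Emit one JSON object per line ("JSON lines"): trivially parseable,
// so a dashboard can filter your entries out of Lambda's own log noise.
function logLine(
  level: "info" | "warn" | "error",
  msg: string,
  fields: Record<string, unknown> = {},
): string {
  const entry = { level, msg, ts: new Date().toISOString(), ...fields };
  const line = JSON.stringify(entry);
  console.log(line);
  return line;
}
```

Keep field names stable (`orderId`, not sometimes `order_id`) so you can filter on them later.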
### Metrics → external sinks

Send important business events to a metrics service. `fetch` runs in parallel, so don’t await it if you don’t care about delivery guarantees; use fire-and-forget.
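The fire-and-forget shape, as a minimal sketch (the sink URL, event names, and the `track` helper are placeholders, not a Zavu API):

```typescript
// Start the request and swallow failures; never await, never throw.
// Delivery is best-effort: a metrics outage must not break the handler.
function track(event: string, payload: Record<string, unknown>): void {
  fetch("https://metrics.example.com/ingest", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ event, at: Date.now(), ...payload }),
  }).catch(() => {
    // Intentionally ignored: fire-and-forget.
  });
}
```

One Lambda caveat: if the invocation freezes before the request finishes, the event may be dropped; that is exactly the delivery guarantee you traded away.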
## Error budgets and retries
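For tool handlers that call unreliable third-party systems, a generic retry-with-backoff wrapper is the usual fix. A sketch (all names are illustrative; tune attempts and base delay to your error budget and the 30-second invocation timeout):

```typescript
// Retry an async operation with capped exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 200,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        const delay = baseMs * 2 ** i; // 200 ms, 400 ms, 800 ms, ...
        await new Promise((r) => setTimeout(r, delay));
      }
    }
  }
  throw lastErr; // surface the failure once retries are exhausted
}
```

Usage inside a handler would look like `await withRetry(() => fetchFxRate("EUR"))`, where `fetchFxRate` stands in for your own flaky call.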
The agent retries failed tool calls up to 2 times (the LLM’s choice: it sees the error message and may try again). Beyond that, the LLM gives up and tells the customer. For tools that touch unreliable systems (third-party APIs), add your own retry with backoff.

## Testing
### Local invoke

Local invocation runs your handler through the `defineFunction` fallback path. It doesn’t simulate LLM tool calls; for that, deploy and use the real WhatsApp sender.
### Unit tests for handlers

Tool handlers are plain functions. Extract them and test them with `bun test` or vitest locally; tests don’t need to ship with the function.
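For example, a handler extracted as a pure function (the `summarizeOrder` handler and its types are illustrative, not part of the framework):

```typescript
// An extracted tool handler: pure input → output, no framework imports.
type Order = { id: string; status: "pending" | "shipped" | "delivered" };

export function summarizeOrder(order: Order): string {
  switch (order.status) {
    case "pending":
      return `Order ${order.id} is being prepared.`;
    case "shipped":
      return `Order ${order.id} is on its way.`;
    case "delivered":
      return `Order ${order.id} was delivered.`;
  }
}

// A bun/vitest-style test asserts on it directly:
// expect(summarizeOrder({ id: "o_1", status: "shipped" })).toContain("on its way");
```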
### Integration with the LLM

To test how the LLM *actually* picks tools, you need a live agent, so the fastest loop is to iterate against a deployed one.

## Multi-agent on one sender
Not directly supported: one agent per sender. But you can simulate it with flows, or with a “router” tool that reads a `mode` value from Redis at each turn. (You’d inject `mode` into the agent via custom contact metadata, which the agent reads automatically with `includeContactMetadata: true`.)
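The routing idea, reduced to a pure sketch (the mode names and instruction strings are made up; the real wiring goes through contact metadata as described above):

```typescript
type Mode = "sales" | "support";

// Per-mode behavior the router selects between; entirely illustrative.
const playbook: Record<Mode, string> = {
  sales: "Qualify the lead, quote prices, then hand off to a human.",
  support: "Diagnose the issue; escalate after two failed attempts.",
};

// Given contact metadata (as the agent would read it), pick a playbook.
function routeMode(metadata: Record<string, string>): string {
  const mode: Mode = metadata.mode === "sales" ? "sales" : "support"; // default: support
  return playbook[mode];
}
```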
In practice, one focused agent beats one mega-agent juggling modes. Multiple senders with multiple functions is the canonical approach.
## Cost optimization

Per-conversation cost breaks down as:

| Item | Order of magnitude |
|---|---|
| LLM tokens (gpt-4o-mini, 3-turn convo) | $0.0001–0.0005 |
| Lambda invocations (1 per tool call) | $0.000001 each |
| WhatsApp conversation (Meta) | $0.005–0.04 depending on country |
| Your DB / API calls | varies |
The main cost drivers:

- Long prompts. Every turn sends the full system prompt + last N messages. A 1000-token system prompt at 10 turns of history = 10k tokens per reply. Trim relentlessly.
- High `contextWindowMessages`. The default of 10 is overkill for transactional agents; drop to 4-6 if your conversations are short.
- Re-reading large tool returns. If a tool returns 500 items, the LLM re-reads them every turn. Trim server-side.
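A back-of-envelope check of the LLM line in the table above (the per-token prices here are assumptions for a gpt-4o-mini-class model; check your provider’s current pricing):

```typescript
// Rough per-conversation LLM cost: tokens × price per million tokens.
// Assumed pricing: $0.15 / 1M input tokens, $0.60 / 1M output tokens.
const IN_PER_M = 0.15;
const OUT_PER_M = 0.6;

function convoCost(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * IN_PER_M + (outputTokens / 1e6) * OUT_PER_M;
}

// A 3-turn conversation with ~800 input + ~300 output tokens total:
// convoCost(800, 300) ≈ $0.0003, inside the table's $0.0001-0.0005 band.
```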
## Migration paths

### From dashboard-configured AI Agent → Function
You already have an agent and tools created from the dashboard. To move them under code-managed control:

1. Write `defineAgent({...})` matching your existing config.
2. Write `defineTool({...})` for each existing tool, including the same `name`, `description`, and `parameters`.
3. Run `zavu deploy`. The reconciler sees existing rows with a matching `(senderId, name)`, takes ownership, patches them to match your code, and marks them managed.
The deploy output shows `+ ToolName (took over manual)` for each. From that point on, dashboard edits are blocked; code is the source of truth.
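Conceptually, the takeover step described above can be sketched as follows (an illustrative reduction, not the actual reconciler code):

```typescript
type ToolRow = { senderId: string; name: string; managed: boolean };

// A code-defined tool takes over an existing dashboard row when
// (senderId, name) match; the row is then marked managed.
function reconcile(
  existing: ToolRow[],
  code: { senderId: string; name: string }[],
): ToolRow[] {
  return existing.map((row) => {
    const owned = code.some(
      (t) => t.senderId === row.senderId && t.name === row.name,
    );
    return owned ? { ...row, managed: true } : row;
  });
}
```

Rows without a code-side match are left alone, which is why a partial migration is safe.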
### From a custom webhook receiver → Function

You have a Vercel function listening for Zavu webhooks. To move:

1. Run `zavu fn init` and copy your handler into `defineFunction`.
2. Set up triggers via the CLI instead of webhook URLs on senders.
3. Disable the webhook on the sender (or leave it; both work in parallel during migration).
## Limits to know
| Resource | Hard limit | Soft limit / notes |
|---|---|---|
| Function slug length | 23 chars | Auto-enforced |
| Function name | 80 chars | — |
| Memory | 1024 MB | — |
| Timeout per invocation | 30 sec | — |
| Source size | 900 KB | — |
| Bundled zip size | 6 MB | Triggers different code path |
| Dependencies declared | 30 packages | — |
| Secrets per function | 50 | — |
| Secret value | 4 KB | — |
| Tools per agent | unlimited in API; ~20 practical | LLM degrades past ~10 |
## Next

- **Restaurant example**: complete booking agent with persistence.
- **Runtime versions**: pinning, upgrades, security patches.
