
Documentation Index

Fetch the complete documentation index at: https://docs.zavu.dev/llms.txt

Use this file to discover all available pages before exploring further.

defineAgent

defineAgent declares the AI agent that will run on a specific WhatsApp / SMS sender. Calling it inside your function source binds the agent to that sender on every zavu deploy.
import { defineAgent } from "@zavu/functions"

defineAgent({
  senderId: process.env.SENDER_ID!,
  name: "Bella",
  provider: "zavu",
  model: "openai/gpt-4o-mini",
  prompt: "You are Bella, host of Bella Pizzeria…",
  channels: ["whatsapp"],
})
After zavu deploy:
  • The agent is created (or updated) under that sender.
  • Tagged as managed by this function — the dashboard disables manual edits to prevent drift.
  • Marked enabled: true automatically.

Required fields

| Field | Type | Notes |
|---|---|---|
| senderId | string | The _id of an existing sender in your project. Usually process.env.SENDER_ID. |
| name | string | Display name. Shown in zavu agents executions and the dashboard. |
| provider | zavu \| openai \| anthropic \| google \| mistral | LLM provider. |
| model | string | Model id (depends on provider — see below). |
| prompt | string | System prompt. Multi-line OK (use backticks). |

Optional fields

| Field | Default | Description |
|---|---|---|
| channels | ["*"] | Which channels trigger the agent: whatsapp, sms, telegram, email, *. |
| messageTypes | ["text"] | Message types: text, image, audio, … |
| apiKey | (none) | Required when provider !== "zavu" (or pre-create a secret in the dashboard). |
| contextWindowMessages | 10 | How many previous messages to include in each LLM call. |
| temperature | provider default | 0–2. |
| maxTokens | unbounded | Cap on response length. |
| sessionTimeoutMinutes | 60 | After this idle gap, the agent starts a new conversation. |
| includeContactMetadata | true | If true, contact name / metadata is included as context. |
| enabled | true | Set to false to deploy the agent but keep it inactive. |
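To illustrate how the optional fields combine, here is a sketch of an agent with several of them tuned. The field names come from the tables above; the specific values are illustrative, not recommendations.

```typescript
import { defineAgent } from "@zavu/functions"

// Sketch: a WhatsApp-only agent with tightened context and session settings.
// All field names come from the tables above; the values are illustrative.
defineAgent({
  senderId: process.env.SENDER_ID!,
  name: "Bella",
  provider: "zavu",
  model: "openai/gpt-4o-mini",
  prompt: "You are Bella, host of Bella Pizzeria…",
  channels: ["whatsapp"],        // ignore other channels on this sender
  messageTypes: ["text"],        // skip images, audio, etc.
  contextWindowMessages: 20,     // include more history per LLM call
  temperature: 0.3,              // keep answers consistent (range 0–2)
  maxTokens: 300,                // keep replies short for WhatsApp
  sessionTimeoutMinutes: 30,     // start a fresh conversation after 30 idle minutes
})
```

Omitted fields keep their defaults, so you only need to list the ones you want to change.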

Providers

Zavu gateway (default)

No API key needed. LLM costs are billed directly from your Zavu balance at pass-through rates.
defineAgent({
  ...,
  provider: "zavu",
  model: "openai/gpt-4o-mini",   // provider/model id, the gateway picks the right backend
})
Available models on the gateway:
| model | Backend | Best for |
|---|---|---|
| openai/gpt-4o-mini | OpenAI GPT-4o mini | Cheapest, fastest |
| openai/gpt-4o | OpenAI GPT-4o | High-quality reasoning |
| anthropic/claude-3-5-haiku-20241022 | Anthropic Haiku | Fast, good tool use |
| anthropic/claude-3-5-sonnet-20241022 | Anthropic Sonnet | Best multi-step reasoning |
| google/gemini-1.5-flash | Google Flash | Cheap multilingual |
| google/gemini-1.5-pro | Google Pro | Long context windows |

Bring your own key (BYOK)

Set provider to the vendor name and pass apiKey (or pre-create a secret in the dashboard, in which case it is already stored and apiKey can be omitted).
defineAgent({
  ...,
  provider: "openai",
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY!,   // set via `zavu fn secrets set`
})
Models follow each vendor’s own naming (gpt-4o-mini, claude-3-5-sonnet-20241022, gemini-1.5-flash, etc) — no provider/ prefix when using BYOK.
On first deploy, if you pass apiKey, we create a row in apiSecrets encrypted with AES-256-GCM and reference it from the agent. Subsequent deploys without apiKey reuse the same stored secret. To rotate, pass a new apiKey and redeploy.
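The rotation flow can be sketched as follows. The `zavu fn secrets set` command is taken from the comment in the example above; its exact argument syntax is an assumption.

```shell
# Store/rotate the key as a function secret (KEY=value syntax assumed)
zavu fn secrets set OPENAI_API_KEY=sk-...

# Redeploy so the new apiKey value replaces the stored secret
zavu deploy
```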

Prompts

The prompt field is the system message every conversation starts with. It’s where you set the persona, rules, and guardrails.

Patterns that work

prompt: `You are Bella, host of Bella Pizzeria.

Your job:
- Help guests view the menu and book reservations.
- ONLY use the tools provided — never invent prices or availability.
- Confirm every reservation by reading back the time and code before closing.

Tone:
- Friendly, brief (WhatsApp messages should fit one screen).
- Match the language the customer writes in.

Edge cases:
- If asked something outside your scope (delivery, jobs, complaints), say:
  "Para eso te paso con un humano." and stop responding.
`

Anti-patterns

  • Don’t put prices, menu items, or any data that changes in the prompt. Put them in tool handlers so they stay current without redeploys.
  • Don’t make the prompt longer than ~2,000 characters. LLMs lose focus on the rules with verbose prompts; tools are how you scope behavior.

Triggers (which messages reach the agent)

By default the agent fires on every inbound message to its sender. Restrict with channels and messageTypes:
defineAgent({
  ...,
  channels: ["whatsapp"],         // only WhatsApp, ignore SMS to the same sender
  messageTypes: ["text"],         // ignore images, audio, etc
})
If you need fine-grained event triggers (e.g. only when a specific sender fires message.inbound), use explicit triggers in addition.

One agent per file

defineAgent may be called multiple times only when each call uses a different (senderId, name) pair; each defineTool must then bind explicitly to one agent:
defineAgent({ senderId: process.env.SENDER_A!, name: "Sales",   ...  })
defineAgent({ senderId: process.env.SENDER_B!, name: "Support", ...  })

defineTool({
  name: "lookup_order",
  agent: "Support",                // explicit binding
  description: "...",
  parameters: { ... },
  handler: async (args) => { ... },
})
For most use cases you want one agent per function file. Multiple senders sharing identical logic? Use the same code in multiple functions, each with its own SENDER_ID secret.

Updates and ownership

Every zavu deploy reconciles the live agent to match what’s in the code.
| State | Reconcile behavior |
|---|---|
| No agent exists yet | Create it, mark as managed by this function. |
| Manually created agent with same (senderId, name) | Takes ownership. Future edits are blocked in the dashboard. |
| Agent managed by THIS function | Update fields to match the manifest. |
| Agent managed by a DIFFERENT function | Skipped with a warning. Rename in code or delete the other function first. |
When you remove defineAgent from your code and redeploy, the managed agent is deleted (along with its managed tools). Manual agents are never touched.

Disabling without deleting

To pause an agent without removing the code:
defineAgent({
  ...,
  enabled: false,
})
Redeploy. The agent stays in the database; messages to its sender will be processed by webhooks instead.

Common patterns

Language

Don’t hard-code a language. Tell the prompt to match the user:
prompt: `…
Always reply in the language the customer wrote in.
If unclear, default to Spanish.`
The LLM handles language detection per turn — no need for routing logic.
Escalation to a human

Add a tool that signals escalation (e.g., creates a ticket in your CRM or notifies an operator on Slack). Tell the prompt when to use it:
prompt: `…
If the customer asks for a refund, asks to "speak to a human", or
expresses strong frustration, call escalate_to_agent and tell them
someone will reach out within 15 minutes.`
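A sketch of such an escalation tool, following the defineTool shape shown in "One agent per file" above. The Slack webhook target (SLACK_WEBHOOK_URL) and the parameters schema are assumptions for illustration.

```typescript
import { defineTool } from "@zavu/functions"

// Sketch only: the notification target (SLACK_WEBHOOK_URL) and the
// JSON-Schema-style parameters are assumptions for illustration.
defineTool({
  name: "escalate_to_agent",
  description: "Notify a human operator that this customer needs help.",
  parameters: {
    type: "object",
    properties: {
      reason: { type: "string", description: "Why the customer needs a human" },
    },
    required: ["reason"],
  },
  handler: async ({ reason }: { reason: string }) => {
    // Post to Slack (or create a CRM ticket) so an operator can take over.
    await fetch(process.env.SLACK_WEBHOOK_URL!, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: `Escalation requested: ${reason}` }),
    })
    return { status: "escalated" }
  },
})
```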
Multi-tenant deployments

Same code, different SENDER_ID secret in each function. The prompt can even include process.env.BRAND_NAME to customize per tenant:
defineAgent({
  ...,
  name: process.env.BRAND_NAME!,
  prompt: `You are the assistant for ${process.env.BRAND_NAME}…`,
})
Contact personalization

With includeContactMetadata: true (default), the LLM sees:
  • Contact’s display name (if known).
  • Custom metadata fields on the contact.
  • Channel they wrote from.
Set metadata via client.contacts.update(contactId, { metadata: {...} }). Useful for “Hello $name” style personalization without a tool call.
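For example, a sketch of setting metadata so the agent can greet the contact by name. `client` is assumed to be an initialized Zavu SDK client, and the metadata keys are illustrative.

```typescript
// Sketch: store a display name and a loyalty tier on the contact so the
// agent sees them as context on the next inbound message. `client` and
// `contactId` are assumed to come from your existing SDK setup.
await client.contacts.update(contactId, {
  metadata: {
    firstName: "María",
    loyaltyTier: "gold",
  },
})
```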

Next

defineTool

Give the agent actions to execute.

Secrets

Store SENDER_ID, API keys, and other env vars.