AI Agents
An AI Agent is an automated responder attached to a sender that uses large language models (LLMs) to handle incoming messages. When a customer sends a message, the agent processes it and generates an intelligent response.
What is an AI Agent?
Think of an AI Agent as a virtual assistant for your messaging. It can:
- Answer customer questions using your knowledge base
- Collect information through conversational flows
- Execute actions via webhook integrations
- Escalate to humans when needed
How Agents Work
When a message arrives, the agent processes it through several stages; a code sketch of this logic follows the list.
Processing Flow
1. Check for active flow: If the contact is in a conversational flow, continue that flow
2. Match flow triggers: Check if the message should start a new flow (keywords, intent)
3. Retrieve context: Get conversation history and relevant knowledge base chunks
4. Generate response: Call the LLM with context and system prompt
5. Send response: Deliver the response via the same channel
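Here is a minimal sketch of that dispatch logic in Python. Every name in it (Contact, Flow, handle_message, call_llm) is hypothetical and only illustrates the order of the stages above; it is not the platform's actual implementation or API.

```python
# Illustrative sketch of the dispatch stages above. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Flow:
    name: str
    keywords: set[str] = field(default_factory=set)

    def matches(self, message: str) -> bool:
        return any(kw in message.lower() for kw in self.keywords)

    def advance(self, contact: "Contact", message: str) -> str:
        return f"(next step of flow '{self.name}')"


@dataclass
class Contact:
    id: str
    active_flow: Optional[Flow] = None


def call_llm(message: str, context: list[str]) -> str:
    # Stand-in for the real provider call (OpenAI, Anthropic, etc.).
    return f"LLM reply to: {message!r}"


def handle_message(contact: Contact, message: str, flows: list[Flow]) -> str:
    # 1. If the contact is already in a flow, continue it.
    if contact.active_flow is not None:
        return contact.active_flow.advance(contact, message)

    # 2. Otherwise, check whether the message triggers a new flow.
    for flow in flows:
        if flow.matches(message):
            contact.active_flow = flow
            return flow.advance(contact, message)

    # 3-5. No flow matched: gather context, call the LLM, and return the
    #      reply for delivery on the same channel (stubbed here).
    context = ["...conversation history...", "...knowledge base chunks..."]
    return call_llm(message, context)


flows = [Flow("lead_capture", {"pricing", "demo"})]
print(handle_message(Contact(id="c_123"), "Do you have pricing info?", flows))
```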
Key Components
LLM Configuration
Configure which AI model powers your agent (an example configuration follows the table):
| Setting | Description |
|---|---|
| Provider | OpenAI, Anthropic, Google, Mistral, or Zavu (managed) |
| Model | The specific model (e.g., gpt-4o-mini, claude-3-5-sonnet) |
| System Prompt | Instructions that define the agent’s personality and behavior |
| Temperature | Creativity level (0 = deterministic, 2 = creative) |
| Max Tokens | Maximum response length |
| Context Window | Number of previous messages to include |
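As an example, a support agent might be configured roughly like this. The field names mirror the table above, but the payload shape itself is an assumption, not the documented API schema.

```python
# Hypothetical agent configuration; field names follow the table above,
# but this payload shape is an assumption, not the documented API schema.
support_agent_config = {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "system_prompt": (
        "You are a friendly support assistant for Acme Co. "
        "Answer only from the knowledge base and escalate billing disputes to a human."
    ),
    "temperature": 0.3,    # mostly deterministic answers
    "max_tokens": 400,     # cap response length
    "context_window": 10,  # include the last 10 messages as context
}
```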
Flows
Flows are deterministic conversation paths for structured interactions like lead capture or appointment booking. They combine fixed messages, data collection, and AI-generated responses (an example definition follows the list). Use flows for:
- Guaranteed data collection (name, email, phone)
- Consistent messaging for compliance
- Step-by-step processes (booking, ordering)
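For illustration, a lead-capture flow could be expressed as an ordered list of steps mixing fixed messages, data collection, and one AI-generated step. The structure below is a sketch under that assumption, not the platform's actual flow schema.

```python
# Hypothetical lead-capture flow: fixed messages, data-collection steps,
# and one AI-generated step. The schema is illustrative only.
lead_capture_flow = {
    "name": "lead_capture",
    "trigger_keywords": ["pricing", "demo", "quote"],
    "steps": [
        {"type": "message", "text": "Happy to help! What's your name?"},
        {"type": "collect", "field": "name"},
        {"type": "message", "text": "Thanks! What's the best email to reach you?"},
        {"type": "collect", "field": "email", "validate": "email"},
        {"type": "ai", "prompt": "Answer remaining questions, then confirm a follow-up."},
    ],
}
```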
Tools
Tools allow your agent to execute actions by calling your webhooks. The LLM can decide when to use a tool based on the conversation; an example webhook handler follows the table.
| Example Tool | Use Case |
|---|---|
| check_order_status | Look up order tracking info |
| create_ticket | Open a support ticket |
| book_appointment | Schedule a calendar event |
| get_inventory | Check product availability |
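The sketch below shows what a webhook on your side might look like for the check_order_status example, using only Python's standard library. The payload shape ({"tool": ..., "arguments": {...}}) and the response format are assumptions made for the sketch, not the platform's actual contract.

```python
# Minimal webhook handler for tool calls, using only the standard library.
# The payload and response shapes here are assumptions, not the real contract.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def check_order_status(order_id: str) -> dict:
    # Replace with a real lookup against your order system.
    return {"order_id": order_id, "status": "shipped", "eta": "2 days"}


class ToolWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        if payload.get("tool") == "check_order_status":
            result = check_order_status(payload["arguments"]["order_id"])
        else:
            result = {"error": f"unknown tool {payload.get('tool')!r}"}

        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ToolWebhook).serve_forever()
```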
Knowledge Base
A Knowledge Base stores documents that the agent can reference when answering questions. Using RAG (Retrieval-Augmented Generation), the agent searches for relevant content and includes it in its context; a retrieval sketch follows the list.
- Upload FAQs, product docs, policies
- Automatic chunking and embedding
- Semantic search for relevant context
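Conceptually, retrieval works roughly like the sketch below: documents are split into chunks, each chunk gets an embedding, and at question time the closest chunks by cosine similarity are added to the prompt. The embed() function here is a toy bag-of-words stand-in for a real embedding model.

```python
# Conceptual RAG retrieval: chunk, embed, then rank by cosine similarity.
# embed() is a toy stand-in for a real embedding model.
import math
from collections import Counter


def chunk(text: str, size: int = 50) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words vector. A real system stores dense vectors.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]


faq = "Refunds are issued within 14 days of purchase. Shipping takes 3-5 business days worldwide."
print(retrieve("How long do refunds take?", chunk(faq, size=8)))
```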
Pricing
AI Agent costs are pass-through with no markup. You pay exactly what the AI providers charge:
| Provider | Model | Input (per 1M tokens) | Output (per 1M tokens) | Approx. Cost/Message |
|---|---|---|---|---|
| OpenAI | gpt-4o-mini | $0.15/1M | $0.60/1M | ~$0.0002 |
| OpenAI | gpt-4o | $2.50/1M | $10/1M | ~$0.003 |
| Anthropic | claude-3-5-haiku | $0.25/1M | $1.25/1M | ~$0.0003 |
| Anthropic | claude-3-5-sonnet | $3/1M | $15/1M | ~$0.005 |
| Google | gemini-1.5-flash | $0.075/1M | $0.30/1M | ~$0.0001 |
If you use the Zavu provider, we handle the API key and charge your account balance directly.
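To estimate spend, multiply your token counts by the per-million rates in the table. The sketch below reproduces the ~$0.0002 figure for gpt-4o-mini, assuming a typical message of about 500 input and 200 output tokens (the token counts are illustrative).

```python
# Back-of-the-envelope cost per message at the gpt-4o-mini rates above.
# The 500/200 token counts are illustrative assumptions.
INPUT_RATE = 0.15 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.60 / 1_000_000  # $ per output token

input_tokens = 500   # prompt + history + retrieved chunks
output_tokens = 200  # generated reply

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"${cost:.6f} per message")  # -> $0.000195, i.e. ~$0.0002
```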
Best Practices
Write Clear Prompts
Your system prompt defines the agent’s behavior. Be specific about tone, limitations, and when to escalate.
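For example, a prompt along these lines pins down tone, scope, and escalation rules (the wording is purely illustrative):

```python
# Example system prompt; the business name and rules are illustrative only.
SYSTEM_PROMPT = """\
You are the support assistant for Acme Outdoor Gear.
- Be concise and friendly; reply in the customer's language.
- Only answer questions about orders, shipping, returns, and products in the knowledge base.
- Never invent order details. If you don't know, say so.
- If the customer is angry, disputes a charge, or asks for a human,
  say you are escalating and stop answering.
"""
```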
Use Flows for Structure
For data collection or multi-step processes, use flows instead of relying on the LLM to remember steps.
Add Knowledge Base
Upload FAQs and product docs so the agent can provide accurate, specific answers.
Monitor Costs
Check execution logs regularly. Use smaller models like gpt-4o-mini for cost efficiency.