
Tool Definitions

Tools are functions the agent can call. Each one is implemented as an ordinary function in our code and described to the LLM as a JSON schema, so the model knows when to call it and what arguments to pass.

Here are our three tools:

get_faq_answer
Searches the knowledge base for FAQ answers. Use this when the user asks general questions about SimpleSaaS.
Schema
Type: object
Properties:
  • question: string (required)
    The user's question to search for in the FAQ knowledge base
Example
Arguments:
{
  "question": "What is SimpleSaaS?"
}
Result:
{
  "answer": "SimpleSaaS is a subscription management platform that helps businesses manage recurring revenue.",
  "source": "faq_001"
}
get_subscription_status
Retrieves the current subscription status for a user. Requires user_id. Use this when the user asks about their plan, billing, or account status.
Schema
Type: object
Properties:
  • user_id: string (required)
    The unique identifier for the user
Example
Arguments:
{
  "user_id": "user-123"
}
Result:
{
  "plan": "Pro",
  "status": "active",
  "expires": "2025-12-01",
  "features": [
    "feature1",
    "feature2"
  ]
}
log_escalation
Logs an escalation to human support. Use this when you detect sensitive topics, billing disputes, or when you're uncertain how to help.
Schema
Type: object
Properties:
  • user_id: string (optional)
    The user ID (use "unknown" if not available)
  • message: string (required)
    The original user message that triggered escalation
  • reason: string (required)
    Reason for escalation (e.g., "billing_dispute", "account_deletion", "uncertain")
Example
Arguments:
{
  "user_id": "user-123",
  "message": "I want a refund",
  "reason": "billing_dispute"
}
Result:
{
  "ticket_id": "TICKET-12345",
  "status": "created"
}

Implementing Tools in Code

Here’s how you implement these tools in Python:

def get_faq_answer(question: str) -> dict:
    """Search FAQ knowledge base."""
    # In production, this would query a vector DB or search index
    faq_db = {
        "What is SimpleSaaS?": {
            "answer": "SimpleSaaS is a subscription management platform...",
            "source": "faq_001"
        },
        "How do I change my plan?": {
            "answer": "You can change your plan from the account settings...",
            "source": "faq_002"
        }
    }
    
    # Simple keyword matching (use semantic search in production)
    for key, value in faq_db.items():
        if question.lower() in key.lower() or key.lower() in question.lower():
            return value
    
    return {"answer": "I couldn't find an answer to that question.", "source": None}

def get_subscription_status(user_id: str) -> dict:
    """Get user subscription status."""
    # In production, this would query a real database
    subscriptions = {
        "user-123": {
            "plan": "Pro",
            "status": "active",
            "expires": "2025-12-01",
            "features": ["feature1", "feature2"]
        }
    }
    
    if user_id not in subscriptions:
        return {"error": "User not found"}
    
    return subscriptions[user_id]

import random

def log_escalation(user_id: str, message: str, reason: str) -> dict:
    """Log escalation to human support."""
    # Placeholder ticket ID; in production, the ticketing system assigns one
    ticket_id = f"TICKET-{random.randint(10000, 99999)}"
    print(f"Escalation logged: {ticket_id} - {reason}")
    return {"ticket_id": ticket_id, "status": "created"}

Tool Schema for LLM

The LLM needs the tools declared in a specific format. For the OpenAI API, each tool is a function entry whose parameters are described with JSON Schema:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_faq_answer",
            "description": "Searches the knowledge base for FAQ answers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {
                        "type": "string",
                        "description": "The user's question"
                    }
                },
                "required": ["question"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_subscription_status",
            "description": "Retrieves subscription status for a user.",
            "parameters": {
                "type": "object",
                "properties": {
                    "user_id": {
                        "type": "string",
                        "description": "The user ID"
                    }
                },
                "required": ["user_id"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "log_escalation",
            "description": "Logs an escalation to human support.",
            "parameters": {
                "type": "object",
                "properties": {
                    "user_id": {"type": "string"},
                    "message": {"type": "string"},
                    "reason": {"type": "string"}
                },
                "required": ["message", "reason"]
            }
        }
    }
]
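
Models occasionally emit arguments that don't match the declared schema, so it's worth validating them before execution. Below is a minimal sketch; it assumes the third-party jsonschema package is installed, and validate_tool_args is a hypothetical helper of our own, not part of any SDK:

from jsonschema import ValidationError, validate

def validate_tool_args(tool_name: str, arguments: dict):
    """Check model-supplied arguments against the declared parameter schema.

    Returns None when valid, or an error string to send back to the model.
    """
    for tool in tools:
        if tool["function"]["name"] == tool_name:
            try:
                validate(instance=arguments, schema=tool["function"]["parameters"])
                return None
            except ValidationError as e:
                return f"Invalid arguments for {tool_name}: {e.message}"
    return f"Unknown tool: {tool_name}"

If validation fails, returning the error string as the tool result usually prompts the model to retry with corrected arguments.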

The System Prompt

The system prompt tells the agent who it is and what it can do:

SYSTEM_PROMPT = """You are a support agent for SimpleSaaS, a subscription management platform.

Your role:
- Answer questions about SimpleSaaS using the knowledge base
- Check user subscription status when asked
- Escalate to human support when needed

Available tools:
1. get_faq_answer(question) - Search FAQ knowledge base
2. get_subscription_status(user_id) - Get user's subscription info
3. log_escalation(user_id, message, reason) - Escalate to human

Rules:
- Always be helpful and professional
- Use tools when you need information you don't have
- Escalate immediately if you detect:
  * Billing disputes (refund, chargeback, etc.)
  * Account deletion requests
  * Legal questions
  * Angry or frustrated users
  * Any situation where you're uncertain

Do NOT:
- Make up information
- Promise things you can't deliver
- Try to handle billing disputes yourself
- Answer legal questions

When escalating, use log_escalation and provide a helpful message to the user."""

Sending to the Model

Here’s how you send a message with tools to the LLM (using the OpenAI Python client):

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def call_llm_with_tools(messages, tools):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=tools,
        tool_choice="auto"  # Let the model decide when to use tools
    )
    return response

# Example usage
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "What's my subscription status?"}
]

response = call_llm_with_tools(messages, tools)
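
The response object itself tells you why the model stopped, which is useful for debugging before you parse anything; finish_reason and message.tool_calls are standard fields on the returned choice:

choice = response.choices[0]

# "tool_calls" means the model wants tools run; "stop" means a direct answer
print(choice.finish_reason)

# Populated only when the model requested tools; otherwise None
print(choice.message.tool_calls)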

Parsing the Response

The model can return either:

  1. A direct answer (no tool calls)
  2. A tool call request

import json

def parse_response(response):
    message = response.choices[0].message
    
    if message.tool_calls:
        # Model wants to call tools
        tool_calls = []
        for tool_call in message.tool_calls:
            tool_calls.append({
                "id": tool_call.id,
                "name": tool_call.function.name,
                "arguments": json.loads(tool_call.function.arguments)
            })
        return {"type": "tool_calls", "tool_calls": tool_calls}
    else:
        # Direct answer
        return {"type": "answer", "content": message.content}

Example: Full Flow

Here’s a complete example:

# 1. User message
user_message = "What's my subscription status?"
user_id = "user-123"

# 2. Prepare messages (give the model the user_id so it can pass it to tools)
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "system", "content": f"The current user's ID is {user_id}."},
    {"role": "user", "content": user_message}
]

# 3. Call LLM
response = call_llm_with_tools(messages, tools)
parsed = parse_response(response)

# 4. Handle tool calls
if parsed["type"] == "tool_calls":
    # The assistant message that requested the tools must be appended
    # first, or the API will reject the follow-up "tool" messages
    messages.append(response.choices[0].message)
    for tool_call in parsed["tool_calls"]:
        if tool_call["name"] == "get_subscription_status":
            result = get_subscription_status(tool_call["arguments"]["user_id"])
            # Add result to messages for next iteration
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call["id"],
                "content": json.dumps(result)
            })

    # Call LLM again with tool results
    response = call_llm_with_tools(messages, tools)
    parsed = parse_response(response)

# 5. Return final answer
if parsed["type"] == "answer":
    print(parsed["content"])

Key Points

  1. Tool schemas tell the LLM what tools are available
  2. System prompt guides the agent’s behavior
  3. Tool choice can be “auto” (model decides), “required” (force tool use), or a specific tool (see the sketch below)
  4. Tool results are added back to the conversation
  5. Multiple iterations may be needed before final answer
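
Expanding on point 3, tool_choice accepts more than “auto”; these values are all part of the OpenAI chat completions API:

# Let the model decide whether to call a tool (used above)
tool_choice = "auto"

# Force the model to call at least one tool
tool_choice = "required"

# Force one specific tool
tool_choice = {"type": "function", "function": {"name": "get_subscription_status"}}

# Disable tool calls for this request
tool_choice = "none"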

What’s Next?

Now that we have tools and prompts defined, let’s build the agent loop that ties everything together.