Function Calling Overview
Why Function Calling Matters
The Problem: Early tool use required fragile text parsing to extract tool names and arguments from LLM output, leading to frequent failures.
The Solution: Native function calling lets the LLM output structured tool invocations directly, with the API handling parameter validation and formatting.
Real Impact: Function calling dramatically reduced tool invocation failures compared to text parsing and is now the standard way to build tool-using agents.
Real-World Analogy
Think of function calling like a voice-activated smart home:
- Function Schema = The list of devices and their controls
- Function Call = "Turn on living room lights to 80%"
- Execution = Smart hub sends the command to the device
- Result = "Living room lights set to 80%"
- Parallel Calls = "Turn on lights AND set thermostat to 72"
Function Calling Flow
Schema Definition
Define function name, description, and parameters using JSON Schema. The LLM uses this to decide when and how to call.
Model Decision
The LLM decides whether to call a function and which one, based on the request and available functions.
Structured Output
Instead of free text, the model outputs a structured function call with typed arguments.
Result Integration
Your code executes the function, and the result is sent back to the model to generate a final response.
Defining Functions
```python
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "search_database",
        "description": "Search the product database",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "limit": {"type": "integer", "default": 10}
            },
            "required": ["query"]
        }
    }
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Find laptops under $1000"}],
    tools=tools
)

# Model returns a structured function call
tool_call = response.choices[0].message.tool_calls[0]
# tool_call.function.name == "search_database"
# tool_call.function.arguments == '{"query":"laptops under $1000"}'
```
Example round trip with a weather tool:

```text
Assistant chose to call: get_weather
Arguments: {"location": "San Francisco", "unit": "celsius"}
Function result: {"temperature": 18, "condition": "foggy"}
Final response: The weather in San Francisco is 18°C and foggy.
```
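The result-integration step above can be sketched as plain message construction. The `tool_call` payload and IDs here are illustrative, and `get_weather` is a stand-in implementation; the message shapes follow the OpenAI chat format, where the assistant's tool call and your tool result are appended before the follow-up request:

```python
import json

# Hypothetical tool call as the API might return it (IDs are illustrative).
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"location": "San Francisco", "unit": "celsius"}',
    },
}

def get_weather(location: str, unit: str = "celsius") -> dict:
    """Stand-in tool; a real version would query a weather service."""
    return {"temperature": 18, "condition": "foggy"}

# Execute the call, then append both messages for the follow-up request:
# the assistant turn that requested the call, and a tool turn with the result.
args = json.loads(tool_call["function"]["arguments"])
result = get_weather(**args)

followup_messages = [
    {"role": "assistant", "content": None, "tool_calls": [tool_call]},
    {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps(result),
    },
]
```

Sending `followup_messages` (appended to the prior conversation) back to the model is what produces the final natural-language response.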
Common Mistake
Wrong: Trusting function arguments without validation
Why it fails: LLMs can generate malformed JSON, incorrect types, or even inject unexpected values. For example, a location field might contain SQL or prompt injection attempts.
Instead: Always validate function arguments against the schema before execution. Use try/except blocks and return clear error messages to the model.
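A minimal validation sketch, using only the standard library (a production system might use the `jsonschema` package instead). `parse_and_validate` and `PY_TYPES` are names invented for this example; the schema matches the `search_database` definition above:

```python
import json

SCHEMA = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "limit": {"type": "integer", "default": 10},
    },
    "required": ["query"],
}

# Map JSON Schema type names to Python types for a basic check.
PY_TYPES = {"string": str, "integer": int, "number": float, "object": dict}

def parse_and_validate(raw_arguments: str, schema: dict):
    """Return (args, error). On failure, error is a message for the model."""
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError as exc:
        return None, f"Invalid JSON: {exc}"
    for field in schema.get("required", []):
        if field not in args:
            return None, f"Missing required parameter: {field}"
    for name, value in args.items():
        spec = schema["properties"].get(name)
        if spec is None:
            return None, f"Unexpected parameter: {name}"
        if not isinstance(value, PY_TYPES[spec["type"]]):
            return None, f"Parameter {name} must be of type {spec['type']}"
    return args, None
```

Returning the error string as the tool result, rather than raising, lets the model see what went wrong and retry with corrected arguments.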
Parallel Function Calls
Parallel Execution
- OpenAI: Model can return multiple tool_calls in one response
- Claude: Supports multiple tool_use blocks in a single response
- Execute concurrently: Run all tool calls in parallel for speed
- Return all results: Send all results back in subsequent messages
```text
Model requested 2 parallel calls:
1. get_weather({"location": "New York"})
2. get_weather({"location": "London"})
Results returned simultaneously.
Final: New York is 25°C sunny; London is 15°C rainy.
```
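The concurrent-execution step can be sketched with a thread pool. The `tool_calls` payloads mirror the OpenAI response shape but are hand-written here, and `get_weather` is a stand-in that returns canned data:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def get_weather(location: str) -> dict:
    """Stand-in tool; a real version would call a weather service."""
    fake = {"New York": (25, "sunny"), "London": (15, "rainy")}
    temp, cond = fake.get(location, (20, "clear"))
    return {"temperature": temp, "condition": cond}

# Hypothetical parallel tool calls as the model might return them.
tool_calls = [
    {"id": "call_1", "function": {"name": "get_weather",
                                  "arguments": '{"location": "New York"}'}},
    {"id": "call_2", "function": {"name": "get_weather",
                                  "arguments": '{"location": "London"}'}},
]

def run_call(call: dict) -> dict:
    """Execute one tool call and wrap the result as a tool message."""
    args = json.loads(call["function"]["arguments"])
    result = get_weather(**args)
    return {"role": "tool", "tool_call_id": call["id"],
            "content": json.dumps(result)}

# Run both calls concurrently; map preserves the original call order.
with ThreadPoolExecutor() as pool:
    tool_messages = list(pool.map(run_call, tool_calls))
```

All of `tool_messages` go back in the same follow-up request, one tool message per call ID, so the model can compose a single final answer.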
Error Handling
| Error Type | Cause | Solution |
|---|---|---|
| Invalid JSON | Malformed arguments | Return parse error, model self-corrects |
| Missing params | Required field omitted | Return validation error with details |
| Tool not found | Hallucinated function name | Return list of available tools |
| Execution error | Tool throws exception | Return error message as tool result |
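The table above can be collapsed into one dispatch wrapper that converts every failure mode into a string the model can read and self-correct from. `TOOLS`, `register`, and `safe_execute` are names invented for this sketch:

```python
import json

TOOLS = {}  # name -> callable registry

def register(fn):
    """Add a function to the tool registry under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@register
def search_database(query: str, limit: int = 10) -> list:
    """Stand-in tool; a real version would query a database."""
    return [f"result for {query!r}"][:limit]

def safe_execute(name: str, raw_arguments: str) -> str:
    """Run a tool call; every error becomes a readable tool result."""
    fn = TOOLS.get(name)
    if fn is None:  # hallucinated function name
        return f"Error: unknown tool '{name}'. Available: {sorted(TOOLS)}"
    try:
        args = json.loads(raw_arguments)  # invalid JSON
    except json.JSONDecodeError as exc:
        return f"Error: arguments were not valid JSON ({exc})"
    try:
        return json.dumps(fn(**args))  # missing params raise TypeError
    except TypeError as exc:
        return f"Error: invalid parameters ({exc})"
    except Exception as exc:  # execution error inside the tool
        return f"Error: tool failed ({exc})"
```

Because the error text goes back as the tool result rather than crashing the loop, the model gets a chance to retry with a valid name or corrected arguments.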
Deep Dive: How Function Calling Works Internally
During training, models are fine-tuned on examples of function schemas paired with correct function calls. At inference time, the model's output logits are constrained (via grammar-guided decoding or post-processing) to produce valid JSON matching the provided schema. This is why function calling is more reliable than asking the model to output JSON in free text -- the decoding process enforces structural validity. OpenAI, Anthropic, and Google all use similar approaches but with different schema formats and multi-turn conventions.
Provider Comparison
| Provider | API Field | Parallel Calls | Forced Calling |
|---|---|---|---|
| OpenAI | tools + tool_choice | Yes | tool_choice: {type: "function"} |
| Anthropic | tools + tool_choice | Yes | tool_choice: {type: "tool"} |
| Google | tools + tool_config | Yes | tool_config: {mode: "any"} |
| Mistral | tools + tool_choice | Yes | tool_choice: "any" |
Quick Reference
| Concept | Description | Key Point |
|---|---|---|
| Function Schema | JSON Schema definition of tool | Good descriptions improve accuracy |
| tool_choice | Controls when model calls tools | auto, none, required, specific |
| Parallel Calls | Multiple tools in one response | Execute concurrently for speed |
| Streaming | Stream function call arguments | Show progress during long calls |
| Validation | Check args before execution | Prevent injection and errors |
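As a sketch of the `tool_choice` values the table mentions, in the OpenAI-style format (Anthropic and Google use different field names and spellings):

```python
# OpenAI-style tool_choice values (illustrative; other providers differ).
choice_auto = "auto"          # model decides whether to call a tool (default)
choice_none = "none"          # never call a tool
choice_required = "required"  # must call some tool, model picks which
choice_specific = {           # must call this exact function
    "type": "function",
    "function": {"name": "search_database"},
}
```

Any of these can be passed as the `tool_choice` argument alongside `tools` in the request shown earlier.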