Tool Use
The LLM's ability to call your code by emitting structured function calls instead of free-text answers.
Last updated: April 26, 2026
Definition
Tool use (also called function calling) is the mechanism that turns an LLM from a writer into an actor. You define a set of functions with JSON schemas describing inputs. The LLM, given a user goal, picks a tool, fills in arguments, and emits a structured call. Your code executes the function, returns the result, and the LLM continues. The quality of your tool definitions matters as much as the prompt. Bad descriptions lead to wrong tool picks or malformed arguments.
Code Example
import anthropic

client = anthropic.Anthropic()

tools = [{
    "name": "search_orders",
    "description": "Search a customer's orders by email. "
                   "Returns up to 10 most recent orders.",
    "input_schema": {
        "type": "object",
        "properties": {
            "email": {"type": "string", "description": "Customer email"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["email"],
    },
}]

msgs = [{"role": "user", "content": "Find orders for jane@example.com"}]
response = client.messages.create(model="claude-sonnet-4-6", max_tokens=1024,
                                  tools=tools, messages=msgs)

Tool definitions are JSON schemas. Description quality drives tool-pick accuracy.
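The example above stops at the first model call. The second half of the cycle from the definition, executing the call and feeding the result back, might look like this minimal sketch. It assumes the Anthropic Python SDK; `search_orders_impl`, `TOOL_IMPLS`, and `run_agent` are hypothetical names for illustration:

```python
import json

# Hypothetical local implementation backing the search_orders tool.
def search_orders_impl(email, limit=10):
    # In a real system this would query your order database.
    return [{"order_id": "A-1001", "email": email}][:limit]

TOOL_IMPLS = {"search_orders": search_orders_impl}

def execute_tool(name, args):
    """Dispatch a model-emitted tool call to local code, return a JSON string."""
    return json.dumps(TOOL_IMPLS[name](**args))

def run_agent(client, tools, msgs):
    """Loop: call the model, run any requested tools, feed results back."""
    while True:
        response = client.messages.create(
            model="claude-sonnet-4-6", max_tokens=1024,
            tools=tools, messages=msgs)
        if response.stop_reason != "tool_use":
            return response  # final free-text answer
        msgs.append({"role": "assistant", "content": response.content})
        # One tool_result per tool_use block the model emitted.
        results = [
            {"type": "tool_result", "tool_use_id": block.id,
             "content": execute_tool(block.name, block.input)}
            for block in response.content if block.type == "tool_use"
        ]
        msgs.append({"role": "user", "content": results})
```

The loop terminates when the model answers in text instead of requesting a tool; errors from your function should also go back as a `tool_result` so the model can recover.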
When To Use
Required for any agent that needs to do something other than chat. The first tool you should give an agent is almost always search.
Building with Tool Use?
I've shipped this pattern in real production systems. If you want a second pair of eyes on your architecture, that's what I do.