
Understanding tools and when to use them

Tools are extra capabilities that the agent can trigger autonomously during a conversation. Instead of just generating text based on what it knows, the agent can interact with external systems — fetch a record, insert a lead, create an event, look up a document — all within the natural flow of the conversation.

The decision about whether to use a tool belongs to the language model itself. When the agent receives a message from the user, the LLM evaluates the context, identifies whether any external action is needed to respond well, and, if so, selects the most appropriate tool. This mechanism is native tool calling, built into large language models — it does not require manual routing logic or conditional flows programmed by you.

Tools are indicated when the ideal response depends on information that is not in the prompt and that changes frequently (prices, availability, a customer’s history), or when the agent needs to perform an action in the world — record data, schedule a meeting, update a status. If the information is static and rarely changes, simply include it directly in the prompt or knowledge base.
Enabling too many tools simultaneously increases the model’s reasoning space and can raise the cost per response. Prefer to activate only the tools the agent actually needs for its specific use case.

How the agent uses tools

The agent does not follow a fixed order of calls. At each turn of the conversation, the model evaluates the accumulated context and decides whether it needs to trigger any tool. The cycle is:
  1. The model reads the user’s message and the conversation history.
  2. If it identifies that it needs external data or needs to perform an action, it selects the most appropriate tool and builds the arguments.
  3. The tool is executed and the result is returned to the model.
  4. The model incorporates the result and generates the response to the user.
This cycle can repeat multiple times in a single interaction — the agent can look up a free slot in Google Calendar and, afterward, create an event with a Google Meet link, all before responding to the user.
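The cycle above can be sketched as a loop. This is a minimal illustration, not Timely.ai’s actual runtime: the `call_model` function, the `TOOLS` registry, and the message format are all hypothetical stand-ins.

```python
# Minimal sketch of the model-driven tool-calling loop.
# call_model, TOOLS, and the message dicts are illustrative
# assumptions, not Timely.ai's real API.

def call_model(messages):
    """Stand-in for the LLM: decides between answering and calling a tool."""
    last = messages[-1]
    if last["role"] == "tool":
        # Step 4: a tool result is in context, so produce the final answer.
        return {"text": f"Done: {last['content']}"}
    if "schedule" in last["content"]:
        # Step 2: the model picks a tool and builds its arguments.
        return {"tool": "create_event", "args": {"topic": last["content"]}}
    return {"text": f"Answering directly: {last['content']}"}

TOOLS = {
    "create_event": lambda args: {"event_id": 42, "topic": args["topic"]},
}

def run_turn(user_message):
    messages = [{"role": "user", "content": user_message}]
    while True:
        decision = call_model(messages)           # steps 1-2
        if "tool" not in decision:
            return decision["text"]               # step 4: final answer
        result = TOOLS[decision["tool"]](decision["args"])  # step 3
        # Feed the tool result back so the model can incorporate it.
        messages.append({"role": "tool", "content": str(result)})
```

Note that the `while True` loop is what lets a single interaction chain several tool calls before the user sees a reply.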
Tools with write access (inserting, updating, deleting records) should only be activated when the use case requires it — and always accompanied by a well-guided prompt. The agent can execute irreversible actions if the instructions are ambiguous.

Types of tools in Timely.ai

Timely.ai offers tools of different categories, each with a specific purpose:

HTTP request

Allows the agent to make calls to external APIs via HTTP. You configure the URL, method (GET, POST, PUT, PATCH, DELETE), headers, and request body. The agent dynamically builds the parameters based on the conversation context and injects the response into the context before replying.
  • Ideal for integrating systems that expose a REST API but do not have native support in Timely.ai.
  • Supports authentication via header (Bearer token, API key) configured in the tool definition.
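A sketch of how such a tool definition might be filled in at call time: the model extracts arguments from the conversation and the platform substitutes them into the URL and headers. The field names (`name`, `method`, `url`, `headers`) are assumptions for illustration; check the Timely.ai tool editor for the real schema.

```python
# Hypothetical HTTP-request tool definition and the request the
# agent might build from it. Field names are illustrative.

TOOL = {
    "name": "lookup_order",
    "method": "GET",
    "url": "https://api.example.com/orders/{order_id}",
    "headers": {"Authorization": "Bearer {API_KEY}"},  # set in the tool definition
}

def build_request(tool, args, secrets):
    """Fill URL and header placeholders with values the model extracted."""
    url = tool["url"].format(**args)
    headers = {k: v.format(**secrets) for k, v in tool["headers"].items()}
    return {"method": tool["method"], "url": url, "headers": headers}

req = build_request(TOOL, {"order_id": "123"}, {"API_KEY": "sk-test"})
```

The key design point is the split of responsibilities: the model supplies only the dynamic arguments (`order_id`), while credentials stay in the tool definition and never pass through the conversation.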

Code execution

Executes code snippets generated by the model itself in an isolated environment. Useful for complex calculations, data transformations, and dynamic report generation that would be difficult to express in natural language alone.

Web search

Allows the agent to search for up-to-date information on the internet during a conversation. The search result is incorporated into the context before the model formulates the response — useful for questions about recent events, market prices, or any data that changes frequently and is not in the knowledge base.

Knowledge base search

Triggers Timely.ai’s RAG mechanism: the user’s question is converted into a vector embedding (model text-embedding-3-small, 1,536 dimensions) and compared with the chunks indexed in the workspace knowledge base. The most relevant excerpts are injected into the context before the response.
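The retrieval step behind the RAG mechanism can be sketched as a similarity ranking. This is a toy illustration: real embeddings come from text-embedding-3-small (1,536 dimensions), while the 3-dimensional vectors and chunk texts below are placeholders.

```python
# Toy sketch of RAG retrieval: embed the question, rank knowledge-base
# chunks by cosine similarity, inject the top matches into the context.
# The vectors here are 3-dim placeholders, not real embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

CHUNKS = [
    ("Refunds are processed within 5 business days.", [0.9, 0.1, 0.0]),
    ("Our office is open 9am to 6pm.",                [0.1, 0.8, 0.2]),
]

def top_chunks(question_embedding, k=1):
    """Return the k chunk texts most similar to the question embedding."""
    ranked = sorted(CHUNKS, key=lambda c: cosine(question_embedding, c[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```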

Datagrid tools

Connects the agent to the workspace’s custom tables. Available actions are:
  • semantic_search — search by semantic similarity across table records.
  • similarity_search — search by approximate text matching.
  • insert_row — inserts a new record with the fields provided by the agent.
  • update_row — updates an existing record identified by the row_id.
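The write actions above can be illustrated with an in-memory stand-in for a table. The action names match the list; the storage, the `row_id` scheme, and the return shapes are assumptions for illustration.

```python
# In-memory stand-in for a datagrid table, showing how insert_row
# and update_row calls from the agent might behave. Storage and
# return shapes are illustrative assumptions.

TABLE = {}        # row_id -> record
_next_id = [1]    # simple auto-increment counter

def insert_row(fields):
    """Insert a new record with the fields provided by the agent."""
    row_id = _next_id[0]
    _next_id[0] += 1
    TABLE[row_id] = dict(fields)
    return {"row_id": row_id}

def update_row(row_id, fields):
    """Update an existing record identified by its row_id."""
    if row_id not in TABLE:
        return {"error": "row not found"}
    TABLE[row_id].update(fields)
    return {"row_id": row_id, "updated": sorted(fields)}
```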

Workflow execution

Triggers an internal Timely.ai workflow as part of the agent’s response. The agent calls the workflow with the defined parameters, waits for completion, and uses the result in the conversation.

Follow-up

Schedules a follow-up message to be sent to the user after a time interval. The agent determines the content and time of the follow-up based on the conversation context — useful for reminders, re-engagement, and post-service check-ins.
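Conceptually, scheduling a follow-up reduces to the agent choosing a message and a delay, and the platform computing the send time. The function name and fields below are illustrative, not Timely.ai’s API.

```python
# Sketch of follow-up scheduling: the agent picks the content and
# the delay, and the send time is computed from "now".
# schedule_followup and its fields are illustrative assumptions.
from datetime import datetime, timedelta, timezone

def schedule_followup(message, delay_hours, now=None):
    """Return a job describing what to send and when."""
    now = now or datetime.now(timezone.utc)
    return {"message": message, "send_at": now + timedelta(hours=delay_hours)}

job = schedule_followup("Still interested in the demo?", 24,
                        now=datetime(2024, 1, 1, tzinfo=timezone.utc))
```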

Contextual memory

Allows the agent to persist information about the contact between conversations. The agent can save preferences, interaction history, or any relevant data, and retrieve this information in future sessions without the user needing to repeat the context.
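In essence this is a per-contact key-value store that survives across sessions. A toy sketch, with function names assumed for illustration:

```python
# Toy per-contact key-value store illustrating contextual memory:
# save a preference in one session, recall it in the next.
# remember/recall are illustrative names, not Timely.ai's API.

MEMORY = {}   # contact_id -> {key: value}

def remember(contact_id, key, value):
    """Persist a fact about the contact for future sessions."""
    MEMORY.setdefault(contact_id, {})[key] = value

def recall(contact_id, key, default=None):
    """Retrieve a previously saved fact, or a default if unknown."""
    return MEMORY.get(contact_id, {}).get(key, default)
```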

PDF reading

Extracts and processes the content of PDF documents sent by the user during the conversation. The extracted text is incorporated into the context so the agent can answer questions about the document, summarize its content, or execute actions based on the information in the file.

Composio integrations

Connects the agent to 29 external apps through Composio’s authentication and execution layer — Google Calendar, Google Meet, Notion, Slack, Gmail, HubSpot, Stripe, GitHub, LinkedIn, Twitter/X, Meta Ads, Google Sheets, Browser Tool, Browse.ai, and more. Each app exposes a set of individual tools that you can activate or deactivate in the agent configuration.

MCP (Model Context Protocol)

An open protocol created by Anthropic that lets you connect the agent to any external MCP server — internal, third-party, or developed by your team. While Composio covers popular apps with ready-made connectors, MCP allows you to expose proprietary tools (queries to internal APIs, ERP, legacy systems) without depending on any catalog.

Best practices

  • Activate read tools before enabling write tools and validate behavior in the Playground.
  • Write clear descriptions for each tool — the model uses this text to decide when to trigger it.
  • MCP and Composio tools inject workflow instructions into the system prompt, making behavior more predictable.
  • Monitor token consumption per session when adding multiple tools — each active tool increases the context size.

Key point

Tools transform the agent from a system that only responds into one capable of acting. The choice of which tools to activate — and with what instructions — directly determines the quality, cost, and security of the agent’s behavior in production.