How It Works
The AI Transformation node makes a direct call to a language model with a prompt you define. There is no agent persona or conversation history. The execution flow follows five steps:
- Resolves `{{ }}` templates in the input field and prompt with real execution data
- Assembles the model call with the prompt and configured parameters
- Waits for the response (30-second timeout)
- Parses the result: JSON when `output_format` is `json`, a string when it is `text`
- Exposes the result via `{{ $json.result }}` for downstream nodes
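The five steps above can be sketched in plain Python. Everything here is illustrative, not the node's real implementation: the resolver only handles simple `{{ key }}` templates (not `$json` paths), `call_model` stands in for the actual model call, and the 30-second timeout is omitted.

```python
import json
import re

def run_ai_transformation(node_config, execution_data, call_model):
    """Sketch of the five-step flow; every name here is illustrative."""

    # Step 1: resolve {{ }} templates against the execution data
    # (simplified: only bare keys like {{ body }}).
    def resolve(template):
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(execution_data.get(m.group(1), "")),
            template,
        )

    prompt = resolve(node_config["prompt"])
    text = resolve(node_config["input_field"])

    # Steps 2-3: assemble the call and wait for the response
    # (a real node would enforce the 30-second timeout here).
    raw = call_model(
        prompt=prompt,
        text=text,
        model=node_config["model"],
        temperature=node_config.get("temperature", 0.0),
    )

    # Step 4: parse JSON when output_format is "json", keep the string for "text".
    if node_config["output_format"] == "json":
        result = json.loads(raw)
    else:
        result = raw

    # Step 5: expose the result for downstream nodes as $json.result.
    return {"result": result}
```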
Configuration Options
- Input Field — `{{ }}` expression pointing to the text to process
- Prompt — instruction for the model; accepts `{{ }}` templates with dynamic data
- Model — `gpt-4o-mini`, `gpt-4o`, `claude-3-haiku`, or `claude-3-sonnet`
- Output format — `text` for a free-form string or `json` for a structured object
- JSON Schema — optional; guides the model on the expected fields when `output_format` is `json`
- Temperature / Max Tokens — variation control (0.0 to 1.0) and response token limit (100 to 4000)
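The ranges above can be checked before a workflow runs. A minimal sketch, assuming a plain dict of options; the function name and the defaults are illustrative, not part of the node's API:

```python
ALLOWED_MODELS = {"gpt-4o-mini", "gpt-4o", "claude-3-haiku", "claude-3-sonnet"}

def validate_transformation_config(config):
    """Check the documented parameter ranges (illustrative helper)."""
    if config["model"] not in ALLOWED_MODELS:
        raise ValueError(f"unknown model: {config['model']}")
    if config["output_format"] not in ("text", "json"):
        raise ValueError("output_format must be 'text' or 'json'")
    # Defaults below are illustrative assumptions, not documented values.
    temperature = config.get("temperature", 0.0)
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    max_tokens = config.get("max_tokens", 1000)
    if not 100 <= max_tokens <= 4000:
        raise ValueError("max_tokens must be between 100 and 4000")
    return config
```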
Common Use Cases
- Extract entities from unstructured forms or emails
- Classify intent or sentiment to route service tickets
- Summarize long tickets before notifying the team
- Normalize date, phone, or address formats
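For the routing use case above, a downstream node might map the classified result onto a destination. A sketch only: the `intent` field and the queue names are hypothetical, not produced by the node itself.

```python
# Hypothetical mapping from classified intent to a destination queue.
ROUTES = {"complaint": "support-queue", "question": "sales-queue"}

def route_ticket(result):
    """Pick a queue from the intent the node left in $json.result."""
    return ROUTES.get(result.get("intent"), "manual-review")
```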
Example
Prompt (with `output_format: json`):
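As a minimal sketch of what such a prompt and its structured result could look like (the field names and values here are hypothetical):

```text
Prompt:
  Extract the sender's name and request type from the email below.
  Email: {{ $json.body }}

Result in {{ $json.result }} (output_format: json):
  {"name": "Ada Lovelace", "request_type": "refund"}
```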
Tip
Use a temperature between 0.0 and 0.2 for extraction and classification — deterministic responses reduce failures in downstream nodes that consume the generated JSON.
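Even at low temperature, a downstream node can still guard against malformed output instead of failing the workflow. A minimal sketch; the function name is hypothetical:

```python
import json

def safe_parse_result(raw, fallback=None):
    """Parse the model's raw output, returning a fallback on bad JSON."""
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return fallback
```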