Data Flow

Information flows through your workflow from one step to the next. When you connect nodes, the output from one becomes the input for another. The system figures out the right order to run things in and can run independent steps at the same time for speed.

How Data Flows

When you connect nodes together, data flows from the output of one node to the input of another. Think of it like a river—data flows downstream from node to node:

  1. Each node receives input from connected nodes above it
  2. The node processes that input and produces output
  3. That output flows to all connected nodes below it
  4. The process continues until all nodes have executed

Connections show the direction of data flow—from output handles (bottom of nodes) to input handles (top of nodes). The system validates connections as you make them, ensuring data types match and the flow makes sense.
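The steps above can be sketched as a simple chain. This is an illustrative sketch, not the actual engine: the node functions (`fetch`, `uppercase`, `count_words`) and the dict-shaped payloads are assumptions made for the example.

```python
# Minimal sketch of downstream data flow between nodes (illustrative;
# node names and payload shapes are assumptions, not the real API).

def fetch(_):            # a source node: receives no upstream input
    return {"text": "hello world"}

def uppercase(data):     # receives the upstream node's output as its input
    return {"text": data["text"].upper()}

def count_words(data):
    return {"count": len(data["text"].split())}

# Run the chain: each node's output flows into the next node's input.
result = None
for node in (fetch, uppercase, count_words):
    result = node(result)

print(result)  # {'count': 2}
```

Each function plays the role of one node: it takes the previous output as input and hands its own output downstream, mirroring the four numbered steps.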

Execution Order

The system automatically figures out the right order to run nodes:

  • Dependency-based: Nodes run only after their input nodes have finished
  • Parallel execution: Nodes that don't depend on each other run at the same time for speed
  • Sequential when needed: Connected nodes run one after another, in the right order

For example, if you have three nodes where Node 1 feeds into Node 2, and Node 2 feeds into Node 3, they run sequentially: 1 → 2 → 3. But if Node 1 feeds into both Node 2 and Node 3 (and Nodes 2 and 3 aren't connected to each other), then Nodes 2 and 3 run in parallel after Node 1 finishes.

This means your workflows run as fast as possible—the system doesn't wait when it doesn't need to.
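One way to picture the scheduling is as "waves" of ready nodes: everything whose dependencies have finished can run together. A minimal sketch using Python's standard-library topological sorter (the node names mirror the example above; how the real system schedules work internally is an assumption here):

```python
# Sketch of dependency-based scheduling: compute the execution "waves"
# a scheduler could run in parallel (illustrative, not the real engine).

from graphlib import TopologicalSorter

# Node 1 feeds Node 2 and Node 3; neither 2 nor 3 depends on the other.
deps = {"node2": {"node1"}, "node3": {"node1"}}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # all nodes whose inputs have finished
    waves.append(ready)
    ts.done(*ready)

print(waves)  # [['node1'], ['node2', 'node3']]
```

The second wave contains both Node 2 and Node 3, which is exactly why they can run in parallel once Node 1 is done.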

Using Variables and Placeholders

You can reference data from earlier steps using placeholders and variables:

  • Templates: Use {{variable}} in prompts and settings to reference data from earlier steps
  • Variable resolution: The system automatically resolves variables when the workflow runs
  • Input mapping: Connect outputs directly to inputs by selecting what data to use

Example: In an AI prompt, you might write:
"Analyze this data: {{user_input}}"
When the workflow runs, {{user_input}} gets replaced with actual data from the previous step.
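Conceptually, resolution is a substitution pass over the template. A rough sketch, assuming a simple `{{name}}` syntax (the real engine's syntax and lookup rules may differ in details):

```python
# Sketch of {{variable}} resolution (illustrative; the production
# template syntax may support more than bare word names).

import re

def resolve(template, context):
    # Replace each {{name}} with the matching value from earlier steps.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(context[m.group(1)]),
                  template)

prompt = resolve("Analyze this data: {{user_input}}",
                 {"user_input": "Q3 sales figures"})
print(prompt)  # Analyze this data: Q3 sales figures
```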

Each node's output is available to subsequent nodes. You can access specific fields from the output using dot notation or by referencing the output name.
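Dot notation can be thought of as walking nested fields one key at a time. A hypothetical sketch (the field names and the `get_field` helper are assumptions for illustration):

```python
# Sketch of dot-notation access into a node's output (field names
# and helper are hypothetical, for illustration only).

def get_field(output, path):
    # "user.name" walks nested data: output["user"]["name"]
    value = output
    for key in path.split("."):
        value = value[key]
    return value

step_output = {"user": {"name": "Ada", "id": 7}}
print(get_field(step_output, "user.name"))  # Ada
```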

Multiple Connections

A single output can feed into multiple nodes:

  • One output can connect to many inputs (fan-out)
  • Multiple outputs can connect to one input (fan-in)
  • You can build complex branching logic this way

This is useful when you want the same data processed in different ways. For example, you might send user input to both an AI analysis node and a data validation node at the same time, then combine their results later.
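That fan-out/fan-in pattern can be sketched as follows. The two node functions (`analyze`, `validate`) and the merge step are illustrative assumptions, not real workflow nodes:

```python
# Sketch of fan-out/fan-in (illustrative node names): one output feeds
# two independent nodes, whose results are combined afterwards.

def analyze(data):
    # Stand-in for an AI analysis node.
    return {"sentiment": "positive" if "good" in data else "neutral"}

def validate(data):
    # Stand-in for a data validation node.
    return {"valid": len(data) > 0}

user_input = "good morning"   # fan-out: the same data goes to both nodes

# fan-in: merge the two branch results into one payload
combined = {**analyze(user_input), **validate(user_input)}

print(combined)  # {'sentiment': 'positive', 'valid': True}
```

In a real workflow the two branches would run in parallel, since neither depends on the other.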

Conditional Flow

Logic nodes let you control data flow based on conditions:

  • Condition nodes: Route data to different paths based on true/false conditions
  • Switch nodes: Route data to different paths based on specific values
  • Loop nodes: Process data multiple times—while loops repeat until a condition is false, for-each loops process each item in a list

When a condition node evaluates to true, data flows to the "true" path. When it's false, data flows to the "false" path. This lets you build workflows that make decisions and handle different scenarios.
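A condition node's behavior reduces to: evaluate a predicate, then send the data down exactly one path. A minimal sketch, with hypothetical handler names:

```python
# Sketch of a condition node routing data (all names are illustrative).

def condition_node(data, predicate, on_true, on_false):
    # Evaluate the condition, then route data down exactly one path.
    return on_true(data) if predicate(data) else on_false(data)

result = condition_node(
    {"score": 87},
    predicate=lambda d: d["score"] >= 80,
    on_true=lambda d: f"pass ({d['score']})",
    on_false=lambda d: f"retry ({d['score']})",
)
print(result)  # pass (87)
```

A switch node generalizes the same idea to many paths keyed by a value rather than two paths keyed by true/false.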

Tips for Data Flow

  • Test your workflow as you build it—check that data flows correctly between steps
  • Use clear variable names so you can reference them easily in templates
  • The system validates connections automatically—if a connection is invalid, you'll see an error
  • Parallel execution happens automatically—you don't need to configure it