Code Node

Execute custom Python or JavaScript code for data transformation, API integration, and business logic.

Overview

The Code Node runs arbitrary code within workflows, enabling custom logic that isn’t covered by other node types. It supports:

  • Python and JavaScript runtimes
  • External libraries via dependency management
  • Inline code editor with syntax highlighting
  • Input/output typing with JSON schema validation
  • Sandboxed execution with resource limits
  • Error handling with detailed stack traces

When to Use

Use a Code Node when you need to:

  • Transform data between formats (JSON to CSV, XML parsing, etc.)
  • Perform calculations or aggregations not available in other nodes
  • Call external APIs with custom authentication or logic
  • Implement business rules specific to your domain
  • Parse or validate complex data structures
  • Integrate with internal services or databases

For simple LLM invocation, use Prompt Node instead.

Configuration

Runtime Selection

Choose Python or JavaScript:

Runtime       Version        Use Cases
Python        3.11+          Data science, ML inference, complex transformations
JavaScript    Node.js 20+    JSON manipulation, web APIs, async operations

Code Editor

Write code in the inline editor with:

  • Syntax highlighting - Python or JavaScript
  • Error detection - Lint errors shown inline
  • Auto-completion - Standard library suggestions
  • Formatting - Auto-format with Black (Python) or Prettier (JS)

Input Schema

Define expected inputs with JSON schema:

{
  "type": "object",
  "properties": {
    "text": {
      "type": "string",
      "description": "Text to process"
    },
    "options": {
      "type": "object",
      "properties": {
        "lowercase": { "type": "boolean" }
      }
    }
  },
  "required": ["text"]
}

Inputs are validated before code execution. Invalid inputs fail fast with a schema validation error.
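The fail-fast behavior can be sketched as follows. This is an illustrative hand-rolled check covering only required keys and basic types, not the platform's actual validator, which applies full JSON Schema semantics:

```python
def validate_input(data, schema):
    """Illustrative fail-fast check: required keys and basic types only."""
    type_map = {"string": str, "boolean": bool, "object": dict,
                "integer": int, "number": (int, float), "array": list}
    for key in schema.get("required", []):
        if key not in data:
            raise ValueError(f"Missing required field: {key!r}")
    for key, spec in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], type_map[spec["type"]]):
            raise ValueError(f"Field {key!r} must be of type {spec['type']}")
    return data

schema = {
    "type": "object",
    "properties": {
        "text": {"type": "string"},
        "options": {"type": "object"},
    },
    "required": ["text"],
}

validate_input({"text": "hello"}, schema)  # passes silently
```

A missing "text" key or a non-string value would raise before any node code runs.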

Output Schema

Define expected outputs:

{
  "type": "object",
  "properties": {
    "result": {
      "type": "string",
      "description": "Processed text"
    },
    "metadata": {
      "type": "object",
      "description": "Processing metadata"
    }
  },
  "required": ["result"]
}

Code must return data matching this schema or execution fails.

Dependencies

Install external libraries:

Python:

requests==2.31.0
pandas==2.1.0
numpy==1.24.0

JavaScript:

axios@1.6.0
lodash@4.17.21

Dependencies are cached and reused across executions.

Installing dependencies adds latency to first execution. Use built-in libraries when possible.

Python Code Node

Execution Environment

Python code runs in a sandboxed environment with:

  • Python 3.11 runtime
  • Standard library access (json, re, datetime, etc.)
  • Input variable input_data (dict matching input schema)
  • Return value must be dict matching output schema
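A minimal node body illustrating this contract; the field names are illustrative, and only main and input_data are fixed by the platform:

```python
def main(input_data):
    # input_data is a dict already validated against the input schema
    name = input_data.get("name", "world")
    # The returned dict is validated against the output schema
    return {"result": f"hello, {name}"}
```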

Example: Text Transformation

Input Schema:

{
  "type": "object",
  "properties": {
    "text": { "type": "string" },
    "uppercase": { "type": "boolean" }
  }
}

Code:

def main(input_data):
    text = input_data['text']
    uppercase = input_data.get('uppercase', False)

    if uppercase:
        result = text.upper()
    else:
        result = text.lower()

    return {
        'result': result,
        'original_length': len(text),
        'transformed_length': len(result)
    }

Output Schema:

{
  "type": "object",
  "properties": {
    "result": { "type": "string" },
    "original_length": { "type": "integer" },
    "transformed_length": { "type": "integer" }
  }
}

Example: API Integration

Dependencies:

requests==2.31.0

Code:

import requests

def main(input_data):
    url = input_data['api_url']
    headers = input_data.get('headers', {})

    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()

    return {
        'status_code': response.status_code,
        'data': response.json(),
        'headers': dict(response.headers)
    }

Example: Data Validation

Code:

import re
from datetime import datetime

def main(input_data):
    fields = input_data['extracted_fields']
    errors = []

    # Validate invoice number format
    if not re.match(r'^INV-\d{6}$', fields.get('invoice_number', '')):
        errors.append('Invalid invoice number format')

    # Validate date
    try:
        datetime.strptime(fields.get('date', ''), '%Y-%m-%d')
    except ValueError:
        errors.append('Invalid date format')

    # Validate amount
    if fields.get('total_amount', 0) <= 0:
        errors.append('Total amount must be positive')

    return {
        'is_valid': len(errors) == 0,
        'errors': errors,
        'validated_fields': fields if len(errors) == 0 else None
    }

JavaScript Code Node

Execution Environment

JavaScript code runs in Node.js with:

  • Node.js 20 runtime
  • Standard modules (fs, path, http, etc.)
  • Input variable inputData (object matching input schema)
  • Return value must be object matching output schema
  • Async/await support

Example: JSON Transformation

Code:

async function main(inputData) {
  const { source_data, mapping } = inputData;

  const result = {};
  for (const [target_key, source_path] of Object.entries(mapping)) {
    // Simple JSONPath implementation
    const value = source_path.split('.').reduce(
      (obj, key) => obj?.[key],
      source_data
    );
    result[target_key] = value;
  }

  return {
    transformed_data: result,
    field_count: Object.keys(result).length
  };
}

Example: HTTP Request

Dependencies:

axios@1.6.0

Code:

const axios = require('axios');

async function main(inputData) {
  const { url, method, data, headers } = inputData;

  try {
    const response = await axios({
      method: method || 'GET',
      url,
      data,
      headers,
      timeout: 10000
    });

    return {
      success: true,
      status: response.status,
      data: response.data,
      headers: response.headers
    };
  } catch (error) {
    return {
      success: false,
      error_message: error.message,
      status: error.response?.status
    };
  }
}

Example: Data Aggregation

Code:

async function main(inputData) {
  const { records } = inputData;

  const summary = records.reduce((acc, record) => {
    const category = record.category;
    if (!acc[category]) {
      acc[category] = { count: 0, total_amount: 0, items: [] };
    }
    acc[category].count += 1;
    acc[category].total_amount += record.amount;
    acc[category].items.push(record.id);
    return acc;
  }, {});

  return {
    summary,
    total_records: records.length,
    unique_categories: Object.keys(summary).length
  };
}

Security Considerations

Code nodes execute in sandboxed environments with restrictions:

Allowed Operations

  • Read input data
  • Perform computations
  • Call external HTTP APIs (with timeout)
  • Use installed dependencies
  • Write to stdout/stderr (for logging)

Restricted Operations

  • File system access - No reading/writing local files
  • Network access - Limited to HTTP/HTTPS (no raw sockets)
  • Process spawning - Cannot execute shell commands
  • Environment variables - No access to system env vars
  • Resource limits - Memory and CPU caps enforced

Never include credentials directly in code. Use gateway configuration for API keys and secrets.

Best Practices

  1. Validate inputs - Don’t trust upstream data; check types and bounds
  2. Set timeouts - External API calls should have explicit timeouts
  3. Handle errors - Use try/catch and return error details in output
  4. Limit memory usage - Avoid loading large datasets into memory
  5. Use environment injection - Reference secrets via inputData.api_key from secure storage
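Practices 1 and 3 can be combined in a short sketch; the field names below are illustrative:

```python
def main(input_data):
    # 1. Validate inputs: don't trust upstream data
    try:
        values = input_data["values"]
        if not isinstance(values, list) or not values:
            raise ValueError("'values' must be a non-empty list of numbers")
        # 3. Handle errors: return details instead of letting exceptions escape
        return {"success": True, "mean": sum(values) / len(values)}
    except (KeyError, TypeError, ValueError) as exc:
        return {"success": False, "error_message": str(exc)}
```

Returning a structured error instead of raising lets downstream nodes branch on success without parsing a stack trace.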

Resource Limits

Code nodes are subject to runtime limits:

Resource          Default Limit    Configurable
Execution time    300s (5 min)     Yes, up to 600s
Memory            512 MB           Yes, up to 2 GB
CPU               1 vCPU           No
Dependency size   100 MB           No
Output size       10 MB            No

Configure limits in node definition:

{
  "timeout": 600,
  "memory_limit_mb": 1024
}

Exceeding a limit terminates execution immediately with an error.

Error Handling

Code node errors are captured and reported:

{
  "success": false,
  "error_type": "RuntimeError",
  "error_message": "division by zero",
  "stack_trace": "...",
  "execution_time_ms": 123
}

Handle errors in downstream nodes or configure retry:

{
  "error_handling": {
    "on_error": "retry",
    "max_retries": 3,
    "retry_delay_ms": 1000
  }
}
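Retries are applied by the workflow engine, but the semantics of this config can be approximated in plain Python; run_with_retry is a hypothetical helper for illustration, not a platform API:

```python
import time

def run_with_retry(fn, max_retries=3, retry_delay_ms=1000):
    # Try fn up to max_retries + 1 times, sleeping between attempts
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: propagate the last error
            time.sleep(retry_delay_ms / 1000)
```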

Testing

Test code nodes before deploying:

  1. Click Test in node editor
  2. Provide sample input matching input schema
  3. Click Run Test
  4. Inspect output and logs
  5. Iterate until correct

Use diverse test inputs to validate edge cases.
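Step 5 can also be done locally before using the Test button; here main is a trivial stand-in for your node body:

```python
def main(input_data):
    return {"result": input_data["text"].strip().lower()}

# Diverse inputs, including edge cases: empty and whitespace-only strings
cases = ["  Hello  ", "", "   ", "MIXED Case"]
for text in cases:
    out = main({"text": text})
    assert isinstance(out["result"], str)  # matches the output schema's type
```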

Performance Tips

  1. Minimize dependencies - Each import adds overhead
  2. Cache expensive computations - Store results in node output for reuse
  3. Use streaming for large data - Don’t load entire dataset into memory
  4. Parallelize when possible - Use async/await (JS) or multiprocessing (Python)
  5. Profile slow operations - Add timing logs to identify bottlenecks
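Tip 5 can be as simple as a wrapper that prints durations to stderr, which the sandbox captures as logs; timed is a hypothetical helper:

```python
import sys
import time

def timed(label, fn, *args, **kwargs):
    # Run fn and log its wall-clock duration to stderr
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms", file=sys.stderr)
    return result

total = timed("sum", sum, range(1_000_000))
```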

Example Configurations

Invoice Field Extraction

import re

def main(input_data):
    text = input_data['invoice_text']

    # Extract invoice number
    inv_match = re.search(r'Invoice\s*#?\s*:?\s*(\S+)', text, re.I)
    invoice_number = inv_match.group(1) if inv_match else None

    # Extract date
    date_match = re.search(r'Date\s*:?\s*(\d{1,2}/\d{1,2}/\d{4})', text, re.I)
    date = date_match.group(1) if date_match else None

    # Extract total
    total_match = re.search(r'Total\s*:?\s*\$?([\d,]+\.\d{2})', text, re.I)
    total_amount = float(total_match.group(1).replace(',', '')) if total_match else None

    return {
        'invoice_number': invoice_number,
        'date': date,
        'total_amount': total_amount,
        'confidence': 0.85 if all([invoice_number, date, total_amount]) else 0.5
    }