localFloww SDK

Embed and automate Floww workflows programmatically.

The localFloww SDK lets you load, run, and automate .floww workflow files from any Node.js environment — scripts, servers, CI pipelines, or desktop applications. It uses the same execution runtime as the Floww desktop app, but without the canvas or UI layer.

Installation

Install the SDK from npm:

npm install @floww/localfloww

Or with yarn:

yarn add @floww/localfloww

Requirements

The localFloww SDK requires Node.js 18 or later. It has no native dependencies and works on Linux, macOS, and Windows.

Basic Usage

The simplest use case: load a workflow file and run it.

const { LocalFloww } = require('@floww/localfloww');

async function main() {
  // Create an instance
  const floww = new LocalFloww();

  // Load a workflow file
  await floww.load('./my-workflow.floww');

  // Set variables (if the workflow uses them)
  floww.setVariable('apiKey', process.env.API_KEY);
  floww.setVariable('outputDir', './results');

  // Run the workflow
  const result = await floww.run();

  // Inspect results
  console.log('Status:', result.status);        // 'completed' | 'error' | 'cancelled'
  console.log('Duration:', result.duration, 'ms');
  console.log('Nodes executed:', result.nodesExecuted);

  // Get output from a specific node
  const finalNode = result.getNodeOutput('node-005');
  console.log('Final output:', finalNode);
}

main().catch(console.error);

Loading from a string

You can also load a workflow from a JSON string instead of a file path:

const fs = require('fs');

const workflowJson = fs.readFileSync('./workflow.floww', 'utf-8');
await floww.loadFromString(workflowJson);

API Reference

The LocalFloww class provides the following methods:

new LocalFloww(options?) -> LocalFloww
    Create a new instance. Options: { timeout?, plugins?, logLevel?, runner? }

load(filePath) -> Promise<void>
    Load a .floww workflow file from disk.

loadFromString(json) -> Promise<void>
    Load a workflow from a JSON string.

run(options?) -> Promise<RunResult>
    Execute the loaded workflow. Options: { dryRun?, startFrom?, signal? }

watch(filePath, callback) -> Watcher
    Watch a .floww file for changes and re-run automatically. The callback receives a RunResult after each run.

getNodes() -> Node[]
    Get all node instances in the loaded workflow.

getNode(nodeId) -> Node | null
    Get a specific node by ID.

setVariable(name, value) -> void
    Set a workflow variable before running.

getVariable(name) -> any
    Read a workflow variable's current value.

on(event, callback) -> void
    Subscribe to execution events (same events as the desktop app).

off(event, callback) -> void
    Unsubscribe from an event.

destroy() -> void
    Clean up resources. Call when done with the instance.
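
The watch() method is handy during development; a minimal sketch, assuming watch() takes care of (re)loading the given file before each run (verify against your installed version):

```javascript
const { LocalFloww } = require('@floww/localfloww');

const floww = new LocalFloww({ logLevel: 'warn' });

// Re-run automatically whenever the file changes on disk;
// the callback receives the RunResult of each run.
const watcher = floww.watch('./my-workflow.floww', (result) => {
  console.log(`Re-ran: ${result.status} (${result.duration}ms)`);
});
```

How to stop a Watcher depends on its interface; consult the Watcher type in your installed version.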

RunResult

interface RunResult {
  status: 'completed' | 'error' | 'cancelled';
  duration: number;               // Total execution time in ms
  nodesExecuted: number;          // Number of nodes that ran
  errors: Array<{
    nodeId: string;
    message: string;
  }>;
  getNodeOutput(nodeId: string): any;  // Get a specific node's output
}
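
When status is 'error', the errors array identifies each failing node. A small helper for reporting them; the sample result object below is fabricated for illustration, matching the RunResult shape above:

```javascript
// Turn a RunResult's errors array into printable lines
function summarizeErrors(result) {
  return result.errors.map(e => `${e.nodeId}: ${e.message}`);
}

// Fabricated sample matching the RunResult shape above
const sample = {
  status: 'error',
  duration: 812,
  nodesExecuted: 3,
  errors: [{ nodeId: 'node-003', message: 'HTTP 500 from upstream' }],
};

if (sample.status !== 'completed') {
  summarizeErrors(sample).forEach(line => console.error(line));
}
```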

Constructor options

const floww = new LocalFloww({
  // Maximum execution time in ms (default: 300000 = 5 min)
  timeout: 60000,

  // Array of plugin paths to load
  plugins: ['./plugins/my-custom-nodes'],

  // Log level: 'silent' | 'error' | 'warn' | 'info' | 'debug'
  logLevel: 'info'
});
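
run() accepts the options listed in the API table. A hedged sketch; the exact semantics of dryRun and startFrom, and the node ID, are assumptions to verify against your version:

```javascript
// Assumed: dryRun walks the graph without executing node side effects
const dry = await floww.run({ dryRun: true });

// Assumed: startFrom resumes execution at the given node ID (illustrative)
const partial = await floww.run({ startFrom: 'node-003' });
```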

Custom Runners

The default execution runtime handles standard Floww nodes. If you need to customize how nodes execute — for example, to run them in a Docker container, a remote server, or a different language runtime — you can create a custom runner.

const { LocalFloww, BaseRunner } = require('@floww/localfloww');

class DockerRunner extends BaseRunner {
  /**
   * Called for each node execution.
   * Override this to change where/how the node runs.
   */
  async executeNode(node, inputs, config) {
    // Run the node in a Docker container
    const container = await this.docker.run({
      image: `floww-nodes/${node.type}`,
      env: {
        INPUTS: JSON.stringify(inputs),
        CONFIG: JSON.stringify(config)
      }
    });

    const output = await container.getOutput();
    return JSON.parse(output);
  }

  /**
   * Called once before the workflow starts.
   * Use for setup (e.g., start a Docker network).
   */
  async setup() {
    this.docker = new DockerClient();
    await this.docker.createNetwork('floww-workflow');
  }

  /**
   * Called after the workflow completes or errors.
   * Use for cleanup.
   */
  async teardown() {
    await this.docker.removeNetwork('floww-workflow');
  }
}

// Use the custom runner
const floww = new LocalFloww({
  runner: new DockerRunner()
});

await floww.load('./pipeline.floww');
const result = await floww.run();

Partial overrides

Your custom runner only needs to override the methods you want to change. Any method you don't override falls back to the default implementation. You can also override executeNode only for specific node types and delegate the rest to super.executeNode().
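
The per-type delegation described above might look like this; the 'shell-command' node type and the runInSandbox helper are illustrative, not part of the SDK:

```javascript
const { BaseRunner } = require('@floww/localfloww');

class SandboxedShellRunner extends BaseRunner {
  async executeNode(node, inputs, config) {
    // Only intercept one node type; everything else
    // falls through to the default runtime.
    if (node.type !== 'shell-command') {
      return super.executeNode(node, inputs, config);
    }
    // Hypothetical helper: run the command in a sandbox instead
    return this.runInSandbox(node, inputs, config);
  }
}
```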

CI/CD Patterns

localFloww works well in continuous integration and deployment pipelines. Here are common patterns.

GitHub Actions

# .github/workflows/run-floww.yml
name: Run Floww Workflow

on:
  push:
    branches: [main]

jobs:
  run-workflow:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - run: npm install @floww/localfloww

      - name: Run workflow
        env:
          API_KEY: ${{ secrets.API_KEY }}
        run: |
          node -e "
            const { LocalFloww } = require('@floww/localfloww');
            (async () => {
              const f = new LocalFloww();
              await f.load('./workflows/deploy-pipeline.floww');
              f.setVariable('apiKey', process.env.API_KEY);
              const r = await f.run();
              if (r.status === 'error') process.exit(1);
              console.log('Done in', r.duration, 'ms');
            })();
          "

Docker

# Dockerfile
FROM node:20-alpine
WORKDIR /app
RUN npm install @floww/localfloww
COPY workflows/ ./workflows/
COPY run.js .
CMD ["node", "run.js"]
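
The Dockerfile above copies a run.js entry point that is not shown; a minimal sketch of what it could contain (the workflow path and variable name are illustrative):

```javascript
// run.js - container entry point
const { LocalFloww } = require('@floww/localfloww');

(async () => {
  const floww = new LocalFloww({ logLevel: 'info' });
  try {
    await floww.load('./workflows/pipeline.floww'); // illustrative path
    floww.setVariable('apiKey', process.env.API_KEY);
    const result = await floww.run();
    if (result.status !== 'completed') process.exit(1);
  } finally {
    floww.destroy();
  }
})();
```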

Parallel execution

Run multiple workflows simultaneously by creating separate LocalFloww instances:

const workflows = ['etl.floww', 'report.floww', 'notify.floww'];

const results = await Promise.all(
  workflows.map(async (file) => {
    const floww = new LocalFloww({ timeout: 120000 });
    try {
      await floww.load(`./workflows/${file}`);
      const result = await floww.run();
      return { file, ...result };
    } finally {
      // Always release resources, even if load() or run() throws
      floww.destroy();
    }
  })
);

results.forEach(r => {
  console.log(`${r.file}: ${r.status} (${r.duration}ms)`);
});
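
Note that with Promise.all, a single rejection (for example, a failing load()) rejects the whole batch. If you want the remaining workflows to finish regardless, Promise.allSettled is a drop-in alternative; a sketch using the same file list:

```javascript
const settled = await Promise.allSettled(
  workflows.map(async (file) => {
    const floww = new LocalFloww({ timeout: 120000 });
    try {
      await floww.load(`./workflows/${file}`);
      return { file, ...(await floww.run()) };
    } finally {
      floww.destroy();
    }
  })
);

settled.forEach((s, i) => {
  if (s.status === 'fulfilled') {
    console.log(`${workflows[i]}: ${s.value.status} (${s.value.duration}ms)`);
  } else {
    console.error(`${workflows[i]}: failed to run: ${s.reason.message}`);
  }
});
```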

Embedding in Applications

localFloww can be used as a library inside larger Node.js applications — web servers, CLI tools, Electron apps, or any JavaScript environment.

Express.js example

const express = require('express');
const { LocalFloww } = require('@floww/localfloww');

const app = express();
app.use(express.json());

app.post('/api/run-workflow', async (req, res) => {
  const { workflowPath, variables } = req.body;

  const floww = new LocalFloww({ timeout: 30000 });

  try {
    await floww.load(workflowPath);

    // Set any variables from the request
    for (const [key, value] of Object.entries(variables || {})) {
      floww.setVariable(key, value);
    }

    // Listen for progress
    floww.on('node:executed', (event) => {
      console.log(`  Node ${event.nodeId} done (${event.duration}ms)`);
    });

    const result = await floww.run();
    res.json({
      status: result.status,
      duration: result.duration,
      nodesExecuted: result.nodesExecuted
    });
  } catch (err) {
    res.status(500).json({ error: err.message });
  } finally {
    floww.destroy();
  }
});

app.listen(3000, () => console.log('Listening on :3000'));

Cancellation with AbortController

const controller = new AbortController();

// Cancel after 10 seconds
setTimeout(() => controller.abort(), 10000);

const result = await floww.run({ signal: controller.signal });
// result.status will be 'cancelled' if aborted

Memory management

Always call floww.destroy() when you are done with a LocalFloww instance, especially in long-running server processes. Each instance holds the workflow graph, execution state, and any loaded plugins in memory.