
LlamaIndex Workflows

LlamaIndex Workflows is a simple and lightweight workflow engine for JavaScript and TypeScript apps.

LlamaIndex Workflows (LlamaFlow) is a library for streaming, event-driven programming in JavaScript and TypeScript. It provides a lightweight orchestration layer for building complex workflows with minimal boilerplate.

It combines event-driven programming, async context and streaming to create a flexible and efficient way to handle data processing tasks.

The essential concepts of LlamaFlow are:

  • Events: the core building blocks of LlamaFlow. They represent the data that flows through the system.
  • Handlers: functions that process events and can produce new events.
  • Context: the environment in which events are processed. It provides access to the event stream and lets you send new events.
  • Workflow: the collection of events, handlers, and context that defines the processing logic.
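
In code, these concepts map onto a small API surface. A minimal sketch (the event names here are illustrative; the next section walks through a complete example):

import { createWorkflow, workflowEvent } from "@llama-flow/core";

// Events carry typed data through the system.
const greetEvent = workflowEvent<string>();
const doneEvent = workflowEvent<string>();

// A workflow collects events and handlers.
const workflow = createWorkflow();

// Handlers process events and may produce new ones.
workflow.handle([greetEvent], (event) => doneEvent.with(`Hello, ${event.data}!`));

// A context is created to send events and access the stream.
const { sendEvent } = workflow.createContext();
sendEvent(greetEvent.with("LlamaFlow"));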

Getting Started

npm i @llama-flow/core
 
yarn add @llama-flow/core
 
pnpm add @llama-flow/core
 
bun add @llama-flow/core
 
deno add npm:@llama-flow/core

First Example

With workflowEvent and createWorkflow, you can create a simple workflow that processes events.

import { OpenAI } from "openai";
import { createWorkflow, workflowEvent } from "@llama-flow/core";
 
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
 
const startEvent = workflowEvent<string>();
const stopEvent = workflowEvent<string>();
 
const workflow = createWorkflow();
 
workflow.handle([startEvent], async (event) => {
  const response = await openai.chat.completions.create({
    // ...
    messages: [{ role: "user", content: event.data }],
  });
 
  return stopEvent.with(response.choices[0].message.content);
});
 
workflow.handle([stopEvent], (event) => {
  console.log("Response:", event.data);
});
 
const { sendEvent } = workflow.createContext();
sendEvent(startEvent.with("Hello, LlamaFlow!"));
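
Because sendEvent is fire-and-forget, you can also consume the result from the context's event stream instead of logging inside a handler. A sketch, assuming until also accepts a target event (the predicate form is shown later in this README):

const { sendEvent, stream } = workflow.createContext();
sendEvent(startEvent.with("Hello, LlamaFlow!"));

// Collect events until the stop event arrives, then read its payload.
const events = await stream.until(stopEvent).toArray();
console.log("Response:", events.at(-1)?.data);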

Parallel Processing with Async Handlers

Tool calls are a common pattern in LLM applications, where the model generates a call to an external function or API.

LlamaFlow provides abort signals and parallel processing out of the box.

import { getContext } from "@llama-flow/core";

workflow.handle([toolCallEvent], ({ data: { id, name, args } }) => {
  const { signal, sendEvent } = getContext();
  signal.onabort = () =>
    sendEvent(
      toolCallResultEvent.with({
        role: "tool",
        tool_call_id: id,
        content: "ERROR WHILE CALLING FUNCTION: " + signal.reason.message,
      }),
    );
 
  const result = callFunction(name, args);
  return toolCallResultEvent.with({
    role: "tool",
    tool_call_id: id,
    content: result,
  });
});
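
The snippet above assumes toolCallEvent, toolCallResultEvent, and callFunction are defined elsewhere. A sketch of what those definitions might look like, with payload shapes inferred from the handlers (callFunction and the get_weather tool are hypothetical):

import { workflowEvent } from "@llama-flow/core";

// Payload shapes inferred from the handlers above.
const toolCallEvent = workflowEvent<{
  id: string;
  name: string;
  args: Record<string, unknown>;
}>();
const toolCallResultEvent = workflowEvent<{
  role: "tool";
  tool_call_id: string;
  content: string;
}>();

// Hypothetical dispatch table; replace with your real tool implementations.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  get_weather: (args) => `It is sunny in ${String(args.city)}.`,
};

function callFunction(name: string, args: Record<string, unknown>): string {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool(args);
}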

You can collect the results of the tool calls from the stream and send them back to the workflow.

workflow.handle([startEvent], async (event) => {
  const { sendEvent, stream } = getContext();
  // ...
  if (response.choices[0].message.tool_calls.length > 0) {
    response.choices[0].message.tool_calls.forEach((toolCall) => {
      const name = toolCall.function.name;
      const args = JSON.parse(toolCall.function.arguments);
      sendEvent(
        toolCallEvent.with({
          id: toolCall.id,
          name,
          args,
        }),
      );
    });
    let counter = 0;
    const results = await stream
      .until(() => counter++ === response.choices[0].message.tool_calls.length)
      .filter(toolCallResultEvent)
      .toArray();
    return sendEvent(
      startEvent.with([...event.data, ...results.map((r) => r.data)]),
    );
  }
  return stopEvent(response.choices[0].message.content);
});

const { sendEvent } = workflow.createContext();
sendEvent(
  startEvent.with([
    {
      role: "user",
      content: "Hello, LlamaFlow!",
    },
  ]),
);
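
For completeness, the model call elided above would pass the accumulated messages along with your tool schemas. A hedged sketch (the model name and the get_weather schema are placeholders):

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini", // placeholder model
  messages: event.data,
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
});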

Ship to Production Easily

We provide middleware and integrations that make it easy to ship your workflows to production.

Hono with Cloudflare Workers

import { Hono } from "hono";
import { createHonoHandler } from "@llama-flow/core/interrupter/hono";
import { openaiChatWorkflow, startEvent, stopEvent } from "@/lib/workflow";
 
const app = new Hono();
 
app.post(
  "/workflow",
  createHonoHandler(
    openaiChatWorkflow,
    async (ctx) => startEvent.with(await ctx.req.json()),
    stopEvent,
  ),
);
 
export default app;
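
A client can then trigger the workflow over HTTP; the handler responds once the workflow reaches stopEvent. A sketch (the URL is a placeholder, and the payload assumes a startEvent that carries a string, as in the first example):

// Hypothetical client call; adjust the URL to your deployment.
const res = await fetch("https://your-worker.example.com/workflow", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify("Hello, LlamaFlow!"),
});
console.log(await res.text());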

Next.js

import { createNextHandler } from "@llama-flow/core/interrupter/next";
import { openaiChatWorkflow, startEvent, stopEvent } from "@/lib/workflow";
 
export const { GET } = createNextHandler(
  openaiChatWorkflow,
  async (req) => startEvent.with(req.body),
  stopEvent,
);
