Installation

How to install and set up LlamaIndex.TS for your project.

Quick Start

Install the core package with your preferred package manager:

npm i llamaindex
pnpm add llamaindex
yarn add llamaindex
bun add llamaindex
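
To sanity-check the install, you can import one of the core exports. A minimal sketch (check.ts is a hypothetical file name):

// check.ts
import { Document } from "llamaindex";

// Document is a core export; printing its text confirms the package resolves.
console.log(new Document({ text: "hello" }).text);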

In most cases, you'll also need an LLM provider and the Workflow package:

npm i @llamaindex/openai @llamaindex/workflow
pnpm add @llamaindex/openai @llamaindex/workflow
yarn add @llamaindex/openai @llamaindex/workflow
bun add @llamaindex/openai @llamaindex/workflow
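
Once installed, a provider can be passed directly to components (as in the agent example below) or set as a global default via Settings. A minimal sketch, assuming OPENAI_API_KEY is set in your environment:

import { openai } from "@llamaindex/openai";
import { Settings } from "llamaindex";

// Set the global default LLM; individual agents and indexes can still override it.
Settings.llm = openai({ model: "gpt-4.1-mini" });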

Environment Setup

API Keys

Most LLM providers require an API key. Set your OpenAI key (or the equivalent for your provider):

export OPENAI_API_KEY=your-api-key

Or use a .env file:

echo "OPENAI_API_KEY=your-api-key" > .env
Never commit API keys to your repository.
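
To fail fast when the key is missing, you can add a small guard at the top of your script (an illustrative snippet, not part of the LlamaIndex API):

// Throw a clear error up front instead of getting a cryptic 401 from the provider later.
if (!process.env.OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set");
}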

Loading Environment Variables

For Node.js applications:

node --env-file .env your-script.js
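
The --env-file flag requires Node.js 20.6 or newer. On older versions, the widely used dotenv package is a common alternative; a minimal sketch:

// At the very top of your entry file, before anything reads process.env:
import "dotenv/config";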

For other environments, see the deployment-specific guides below.

TypeScript Configuration

LlamaIndex.TS is built with TypeScript and provides excellent type safety. Add these settings to your tsconfig.json:

{
  "compilerOptions": {
    // Essential for module resolution
    "moduleResolution": "bundler", // or "nodenext" | "node16" | "node"
    
    // Required for Web Stream API support
    "lib": ["DOM.AsyncIterable"],
    
    // Recommended for better compatibility
    "target": "es2020",
    "module": "esnext"
  }
}
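
The "DOM.AsyncIterable" entry is what lets TypeScript accept for await loops over Web ReadableStreams, which streaming APIs hand back. A small illustration:

// Without "DOM.AsyncIterable" in lib, this loop is a type error,
// even though it runs fine on modern Node.js runtimes.
async function drain(stream: ReadableStream<string>) {
  for await (const chunk of stream) {
    process.stdout.write(chunk);
  }
}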

Running your first agent

Set up

If you don't already have a project, you can create a new one in a new folder:

npm init
npm i -D typescript @types/node
npm i @llamaindex/openai @llamaindex/workflow llamaindex zod
pnpm init
pnpm add -D typescript @types/node
pnpm add @llamaindex/openai @llamaindex/workflow llamaindex zod
yarn init
yarn add --dev typescript @types/node
yarn add @llamaindex/openai @llamaindex/workflow llamaindex zod
bun init
bun add --dev typescript @types/node
bun add @llamaindex/openai @llamaindex/workflow llamaindex zod

Run the agent

Create the file example.ts. This code will:

  • Create two tools for use by the agent:
    • A sumNumbers tool that adds two numbers
    • A divideNumbers tool that divides two numbers
  • Create an agent that can use both tools
  • Ask the agent a math question and print its response

import { openai } from "@llamaindex/openai";
import { agent } from "@llamaindex/workflow";
import { tool } from "llamaindex";
import { z } from "zod";

// Each tool pairs a zod parameter schema with an execute function the agent can call.
const sumNumbers = tool({
  name: "sumNumbers",
  description: "Use this function to sum two numbers",
  parameters: z.object({
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  execute: ({ a, b }: { a: number; b: number }) => `${a + b}`, // tool results go back to the LLM as strings
});

const divideNumbers = tool({
  name: "divideNumbers",
  description: "Use this function to divide two numbers",
  parameters: z.object({
    a: z.number().describe("The dividend a to divide"),
    b: z.number().describe("The divisor b to divide by"),
  }),
  execute: ({ a, b }: { a: number; b: number }) => `${a / b}`,
});

async function main() {
  // The agent picks which tool to call (and when) based on each tool's description.
  const mathAgent = agent({
    tools: [sumNumbers, divideNumbers],
    llm: openai({ model: "gpt-4.1-mini" }),
    verbose: false,
  });

  const response = await mathAgent.run("How much is 5 + 5? then divide by 2");
  console.log(response.data);
}

void main().then(() => {
  console.log("Done");
});

To run the code:

npx tsx example.ts
pnpm dlx tsx example.ts
yarn dlx tsx example.ts
bun x tsx example.ts

You should see output similar to the following:

{
  result: '5 + 5 is 10. Then, 10 divided by 2 is 5.',
  state: {
    memory: Memory {
      messages: [Array],
      tokenLimit: 30000,
      shortTermTokenLimitRatio: 0.7,
      memoryBlocks: [],
      memoryCursor: 0,
      adapters: [Object]
    },
    scratchpad: [],
    currentAgentName: 'Agent',
    agents: [ 'Agent' ],
    nextAgentName: null
  }
}
Done
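
The final answer is the string in response.data.result; the surrounding state object exposes the agent's memory and scratchpad if you need them.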

Performance Optimization

Tokenization Speed

Install gpt-tokenizer for 60x faster tokenization (Node.js environments only):

npm i gpt-tokenizer
pnpm add gpt-tokenizer
yarn add gpt-tokenizer
bun add gpt-tokenizer

LlamaIndex.TS automatically uses it when available.

Deployment Guides

Choose your deployment target:

LLM/Embedding Providers

Go to LLM APIs and Embedding APIs to find out how to use different LLM and embedding providers beyond OpenAI.
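
Switching providers is typically a one-line change. For example, with the Anthropic package (assuming @llamaindex/anthropic is installed, ANTHROPIC_API_KEY is set, and the model name is still current):

import { anthropic } from "@llamaindex/anthropic";
import { Settings } from "llamaindex";

// Any provider implementing the LLM interface can stand in for openai().
Settings.llm = anthropic({ model: "claude-sonnet-4-5" });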
