open source

Local LLM call debugger

Capture every LLM API call during development. Request/response bodies, token usage, cost estimation, and latency in a local web UI.

$ manageprompt start

Everything you need to debug LLM calls

Full capture

Request & response bodies with a visual flow of each call.

Token tracking

Input, output, cache read & write tokens for every call.

Cost estimation

Real-time cost calculated via models.dev pricing data.

Streaming

Captures both streaming and non-streaming calls automatically.

Real-time UI

WebSocket-powered live updates as calls happen.

Project-scoped

SQLite DB per project directory. Zero config needed.

Two lines to get started

AI SDK Middleware (recommended)
pnpm add manageprompt
import { generateText, wrapLanguageModel } from "ai";
import { openai } from "@ai-sdk/openai";
import { devToolsMiddleware } from "manageprompt";

const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: devToolsMiddleware(),
});
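
The wrapped model then drops into any AI SDK call unchanged. A minimal usage sketch (the prompt text is illustrative):

// Calls go through the middleware, so the request and response are captured.
const { text } = await generateText({
  model,
  prompt: "Summarize the latest release notes.",
});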

capture()

import OpenAI from "openai";
import { capture } from "manageprompt";

const openai = new OpenAI();

const response = await capture(
  { model: "gpt-4o-mini", messages },
  (input) => openai.chat.completions.create(input),
);

Works with any LLM API — OpenAI, Anthropic, Google, Mistral, and more.
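
For example, the same capture() pattern shown above can wrap the Anthropic SDK; a minimal sketch, assuming capture() passes the input object through to your callback unchanged (the model name and messages are illustrative):

import Anthropic from "@anthropic-ai/sdk";
import { capture } from "manageprompt";

const anthropic = new Anthropic();

// Same capture() signature as the OpenAI example above.
const response = await capture(
  { model: "claude-3-5-sonnet-latest", max_tokens: 1024, messages },
  (input) => anthropic.messages.create(input),
);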

Get started in seconds

Homebrew

brew install techulus/tap/manageprompt

Go

go install github.com/techulus/manage-prompt/cmd/manageprompt@latest

npm

pnpm add manageprompt