Open Source

Local LLM call debugger

Capture every LLM API call during development. Full request/response details, token usage, cost estimation, and latency — all in a local web UI.

$ manageprompt start

Everything you need to debug LLM calls

Full Capture

Request & response bodies and headers, with API keys automatically masked.

Token Tracking

Input, output, cache read & write tokens for every call.

Cost Estimation

Real-time cost calculated via models.dev pricing data.

Streaming

SSE streams forwarded in real time with zero buffering delay.

Real-time UI

WebSocket-powered live updates as calls happen.

Project Scoped

SQLite DB per project directory. Zero config needed.

Two lines to get started

AI SDK Middleware

Recommended
pnpm add manageprompt
import { generateText, wrapLanguageModel } from "ai";
import { openai } from "@ai-sdk/openai";
import { manageprompt } from "manageprompt";

const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: manageprompt(),
});
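
Once wrapped, the model drops into any AI SDK call unchanged. A minimal usage sketch (the prompt text is illustrative):

const { text, usage } = await generateText({
  model, // the wrapped model from above
  prompt: "Explain what an LLM proxy does in one sentence.",
});

// The middleware records the call; text and usage come back from the AI SDK as usual.
console.log(text, usage);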

Proxy Mode

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:54321/v1",
  defaultHeaders: {
    "X-ManagePrompt-Target": "https://api.openai.com/v1",
  },
});
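
Requests then pass through the local proxy, which records each call before forwarding it to the target API. A minimal sketch, assuming the default port shown above (model and prompt are illustrative):

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello through the proxy!" }],
});

console.log(completion.choices[0].message.content);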

Works with any LLM API — OpenAI, Anthropic, Google, Mistral, and more.
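
For example, a hypothetical sketch of pointing Anthropic's SDK at the same proxy; the exact base URL and target mapping for non-OpenAI providers is an assumption here, so check the project docs:

import Anthropic from "@anthropic-ai/sdk";

// Assumption: the proxy forwards to whatever base URL the
// X-ManagePrompt-Target header names, as in the OpenAI example above.
const anthropic = new Anthropic({
  baseURL: "http://localhost:54321",
  defaultHeaders: { "X-ManagePrompt-Target": "https://api.anthropic.com" },
});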

Get started in seconds

Homebrew

brew install techulus/tap/manage-prompt

Go

go install github.com/techulus/manage-prompt/cmd/manageprompt@latest

npm

pnpm add manageprompt