3 tools. Zero friction. Fewer tokens. More signal.

Distill is an open-source MCP server that compresses LLM context: AST-aware smart file reading, auto-compression, and a sandboxed TypeScript SDK. Up to 98% token savings.
$ bunx distill-mcp setup
40–98% tokens saved
3 tools active
7 languages
tooling · 03

Three tools. One per problem.

auto_optimize · 01 / 03

Universal compression

Auto-detects content type — build output, logs, diffs, code, stacktraces — and compresses accordingly. Text arrives pre-compressed before it enters context.

−92%
40–98% tokens
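As an illustration of what content-type auto-detection might look like, here is a minimal TypeScript sketch. The heuristics and regexes are assumptions for the sake of the example, not Distill's actual detector:

```typescript
// Illustrative only — not Distill's real detection logic.
type ContentType = "diff" | "stacktrace" | "build-output" | "log" | "code" | "text";

function detectContentType(text: string): ContentType {
  if (/^diff --git |^@@ .+ @@/m.test(text)) return "diff";
  if (/^\s+at .+\(.+:\d+:\d+\)/m.test(text)) return "stacktrace";
  if (/\b(error|warning) TS\d+|^make(\[\d+\])?:/m.test(text)) return "build-output";
  if (/^\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}/m.test(text)) return "log";
  if (/^(import|export|function|class|const)\b/m.test(text)) return "code";
  return "text";
}
```

Each type then gets its own compression strategy — a diff keeps hunks, a stacktrace keeps the top frames, a log gets deduplicated.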
smart_file_read · 02 / 03

AST reading

7 languages (TS, JS, Python, Go, Rust, PHP, Swift). 5 modes: auto, full, skeleton, extract, search. Reads only the structure you need — never the whole file.

class Context {
   compress()
   parse()
} // skeleton · 7 lang
7 languages
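A toy sketch of what skeleton mode produces. This regex version is only an approximation — the real tool parses the AST, and `skeleton` here is an illustrative name, not Distill's API:

```typescript
// Illustrative only: approximates "skeleton" mode with a regex over
// top-level declarations. A real implementation walks the syntax tree.
function skeleton(source: string): string {
  const sig = /^(?:export\s+)?(?:async\s+)?(?:function|class|interface|type|const)\s+\w+[^\n{=]*/gm;
  return (source.match(sig) ?? []).map((s) => s.trim()).join("\n");
}
```

Bodies are dropped; only declarations survive, which is why skeleton reads cost a fraction of a full-file read.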
code_execute · 03 / 03

TypeScript sandbox

Chain 5–10 operations (read, git, compress, search) in a single MCP call. QuickJS WASM, 7 security layers, no network or raw fs access.

QuickJS WASM
workflow · 04

One call. Five steps.

Instead of chaining ten tool calls — each round-trip bloats context — Distill composes the chain inside the sandbox. The model only receives the final, already distilled result.

  • 01 read · ast skeleton
  • 02 git diff · head~3
  • 03 compress · auto mode
  • 04 search · ripgrep
  • 05 return · JSON
~ distill · sandbox.ts
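The five steps above could be sketched as one sandbox script. The `SandboxCtx` interface and its helpers are hypothetical stand-ins for the sandbox API — stubbed here with canned data so the shape of the chain is visible and runnable:

```typescript
// Hypothetical sandbox API — ctx.* helpers are stand-ins, not Distill's real SDK.
interface SandboxCtx {
  readSkeleton(path: string): string;              // 01 read · ast skeleton
  gitDiff(range: string): string;                  // 02 git diff · head~3
  compress(text: string): string;                  // 03 compress · auto mode
  search(pattern: string, text: string): string[]; // 04 search · ripgrep
}

function runWorkflow(ctx: SandboxCtx): string {
  const skel = ctx.readSkeleton("src/context.ts");
  const diff = ctx.gitDiff("HEAD~3");
  const compressed = ctx.compress(diff);
  const hits = ctx.search("compress", skel);
  // 05 return · JSON — only this final result re-enters model context
  return JSON.stringify({ skeleton: skel, diff: compressed, hits });
}

// Stubbed ctx so the sketch runs standalone, outside any sandbox.
const stub: SandboxCtx = {
  readSkeleton: () => "class Context\n  compress()\n  parse()",
  gitDiff: () => "diff --git a/src/context.ts b/src/context.ts\n+// tune compression",
  compress: (t) =>
    t.split("\n").filter((l) => l.startsWith("+") || l.startsWith("diff")).join("\n"),
  search: (p, t) => t.split("\n").filter((l) => l.includes(p)),
};
```

The point of the pattern: intermediate results stay inside the sandbox, so ten round-trips collapse into one call returning one compact JSON payload.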
get started

Ready to distill your context?

One command, zero config. Native Claude Code integration in 30 seconds.
