DeepSeek V4 · Terminal-native agent
DeepSeek TUI is a keyboard-driven coding agent for DeepSeek V4 that runs in your terminal. Launched from the deepseek command, it reads and edits files, runs shell commands, searches the web, manages git, and coordinates sub-agents from a fast TUI built on Rust and ratatui. It streams reasoning blocks, edits local workspaces behind approval gates, and includes an auto mode that chooses both model and thinking level per turn.
What makes it different
It keeps the agent loop in the same terminal where you already run builds, tests, and git.
Instead of bouncing between a browser UI and your repo, DeepSeek TUI anchors the workflow in the terminal. The project pairs a deepseek dispatcher CLI with a companion deepseek-tui binary so installs stay predictable across npm, Cargo, Homebrew, direct release downloads, and Docker images published to GHCR.
Under the hood, tool calls flow through a typed registry covering shell execution, file operations, git, web search and browsing, apply-patch flows, sub-agents, MCP integrations, and native RLM batching. Streaming responses use an OpenAI-compatible client so reasoning blocks arrive while work is still in flight, and an LSP subsystem can surface diagnostics from rust-analyzer, pyright, TypeScript, gopls, or clangd after edits land.
Auto mode that routes every turn
Use --model auto or /model auto so a lightweight router picks Flash versus Pro and sets thinking to off, high, or max before the real request runs. You still see the concrete route in the UI and in cost tracking.
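Concretely, the two launch styles look like this. This is a minimal sketch using only the --model flag described above; the pinned identifier deepseek-v4-pro is one of the documented V4 model ids.

```shell
# Let the router pick Flash vs Pro and a thinking level per turn
deepseek --model auto

# Pin a model instead when you need deterministic benchmarking
deepseek --model deepseek-v4-pro
```

The concrete route chosen by auto still shows up in the UI and in cost tracking, so pinning is mainly useful when you need run-to-run reproducibility.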
Three safety postures
Plan mode stays read-only while the model explores, Agent mode keeps interactive approvals for risky tools, and YOLO mode auto-approves inside a trusted workspace while preserving plan visibility.
Session durability
Save and resume long sessions, rely on a durable task queue that survives restarts, and use workspace rollback snapshots that avoid mutating your repository’s .git directory.
Architecture snapshot
How the pieces connect before your next prompt ships.
The README describes the wiring as dispatcher to TUI to async engine to streaming client, with tool outputs feeding back into the transcript. The sections below translate that mental model into a short on-ramp.
Dispatcher accepts the command
You invoke deepseek with flags for model, provider, or headless modes such as HTTP/SSE serving.
TUI renders the live transcript
ratatui draws the composer, attachments, reasoning stream, and approval surfaces while the engine schedules tool calls.
Tools execute with guardrails
Each capability routes through the registry so shell, git, MCP, and sub-agent calls stay typed and auditable.
Diagnostics re-enter context
After edits, LSP feedback can be pulled back into the conversation before the model continues reasoning.
Operating modes
Pick the posture that matches how much autonomy you want.
| Mode | What you should expect |
|---|---|
| Plan | Read-only investigation: the model explores, updates plans, and writes checklists before mutating files. |
| Agent | Default interactive loop with approval gates, multi-step tool use, and visible checklists for transparency. |
| YOLO | Auto-approved tools in a trusted workspace while still surfacing plans so you can audit what happened. |
Quick start
Go from install to a verified deepseek doctor run.
Prebuilt binaries target Linux x64, Linux ARM64 (v0.8.8 onward), macOS x64, macOS ARM64, and Windows x64. Other Rust targets, including musl, riscv64, and FreeBSD, are supported when building from source with the instructions in the upstream repository.
Install the published artifacts
Choose npm (npm install -g deepseek-tui), Cargo (deepseek-tui-cli plus deepseek-tui), Homebrew, a GitHub Release archive, or the GHCR container image. The npm package downloads matching Rust binaries; it is not a separate JavaScript runtime.
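For example, the two package-manager routes can be as short as the following. The npm command is quoted from the README; the cargo invocation is an assumption inferred from the crate names it lists.

```shell
# npm: downloads the matching Rust binaries; not a separate JS runtime
npm install -g deepseek-tui

# Cargo: the README pairs a deepseek-tui-cli crate with the deepseek-tui
# binary; this exact invocation is an assumption
cargo install deepseek-tui-cli
```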
Authenticate once
On first launch the app prompts for a DeepSeek API key and stores it in ~/.deepseek/config.toml. You can also run deepseek auth set --provider deepseek or export DEEPSEEK_API_KEY for non-interactive shells.
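In a non-interactive shell, either of the documented options works. The sk-... value below is a placeholder, not a real key format guarantee.

```shell
# Export the key so the app skips the first-launch prompt
export DEEPSEEK_API_KEY="sk-..."

# Or persist it to ~/.deepseek/config.toml via the auth subcommand
deepseek auth set --provider deepseek
```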
Launch with auto routing
Run deepseek --model auto to let the router choose Flash or Pro and set thinking per turn, or pin a model when you need deterministic benchmarking.
Validate the toolchain
Use deepseek doctor for connectivity checks and deepseek doctor --json when automation needs structured diagnostics.
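A typical validation step might look like the following. The JSON schema of the diagnostics is not documented here, so the jq filter stays generic rather than asserting field names.

```shell
# Interactive, human-readable connectivity checks
deepseek doctor

# Structured output for automation; pipe through jq to inspect it
deepseek doctor --json | jq .
```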
FAQ
Answers grounded in the public README and docs tree.
What is DeepSeek TUI in one sentence?
It is a coding agent for DeepSeek models that runs in your terminal, combining streaming reasoning, tool execution, and a ratatui interface so you stay inside the shell.
Which models does it target?
The project is built around DeepSeek V4 identifiers such as deepseek-v4-pro and deepseek-v4-flash, including 1M-token context windows, streaming reasoning blocks, and prefix-cache-aware cost reporting.
Can it talk to MCP servers?
Yes. MCP integration is first-class, with documentation covering configuration, validation commands like deepseek mcp validate, and a dispatcher MCP stdio server for editor workflows.
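A sketch of wiring up a server and then checking it: only the deepseek mcp validate command comes from the docs, and the [[mcp.servers]] table below is a hypothetical schema for illustration, not the real one.

```shell
# Hypothetical MCP server entry; the actual schema lives in docs/MCP.md
cat >> ~/.deepseek/config.toml <<'EOF'
[[mcp.servers]]
name = "files"
command = "my-mcp-server"
EOF

# Validate the configuration before launching a session
deepseek mcp validate
```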
Does it only work with DeepSeek’s hosted API?
No. The README documents additional providers—including NVIDIA NIM, Fireworks, OpenRouter-style OpenAI-compatible endpoints, self-hosted SGLang or vLLM, and Ollama—so you can route to the stack you operate.
Where should I read next?
Start with docs/ARCHITECTURE.md for internals, docs/MCP.md for tooling extensions, and docs/RUNTIME_API.md if you need the HTTP/SSE server for headless automation.
Primary sources