Character artwork by ikawasa23
A premium, corruption-aesthetic command-line interface for CelesteAI
Built with Charm's Bubble Tea for flicker-free, modern terminal experiences
Celeste CLI is a full standalone agentic development tool with her own persona, featuring:
- Premium TUI - Flicker-free rendering with corrupted-theme aesthetics
- 40 Built-in Tools - File I/O, shell, web search, code graph, code review, collections search, git, crypto, and more
- `.grimoire` Project Context - Persona-themed project config files with auto-discovery and auto-init
- Code Graph + Semantic Search - MinHash + BM25 fused ranking with an LSH band table for sub-linear queries and structural rerank; tree-sitter TypeScript parsing for accurate call-graph edges; embedded celeste-stopwords v1.0.0 noise filter
- Graph-Based Code Review - Structural analysis detecting stubs, lazy redirects, placeholders, error swallowing, and hardcoded values
- Direct Codegraph MCP Tools - `celeste_index`, `celeste_code_search`, `celeste_code_review`, `celeste_code_graph`, `celeste_code_symbols` served verbatim from the cached graph (no chat-LLM round-trip, no `max_tokens` ceiling, streaming progress notifications)
- Permission System - Multi-layer allow/deny/ask rules with pattern matching
- Session Persistence - JSONL auto-save, resume, file checkpointing with stale detection and revert
- Multi-Provider - Grok/xAI (default), OpenAI, Anthropic (native SDK), Gemini, Venice.ai, Vertex AI, OpenRouter
- Cost Tracking - Per-model pricing with live session cost display
- Hooks - Pre/post tool execution hooks defined in `.grimoire`
- Extended Thinking - Leverage reasoning tokens (Claude, Gemini, Grok) with `/effort` control
- Image Input - Multimodal support for vision-capable models
- Celeste Personality - Embedded AI personality with lore-accurate responses
- Blockchain Tools - IPFS, Alchemy, wallet security monitoring
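As a rough illustration of how the MinHash side of the semantic search estimates Jaccard similarity between symbol token sets, here is a minimal sketch. This is a simplification with made-up names; the real index also fuses BM25 scores, applies a structural rerank, and uses an LSH band table for sub-linear lookup.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// minhashSignature computes a k-slot MinHash signature for a token set.
// The per-slot "permutation" here is a cheap multiplicative mix, purely
// illustrative of the technique, not Celeste's actual scheme.
func minhashSignature(tokens []string, k int) []uint64 {
	sig := make([]uint64, k)
	for i := range sig {
		sig[i] = ^uint64(0) // start at max; we keep the minimum hash per slot
	}
	for _, t := range tokens {
		h := fnv.New64a()
		h.Write([]byte(t))
		base := h.Sum64()
		for i := 0; i < k; i++ {
			v := base*uint64(2*i+1) + uint64(i)*0x9e3779b97f4a7c15
			if v < sig[i] {
				sig[i] = v
			}
		}
	}
	return sig
}

// estimateJaccard compares two signatures slot by slot: the fraction of
// matching slots is an unbiased estimate of the sets' Jaccard similarity.
func estimateJaccard(a, b []uint64) float64 {
	match := 0
	for i := range a {
		if a[i] == b[i] {
			match++
		}
	}
	return float64(match) / float64(len(a))
}

func main() {
	a := minhashSignature([]string{"parse", "config", "load"}, 128)
	b := minhashSignature([]string{"parse", "config", "save"}, 128)
	fmt.Printf("estimated Jaccard: %.2f\n", estimateJaccard(a, b))
}
```

Because signatures are tiny fixed-size arrays, similarity between a query and every indexed symbol can be estimated without re-reading source files.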
| Mode | Command | What it does |
|---|---|---|
| Chat | `celeste chat` (default) | Interactive chat with auto-looping tool calls (50-turn safety cap). |
| Agent | `/agent <goal>` (in TUI) or `celeste agent --goal "..."` | Fully autonomous multi-turn agent with planning, file I/O, checkpointing, and resume. For long-running tasks. |
| Orchestrator | `/orchestrate <goal>` (in TUI) | Agent run with a second reviewer model that critiques and debates the output. For high-quality deliverables. |
Chat vs Agent: Chat is interactive with tool auto-looping; you guide the conversation while Celeste calls tools as needed. Agent is a separate autonomous runtime with its own turn loop, planning phase, checkpoint store, and workspace awareness. The orchestrator adds a reviewer model on top of the agent.
If you have Go 1.23+ installed:
```bash
go install github.com/whykusanagi/celeste-cli/cmd/celeste@latest
```

The `celeste` binary will be installed to `$GOPATH/bin` (or `~/go/bin` by default).

Requirements:
- Go 1.26.0 or higher
- `$GOPATH/bin` (or `~/go/bin`) in your PATH

To add to PATH:

```bash
export PATH="$PATH:$(go env GOPATH)/bin"
```

Alternatively, build from source:
```bash
# Clone the repository
git clone https://github.com/whykusanagi/celeste-cli.git
cd celeste-cli

# Build the binary
go build -o celeste ./cmd/celeste

# Install to PATH (optional)
cp celeste ~/.local/bin/
```

xAI/Grok (default, recommended):
```bash
celeste config --set-key YOUR_XAI_KEY
celeste chat
```

With Collections (RAG):

```bash
celeste config --set-key YOUR_XAI_KEY
celeste config --set-management-key YOUR_XAI_MANAGEMENT_KEY
celeste collections list            # see available collections
celeste collections enable <id>     # enable for chat
celeste chat
```

OpenAI:

```bash
celeste config --init openai
celeste -config openai config --set-key YOUR_OPENAI_KEY
celeste -config openai chat
```

Other providers: `celeste config --init <name>` where name is: grok, openai, venice, elevenlabs
When you enter a project directory, Celeste auto-initializes:
```bash
cd your-project
celeste chat
# Creates .grimoire (project context), indexes code graph, loads memories
```

Or manually:

```bash
celeste init          # create .grimoire
celeste index         # build code graph
celeste index status  # check graph stats
```

All Celeste CLI releases are cryptographically signed with GPG to ensure authenticity and integrity.
Before using a downloaded binary, verify its authenticity:
```bash
# Download verification script
curl -O https://raw.githubusercontent.com/whykusanagi/celeste-cli/main/scripts/verify.sh
chmod +x verify.sh

# Verify your download
./verify.sh celeste-linux-amd64.tar.gz
```

For manual verification or more details, see the complete Verification Guide.
Release Signing:
- All commits are GPG-signed
- All releases include GPG signatures
- Checksums are signed with GPG
- Complete manifest with build metadata
PGP Key Information:
- Key ID: `875849AB1D541C55`
- Fingerprint: `9404 90EF 09DA 3132 2BF7 FD83 8758 49AB 1D54 1C55`
- Keybase: @whykusanagi
- GitHub: whykusanagi.gpg
Import Key:
```bash
# From Keybase (recommended)
curl https://keybase.io/whykusanagi/pgp_keys.asc | gpg --import

# From GitHub
curl https://github.com/whykusanagi.gpg | gpg --import
```

For security issues, see our Security Policy or contact security@whykusanagi.xyz.
- Installation
- Security & Verification
- Features
- Tool System (40 Tools)
- Claude Code Integration
- Comparison
- LLM Provider Compatibility
- Function Calling Flow
- Configuration
- Usage
- Architecture
- Documentation
- Contributing
- Flicker-Free Rendering - Double-buffered Bubble Tea rendering (no screen tearing)
- Scrollable Chat - PgUp/PgDown navigation through conversation history
- Input History - Arrow keys to browse previous messages (like bash history)
- Skills Panel - Real-time skill execution status with demonic eye animation
- Corrupted Theme - Lip Gloss styling with pink/purple abyss aesthetic
- Real Streaming + Corruption Animation - Token-by-token streaming with corrupted glitch phrases at the typing cursor
- Markdown Rendering - glamour-powered markdown with corrupted theme (code blocks, tables, headers, bold)
40+ built-in tools powered by AI function calling:
- Dev Tools (bash, read/write/patch files, search, list files)
- Code Graph (semantic search with MinHash+BM25 fusion, code review, symbol analysis, tree-sitter TypeScript parsing)
- Direct Codegraph MCP Tools (`celeste_index`, `celeste_code_search`, `celeste_code_review`, `celeste_code_graph`, `celeste_code_symbols`; verbatim, no chat-LLM round-trip)
- Git (status, log)
- Web (search, fetch)
- Information Services (Weather, Currency, Twitch, YouTube)
- Utilities (Conversions, Encoding, Generators, QR codes)
- Productivity (Reminders, Notes, Todo tracking)
- Blockchain (IPFS, Alchemy, wallet security)
- Upload Custom Documents - Create knowledge bases with your own documentation
- Semantic Search - Celeste automatically searches collections when answering questions
- Interactive TUI - Manage collections with the `/collections` command in chat
- CLI Management - Create, upload, enable/disable collections from command line
- Multiple Collections - Organize by topic, enable only what's relevant
See Collections Guide for setup and usage.
- MCP (Model Context Protocol) support for external tool servers
- Permission system with configurable allow/deny rules
- Streaming tool execution with concurrent dispatch
- Automatic context window management
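The multi-layer allow/deny/ask permission evaluation mentioned above can be sketched as a first-match-wins walk over ordered rule layers. The rule shape and matching key format below are hypothetical, for illustration only:

```go
package main

import (
	"fmt"
	"path"
)

type verdict string

const (
	allow verdict = "allow"
	deny  verdict = "deny"
	ask   verdict = "ask"
)

// rule matches a glob pattern against a "tool:argument" key.
type rule struct {
	Pattern string
	Action  verdict
}

// decide walks the layers in priority order; the first matching rule wins,
// and an unmatched key falls through to "ask" (prompt the user).
func decide(layers [][]rule, key string) verdict {
	for _, layer := range layers {
		for _, r := range layer {
			if ok, _ := path.Match(r.Pattern, key); ok {
				return r.Action
			}
		}
	}
	return ask
}

func main() {
	layers := [][]rule{
		{{Pattern: "bash:rm *", Action: deny}},    // project-level layer
		{{Pattern: "read_file:*", Action: allow}}, // user-level layer
	}
	fmt.Println(decide(layers, "bash:rm -rf build")) // deny
	fmt.Println(decide(layers, "read_file:go.mod"))  // allow
	fmt.Println(decide(layers, "bash:ls"))           // ask
}
```

Putting deny rules in an earlier layer than allow rules gives safety rules precedence regardless of how permissive later layers are.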
- Conversation Persistence - Auto-save and resume sessions seamlessly
- Message History - Full conversation logging with timestamps
- Session Listing - Browse and load previous sessions by ID
- Session Clearing - Bulk delete sessions when needed
- ✅ Grok/xAI (grok-4-1-fast) - DEFAULT - Optimized for tool calling, 2M context • Token tracking ✅
- ✅ OpenAI (gpt-4.1-mini, gpt-4.1) - Full function calling with streaming • Token tracking ✅
- ✅ Anthropic Claude (claude-sonnet-4-5) - Native SDK with prompt caching and extended thinking • Token tracking ✅
- ✅ Google Gemini AI (gemini-2.5-flash) - Simple API keys, free tier, full streaming • Token tracking ✅
- ⚠️ Google Vertex AI (gemini-2.5-flash) - Enterprise, requires GCP project + billing • Token tracking ✅
- ✅ Venice.ai (venice-uncensored) - NSFW mode, image generation/upscaling • Token tracking ✅
- ✅ OpenRouter (multi-provider) - Parallel function calling support • Token tracking ✅
Dynamic Model Selection - Auto-selects best tool-calling model per provider
Capability Indicators - Visual feedback (✅ skills / no skills) when listing models
- JSON-based Config - Modern `~/.celeste/config.json` format
- Named Configs - Multi-profile support (openai, grok, venice, etc.)
- Skills Config - Separate `skills.json` for skill-specific API keys
- Secrets Handling - Separate `secrets.json` for backward compatibility
- Persona Injection - Configurable Celeste personality prompt
- Environment Override - Env vars override file config
Celeste CLI uses OpenAI-compatible function calling to power its tools. You don't invoke tools directly; you chat naturally, and the AI decides when to call them.
| Tool | Description |
|---|---|
| bash | Execute shell commands in the workspace |
| read_file | Read files with checkpointing |
| write_file | Write files with snapshot backup |
| patch_file | Apply targeted edits to files |
| list_files | List directory contents with glob patterns |
| search | Search file contents with regex |
| git_status | Show working tree status |
| git_log | Show commit history |
| Tool | Description |
|---|---|
| code_search | MinHash semantic search across all indexed symbols |
| code_review | Graph-based code review (6 categories: stubs, lazy redirects, placeholders, TODOs, error swallowing, hardcoded values) |
| code_graph | Query symbol relationships and call chains |
| code_symbols | List symbols in a file or package |
| Skill | Description | Dependencies |
|---|---|---|
| Tarot Reading | Three-card or Celtic Cross spreads | Tarot API (requires auth token) |
Example:
You: Give me a tarot reading
Celeste: *calls tarot_reading skill*
Celeste: Your cards reveal... [interpretation]
| Skill | Description | Dependencies |
|---|---|---|
| NSFW Mode | Venice.ai uncensored responses | Venice.ai API key |
| Content Generation | Platform-specific templates (Twitter/TikTok/YouTube/Discord) | None (LLM-powered) |
| Image Generation | Venice.ai image creation | Venice.ai API key |
Example:
You: Generate a tweet about cybersecurity
Celeste: *calls generate_content skill*
Celeste: Here's your tweet: [280 char tweet with hooks]
| Skill | Description | Dependencies |
|---|---|---|
| Weather | Current conditions and forecasts | wttr.in API (free, no key) |
| Currency Converter | Real-time exchange rates | ExchangeRate-API (free) |
| Twitch Live Check | Check if streamers are online | Twitch API (client ID required) |
| YouTube Videos | Get recent uploads from channels | YouTube Data API (key required) |
Example:
You: What's the weather in 10001?
Celeste: *calls get_weather skill*
Celeste: It's 45ยฐF and cloudy in New York City...
| Skill | Description | Dependencies |
|---|---|---|
| Unit Converter | Length, weight, temperature, volume | None (local calculations) |
| Timezone Converter | Convert times between zones | None (local calculations) |
| Hash Generator | MD5, SHA256, SHA512 | None (crypto/sha256) |
| Base64 Encode | Encode text to base64 | None (encoding/base64) |
| Base64 Decode | Decode base64 to text | None (encoding/base64) |
| UUID Generator | Generate random UUIDs (v4) | None (google/uuid) |
| Password Generator | Secure random passwords (customizable) | None (crypto/rand) |
| QR Code Generator | Create QR codes from text/URLs | None (skip2/go-qrcode) |
Example:
You: Convert 100 miles to kilometers
Celeste: *calls convert_units skill*
Celeste: 100 miles is 160.93 kilometers
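Utilities like the password generator run entirely locally. As a sketch of how such a skill might work with `crypto/rand` (the charset and options here are illustrative; the built-in skill's defaults may differ):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// generatePassword draws each character uniformly at random from the charset
// using crypto/rand, avoiding the modulo bias a naive byte-mask would have.
func generatePassword(length int) (string, error) {
	const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%"
	out := make([]byte, length)
	for i := range out {
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			return "", err
		}
		out[i] = charset[n.Int64()]
	}
	return string(out), nil
}

func main() {
	pw, err := generatePassword(16)
	if err != nil {
		panic(err)
	}
	fmt.Println(pw)
}
```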
| Skill | Description | Dependencies |
|---|---|---|
| Set Reminder | Create reminders with timestamps | Local storage (~/.celeste/reminders.json) |
| List Reminders | View all active reminders | Local storage |
| Save Note | Store notes by name | Local storage (~/.celeste/notes.json) |
| Get Note | Retrieve saved notes | Local storage |
| List Notes | View all saved note names | Local storage |
Example:
You: Remind me to call mom tomorrow at 3pm
Celeste: *calls set_reminder skill*
Celeste: Reminder set for December 4, 2025 at 3:00 PM
You: Save a note called groceries: milk, eggs, bread
Celeste: *calls save_note skill*
Celeste: Note 'groceries' saved successfully!
Skill-specific API keys are stored in ~/.celeste/skills.json:
```json
{
  "venice_api_key": "your-venice-key",
  "tarot_auth_token": "Basic xxx",
  "weather_default_zip_code": "12345",
  "twitch_client_id": "your-client-id",
  "youtube_api_key": "your-youtube-key"
}
```

Configure via CLI:
```bash
celeste config --set-venice-key <key>
celeste config --set-weather-zip 12345
celeste config --set-twitch-client-id <id>
celeste config --set-youtube-key <key>
celeste config --set-tarot-token <token>
```

Celeste v1.9.0+ exposes the codegraph as first-class MCP tools (no chat-LLM round-trip, no output-token ceiling, verbatim results). Register `celeste serve` once per workspace and any MCP client (Claude Code, Codex, Cursor, etc.) gets:

- `celeste_index` - `status`, `update`, `rebuild` operations with `notifications/progress` streaming
- `celeste_code_search` - semantic search (MinHash Jaccard + BM25 fusion + structural rerank)
- `celeste_code_review` - structural code review findings as verbatim JSON
- `celeste_code_graph` - symbol callers, callees, references
- `celeste_code_symbols` - list symbols in a file or package

Indexing is explicit: query tools never auto-reindex. After code changes, the caller invokes `celeste_index { operation: "update" }` to refresh the graph.
```bash
# Add Celeste as an MCP server (once per workspace you want indexed)
claude mcp add celeste celeste serve
```

Optionally, install the celeste-for-claude companion for the persona-routed skill command wrappers (`/celeste-review`, `/celeste-search`, `/celeste-graph`, `/celeste-context`):

```bash
git clone https://github.com/whykusanagi/celeste-for-claude.git
cp celeste-for-claude/skills/*.md ~/.claude/commands/
```

Claude Code stays in control; Celeste provides the graph intelligence. The direct tools are preferred for tool-driven workflows; the persona-routed skills are a convenience for natural-language interactions.
| Celeste CLI | OpenClaw | Picobot | oh-my-pi | gptme | |
|---|---|---|---|---|---|
| Focus | Agentic coding | Personal AI assistant | Lightweight AI bot | CLI coding agent | CLI coding agent |
| Language | Go | TypeScript | Go | TS + Rust | Python |
| Deploy | 54MB binary, zero deps | Node.js (~393MB) | 9MB binary | Bun + Rust | pip package |
| RAM | Low | High (Node.js) | ~10MB | Medium | Medium |
| Providers | 7 (native SDKs) | OpenAI primary | OpenAI only | 6+ | 7+ |
| Tools | 40 | Many | 16 | Many | ~10 |
| Code Graph | Yes (MinHash) | No | No | No | No |
| Code Review | Yes (6 categories) | No | No | No | No |
| Collections/RAG | Yes (xAI) | No | No | No | Yes |
| MCP | Server + client | Partial | Client | Full | Yes |
Celeste's unique advantages: Code graph with semantic search, structural code review, persistent project memory, .grimoire context with staleness tracking, corruption-aesthetic TUI with typing animation. No other project combines compiled binary + code intelligence + MCP server.
See docs/COMPARISON.md for detailed analysis.
Celeste CLI requires OpenAI-style function calling for skills to work. Not all LLM providers support this feature.
| Provider | Function Calling | Status | Setup Difficulty |
|---|---|---|---|
| OpenAI | ✅ Native | Fully Supported | Easy |
| Grok (xAI) | ✅ OpenAI-Compatible | Fully Supported | Easy |
| DigitalOcean | ⚠️ Limited | Skills Unavailable | Advanced (requires cloud deployment) |
| Venice.ai | ❓ Unknown | Needs Testing | Unknown |
| ElevenLabs | ❓ Unknown | Needs Testing | Unknown |
| Local (Ollama) | ⚠️ Varies | Model-Dependent | Medium |
Setup:
```bash
celeste config --set-key your-xai-key
celeste chat
```

Default config points to xAI (`https://api.x.ai/v1`, model `grok-4-1-fast`). Best value for tool calling with 2M token context.
Setup:
```bash
celeste config --set-key sk-your-openai-key
celeste config --set-url https://api.openai.com/v1
celeste config --set-model gpt-4.1-mini
celeste chat
```

Limitation: DigitalOcean AI Agent requires cloud-hosted functions. Skills cannot execute locally.
Why skills won't work:
- Celeste CLI executes skills locally (unit converter, QR generator, etc.)
- DigitalOcean expects HTTP endpoints in the cloud
- No way to bridge local execution with DigitalOcean's architecture
Workarounds:
- Use OpenAI or Grok instead
- Deploy skills as cloud functions (advanced)
- Use Celeste CLI without skills (chat only)
Run automated tests to verify function calling:
```bash
# Test OpenAI
OPENAI_API_KEY=your-key go test ./cmd/celeste/llm -run TestOpenAI_FunctionCalling -v

# Test Grok
GROK_API_KEY=your-key go test ./cmd/celeste/llm -run TestGrok_FunctionCalling -v

# Test Venice.ai
VENICE_API_KEY=your-key go test ./cmd/celeste/llm -run TestVeniceAI_FunctionCalling -v
```

Expected output (working):

```
=== RUN   TestOpenAI_FunctionCalling
✅ OpenAI function calling works! Called get_weather with location=new york
--- PASS: TestOpenAI_FunctionCalling (2.34s)
```

Expected output (not working):

```
=== RUN   TestVeniceAI_FunctionCalling
⚠️ Venice.ai function calling failed: tools not supported
--- SKIP: TestVeniceAI_FunctionCalling
```
See docs/LLM_PROVIDERS.md for the complete provider compatibility guide.
Here's how skills work under the hood:
```mermaid
%%{init: {'theme':'base', 'themeVariables': {'primaryColor':'#4a90e2','primaryTextColor':'#fff','primaryBorderColor':'#357abd','lineColor':'#6c757d','secondaryColor':'#7c3aed','tertiaryColor':'#10b981','noteBkgColor':'#fef3c7','noteTextColor':'#92400e'}}}%%
sequenceDiagram
    actor User
    participant CLI as Celeste CLI
    participant LLM as LLM Provider
    participant Skill as Skill Handler
    participant API as External API
    User->>+CLI: "What's the weather in NYC?"
    CLI->>+LLM: Send message + tools definition
    Note right of LLM: AI decides:<br/>need weather data
    LLM-->>-CLI: tool_call: get_weather(location="NYC")
    CLI->>+Skill: Execute get_weather handler
    Skill->>+API: Fetch weather data (wttr.in)
    API-->>-Skill: JSON weather response
    Skill-->>-CLI: Formatted weather data
    CLI->>+LLM: Send tool result back
    Note right of LLM: Generate natural<br/>response
    LLM-->>-CLI: "It's 45ยฐF and cloudy in NYC..."
    CLI->>-User: Display response with typing animation
```
- Tools sent with every request - All available skills are listed in the API call
- LLM decides when to call - You don't manually invoke skills, the AI does
- Local execution - Skills run on your machine (unless they need external APIs)
- Result sent back to LLM - Tool results are formatted and returned for interpretation
- Natural language output - LLM converts structured data into conversational responses
This requires OpenAI-style function calling support! Providers without this feature will ignore tools and respond as if they don't have access to data.
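The loop above can be sketched as a minimal Go program with a stubbed provider and tool handler. All names here are illustrative stand-ins, not Celeste's internal API:

```go
package main

import "fmt"

// toolCall and message are simplified stand-ins for the OpenAI-style wire types.
type toolCall struct{ Name, Args string }
type message struct {
	Role, Content string
	ToolCalls     []toolCall
}

// fakeLLM imitates a provider: on the first turn it requests get_weather;
// once a tool result appears in history, it produces the final answer.
func fakeLLM(history []message) message {
	for _, m := range history {
		if m.Role == "tool" {
			return message{Role: "assistant", Content: "It's 45F and cloudy in NYC."}
		}
	}
	return message{Role: "assistant", ToolCalls: []toolCall{
		{Name: "get_weather", Args: `{"location":"NYC"}`},
	}}
}

// execute dispatches a tool call locally (here, a stubbed weather result).
func execute(tc toolCall) string {
	if tc.Name == "get_weather" {
		return `{"temp_f":45,"sky":"cloudy"}`
	}
	return `{"error":"unknown tool"}`
}

func main() {
	history := []message{{Role: "user", Content: "What's the weather in NYC?"}}
	const maxTurns = 50 // same safety cap the chat mode documents
	for turn := 0; turn < maxTurns; turn++ {
		resp := fakeLLM(history)
		history = append(history, resp)
		if len(resp.ToolCalls) == 0 {
			fmt.Println(resp.Content) // final natural-language answer
			return
		}
		for _, tc := range resp.ToolCalls {
			history = append(history, message{Role: "tool", Content: execute(tc)})
		}
	}
}
```

The turn cap is what prevents a provider that keeps requesting tools from looping forever.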
Celeste CLI uses three config files in ~/.celeste/:
| File | Purpose | Example |
|---|---|---|
| config.json | Main configuration | API endpoint, model, timeouts |
| secrets.json | API keys (backward compat) | OpenAI API key only |
| skills.json | Skill-specific configs | Venice.ai key, weather zip code |
config.json:

```json
{
  "api_key": "",
  "base_url": "https://api.x.ai/v1",
  "model": "grok-4-1-fast",
  "timeout": 60,
  "skip_persona_prompt": false,
  "simulate_typing": true,
  "typing_speed": 40
}
```

skills.json:

```json
{
  "venice_api_key": "your-venice-key",
  "venice_base_url": "https://api.venice.ai/api/v1",
  "venice_model": "venice-uncensored",
  "tarot_function_url": "https://your-tarot-api",
  "tarot_auth_token": "Basic xxx",
  "weather_default_zip_code": "10001",
  "twitch_client_id": "your-twitch-client-id",
  "twitch_default_streamer": "whykusanagi",
  "youtube_api_key": "your-youtube-key",
  "youtube_default_channel": "UC..."
}
```

```bash
export CELESTE_API_KEY="sk-your-key"
export CELESTE_API_ENDPOINT="https://api.openai.com/v1"
export VENICE_API_KEY="your-venice-key"
export TAROT_AUTH_TOKEN="Basic xxx"
```

Environment variables take precedence over config files.
```bash
# View current config
celeste config --show

# Main config settings
celeste config --set-key sk-xxx
celeste config --set-url https://api.openai.com/v1
celeste config --set-model gpt-4o-mini
celeste config --skip-persona true
celeste config --simulate-typing true
celeste config --typing-speed 60

# Named configs (multi-profile support)
celeste config --list            # List all profiles
celeste config --init openai     # Create openai profile
celeste config --init grok       # Create grok profile
celeste -config grok chat        # Use grok profile

# Skill configuration
celeste config --set-venice-key <key>
celeste config --set-weather-zip 10001
celeste config --set-twitch-client-id <id>
celeste config --set-youtube-key <key>
celeste config --set-tarot-token <token>
```

Create separate configs for different providers:
```bash
# Create OpenAI config
celeste config --init openai
celeste config --set-key sk-openai-key
celeste config --set-model gpt-4o-mini

# Create Grok config
celeste config --init grok
celeste config --set-key xai-grok-key
celeste config --set-url https://api.x.ai/v1
celeste config --set-model grok-beta

# Use specific config
celeste -config grok chat
```

Available templates: openai, grok, elevenlabs, venice, digitalocean
```bash
# Launch interactive TUI (default command)
celeste chat

# Use a specific named config
celeste -config grok chat
```

| Key | Action |
|---|---|
| `Ctrl+C` | Exit immediately |
| `Ctrl+D` | Exit gracefully |
| `PgUp`/`PgDown` | Scroll chat history (full page) |
| `Shift+↑`/`Shift+↓` | Scroll chat (3 lines at a time) |
| `↑`/`↓` | Navigate input history (previous messages) |
| `Enter` | Send message |
| `Esc` | Clear current input |
| Command | Action |
|---|---|
| `/help` | Show available commands and keyboard shortcuts |
| `/clear` | Clear chat history (current session only) |
| `/exit`, `/quit`, `/q` | Exit application |
| Command | Action |
|---|---|
| `/endpoint <provider>` | Switch to a different LLM provider (openai, grok, venice, gemini, openrouter, etc.) |
| `/set-model` | List available models for current provider with capability indicators |
| `/set-model <name>` | Switch to a specific model (validates function calling support) |
| `/set-model <name> --force` | Override model compatibility warnings |
| `/list-models` | Alias for `/set-model` |
Examples:
```bash
# Switch to Grok (auto-selects grok-4-1-fast for tool calling)
/endpoint grok

# List Grok models with capability indicators
/set-model
# Output:
# ✅ grok-4-1-fast - Best for tool calling (2000k context)
# ✅ grok-4-1 - High-quality reasoning
#    grok-4-latest - Latest general model (no skills)

# Force use a non-tool model
/set-model grok-4-latest --force

# Switch to Gemini AI (AI Studio)
/endpoint gemini
```

| Command | Action |
|---|---|
| `/context` | Show current token usage, cost estimation, and context window status |
| `/stats` | Display usage analytics dashboard with provider/model breakdowns |
| `/export [format]` | Export current session (formats: json, md, csv) |
Token Tracking Support by Provider:
✅ Full Support (returns usage data with automatic token tracking):
- OpenAI (gpt-4o, gpt-4o-mini, etc.)
- xAI/Grok (grok-4-1-fast, grok-4-1, etc.)
- Venice.ai (venice-uncensored, etc.)
- Google Gemini AI Studio (gemini-2.0-flash, etc.)
- Google Vertex AI (gemini models via OpenAI endpoint)
- OpenRouter (all models)
- DigitalOcean Gradient (Agent API with RAG - supports stream_options.include_usage)
❌ No Support (uses estimation only):
- Anthropic Claude (Native API - different format, not yet implemented)
- ElevenLabs (Voice-focused API)
Examples:
```bash
# Check current context usage
/context
# Shows: Token usage (12.5K/128K), cost ($0.034), warning level

# View analytics dashboard
/stats
# Shows: Lifetime usage, top models, provider breakdown, daily stats

# Export conversation to Markdown
/export md
# Saves to: ~/.celeste/exports/session_<id>_<timestamp>.md

# Export to JSON for programmatic access
/export json
```

Note: When using providers without token tracking (Anthropic native API, ElevenLabs), Celeste CLI will estimate tokens based on character count (~4 chars = 1 token), but won't show exact API usage or costs. For accurate token tracking and context management features, use providers marked with ✅ above.
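The documented ~4-chars-per-token heuristic is a one-liner; the sketch below rounds up so short strings never estimate to zero (Celeste's actual estimator may round differently):

```go
package main

import "fmt"

// estimateTokens applies the ~4 characters per token heuristic,
// rounding up via integer ceiling division.
func estimateTokens(s string) int {
	return (len(s) + 3) / 4
}

func main() {
	fmt.Println(estimateTokens("Hello, Celeste!")) // 15 chars -> 4 tokens
}
```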
```bash
# Send a single message and exit
celeste message "What is the meaning of life?"

# Or use shorthand
celeste "Hello, Celeste!"
```

```bash
# List saved sessions
celeste session --list

# Load a specific session
celeste session --load abc123def

# Clear all sessions
celeste session --clear
```

Sessions are auto-saved to `~/.celeste/sessions/` and can be resumed later.
```bash
# List available skills (with descriptions)
celeste skills --list

# Initialize default skill configuration files
celeste skills --init
```

```bash
# Show version
celeste version
celeste --version

# Show help
celeste help
celeste --help
```

NSFW mode provides uncensored chat and NSFW image generation via Venice.ai:
Activating NSFW Mode:
```bash
# In chat, type:
/nsfw
# Header will show: NSFW • img:lustify-sdxl
```

Image Generation Commands:

```bash
# Generate with default model (lustify-sdxl)
image: cyberpunk cityscape at night

# Generate anime-style images
anime: magical girl with sword

# Generate dream-like images
dream: surreal cosmic landscape

# Use specific model for one generation
image[venice-sd35]: photorealistic portrait

# Upscale existing image
upscale: ~/path/to/image.jpg
```

Model Management:

```bash
# Set default image model
/set-model wai-Illustrious

# View available models
/set-model
# Models available:
# - lustify-sdxl (default NSFW)
# - wai-Illustrious (anime style)
# - hidream (dream-like quality)
# - nano-banana-pro
# - venice-sd35 (Stable Diffusion 3.5)
# - lustify-v7
# - qwen-image
```

Image Quality Settings:
All images generate with high-quality defaults:
- Steps: 40 (1-50, higher = more detail)
- CFG Scale: 12.0 (0-20, higher = stronger prompt adherence)
- Size: 1024x1024 (up to 1280x1280)
- Format: PNG (lossless)
- Safe Mode: Disabled (no NSFW blurring)
Download Location:
Images save to `~/Downloads` by default. Customize in `~/.celeste/skills.json`:

```json
{
  "downloads_dir": "~/Pictures"
}
```

LLM Prompt Chaining:
Ask the uncensored LLM to write prompts for you:
You: Write a detailed NSFW anime scene description
Celeste: [Generates detailed prompt]
You: image: [paste Celeste's prompt]
Celeste: *generates image from AI-written prompt*
Returning to Safe Mode:
```bash
/safe
# Returns to OpenAI endpoint with skills enabled
```

Configuration:

Add Venice.ai API key to `~/.celeste/skills.json`:

```json
{
  "venice_api_key": "your-venice-api-key",
  "venice_base_url": "https://api.venice.ai/api/v1",
  "venice_model": "venice-uncensored",
  "venice_image_model": "lustify-sdxl",
  "downloads_dir": "~/Downloads"
}
```

Limitations:
- Function calling disabled in NSFW mode (Venice uncensored doesn't support it)
- Skills are unavailable (use /safe to re-enable)
- Video generation not available (Venice API limitation)
```
celeste-cli/
├── cmd/celeste/                  # Main application
│   ├── main.go                   # CLI entry point
│   ├── tui/                      # Bubble Tea TUI components
│   │   ├── app.go                # Main TUI model & update loop
│   │   ├── chat.go               # Scrollable viewport (messages)
│   │   ├── input.go              # Text input + history
│   │   ├── skills.go             # Skills panel (execution status)
│   │   ├── styles.go             # Lip Gloss theme (corrupted aesthetic)
│   │   ├── streaming.go          # Simulated typing animation
│   │   └── messages.go           # Bubble Tea messages (events)
│   ├── tools/                    # Unified tool system
│   │   ├── builtin/              # All built-in tool implementations
│   │   └── mcp/                  # MCP client for external tools
│   ├── permissions/              # Tool permission system
│   ├── context/                  # Token budget & context management
│   ├── llm/                      # LLM client
│   │   ├── client.go             # OpenAI-compatible client
│   │   ├── stream.go             # Streaming handler (SSE)
│   │   └── providers_test.go     # Provider compatibility tests
│   ├── config/                   # Configuration management
│   │   ├── config.go             # JSON config (load/save/named)
│   │   └── session.go            # Session persistence
│   └── prompts/                  # Persona prompts
│       ├── celeste.go            # Prompt loader
│       └── celeste_essence.json  # Embedded Celeste personality
├── docs/                         # Documentation
│   ├── LLM_PROVIDERS.md          # Provider compatibility guide
│   ├── CAPABILITIES.md           # What Celeste can do (ecosystem)
│   ├── PERSONALITY.md            # Celeste personality quick ref
│   └── ROUTING.md                # Sub-agent routing (ecosystem)
├── LICENSE                       # MIT License
├── CHANGELOG.md                  # Version history
├── CONTRIBUTING.md               # Contribution guidelines
├── SECURITY.md                   # Security policy
└── README.md                     # This file
```
```mermaid
%%{init: {'theme':'base', 'themeVariables': {'primaryColor':'#4a90e2','secondaryColor':'#7c3aed','tertiaryColor':'#10b981','primaryTextColor':'#fff','lineColor':'#6c757d','fontSize':'14px'}}}%%
flowchart TB
    subgraph CLI["CLI Entry (main.go)"]
    style CLI fill:#e8f4f8,stroke:#4a90e2,stroke-width:2px
    Main[Parse Args] --> |chat| TUI[Launch TUI]
    Main --> |config| Config[Config Manager]
    Main --> |session| Session[Session Manager]
    Main --> |skills| Skills[Skills Registry]
    Main --> |help| Help[Print Help]
    end
    subgraph TUI_Layer["TUI Layer (Bubble Tea)"]
    style TUI_Layer fill:#f3e8ff,stroke:#7c3aed,stroke-width:2px
    TUI --> Header[Header Bar]
    TUI --> Viewport[Chat Viewport]
    TUI --> Input[Text Input + History]
    TUI --> SkillsPanel[Skills Panel]
    TUI --> Status[Status Bar]
    end
    subgraph Backend["Backend (Business Logic)"]
    style Backend fill:#fef3c7,stroke:#f59e0b,stroke-width:2px
    Input --> |user message| LLM[LLM Client]
    LLM --> |stream chunks| Stream[Streaming Handler]
    Stream --> |SSE parsing| SimType[Simulated Typing]
    SimType --> |char-by-char| Viewport
    LLM --> |tool_calls detected| Executor[Skill Executor]
    Executor --> |lookup| Registry[Skills Registry]
    Registry --> |execute| Handler[Skill Handler]
    Handler --> |result| Executor
    Executor --> |tool result| LLM
    LLM --> |final response| Stream
    end
    subgraph Storage["Persistence Layer"]
    style Storage fill:#d1fae5,stroke:#10b981,stroke-width:2px
    Config --> ConfigFiles["~/.celeste/config.json"]
    Session --> SessionFiles["~/.celeste/sessions/*.json"]
    Handler --> |reminders/notes| LocalStorage["~/.celeste/reminders.json"]
    end
    subgraph External["External APIs"]
    style External fill:#fee2e2,stroke:#ef4444,stroke-width:2px
    Handler --> |weather| WttrIn[wttr.in API]
    Handler --> |tarot| TarotAPI[Tarot Function]
    Handler --> |nsfw/images| VeniceAI[Venice.ai]
    Handler --> |currency| ExchangeRate[ExchangeRate-API]
    Handler --> |twitch| TwitchAPI[Twitch API]
    Handler --> |youtube| YouTubeAPI[YouTube Data API]
    end
    classDef entryPoint fill:#4a90e2,stroke:#357abd,color:#fff
    classDef uiComponent fill:#7c3aed,stroke:#6b21a8,color:#fff
    classDef business fill:#f59e0b,stroke:#d97706,color:#fff
    classDef storage fill:#10b981,stroke:#059669,color:#fff
    classDef external fill:#ef4444,stroke:#dc2626,color:#fff
    class Main,TUI entryPoint
    class LLM,Stream,Executor business
    class ConfigFiles,SessionFiles,LocalStorage storage
    class WttrIn,TarotAPI,VeniceAI,ExchangeRate,TwitchAPI,YouTubeAPI external
```
- User Input → Text input component (with history)
- TUI Update → Bubble Tea update loop processes input
- LLM Request → Client sends message + tools to OpenAI/Grok
- Stream Parse → Parse SSE chunks (text or tool_calls)
- Tool Execution (if tool_calls):
  - Executor receives tool call JSON
  - Registry looks up skill handler
  - Handler executes (local or API call)
  - Result sent back to LLM
- Response Stream → LLM generates natural language response
- Simulated Typing → Character-by-character rendering (if enabled)
- Viewport Update → Chat history updates with new message
- Session Save → Auto-save conversation to disk
The TUI uses the corrupted-theme color palette inspired by Celeste's abyss aesthetic:
| Color | Hex | RGB | Usage |
|---|---|---|---|
| Accent | `#d94f90` | rgb(217, 79, 144) | Headers, prompts, highlights, user messages |
| Purple | `#8b5cf6` | rgb(139, 92, 246) | Function calls, secondary elements, skill names |
| Dark Purple | `#6d28d9` | rgb(109, 40, 217) | Borders, subtle accents |
| Background | `#0a0a0a` | rgb(10, 10, 10) | Main background (terminal) |
| Surface | `#1a1a1a` | rgb(26, 26, 26) | Elevated surfaces, panels |
| Text | `#f5f1f8` | rgb(245, 241, 248) | Primary text, assistant messages |
| Muted | `#7a7085` | rgb(122, 112, 133) | Hints, timestamps, secondary text |
| Success | `#10b981` | rgb(16, 185, 129) | Skill success indicators |
| Error | `#ef4444` | rgb(239, 68, 68) | Errors, warnings |
When Celeste is thinking, a demonic eye animation plays:
👁️ (a pulsing cycle of eye glyphs, rendered frame by frame in the terminal)
Colors pulse between magenta (#d94f90) and red (#dc2626) to show "corruption deepening."
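Frame cycling like this is typically driven by a tick message in the update loop; a minimal sketch of the modulo frame selection (placeholder glyphs, not the actual eye frames):

```go
package main

import "fmt"

// frames stand in for the animation's eye glyphs; the real TUI uses its
// own corruption-themed frames and recolors them on each tick.
var frames = []string{"◉", "●", "◎", "○"}

// frameAt picks the frame for a given tick, wrapping around the cycle.
func frameAt(tick int) string {
	return frames[tick%len(frames)]
}

func main() {
	for tick := 0; tick < 6; tick++ {
		fmt.Print(frameAt(tick), " ")
	}
	fmt.Println()
}
```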
- Go 1.21+ (the module targets `go 1.24.0`; Go 1.21+ toolchains fetch newer ones automatically)
- Terminal with 256-color support (iTerm2, Alacritty, Windows Terminal, etc.)
- API Keys (for testing skills):
- OpenAI API key (required for chat)
- Venice.ai API key (optional, for NSFW/image skills)
- YouTube Data API key (optional, for YouTube skill)
- Twitch Client ID (optional, for Twitch skill)
```bash
cd celeste-cli
go mod tidy
go build -o celeste ./cmd/celeste
```

```bash
# Run all tests
go test ./...

# Run with coverage
go test -cover ./...

# Run specific package
go test ./cmd/celeste/skills -v

# Run provider compatibility tests
OPENAI_API_KEY=sk-xxx go test ./cmd/celeste/llm -run TestOpenAI_FunctionCalling -v
```

```bash
# Format code
gofmt -w ./cmd

# Run linter
go vet ./...

# Check for unused imports
goimports -w ./cmd
```

| Package | Purpose | Version |
|---|---|---|
| `github.com/charmbracelet/bubbletea` | TUI framework | v1.3.10 |
| `github.com/charmbracelet/bubbles` | TUI components (viewport, textinput) | v0.21.0 |
| `github.com/charmbracelet/lipgloss` | Styling engine | v1.1.0 |
| `github.com/sashabaranov/go-openai` | OpenAI client (streaming, function calling) | v1.20.4 |
| `github.com/google/uuid` | UUID generation | v1.6.0 |
| `github.com/skip2/go-qrcode` | QR code generation | v0.0.0-20200617195104 |
| `github.com/stretchr/testify` | Testing framework | v1.11.1 |
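Pinned like that, the `require` block in `go.mod` would look roughly like this (a sketch: the module path is an assumption, and the go-qrcode pseudo-version is copied as listed above, though full pseudo-versions normally end in a commit hash):

```
module github.com/whykusanagi/celeste-cli // module path assumed

go 1.24.0

require (
	github.com/charmbracelet/bubbles v0.21.0
	github.com/charmbracelet/bubbletea v1.3.10
	github.com/charmbracelet/lipgloss v1.1.0
	github.com/google/uuid v1.6.0
	github.com/sashabaranov/go-openai v1.20.4
	github.com/skip2/go-qrcode v0.0.0-20200617195104
	github.com/stretchr/testify v1.11.1
)
```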
- `main` - Stable releases only
- `feature/bubbletea-tui` - Current development branch (TUI implementation)
- Feature branches - Fork from `feature/bubbletea-tui`
Error:
```
No API key configured.
Set CELESTE_API_KEY environment variable or run: celeste config --set-key <key>
```
Solution:
```bash
celeste config --set-key sk-your-openai-key
```
Or use an environment variable:
```bash
export CELESTE_API_KEY="sk-your-key"
celeste chat
```

Symptom: LLM says "I don't have access to real-time data" when asking for weather, etc.
Possible Causes:
- Provider doesn't support function calling - See LLM Provider Compatibility
- Skill config missing - Check `~/.celeste/skills.json` for required API keys

Solution:
```bash
# Test provider compatibility
OPENAI_API_KEY=your-key go test ./cmd/celeste/llm -run TestOpenAI_FunctionCalling -v

# If the provider doesn't support skills, switch to OpenAI or Grok
celeste config --set-url https://api.openai.com/v1
celeste config --set-key sk-openai-key
```

Symptom: Celeste doesn't respond with personality, or endpoint errors
Cause: Your endpoint might already have the Celeste persona embedded (e.g., DigitalOcean agent)
Solution:
```bash
celeste config --skip-persona true
```

Symptom: Text appears in large chunks instead of smooth typing
Solution: Enable simulated typing:
```bash
celeste config --simulate-typing true
celeste config --typing-speed 40   # Adjust speed (chars per second)
```

Symptom: Conversations don't persist between runs
Cause: Sessions directory not writable or doesn't exist
Solution:
```bash
# Check permissions
ls -la ~/.celeste/sessions/

# If missing, create it
mkdir -p ~/.celeste/sessions/
chmod 755 ~/.celeste/sessions/
```

Symptom: Screen flickering, garbled text, or broken layout
Possible Causes:
- Terminal doesn't support 256 colors
- Terminal size too small
- Environment variable issues
Solution:
```bash
# Check terminal capabilities
echo $TERM    # Should be xterm-256color or similar
tput colors   # Should return 256

# Set TERM if needed
export TERM=xterm-256color

# Ensure minimum terminal size (80x24)
resize        # Check current size
```

Error: `go: updates to go.mod needed; to update it: go mod tidy`
Solution:
```bash
go mod tidy
go build -o celeste ./cmd/celeste
```

Error: `package X is not in GOROOT`

Solution:
```bash
# Update dependencies
go get -u ./...
go mod tidy
```

Comprehensive documentation for developers and contributors:
- ARCHITECTURE.md - System design, component relationships, and data flow diagrams
- TESTING.md - Testing guide with examples, coverage reports, and best practices
- CONTRIBUTING.md - How to contribute: adding skills, providers, and commands
- LLM_PROVIDERS.md - Provider compatibility matrix and setup guides
- STYLE_GUIDE.md - Code formatting standards and conventions
Overall coverage: 17.4% across critical packages
| Package | Coverage | Status |
|---|---|---|
| prompts | 97.1% | ✅ Excellent |
| providers | 72.8% | ✅ Excellent |
| config | 52.0% | ✅ Good |
| commands | 25.8% | |
| venice | 22.6% | |
| skills | 12.2% | |
| llm | 0% | ❌ Requires mocking |
| tui | 0% | ❌ Requires mocking |
See TESTING.md for details on running tests and writing new ones.
We welcome contributions! Please see CONTRIBUTING.md for detailed guidelines.
- Fork the repository
- Create a feature branch from `feature/bubbletea-tui`:
  ```bash
  git checkout feature/bubbletea-tui
  git checkout -b feature/your-feature-name
  ```
- Make your changes
- Test thoroughly:
  ```bash
  go test ./...
  go vet ./...
  gofmt -l ./cmd   # Should return nothing
  ```
- Submit a pull request to `feature/bubbletea-tui`
- 🧪 Testing - Provider compatibility tests, skill unit tests, integration tests
- 📚 Documentation - Improve guides, add examples, translate to other languages
- 🎨 Themes - Alternative color schemes, terminal themes
- 🎮 Skills - New skill implementations (requires function calling support)
- 🐛 Bug Fixes - See GitHub Issues
- ⚡ Performance - Optimize streaming, reduce memory usage
Before submitting a PR:
```bash
# Build succeeds
go build -o celeste ./cmd/celeste

# All tests pass
go test ./...

# No vet warnings
go vet ./...

# Code is formatted
gofmt -w ./cmd
git diff   # Should show no changes

# TUI works
./celeste chat
```

This project is licensed under the MIT License - see the LICENSE file for details.
- Charm - Bubble Tea, Bubbles, and Lip Gloss TUI frameworks
- sashabaranov/go-openai - OpenAI Go client
- OpenAI - Function calling API
- xAI - Grok API
- Venice.ai - Uncensored AI models
- wttr.in - Free weather API
- Issues: GitHub Issues
- Security: See SECURITY.md
- Documentation: docs/
- Community: Discord (coming soon)
Built with 💜 by @whykusanagi
"The Abyss whispers through the terminal..." - Celeste