
Celeste - Corrupted AI Assistant

Character artwork by いかわさ (ikawasa23)

👁️ Celeste CLI - Interactive AI Assistant

A premium, corruption-aesthetic command-line interface for CelesteAI


Built with Charm's Bubble Tea for flicker-free, modern terminal experiences


✨ What is Celeste CLI?

Celeste CLI is a full standalone agentic development tool with her own persona, featuring:

  • 🎨 Premium TUI - Flicker-free rendering with corrupted-theme aesthetics
  • 🔮 40 Built-in Tools - File I/O, shell, web search, code graph, code review, collections search, git, crypto, and more
  • 📖 .grimoire Project Context - Persona-themed project config files with auto-discovery and auto-init
  • 🧠 Code Graph + Semantic Search - MinHash + BM25 fused ranking with an LSH band table for sub-linear queries and a structural rerank; tree-sitter TypeScript parsing for accurate call-graph edges; embedded celeste-stopwords v1.0.0 noise filter
  • 🔍 Graph-Based Code Review - Structural analysis detecting stubs, lazy redirects, placeholders, error swallowing, and hardcoded values
  • 🔌 Direct Codegraph MCP Tools - celeste_index, celeste_code_search, celeste_code_review, celeste_code_graph, celeste_code_symbols served verbatim from the cached graph (no chat-LLM round-trip, no max_tokens ceiling, streaming progress notifications)
  • 🔒 Permission System - Multi-layer allow/deny/ask rules with pattern matching
  • 💾 Session Persistence - JSONL auto-save, resume, and file checkpointing with stale detection and revert
  • 🌐 Multi-Provider - Grok/xAI (default), OpenAI, Anthropic (native SDK), Gemini, Venice.ai, Vertex AI, OpenRouter
  • 💰 Cost Tracking - Per-model pricing with live session cost display
  • 🪝 Hooks - Pre/post tool execution hooks defined in .grimoire
  • 🧠 Extended Thinking - Leverage reasoning tokens (Claude, Gemini, Grok) with /effort control
  • 🖼️ Image Input - Multimodal support for vision-capable models
  • 🎭 Celeste Personality - Embedded AI personality with lore-accurate responses
  • 🔗 Blockchain Tools - IPFS, Alchemy, wallet security monitoring

Three Runtime Modes

| Mode | Command | What it does |
|---|---|---|
| Chat | celeste chat (default) | Interactive chat with auto-looping tool calls (50-turn safety cap). |
| Agent | /agent <goal> (in TUI) or celeste agent --goal "..." | Fully autonomous multi-turn agent with planning, file I/O, checkpointing, and resume. For long-running tasks. |
| Orchestrator | /orchestrate <goal> (in TUI) | Agent run with a second reviewer model that critiques and debates the output. For high-quality deliverables. |

Chat vs Agent: Chat is interactive with tool auto-looping - you guide the conversation while Celeste calls tools as needed. Agent is a separate autonomous runtime with its own turn loop, planning phase, checkpoint store, and workspace awareness. The orchestrator adds a reviewer model on top of the agent.


🚀 Quick Start

Quick Install (Recommended)

If you have Go 1.23+ installed:

go install github.com/whykusanagi/celeste-cli/cmd/celeste@latest

The celeste binary will be installed to $GOPATH/bin (or ~/go/bin by default).

Requirements:

  • Go 1.23 or higher
  • $GOPATH/bin (or ~/go/bin) in your PATH

To add to PATH:

export PATH="$PATH:$(go env GOPATH)/bin"

Manual Installation

Alternatively, build from source:

# Clone the repository
git clone https://github.com/whykusanagi/celeste-cli.git
cd celeste-cli

# Build the binary
go build -o celeste ./cmd/celeste

# Install to PATH (optional)
cp celeste ~/.local/bin/

First Run

xAI/Grok (default, recommended):

celeste config --set-key YOUR_XAI_KEY
celeste chat

With Collections (RAG):

celeste config --set-key YOUR_XAI_KEY
celeste config --set-management-key YOUR_XAI_MANAGEMENT_KEY
celeste collections list          # see available collections
celeste collections enable <id>   # enable for chat
celeste chat

OpenAI:

celeste config --init openai
celeste -config openai config --set-key YOUR_OPENAI_KEY
celeste -config openai chat

Other providers: celeste config --init <name> where name is: grok, openai, venice, elevenlabs

Project Setup

When you enter a project directory, Celeste auto-initializes:

cd your-project
celeste chat
# Creates .grimoire (project context), indexes code graph, loads memories

Or manually:

celeste init          # create .grimoire
celeste index         # build code graph
celeste index status  # check graph stats

🔒 Security & Verification

All Celeste CLI releases are cryptographically signed with GPG to ensure authenticity and integrity.

Quick Verification

Before using a downloaded binary, verify its authenticity:

# Download verification script
curl -O https://raw.githubusercontent.com/whykusanagi/celeste-cli/main/scripts/verify.sh
chmod +x verify.sh

# Verify your download
./verify.sh celeste-linux-amd64.tar.gz

Manual Verification

For manual verification or more details, see the complete Verification Guide.
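As a rough illustration of the checksum half of that process (the file names below are hypothetical, and the block creates a local stand-in file just to demonstrate the pattern; follow the Verification Guide for the authoritative steps and real artifact names):

```shell
# Stand-in for a downloaded release archive (in a real verification you would
# download the archive plus its checksum file from the releases page instead).
printf 'example release payload' > celeste-linux-amd64.tar.gz
sha256sum celeste-linux-amd64.tar.gz > SHA256SUMS

# Verify the archive against the checksum list; prints "<file>: OK" on success.
sha256sum -c SHA256SUMS

# With the signing key imported (see "Import Key" below), you would then verify
# the GPG signature over the checksum file, e.g.:
# gpg --verify SHA256SUMS.asc SHA256SUMS
```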

Release Signing:

  • All commits are GPG-signed
  • All releases include GPG signatures
  • Checksums are signed with GPG
  • Complete manifest with build metadata

PGP Key Information:

  • Key ID: 875849AB1D541C55
  • Fingerprint: 9404 90EF 09DA 3132 2BF7 FD83 8758 49AB 1D54 1C55
  • Keybase: @whykusanagi
  • GitHub: whykusanagi.gpg

Import Key:

# From Keybase (recommended)
curl https://keybase.io/whykusanagi/pgp_keys.asc | gpg --import

# From GitHub
curl https://github.com/whykusanagi.gpg | gpg --import

For security issues, see our Security Policy or contact security@whykusanagi.xyz.




🎯 Features

Interactive TUI Mode

  • Flicker-Free Rendering - Double-buffered Bubble Tea rendering (no screen tearing)
  • Scrollable Chat - PgUp/PgDown navigation through conversation history
  • Input History - Arrow keys to browse previous messages (like bash history)
  • Skills Panel - Real-time skill execution status with demonic eye animation
  • Corrupted Theme - Lip Gloss styling with pink/purple abyss aesthetic
  • Real Streaming + Corruption Animation - Token-by-token streaming with corrupted glitch phrases at the typing cursor
  • Markdown Rendering - glamour-powered markdown with corrupted theme (code blocks, tables, headers, bold)

Tool System (v1.9)

40+ built-in tools powered by AI function calling:

  • Dev Tools (bash, read/write/patch files, search, list files)
  • Code Graph (semantic search with MinHash+BM25 fusion, code review, symbol analysis, tree-sitter TypeScript parsing)
  • Direct Codegraph MCP Tools (celeste_index, celeste_code_search, celeste_code_review, celeste_code_graph, celeste_code_symbols โ€” verbatim, no chat-LLM round-trip)
  • Git (status, log)
  • Web (search, fetch)
  • Information Services (Weather, Currency, Twitch, YouTube)
  • Utilities (Conversions, Encoding, Generators, QR codes)
  • Productivity (Reminders, Notes, Todo tracking)
  • Blockchain (IPFS, Alchemy, wallet security)

See complete tool list below

Collections Support (xAI RAG)

  • Upload Custom Documents - Create knowledge bases with your own documentation
  • Semantic Search - Celeste automatically searches collections when answering questions
  • Interactive TUI - Manage collections with /collections command in chat
  • CLI Management - Create, upload, enable/disable collections from command line
  • Multiple Collections - Organize by topic, enable only what's relevant

See Collections Guide for setup and usage.

Tool System (v1.7)

  • MCP (Model Context Protocol) support for external tool servers
  • Permission system with configurable allow/deny rules
  • Streaming tool execution with concurrent dispatch
  • Automatic context window management
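Allow/deny/ask permission systems of this kind are typically implemented as an ordered list of pattern rules, first match wins. A minimal sketch of that idea (illustrative only; the names, rule format, and default here are assumptions, not Celeste's actual implementation):

```go
package main

import (
	"fmt"
	"path"
)

// Rule maps a tool-name glob to a decision: "allow", "deny", or "ask".
type Rule struct {
	Pattern  string
	Decision string
}

// decide returns the first matching rule's decision; unmatched tools
// default to "ask" (a conservative choice assumed for this sketch).
func decide(rules []Rule, tool string) string {
	for _, r := range rules {
		if ok, _ := path.Match(r.Pattern, tool); ok {
			return r.Decision
		}
	}
	return "ask"
}

func main() {
	rules := []Rule{
		{"git_*", "allow"}, // read-only git tools pass through
		{"bash", "ask"},    // shell execution needs confirmation
		{"*", "deny"},      // everything else is blocked
	}
	fmt.Println(decide(rules, "git_status")) // allow
	fmt.Println(decide(rules, "bash"))       // ask
	fmt.Println(decide(rules, "write_file")) // deny
}
```

Ordering matters: a trailing `*` deny rule only fires for tools no earlier rule claimed.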

Session Management

  • Conversation Persistence - Auto-save and resume sessions seamlessly
  • Message History - Full conversation logging with timestamps
  • Session Listing - Browse and load previous sessions by ID
  • Session Clearing - Bulk delete sessions when needed

Multi-Provider Support (7 Providers)

  • ✅ Grok/xAI (grok-4-1-fast) - DEFAULT - Optimized for tool calling, 2M context • Token tracking ✓
  • ✅ OpenAI (gpt-4.1-mini, gpt-4.1) - Full function calling with streaming • Token tracking ✓
  • ✅ Anthropic Claude (claude-sonnet-4-5) - Native SDK with prompt caching and extended thinking • Token tracking ✓
  • ✅ Google Gemini AI (gemini-2.5-flash) - Simple API keys, free tier, full streaming • Token tracking ✓
  • ⚠️ Google Vertex AI (gemini-2.5-flash) - Enterprise, requires GCP project + billing • Token tracking ✓
  • ✅ Venice.ai (venice-uncensored) - NSFW mode, image generation/upscaling • Token tracking ✓
  • ✅ OpenRouter (multi-provider) - Parallel function calling support • Token tracking ✓

Dynamic Model Selection - Auto-selects the best tool-calling model per provider
Capability Indicators - Visual feedback (✓ skills / ⚠️ no skills) in the header

See full compatibility matrix

Configuration

  • JSON-based Config - Modern ~/.celeste/config.json format
  • Named Configs - Multi-profile support (openai, grok, venice, etc.)
  • Skills Config - Separate skills.json for skill-specific API keys
  • Secrets Handling - Separate secrets.json for backward compatibility
  • Persona Injection - Configurable Celeste personality prompt
  • Environment Override - Env vars override file config

🔮 Tool System (40 Tools)

Celeste CLI uses OpenAI-compatible function calling to power its tools. You don't invoke tools directly โ€” you chat naturally, and the AI decides when to call them.

Dev Tools (8 Tools)

| Tool | Description |
|---|---|
| bash | Execute shell commands in the workspace |
| read_file | Read files with checkpointing |
| write_file | Write files with snapshot backup |
| patch_file | Apply targeted edits to files |
| list_files | List directory contents with glob patterns |
| search | Search file contents with regex |
| git_status | Show working tree status |
| git_log | Show commit history |

Code Graph Tools (4 Tools)

| Tool | Description |
|---|---|
| code_search | MinHash semantic search across all indexed symbols |
| code_review | Graph-based code review (6 categories: stubs, lazy redirects, placeholders, TODOs, error swallowing, hardcoded values) |
| code_graph | Query symbol relationships and call chains |
| code_symbols | List symbols in a file or package |
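At the heart of code_search is MinHash: two symbols' token sets can be compared in constant time by comparing fixed-size signatures. A self-contained sketch of the estimation step (illustrative only; Celeste's actual index also fuses BM25 scores and uses LSH banding, both omitted here, and the hashing scheme below is an assumption):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// minHashSig computes a k-slot MinHash signature for a set of string tokens.
// Each of the k hash functions is simulated by salting FNV-1a with the slot index.
func minHashSig(tokens []string, k int) []uint32 {
	sig := make([]uint32, k)
	for i := range sig {
		min := ^uint32(0)
		for _, t := range tokens {
			h := fnv.New32a()
			fmt.Fprintf(h, "%d:%s", i, t)
			if v := h.Sum32(); v < min {
				min = v
			}
		}
		sig[i] = min
	}
	return sig
}

// estJaccard estimates Jaccard similarity as the fraction of matching slots.
func estJaccard(a, b []uint32) float64 {
	match := 0
	for i := range a {
		if a[i] == b[i] {
			match++
		}
	}
	return float64(match) / float64(len(a))
}

func main() {
	a := minHashSig([]string{"parse", "config", "load", "json"}, 64)
	b := minHashSig([]string{"parse", "config", "save", "json"}, 64)
	fmt.Printf("estimated Jaccard: %.2f\n", estJaccard(a, b))
}
```

The payoff is that signatures can be precomputed at index time, so query-time comparison never touches the original token sets.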

Divination & Entertainment

| Skill | Description | Dependencies |
|---|---|---|
| Tarot Reading | Three-card or Celtic Cross spreads | Tarot API (requires auth token) |

Example:

You: Give me a tarot reading
Celeste: *calls tarot_reading skill*
Celeste: Your cards reveal... [interpretation]

Content & Media

| Skill | Description | Dependencies |
|---|---|---|
| NSFW Mode | Venice.ai uncensored responses | Venice.ai API key |
| Content Generation | Platform-specific templates (Twitter/TikTok/YouTube/Discord) | None (LLM-powered) |
| Image Generation | Venice.ai image creation | Venice.ai API key |

Example:

You: Generate a tweet about cybersecurity
Celeste: *calls generate_content skill*
Celeste: Here's your tweet: [280 char tweet with hooks]

Information Services

| Skill | Description | Dependencies |
|---|---|---|
| Weather | Current conditions and forecasts | wttr.in API (free, no key) |
| Currency Converter | Real-time exchange rates | ExchangeRate-API (free) |
| Twitch Live Check | Check if streamers are online | Twitch API (client ID required) |
| YouTube Videos | Get recent uploads from channels | YouTube Data API (key required) |

Example:

You: What's the weather in 10001?
Celeste: *calls get_weather skill*
Celeste: It's 45°F and cloudy in New York City...

Utilities (8 Skills)

| Skill | Description | Dependencies |
|---|---|---|
| Unit Converter | Length, weight, temperature, volume | None (local calculations) |
| Timezone Converter | Convert times between zones | None (local calculations) |
| Hash Generator | MD5, SHA256, SHA512 | None (crypto/sha256) |
| Base64 Encode | Encode text to base64 | None (encoding/base64) |
| Base64 Decode | Decode base64 to text | None (encoding/base64) |
| UUID Generator | Generate random UUIDs (v4) | None (google/uuid) |
| Password Generator | Secure random passwords (customizable) | None (crypto/rand) |
| QR Code Generator | Create QR codes from text/URLs | None (skip2/go-qrcode) |

Example:

You: Convert 100 miles to kilometers
Celeste: *calls convert_units skill*
Celeste: 100 miles is 160.93 kilometers
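The "None" dependency entries above mean these skills reduce to plain Go standard-library calls executed locally. For instance, a hash + base64 handler boils down to something like this (an illustrative sketch, not the actual skill code):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// hashSHA256 returns the hex-encoded SHA-256 digest of the input text.
func hashSHA256(text string) string {
	sum := sha256.Sum256([]byte(text))
	return hex.EncodeToString(sum[:])
}

// encodeBase64 returns the standard base64 encoding of the input text.
func encodeBase64(text string) string {
	return base64.StdEncoding.EncodeToString([]byte(text))
}

func main() {
	fmt.Println(hashSHA256("abc"))     // well-known SHA-256 test vector
	fmt.Println(encodeBase64("hello")) // aGVsbG8=
}
```

Because nothing leaves the machine, these skills work offline and need no API keys.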

Productivity (5 Skills)

| Skill | Description | Dependencies |
|---|---|---|
| Set Reminder | Create reminders with timestamps | Local storage (~/.celeste/reminders.json) |
| List Reminders | View all active reminders | Local storage |
| Save Note | Store notes by name | Local storage (~/.celeste/notes.json) |
| Get Note | Retrieve saved notes | Local storage |
| List Notes | View all saved note names | Local storage |

Example:

You: Remind me to call mom tomorrow at 3pm
Celeste: *calls set_reminder skill*
Celeste: Reminder set for December 4, 2025 at 3:00 PM

You: Save a note called groceries: milk, eggs, bread
Celeste: *calls save_note skill*
Celeste: Note 'groceries' saved successfully!

Skills Configuration

Skill-specific API keys are stored in ~/.celeste/skills.json:

{
  "venice_api_key": "your-venice-key",
  "tarot_auth_token": "Basic xxx",
  "weather_default_zip_code": "12345",
  "twitch_client_id": "your-client-id",
  "youtube_api_key": "your-youtube-key"
}

Configure via CLI:

celeste config --set-venice-key <key>
celeste config --set-weather-zip 12345
celeste config --set-twitch-client-id <id>
celeste config --set-youtube-key <key>
celeste config --set-tarot-token <token>

🔌 Claude Code Integration

Celeste v1.9.0+ exposes the codegraph as first-class MCP tools (no chat-LLM round-trip, no output-token ceiling, verbatim results). Register celeste serve once per workspace and any MCP client (Claude Code, Codex, Cursor, etc.) gets:

  • celeste_index - status, update, and rebuild operations with notifications/progress streaming
  • celeste_code_search - semantic search (MinHash Jaccard + BM25 fusion + structural rerank)
  • celeste_code_review - structural code review findings as verbatim JSON
  • celeste_code_graph - symbol callers, callees, and references
  • celeste_code_symbols - list symbols in a file or package

Indexing is explicit: query tools never auto-reindex. After code changes, the caller invokes celeste_index { operation: "update" } to refresh the graph.

# Add Celeste as an MCP server (once per workspace you want indexed)
claude mcp add celeste celeste serve

Optionally, install the celeste-for-claude companion for the persona-routed skill command wrappers (/celeste-review, /celeste-search, /celeste-graph, /celeste-context):

git clone https://github.com/whykusanagi/celeste-for-claude.git
cp celeste-for-claude/skills/*.md ~/.claude/commands/

Claude Code stays in control while Celeste provides the graph intelligence. The direct tools are preferred for tool-driven workflows; the persona-routed skills are a convenience for natural-language interactions.


📊 How Celeste Compares

| | Celeste CLI | OpenClaw | Picobot | oh-my-pi | gptme |
|---|---|---|---|---|---|
| Focus | Agentic coding | Personal AI assistant | Lightweight AI bot | CLI coding agent | CLI coding agent |
| Language | Go | TypeScript | Go | TS + Rust | Python |
| Deploy | 54MB binary, zero deps | Node.js (~393MB) | 9MB binary | Bun + Rust | pip package |
| RAM | Low | High (Node.js) | ~10MB | Medium | Medium |
| Providers | 7 (native SDKs) | OpenAI primary | OpenAI only | 6+ | 7+ |
| Tools | 40 | Many | 16 | Many | ~10 |
| Code Graph | Yes (MinHash) | No | No | No | No |
| Code Review | Yes (6 categories) | No | No | No | No |
| Collections/RAG | Yes (xAI) | No | No | No | Yes |
| MCP | Server + client | Partial | Client | Full | Yes |

Celeste's unique advantages: Code graph with semantic search, structural code review, persistent project memory, .grimoire context with staleness tracking, corruption-aesthetic TUI with typing animation. No other project combines compiled binary + code intelligence + MCP server.

See docs/COMPARISON.md for detailed analysis.


๐ŸŒ LLM Provider Compatibility

Celeste CLI requires OpenAI-style function calling for skills to work. Not all LLM providers support this feature.

Quick Reference Matrix

| Provider | Function Calling | Status | Setup Difficulty |
|---|---|---|---|
| OpenAI | ✅ Native | Fully Supported | Easy |
| Grok (xAI) | ✅ OpenAI-Compatible | Fully Supported | Easy |
| DigitalOcean | ⚠️ Cloud Functions Only | Limited | Advanced (requires cloud deployment) |
| Venice.ai | ❓ Unknown | Needs Testing | Unknown |
| ElevenLabs | ❓ Unknown | Needs Testing | Unknown |
| Local (Ollama) | ⚠️ Depends on Model | Varies | Medium (model-dependent) |

✅ Fully Supported: Grok/xAI (Default)

Setup:

celeste config --set-key your-xai-key
celeste chat

Default config points to xAI (https://api.x.ai/v1, model grok-4-1-fast). Best value for tool calling with 2M token context.

✅ Fully Supported: OpenAI

Setup:

celeste config --set-key sk-your-openai-key
celeste config --set-url https://api.openai.com/v1
celeste config --set-model gpt-4.1-mini
celeste chat

โš ๏ธ Limited Support: DigitalOcean

Limitation: DigitalOcean AI Agent requires cloud-hosted functions. Skills cannot execute locally.

Why skills won't work:

  • Celeste CLI executes skills locally (unit converter, QR generator, etc.)
  • DigitalOcean expects HTTP endpoints in the cloud
  • No way to bridge local execution with DigitalOcean's architecture

Workarounds:

  1. Use OpenAI or Grok instead
  2. Deploy skills as cloud functions (advanced)
  3. Use Celeste CLI without skills (chat only)

Testing Provider Compatibility

Run automated tests to verify function calling:

# Test OpenAI
OPENAI_API_KEY=your-key go test ./cmd/celeste/llm -run TestOpenAI_FunctionCalling -v

# Test Grok
GROK_API_KEY=your-key go test ./cmd/celeste/llm -run TestGrok_FunctionCalling -v

# Test Venice.ai
VENICE_API_KEY=your-key go test ./cmd/celeste/llm -run TestVeniceAI_FunctionCalling -v

Expected output (working):

=== RUN   TestOpenAI_FunctionCalling
✅ OpenAI function calling works! Called get_weather with location=new york
--- PASS: TestOpenAI_FunctionCalling (2.34s)

Expected output (not working):

=== RUN   TestVeniceAI_FunctionCalling
โš ๏ธ Venice.ai function calling failed: tools not supported
--- SKIP: TestVeniceAI_FunctionCalling

📚 See docs/LLM_PROVIDERS.md for complete provider compatibility guide


🎨 Function Calling Flow (Mermaid Diagram)

Here's how skills work under the hood:

%%{init: {'theme':'base', 'themeVariables': {'primaryColor':'#4a90e2','primaryTextColor':'#fff','primaryBorderColor':'#357abd','lineColor':'#6c757d','secondaryColor':'#7c3aed','tertiaryColor':'#10b981','noteBkgColor':'#fef3c7','noteTextColor':'#92400e'}}}%%
sequenceDiagram
    actor User
    participant CLI as Celeste CLI
    participant LLM as LLM Provider
    participant Skill as Skill Handler
    participant API as External API

    User->>+CLI: "What's the weather in NYC?"
    CLI->>+LLM: Send message + tools definition
    Note right of LLM: AI decides:<br/>need weather data
    LLM-->>-CLI: tool_call: get_weather(location="NYC")
    CLI->>+Skill: Execute get_weather handler
    Skill->>+API: Fetch weather data (wttr.in)
    API-->>-Skill: JSON weather response
    Skill-->>-CLI: Formatted weather data
    CLI->>+LLM: Send tool result back
    Note right of LLM: Generate natural<br/>response
    LLM-->>-CLI: "It's 45ยฐF and cloudy in NYC..."
    CLI->>-User: Display response with typing animation

Key Points:

  1. Tools sent with every request - All available skills are listed in the API call
  2. LLM decides when to call - You don't manually invoke skills, the AI does
  3. Local execution - Skills run on your machine (unless they need external APIs)
  4. Result sent back to LLM - Tool results are formatted and returned for interpretation
  5. Natural language output - LLM converts structured data into conversational responses

This requires OpenAI-style function calling support! Providers without this feature will ignore tools and respond as if they don't have access to data.
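Concretely, every request carries the tool schemas alongside the messages. A trimmed OpenAI-style request body for the weather skill (field values here are illustrative, not Celeste's exact payload):

```json
{
  "model": "grok-4-1-fast",
  "messages": [{ "role": "user", "content": "What's the weather in NYC?" }],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Current conditions for a location",
        "parameters": {
          "type": "object",
          "properties": { "location": { "type": "string" } },
          "required": ["location"]
        }
      }
    }
  ]
}
```

A capable provider responds with a tool_calls entry naming get_weather; a provider without function-calling support simply answers in prose and ignores the tools array.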


โš™๏ธ Configuration

Configuration Files

Celeste CLI uses three config files in ~/.celeste/:

| File | Purpose | Example |
|---|---|---|
| config.json | Main configuration | API endpoint, model, timeouts |
| secrets.json | API keys (backward compat) | OpenAI API key only |
| skills.json | Skill-specific configs | Venice.ai key, weather zip code |

Main Config (~/.celeste/config.json)

{
  "api_key": "",
  "base_url": "https://api.x.ai/v1",
  "model": "grok-4-1-fast",
  "timeout": 60,
  "skip_persona_prompt": false,
  "simulate_typing": true,
  "typing_speed": 40
}

Skills Config (~/.celeste/skills.json)

{
  "venice_api_key": "your-venice-key",
  "venice_base_url": "https://api.venice.ai/api/v1",
  "venice_model": "venice-uncensored",
  "tarot_function_url": "https://your-tarot-api",
  "tarot_auth_token": "Basic xxx",
  "weather_default_zip_code": "10001",
  "twitch_client_id": "your-twitch-client-id",
  "twitch_default_streamer": "whykusanagi",
  "youtube_api_key": "your-youtube-key",
  "youtube_default_channel": "UC..."
}

Environment Variables (Override Config)

export CELESTE_API_KEY="sk-your-key"
export CELESTE_API_ENDPOINT="https://api.openai.com/v1"
export VENICE_API_KEY="your-venice-key"
export TAROT_AUTH_TOKEN="Basic xxx"

Environment variables take precedence over config files.

Config Commands

# View current config
celeste config --show

# Main config settings
celeste config --set-key sk-xxx
celeste config --set-url https://api.openai.com/v1
celeste config --set-model gpt-4o-mini
celeste config --skip-persona true
celeste config --simulate-typing true
celeste config --typing-speed 60

# Named configs (multi-profile support)
celeste config --list                     # List all profiles
celeste config --init openai              # Create openai profile
celeste config --init grok                # Create grok profile
celeste -config grok chat                 # Use grok profile

# Skill configuration
celeste config --set-venice-key <key>
celeste config --set-weather-zip 10001
celeste config --set-twitch-client-id <id>
celeste config --set-youtube-key <key>
celeste config --set-tarot-token <token>

Named Configs (Multi-Profile)

Create separate configs for different providers:

# Create OpenAI config
celeste config --init openai
celeste config --set-key sk-openai-key
celeste config --set-model gpt-4o-mini

# Create Grok config
celeste config --init grok
celeste config --set-key xai-grok-key
celeste config --set-url https://api.x.ai/v1
celeste config --set-model grok-beta

# Use specific config
celeste -config grok chat

Available templates: openai, grok, elevenlabs, venice, digitalocean


๐ŸŽฏ Usage

Interactive TUI Mode

# Launch interactive TUI (default command)
celeste chat

# Use a specific named config
celeste -config grok chat

Keyboard Shortcuts

| Key | Action |
|---|---|
| Ctrl+C | Exit immediately |
| Ctrl+D | Exit gracefully |
| PgUp/PgDown | Scroll chat history (full page) |
| Shift+↑/↓ | Scroll chat (3 lines at a time) |
| ↑/↓ | Navigate input history (previous messages) |
| Enter | Send message |
| Esc | Clear current input |

In-Chat Commands

Core Commands

| Command | Action |
|---|---|
| /help | Show available commands and keyboard shortcuts |
| /clear | Clear chat history (current session only) |
| /exit, /quit, /q | Exit application |

Provider & Model Management

| Command | Action |
|---|---|
| /endpoint <provider> | Switch to a different LLM provider (openai, grok, venice, gemini, openrouter, etc.) |
| /set-model | List available models for current provider with capability indicators |
| /set-model <name> | Switch to a specific model (validates function calling support) |
| /set-model <name> --force | Override model compatibility warnings |
| /list-models | Alias for /set-model |

Examples:

# Switch to Grok (auto-selects grok-4-1-fast for tool calling)
/endpoint grok

# List Grok models with capability indicators
/set-model
# Output:
# ✓ grok-4-1-fast - Best for tool calling (2000k context)
# ✓ grok-4-1 - High-quality reasoning
#   grok-4-latest - Latest general model (no skills)

# Force use a non-tool model
/set-model grok-4-latest --force

# Switch to Gemini AI (AI Studio)
/endpoint gemini

Context Management & Analytics

| Command | Action |
|---|---|
| /context | Show current token usage, cost estimation, and context window status |
| /stats | Display usage analytics dashboard with provider/model breakdowns |
| /export [format] | Export current session (formats: json, md, csv) |

Token Tracking Support by Provider:

✅ Full Support (Returns usage data with automatic token tracking):

  • OpenAI (gpt-4o, gpt-4o-mini, etc.)
  • xAI/Grok (grok-4-1-fast, grok-4-1, etc.)
  • Venice.ai (venice-uncensored, etc.)
  • Google Gemini AI Studio (gemini-2.0-flash, etc.)
  • Google Vertex AI (gemini models via OpenAI endpoint)
  • OpenRouter (all models)
  • DigitalOcean Gradient (Agent API with RAG - supports stream_options.include_usage)

โŒ No Support (Uses estimation only):

  • Anthropic Claude (Native API - different format, not yet implemented)
  • ElevenLabs (Voice-focused API)

Examples:

# Check current context usage
/context
# Shows: Token usage (12.5K/128K), cost ($0.034), warning level

# View analytics dashboard
/stats
# Shows: Lifetime usage, top models, provider breakdown, daily stats

# Export conversation to Markdown
/export md
# Saves to: ~/.celeste/exports/session_<id>_<timestamp>.md

# Export to JSON for programmatic access
/export json

Note: When using providers without token tracking (Anthropic native API, ElevenLabs), Celeste CLI estimates tokens from character count (~4 chars = 1 token) and won't show exact API usage or costs. For accurate token tracking and context management features, use providers marked ✅ above.
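One way to implement the ~4-characters-per-token fallback is a rounded-up integer division (an illustrative sketch; real tokenizers vary by model, and the exact rounding Celeste uses is an assumption here):

```go
package main

import "fmt"

// estimateTokens approximates token count as ceil(len/4), a rough
// heuristic for providers that return no usage data.
func estimateTokens(text string) int {
	return (len(text) + 3) / 4
}

func main() {
	fmt.Println(estimateTokens("Hello, Celeste!")) // 15 chars -> 4 tokens
}
```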

Single Message Mode (Non-Interactive)

# Send a single message and exit
celeste message "What is the meaning of life?"

# Or use shorthand
celeste "Hello, Celeste!"

Session Management

# List saved sessions
celeste session --list

# Load a specific session
celeste session --load abc123def

# Clear all sessions
celeste session --clear

Sessions are auto-saved to ~/.celeste/sessions/ and can be resumed later.

Skills Management

# List available skills (with descriptions)
celeste skills --list

# Initialize default skill configuration files
celeste skills --init

Version & Help

# Show version
celeste version
celeste --version

# Show help
celeste help
celeste --help

🔥 NSFW Mode (Venice.ai Integration)

NSFW mode provides uncensored chat and NSFW image generation via Venice.ai:

Activating NSFW Mode:

# In chat, type:
/nsfw

# Header will show: 🔥 NSFW • img:lustify-sdxl

Image Generation Commands:

# Generate with default model (lustify-sdxl)
image: cyberpunk cityscape at night

# Generate anime-style images
anime: magical girl with sword

# Generate dream-like images
dream: surreal cosmic landscape

# Use specific model for one generation
image[venice-sd35]: photorealistic portrait

# Upscale existing image
upscale: ~/path/to/image.jpg

Model Management:

# Set default image model
/set-model wai-Illustrious

# View available models
/set-model

# Models available:
# - lustify-sdxl (default NSFW)
# - wai-Illustrious (anime style)
# - hidream (dream-like quality)
# - nano-banana-pro
# - venice-sd35 (Stable Diffusion 3.5)
# - lustify-v7
# - qwen-image

Image Quality Settings:

All images generate with high-quality defaults:

  • Steps: 40 (1-50, higher = more detail)
  • CFG Scale: 12.0 (0-20, higher = stronger prompt adherence)
  • Size: 1024x1024 (up to 1280x1280)
  • Format: PNG (lossless)
  • Safe Mode: Disabled (no NSFW blurring)

Download Location:

Images save to ~/Downloads by default. Customize in ~/.celeste/skills.json:

{
  "downloads_dir": "~/Pictures"
}

LLM Prompt Chaining:

Ask the uncensored LLM to write prompts for you:

You: Write a detailed NSFW anime scene description
Celeste: [Generates detailed prompt]
You: image: [paste Celeste's prompt]
Celeste: *generates image from AI-written prompt*

Returning to Safe Mode:

/safe
# Returns to OpenAI endpoint with skills enabled

Configuration:

Add Venice.ai API key to ~/.celeste/skills.json:

{
  "venice_api_key": "your-venice-api-key",
  "venice_base_url": "https://api.venice.ai/api/v1",
  "venice_model": "venice-uncensored",
  "venice_image_model": "lustify-sdxl",
  "downloads_dir": "~/Downloads"
}

Limitations:

  • Function calling disabled in NSFW mode (Venice uncensored doesn't support it)
  • Skills are unavailable (use /safe to re-enable)
  • Video generation not available (Venice API limitation)

๐Ÿ—๏ธ Architecture

Project Structure

celeste-cli/
├── cmd/celeste/              # Main application
│   ├── main.go               # CLI entry point
│   ├── tui/                  # Bubble Tea TUI components
│   │   ├── app.go            # Main TUI model & update loop
│   │   ├── chat.go           # Scrollable viewport (messages)
│   │   ├── input.go          # Text input + history
│   │   ├── skills.go         # Skills panel (execution status)
│   │   ├── styles.go         # Lip Gloss theme (corrupted aesthetic)
│   │   ├── streaming.go      # Simulated typing animation
│   │   └── messages.go       # Bubble Tea messages (events)
│   ├── tools/                # Unified tool system
│   │   ├── builtin/          # All built-in tool implementations
│   │   └── mcp/              # MCP client for external tools
│   ├── permissions/          # Tool permission system
│   ├── context/              # Token budget & context management
│   ├── llm/                  # LLM client
│   │   ├── client.go         # OpenAI-compatible client
│   │   ├── stream.go         # Streaming handler (SSE)
│   │   └── providers_test.go # Provider compatibility tests
│   ├── config/               # Configuration management
│   │   ├── config.go         # JSON config (load/save/named)
│   │   └── session.go        # Session persistence
│   └── prompts/              # Persona prompts
│       ├── celeste.go        # Prompt loader
│       └── celeste_essence.json # Embedded Celeste personality
├── docs/                     # Documentation
│   ├── LLM_PROVIDERS.md      # Provider compatibility guide
│   ├── CAPABILITIES.md       # What Celeste can do (ecosystem)
│   ├── PERSONALITY.md        # Celeste personality quick ref
│   └── ROUTING.md            # Sub-agent routing (ecosystem)
├── LICENSE                   # MIT License
├── CHANGELOG.md              # Version history
├── CONTRIBUTING.md           # Contribution guidelines
├── SECURITY.md               # Security policy
└── README.md                 # This file

Component Flow Diagram

%%{init: {'theme':'base', 'themeVariables': {'primaryColor':'#4a90e2','secondaryColor':'#7c3aed','tertiaryColor':'#10b981','primaryTextColor':'#fff','lineColor':'#6c757d','fontSize':'14px'}}}%%
flowchart TB
    subgraph CLI["🖥️ CLI Entry (main.go)"]
        style CLI fill:#e8f4f8,stroke:#4a90e2,stroke-width:2px
        Main[Parse Args] --> |chat| TUI[Launch TUI]
        Main --> |config| Config[Config Manager]
        Main --> |session| Session[Session Manager]
        Main --> |skills| Skills[Skills Registry]
        Main --> |help| Help[Print Help]
    end

    subgraph TUI_Layer["🎨 TUI Layer (Bubble Tea)"]
        style TUI_Layer fill:#f3e8ff,stroke:#7c3aed,stroke-width:2px
        TUI --> Header[Header Bar]
        TUI --> Viewport[Chat Viewport]
        TUI --> Input[Text Input + History]
        TUI --> SkillsPanel[Skills Panel]
        TUI --> Status[Status Bar]
    end

    subgraph Backend["⚙️ Backend (Business Logic)"]
        style Backend fill:#fef3c7,stroke:#f59e0b,stroke-width:2px
        Input --> |user message| LLM[LLM Client]
        LLM --> |stream chunks| Stream[Streaming Handler]
        Stream --> |SSE parsing| SimType[Simulated Typing]
        SimType --> |char-by-char| Viewport
        LLM --> |tool_calls detected| Executor[Skill Executor]
        Executor --> |lookup| Registry[Skills Registry]
        Registry --> |execute| Handler[Skill Handler]
        Handler --> |result| Executor
        Executor --> |tool result| LLM
        LLM --> |final response| Stream
    end

    subgraph Storage["๐Ÿ’พ Persistence Layer"]
        style Storage fill:#d1fae5,stroke:#10b981,stroke-width:2px
        Config --> ConfigFiles["~/.celeste/config.json"]
        Session --> SessionFiles["~/.celeste/sessions/*.json"]
        Handler --> |reminders/notes| LocalStorage["~/.celeste/reminders.json"]
    end

    subgraph External["๐ŸŒ External APIs"]
        style External fill:#fee2e2,stroke:#ef4444,stroke-width:2px
        Handler --> |weather| WttrIn[wttr.in API]
        Handler --> |tarot| TarotAPI[Tarot Function]
        Handler --> |nsfw/images| VeniceAI[Venice.ai]
        Handler --> |currency| ExchangeRate[ExchangeRate-API]
        Handler --> |twitch| TwitchAPI[Twitch API]
        Handler --> |youtube| YouTubeAPI[YouTube Data API]
    end

    classDef entryPoint fill:#4a90e2,stroke:#357abd,color:#fff
    classDef uiComponent fill:#7c3aed,stroke:#6b21a8,color:#fff
    classDef business fill:#f59e0b,stroke:#d97706,color:#fff
    classDef storage fill:#10b981,stroke:#059669,color:#fff
    classDef external fill:#ef4444,stroke:#dc2626,color:#fff

    class Main,TUI entryPoint
    class LLM,Stream,Executor business
    class ConfigFiles,SessionFiles,LocalStorage storage
    class WttrIn,TarotAPI,VeniceAI,ExchangeRate,TwitchAPI,YouTubeAPI external

Data Flow: User Message โ†’ Response

  1. User Input โ†’ Text input component (with history)
  2. TUI Update โ†’ Bubble Tea update loop processes input
  3. LLM Request โ†’ Client sends message + tools to OpenAI/Grok
  4. Stream Parse โ†’ Parse SSE chunks (text or tool_calls)
  5. Tool Execution (if tool_calls):
    • Executor receives tool call JSON
    • Registry looks up skill handler
    • Handler executes (local or API call)
    • Result sent back to LLM
  6. Response Stream โ†’ LLM generates natural language response
  7. Simulated Typing โ†’ Character-by-character rendering (if enabled)
  8. Viewport Update โ†’ Chat history updates with new message
  9. Session Save โ†’ Auto-save conversation to disk
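Step 5 above (tool execution) is essentially a registry dispatch. The sketch below is illustrative only — the type names (`Registry`, `Handler`) and the weather handler are assumptions, not the actual celeste-cli code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Handler runs one skill given its decoded arguments.
type Handler func(args map[string]any) (string, error)

// Registry maps a tool-call name (as sent by the LLM) to its handler.
type Registry map[string]Handler

// Execute looks up the skill, decodes the raw JSON arguments from the
// tool call, runs the handler, and returns the result to be appended
// as a tool message for the LLM.
func (r Registry) Execute(name, rawArgs string) (string, error) {
	h, ok := r[name]
	if !ok {
		return "", fmt.Errorf("unknown skill: %s", name)
	}
	var args map[string]any
	if err := json.Unmarshal([]byte(rawArgs), &args); err != nil {
		return "", fmt.Errorf("bad arguments for %s: %w", name, err)
	}
	return h(args)
}

func main() {
	reg := Registry{
		"weather": func(args map[string]any) (string, error) {
			city, _ := args["city"].(string)
			return "sunny in " + city, nil // a real handler would call wttr.in
		},
	}
	// Tool-call arguments arrive as a JSON string from the stream.
	out, err := reg.Execute("weather", `{"city":"Tokyo"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // sunny in Tokyo
}
```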

๐ŸŽจ Theming

The TUI uses the corrupted-theme color palette inspired by Celeste's abyss aesthetic:

| Color | Hex | RGB | Usage |
|-------|-----|-----|-------|
| Accent | #d94f90 | rgb(217, 79, 144) | Headers, prompts, highlights, user messages |
| Purple | #8b5cf6 | rgb(139, 92, 246) | Function calls, secondary elements, skill names |
| Dark Purple | #6d28d9 | rgb(109, 40, 217) | Borders, subtle accents |
| Background | #0a0a0a | rgb(10, 10, 10) | Main background (terminal) |
| Surface | #1a1a1a | rgb(26, 26, 26) | Elevated surfaces, panels |
| Text | #f5f1f8 | rgb(245, 241, 248) | Primary text, assistant messages |
| Muted | #7a7085 | rgb(122, 112, 133) | Hints, timestamps, secondary text |
| Success | #10b981 | rgb(16, 185, 129) | Skill success indicators |
| Error | #ef4444 | rgb(239, 68, 68) | Errors, warnings |
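Each hex value corresponds one-to-one to its rgb() triplet. A minimal stdlib sketch of that mapping (the real TUI styles colors through lipgloss; `hexToANSI` is illustrative, not the project's theme code):

```go
package main

import "fmt"

// hexToANSI converts a palette entry like "#d94f90" into a 24-bit
// (truecolor) foreground escape sequence, exposing the same rgb()
// triplet shown in the table.
func hexToANSI(hex string) (string, error) {
	var r, g, b int
	if _, err := fmt.Sscanf(hex, "#%2x%2x%2x", &r, &g, &b); err != nil {
		return "", err
	}
	return fmt.Sprintf("\x1b[38;2;%d;%d;%dm", r, g, b), nil
}

func main() {
	esc, _ := hexToANSI("#d94f90") // Accent -> rgb(217, 79, 144)
	fmt.Printf("%q\n", esc)
	fmt.Println(esc + "celeste>" + "\x1b[0m") // styled prompt, then reset
}
```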

Demonic Eye Animation

When Celeste is thinking, a demonic eye animation plays:

๐Ÿ‘๏ธ  โ†’ ๐Ÿ‘€ โ†’ โ—‰โ—‰ โ†’ โ—โ— (pulsing)

Colors pulse between magenta (#d94f90) and red (#dc2626) to show "corruption deepening."


๐Ÿ”ง Development

Prerequisites

  • Go 1.24+ (the go.mod file targets go 1.24.0)
  • Terminal with 256-color support (iTerm2, Alacritty, Windows Terminal, etc.)
  • API Keys (for testing skills):
    • API key for your chat provider, e.g. OpenAI or Grok (required for chat)
    • Venice.ai API key (optional, for NSFW/image skills)
    • YouTube Data API key (optional, for YouTube skill)
    • Twitch Client ID (optional, for Twitch skill)

Building from Source

cd celeste-cli
go mod tidy
go build -o celeste ./cmd/celeste

Running Tests

# Run all tests
go test ./...

# Run with coverage
go test -cover ./...

# Run specific package
go test ./cmd/celeste/skills -v

# Run provider compatibility tests
OPENAI_API_KEY=sk-xxx go test ./cmd/celeste/llm -run TestOpenAI_FunctionCalling -v

Code Quality Checks

# Format code
gofmt -w ./cmd

# Run static analysis
go vet ./...

# Fix import grouping and remove unused imports
goimports -w ./cmd

Dependencies

| Package | Purpose | Version |
|---------|---------|---------|
| github.com/charmbracelet/bubbletea | TUI framework | v1.3.10 |
| github.com/charmbracelet/bubbles | TUI components (viewport, textinput) | v0.21.0 |
| github.com/charmbracelet/lipgloss | Styling engine | v1.1.0 |
| github.com/sashabaranov/go-openai | OpenAI client (streaming, function calling) | v1.20.4 |
| github.com/google/uuid | UUID generation | v1.6.0 |
| github.com/skip2/go-qrcode | QR code generation | v0.0.0-20200617195104 |
| github.com/stretchr/testify | Testing framework | v1.11.1 |

Branch Strategy

  • main - Stable releases only
  • feature/bubbletea-tui - Current development branch (TUI implementation)
  • Feature branches - Fork from feature/bubbletea-tui

๐Ÿ” Troubleshooting

No API Key Configured

Error:

No API key configured.
Set CELESTE_API_KEY environment variable or run: celeste config --set-key <key>

Solution:

celeste config --set-key sk-your-openai-key

Or use environment variable:

export CELESTE_API_KEY="sk-your-key"
celeste chat

Skills Not Working

Symptom: The LLM says "I don't have access to real-time data" when you ask for the weather, etc.

Possible Causes:

  1. Provider doesn't support function calling - See LLM Provider Compatibility
  2. Skill config missing - Check ~/.celeste/skills.json for required API keys

Solution:

# Test provider compatibility
OPENAI_API_KEY=your-key go test ./cmd/celeste/llm -run TestOpenAI_FunctionCalling -v

# If provider doesn't support skills, switch to OpenAI or Grok
celeste config --set-url https://api.openai.com/v1
celeste config --set-key sk-openai-key

Persona Prompt Issues

Symptom: Celeste doesn't respond in persona, or the endpoint returns errors

Cause: Your endpoint may already have the Celeste persona embedded (e.g., a DigitalOcean agent)

Solution:

celeste config --skip-persona true

Streaming Looks Choppy

Symptom: Text appears in large chunks instead of smooth typing

Solution: Enable simulated typing:

celeste config --simulate-typing true
celeste config --typing-speed 40  # Adjust speed (chars per second)

Session Not Saving

Symptom: Conversations don't persist between runs

Cause: The sessions directory doesn't exist or isn't writable

Solution:

# Check permissions
ls -la ~/.celeste/sessions/

# If missing, create it
mkdir -p ~/.celeste/sessions/
chmod 755 ~/.celeste/sessions/

TUI Rendering Issues

Symptom: Screen flickering, garbled text, or broken layout

Possible Causes:

  1. Terminal doesn't support 256 colors
  2. Terminal size too small
  3. Environment variable issues

Solution:

# Check terminal capabilities
echo $TERM        # Should be xterm-256color or similar
tput colors       # Should return 256

# Set TERM if needed
export TERM=xterm-256color

# Ensure minimum terminal size (80x24)
resize  # Check current size

Build Errors

Error: go: updates to go.mod needed; to update it: go mod tidy

Solution:

go mod tidy
go build -o celeste ./cmd/celeste

Error: package X is not in GOROOT

Solution:

# Update dependencies
go get -u ./...
go mod tidy

๐Ÿ“– Documentation

Comprehensive documentation for developers and contributors:

Core Documentation

  • ARCHITECTURE.md - System design, component relationships, and data flow diagrams
  • TESTING.md - Testing guide with examples, coverage reports, and best practices
  • CONTRIBUTING.md - How to contribute: adding skills, providers, and commands
  • LLM_PROVIDERS.md - Provider compatibility matrix and setup guides
  • STYLE_GUIDE.md - Code formatting standards and conventions

Test Coverage (v1.2.0)

Overall coverage: 17.4% across critical packages

| Package | Coverage | Status |
|---------|----------|--------|
| prompts | 97.1% | โœ… Excellent |
| providers | 72.8% | โœ… Excellent |
| config | 52.0% | โœ… Good |
| commands | 25.8% | โš ๏ธ Moderate |
| venice | 22.6% | โš ๏ธ Moderate |
| skills | 12.2% | โš ๏ธ Low |
| llm | 0% | โŒ Requires mocking |
| tui | 0% | โŒ Requires mocking |

See TESTING.md for details on running tests and writing new ones.


๐Ÿค Contributing

We welcome contributions! Please see CONTRIBUTING.md for detailed guidelines.

Quick Start for Contributors

  1. Fork the repository
  2. Create a feature branch from feature/bubbletea-tui:
    git checkout feature/bubbletea-tui
    git checkout -b feature/your-feature-name
  3. Make your changes
  4. Test thoroughly:
    go test ./...
    go vet ./...
    gofmt -l ./cmd  # Should return nothing
  5. Submit a pull request to feature/bubbletea-tui

Areas for Contribution

  • ๐Ÿงช Testing - Provider compatibility tests, skill unit tests, integration tests
  • ๐Ÿ“š Documentation - Improve guides, add examples, translate to other languages
  • ๐ŸŽจ Themes - Alternative color schemes, terminal themes
  • ๐Ÿ”ฎ Skills - New skill implementations (requires function calling support)
  • ๐Ÿ› Bug Fixes - See GitHub Issues
  • โšก Performance - Optimize streaming, reduce memory usage

Testing Your Changes

Before submitting a PR:

# Build succeeds
go build -o celeste ./cmd/celeste

# All tests pass
go test ./...

# No vet warnings
go vet ./...

# Code is formatted
gofmt -w ./cmd
git diff  # Should show no changes

# TUI works
./celeste chat

๐Ÿ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.


๐Ÿ”— Links


๐Ÿ™ Acknowledgments

  • Charm - Bubble Tea, Bubbles, and Lip Gloss TUI frameworks
  • sashabaranov/go-openai - OpenAI Go client
  • OpenAI - Function calling API
  • xAI - Grok API
  • Venice.ai - Uncensored AI models
  • wttr.in - Free weather API

๐Ÿ“ž Support


Built with ๐Ÿ’œ by @whykusanagi

"The Abyss whispers through the terminal..." - Celeste
