sleepymole/tap
Tap

Tab, code lands.

Lightweight AI autocomplete for VS Code.

Features

  • Inline ghost-text completions as you type
  • Full context awareness: AST parsing, import resolution, recently edited files
  • Multiple FIM templates auto-selected per model (Qwen, DeepSeek, CodeLlama, StarCoder, etc.)
  • In-memory LRU cache for instant repeat completions
  • Debounced streaming with real-time output filtering
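The repeat-completion behavior in the list above comes from a small in-memory LRU cache keyed by the text around the cursor. A minimal sketch of the idea in TypeScript (class and method names here are illustrative, not Tap's actual implementation):

```typescript
// LRU cache sketch for completion reuse (hypothetical names).
// A Map preserves insertion order, so the first key is the least recently used.
class CompletionCache {
  private map = new Map<string, string>();

  constructor(private capacity: number = 100) {}

  get(prefix: string): string | undefined {
    const hit = this.map.get(prefix);
    if (hit === undefined) return undefined;
    // Re-insert to mark this entry as most recently used.
    this.map.delete(prefix);
    this.map.set(prefix, hit);
    return hit;
  }

  set(prefix: string, completion: string): void {
    if (this.map.has(prefix)) this.map.delete(prefix);
    this.map.set(prefix, completion);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry.
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }
}
```

A cache hit returns the stored completion synchronously, which is what makes re-typing a just-deleted character feel instant.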

Install

npm install
npm run build
npx vsce package
# Install the generated .vsix in VS Code
code --install-extension tap-0.1.0.vsix

Configure

Open VS Code settings and configure under Tap:

| Setting | Default | Description |
| --- | --- | --- |
| `tap.enabled` | `true` | Enable/disable autocomplete |
| `tap.provider` | `"ollama"` | LLM provider: `openai`, `ollama`, or `anthropic` |
| `tap.model` | `"qwen2.5-coder:7b"` | Model name |
| `tap.apiKey` | `""` | API key (not needed for local Ollama) |
| `tap.apiBase` | `""` | API base URL (overrides the provider default) |
| `tap.contextLength` | `8192` | Model context window size |
| `tap.maxPromptTokens` | `1024` | Max tokens in the autocomplete prompt |
| `tap.debounceDelay` | `350` | Debounce delay in ms |
| `tap.multilineCompletions` | `"auto"` | `always`, `never`, or `auto` |
| `tap.disableInFiles` | `["*.prompt"]` | Glob patterns to skip |
| `tap.template` | `""` | Custom FIM template (overrides auto-detection) |

Quick Start with Ollama (local, no API key)

ollama pull qwen2.5-coder:7b

Then in VS Code settings:

{
  "tap.provider": "ollama",
  "tap.model": "qwen2.5-coder:7b"
}

OpenAI

{
  "tap.provider": "openai",
  "tap.model": "gpt-4o-mini",
  "tap.apiKey": "sk-..."
}

Anthropic Claude

{
  "tap.provider": "anthropic",
  "tap.model": "claude-sonnet-4-20250514",
  "tap.apiKey": "sk-ant-..."
}

Claude uses a non-FIM prompt template (hole filler). It works, but it is slower and less accurate than models with native FIM support.
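The hole-filler approach amounts to marking the cursor position in an ordinary instruction prompt rather than using special FIM tokens. A hypothetical sketch of such a prompt builder (not Tap's actual template):

```typescript
// Hole-filler prompt sketch (hypothetical): embed the code around the cursor
// with a placeholder marker and ask the model to return only the missing text.
function buildHoleFillerPrompt(prefix: string, suffix: string): string {
  return (
    "Fill in the code at the <HOLE> marker. " +
    "Reply with only the code that replaces <HOLE>.\n\n" +
    prefix + "<HOLE>" + suffix
  );
}
```

Because the model sees the hole as plain instruction text instead of tokens it was trained on, this path tends to need more output cleanup, which is part of why it is slower and less reliable than native FIM.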

Development

npm install
npm run watch    # Watch mode
npm run build    # Production build

Architecture

src/
  extension.ts              # Entry point
  config.ts                 # Maps VS Code settings to LLM config
  types.ts                  # Shared types

  llm/                      # LLM abstraction
    BaseLLM.ts              # Base class with fetch/retry
    llms/openai.ts          # OpenAI (native FIM)
    llms/ollama.ts          # Ollama (auto FIM detection)
    llms/anthropic.ts       # Anthropic (hole-filler template)

  autocomplete/             # Core pipeline
    CompletionProvider.ts   # Orchestrator
    context/                # Context gathering (AST, imports)
    generation/             # Streaming from LLM
    filtering/              # Real-time output filtering
    templating/             # FIM template selection
    postprocessing/         # Final cleanup
    snippets/               # Snippet aggregation

  vscode/                   # VS Code integration
    CompletionProvider.ts   # InlineCompletionItemProvider
    statusBar.ts            # Toggle button
    recentlyEdited.ts       # Track edits

Acknowledgments

Tap is derived from the Continue autocomplete system, originally licensed under the Apache License 2.0. We gratefully acknowledge the Continue team for their excellent work on the autocomplete pipeline, FIM templates, and context gathering architecture.

License

Apache License 2.0
