ipyai is a terminal IPython extension with four AI backends:
- Codex API (`codex-api`, default) — hits the Codex `responses` endpoint directly using your `~/.codex/auth.json` token
- Codex (`codex`) — local Codex app-server
- Claude CLI (`claude-cli`) — drives the `claude -p` CLI, so usage counts against your Claude subscription rather than API billing
- Claude API (`claude-api`)
It is aimed at terminal IPython, not notebook frontends.
```
pip install -e ipyai
```

ipyai uses safepyrun for live Python state. Backend requirements:

- `codex-api`: local `codex` login so `~/.codex/auth.json` holds a valid access token
- `codex`: local Codex app-server access
- `claude-cli`: local `claude` CLI install and Claude Code login (subscription auth)
- `claude-api`: Anthropic API access
There are several ways to send an AI prompt from ipyai:
Dot prefix (`.`) — In normal IPython mode, start any line with `.` to send it as a prompt. Everything after the dot is sent to the selected backend. Continuation lines (without a dot) are included too, so you can write multi-line prompts:
```
.explain what this dataframe transform is doing

.draft a plan for this notebook:
focus on state management
and failure cases
```

Prompt mode — When prompt mode is on, every line you type is sent as an AI prompt by default. To run normal Python code instead, prefix the line with `;`. Shell commands (`!`) and magics (`%`) still work as usual. There are three ways to enable prompt mode:

- `opt-p` (Alt-p) — toggle prompt mode on/off at any time from the terminal
- `-p` flag — start ipyai in prompt mode: `ipyai -p`
- `prompt_mode` config — set `"prompt_mode": true` in `config.json` to always start in prompt mode
You can also toggle prompt mode during a session with `%ipyai prompt`.
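Putting the two styles together, a session might look like this (the prompts and the `;` line are illustrative, not required commands):

```
In [1]: .explain this traceback          # dot prefix sends an AI prompt
In [2]: %ipyai prompt                    # turn prompt mode on
In [3]: summarize df in two sentences    # plain lines now go to the AI
In [4]: ; total = df["sales"].sum()      # ';' runs Python instead
```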
`ipyai` flags:

```
ipyai -r               # resume last session for the selected backend
ipyai -r 43            # resume session 43
ipyai -l session.ipynb # load a saved notebook session at startup
ipyai -b claude-cli    # select backend: codex-api | codex | claude-cli | claude-api
ipyai -p               # start in prompt mode
```

On exit, ipyai prints the session ID so you can resume later.
```
%load_ext ipyai
```

ipyai is a normal IPython session — you can run Python code exactly as you would in plain IPython. On top of that, you can send prompts to the selected AI backend as described above. The `%ipyai` / `%%ipyai` magics are also available.
Useful commands:
```
%ipyai
%ipyai model sonnet
%ipyai completion_model haiku
%ipyai think m
%ipyai code_theme monokai
%ipyai log_exact true
%ipyai prompt
%ipyai save mysession
%ipyai load mysession
%ipyai sessions
%ipyai reset
```

For each AI prompt, ipyai sends:
- recent IPython code as `<code>`
- string-literal note cells as `<note>`
- recent outputs as `<output>`
- the current request as `<user-request>`
- referenced live variables as `<variable>`
- referenced shell command output as `<shell>`
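The context assembly above can be sketched roughly as follows. The tag names come from the list; the wrapping and ordering logic is an assumption for illustration, not ipyai's actual implementation:

```python
# Sketch: wrap per-prompt context in the XML-style tags listed above.
def wrap(tag: str, body: str) -> str:
    return f"<{tag}>\n{body}\n</{tag}>"

def build_prompt(code: list[str], notes: list[str],
                 outputs: list[str], request: str) -> str:
    parts = []
    parts += [wrap("code", c) for c in code]          # recent IPython code
    parts += [wrap("note", n) for n in notes]         # note cells
    parts += [wrap("output", o) for o in outputs]     # recent outputs
    parts.append(wrap("user-request", request))       # the current request
    return "\n".join(parts)

prompt = build_prompt(
    code=["df = df.dropna()"],
    notes=["exploring the sales data"],
    outputs=["(1000, 5)"],
    request="explain what this dataframe transform is doing",
)
```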
Prompt history is stored in SQLite; for compatibility, the table is currently named `claude_prompts`. Session metadata is stored in IPython's `sessions.remark` JSON, including `cwd`, `backend`, and `provider_session_id`.
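Since the history is plain SQLite, it can be inspected with the standard library. The table name `claude_prompts` comes from the text above; the column names and the in-memory database here are assumptions for illustration only:

```python
import sqlite3

# Stand-in for the real history database path; the schema below is assumed.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claude_prompts (id INTEGER PRIMARY KEY, prompt TEXT)")
conn.execute("INSERT INTO claude_prompts (prompt) VALUES (?)",
             ("explain this df",))
# List the most recent prompts.
rows = conn.execute(
    "SELECT id, prompt FROM claude_prompts ORDER BY id DESC LIMIT 5"
).fetchall()
```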
ipyai exposes the same custom tools across all backends:
- `pyrun`: run Python in the live IPython namespace
- `bash`: run an allowed shell command via `safecmd`
- `start_bgterm`: start a persistent shell session
- `write_stdin`: send input to a persistent shell session and read output
- `close_bgterm`: close a persistent shell session
- `lnhashview_file`: view hash-addressed file lines for verified edits
- `exhash_file`: apply verified hash-addressed edits to a file
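The hash-addressed editing idea behind `lnhashview_file`/`exhash_file` can be illustrated with a small sketch: each line is addressed by a short hash of its content, and an edit only applies if the hash still matches. The function names and hash scheme here are assumptions, not ipyai's internals:

```python
import hashlib

def line_hash(line: str) -> str:
    # Short content hash addressing one line (scheme is illustrative).
    return hashlib.sha256(line.encode()).hexdigest()[:8]

def apply_verified_edit(lines: list[str], lineno: int,
                        expected_hash: str, new_line: str) -> list[str]:
    # Refuse the edit if the line changed since its hash was recorded.
    if line_hash(lines[lineno]) != expected_hash:
        raise ValueError("stale edit: line content changed")
    out = list(lines)
    out[lineno] = new_line
    return out

src = ["x = 1", "y = 2"]
h = line_hash(src[1])                     # "view" step records the hash
edited = apply_verified_edit(src, 1, h, "y = 3")
```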
When using the Claude CLI backend, ipyai also enables these built-in Claude Code tools: `Bash`, `Edit`, `Read`, `Skill`, `WebFetch`, `WebSearch`, `Write`.
Custom tools are exposed to `claude -p` through an in-process unix-socket MCP server plus a small stdio bridge subprocess (`ipyai-mcp-bridge`), so the subscription-driven CLI can still call live-kernel tools like `pyrun`.
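The plumbing can be pictured as a byte stream forwarded to a unix-domain socket and back. The real `ipyai-mcp-bridge` speaks MCP framing over stdio; this toy sketch (all names assumed) only shows the socket forwarding idea:

```python
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "mcp.sock")

# Stand-in for the in-process MCP server: answers one request.
srv = socket.socket(socket.AF_UNIX)
srv.bind(sock_path)
srv.listen(1)

def serve_once():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(4096).upper())  # toy "tool call" response
    srv.close()

t = threading.Thread(target=serve_once)
t.start()

def bridge(request: bytes) -> bytes:
    # The bridge subprocess would read this from stdin and write the
    # reply to stdout; here it is just a function call.
    with socket.socket(socket.AF_UNIX) as c:
        c.connect(sock_path)
        c.sendall(request)
        return c.recv(4096)

reply = bridge(b'{"method": "pyrun"}')
t.join()
```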
The ipyai CLI loads safepyrun before ipyai, so `pyrun` is available by default in normal terminal use.
`bash`, `start_bgterm`, `write_stdin`, `close_bgterm`, `lnhashview_file`, and `exhash_file` are seeded into the user namespace by ipyai.
When using the Claude CLI backend, ipyai enables the built-in `Skill` tool and passes `--setting-sources user,project` so `claude -p` loads your normal user- and project-level skills.
`%ipyai save <filename>` writes a notebook snapshot. It stores:
- code cells
- note cells
- AI responses as markdown cells
- prompt metadata including both `prompt` and `full_prompt`
`%ipyai load <filename>` restores that notebook into a fresh session.
`ipyai -l <filename>` does the same during startup.
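Because the snapshot is a notebook, it can be read back with plain `json`. The cell layout below follows the standard .ipynb shape; where the `prompt`/`full_prompt` metadata lands inside it is an assumption, so this sketch only pulls out code cells:

```python
import json

# Minimal stand-in for a saved mysession.ipynb snapshot.
nb = {
    "cells": [
        {"cell_type": "code", "source": ["df = df.dropna()"]},
        {"cell_type": "markdown", "source": ["AI: this drops NaN rows"]},
    ]
}
# json round-trip stands in for open("mysession.ipynb").
nb = json.loads(json.dumps(nb))

# Collect the Python from every code cell.
code_cells = ["".join(c["source"]) for c in nb["cells"]
              if c["cell_type"] == "code"]
```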
Backend restore is backend-specific:
- `claude-cli`: synthesizes a fresh Claude transcript JSONL each turn and `claude -p --resume`s from it
- `claude-api`: reuses the saved local prompt history directly on each turn
- `codex-api`: reuses the saved local prompt history directly on each turn (same flat-history flow as `claude-api`)
- `codex`: starts a fresh thread and sends the loaded notebook as XML context once
- `Alt-.`: AI inline completion
- `Alt-p`: toggle prompt mode
- `Alt-Up/Down`: history navigation
- `Alt-Shift-W`: paste all Python code blocks from the last response
- `Alt-Shift-1` through `Alt-Shift-9`: paste the Nth Python code block
- `Alt-Shift-Up/Down`: cycle through extracted Python blocks
Config lives under `XDG_CONFIG_HOME/ipyai/`:

- `config.json`
- `sysp.txt`
- `exact-log.jsonl`
`config.json` supports:

```json
{
  "backend": "codex-api",
  "models": {
    "claude-cli": {"model": "sonnet", "completion_model": "haiku", "think": "m"},
    "claude-api": {"model": "claude-sonnet-4-6", "completion_model": "claude-haiku-4-5-20251001", "think": "m"},
    "codex": {"model": "gpt-5.4", "completion_model": "gpt-5.4-mini", "think": "m"},
    "codex-api": {"model": "gpt-5.4", "completion_model": "gpt-5.4-mini", "think": "m"}
  },
  "code_theme": "monokai",
  "log_exact": false,
  "prompt_mode": false
}
```

See DEV.md.
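A config like this is typically merged over defaults. The key names below come from the example config; the merge logic and the fallback to `~/.config` are assumptions for illustration, not ipyai's exact loading code:

```python
import json
import os

# Defaults mirroring the top-level keys in the example config above.
DEFAULTS = {"backend": "codex-api", "code_theme": "monokai",
            "log_exact": False, "prompt_mode": False}

def load_config(path: str) -> dict:
    # Start from defaults; shallow-merge whatever config.json provides.
    cfg = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            cfg.update(json.load(f))
    return cfg

base = os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config"))
cfg = load_config(os.path.join(base, "ipyai", "config.json"))
```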