Prepper helps you practice interviews with an AI interviewer.
- `backend/`: Flask API that proxies requests to OpenRouter
- `frontend/`: Next.js web app
- `app/`: reusable Python package + CLI for local interview practice and benchmarking
Use the dedicated setup guide, SETUP.md, or run setup from the project root:

```bash
./prepper.sh --setup
```

From the project root:
```bash
./prepper.sh
```

`./prepper.sh` opens an interactive option browser. Use the arrow keys to move, Enter to select or review a command, Space to toggle checkbox options, `?` for help, and Esc to go back. Pressing Esc on the first screen exits the browser.
The browser groups commands by Dev, Benchmark, Test, Interactive, and Setup. It shows the exact command before running it. Setup asks for a y/n confirmation because it creates env files and Python virtualenvs and installs dependencies.
Pass --color with no other flags to open the browser with color enabled:
```bash
./prepper.sh --color
```

You can also run modes explicitly:
```bash
./prepper.sh --dev
./prepper.sh -d
./prepper.sh --dev --all
./prepper.sh --dev --backend
./prepper.sh --dev --frontend
./prepper.sh --dev --color
```

`--dev` and `--dev --all` start the backend and frontend together and print both logs in one terminal. Press Ctrl+C to stop all services. Use `--backend` or `--frontend` when you only need one side.
Run all tests from the project root with either command:
```bash
./prepper.sh --test
./prepper.sh -t
./prepper.sh --test --all
./prepper.sh --test --color
```

This executes backend, CLI, local tooling, and frontend tests in order. It stops on the first failure.
Run one suite at a time with:
```bash
./prepper.sh --test --backend
./prepper.sh --test --frontend
./prepper.sh --test --cli
./prepper.sh --test --tools
```

Add `--color` to `--dev` or `--test` to force colored runner, pytest, Node test, and Next dev output when your terminal supports it. For interactive prepper-cli transcripts, pass `--color` with `--interactive` or `-i`. Benchmark transcripts started with `--benchmark` or `-b` use colored output by default.
You can still run the underlying commands manually when debugging a suite:
```bash
cd frontend && npm run test:unit
(cd backend && .venv/bin/python -m pytest tests -q)
(cd app && .venv/bin/python -m pytest tests -q)
backend/.venv/bin/python -m pytest tools
```

```
prepper/
|- backend/
|- frontend/
|- app/
`- tools/
```
```bash
cp .env.example .env                # then set LLM_API_KEY or OPENROUTER_API_KEY
cd backend
python -m venv .venv
source .venv/bin/activate           # Windows: .venv\Scripts\activate
pip install -r requirements.txt
python run.py
```

Backend URL: http://127.0.0.1:5000
The app uses an OpenAI-compatible chat completions client. The generic env names below are preferred:
```bash
LLM_API_KEY=your_key_here
LLM_BASE_URL=https://openrouter.ai/api/v1
LLM_MODEL=openai/gpt-5.4
```

The existing `OPENROUTER_API_KEY`, `OPENROUTER_BASE_URL`, and `OPENROUTER_MODEL` names still work as fallbacks.
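As a rough sketch of that fallback order (illustrative only; the project's actual config code may differ):

```python
import os

def resolve(generic, fallback, default=None):
    # Prefer the generic LLM_* name, then the legacy OPENROUTER_* name.
    return os.environ.get(generic) or os.environ.get(fallback) or default

api_key = resolve("LLM_API_KEY", "OPENROUTER_API_KEY")
# Defaulting the base URL to OpenRouter is an assumption for illustration.
base_url = resolve("LLM_BASE_URL", "OPENROUTER_BASE_URL", "https://openrouter.ai/api/v1")
model = resolve("LLM_MODEL", "OPENROUTER_MODEL")
```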
For a local llama.cpp server:
```bash
llama-server \
  -m /path/to/Ministral-3-3B-Instruct-2512-Q4_K_M.gguf \
  -c 16384 \
  --host 127.0.0.1 \
  --port 8080
```

Then set the root `.env`:
```bash
LLM_API_KEY=local-dummy
LLM_BASE_URL=http://127.0.0.1:8080/v1
LLM_MODEL=ministral
```

Use an instruct GGUF model for interview/chat behavior.
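To sanity-check the local endpoint before wiring it into prepper, you can call it with any OpenAI-compatible client; a minimal sketch assuming the openai Python package (prepper's own client code may differ):

```python
from openai import OpenAI

# Point an OpenAI-compatible client at the local llama.cpp server.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="local-dummy")

reply = client.chat.completions.create(
    model="ministral",
    messages=[{"role": "user", "content": "Ask me one behavioral interview question."}],
    max_tokens=128,
)
print(reply.choices[0].message.content)
```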
- `GET /health`
- `POST /api/chat`
Example payload:
```json
{
  "message": "How can I improve for behavioral interviews?",
  "conversation_history": [
    { "role": "user", "content": "..." },
    { "role": "assistant", "content": "..." }
  ]
}
```

`conversation_history` is optional. If provided, the backend forwards the last 10 messages as context.
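For quick manual testing, both endpoints can be exercised from a short script; a sketch using the requests library, assuming both endpoints return JSON:

```python
import requests

BASE = "http://127.0.0.1:5000"

# Health check (response shape assumed to be JSON).
print(requests.get(f"{BASE}/health").json())

# One chat turn; the backend forwards at most the last 10 history messages.
payload = {
    "message": "How can I improve for behavioral interviews?",
    "conversation_history": [
        {"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello! Ready to practice?"},
    ],
}
print(requests.post(f"{BASE}/api/chat", json=payload).json())
```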
Enable detailed logs from the backend + prepper-cli:
```bash
cd backend
LOG_LEVEL=DEBUG python run.py
```

- Logs are written to `backend/logs/backend.log`
- Debug entries are also printed to the console
- API responses are unchanged (debug info is only logged)

The default log level is INFO.
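Conceptually, the setup behaves like the following sketch (an assumed illustration, not the backend's actual logging code):

```python
import logging
import os

# LOG_LEVEL picks the level (INFO by default); entries go both to
# backend/logs/backend.log and to the console. The logs/ directory
# must already exist for the file handler to open.
logging.basicConfig(
    level=os.environ.get("LOG_LEVEL", "INFO").upper(),
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    handlers=[
        logging.FileHandler("backend/logs/backend.log"),
        logging.StreamHandler(),
    ],
)
logging.getLogger("prepper").debug("debug details are logged, never returned to clients")
```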
Each prompt file in app/src/prepper_cli/prompts/ supports YAML front matter settings:
```yaml
---
id: coding_focus
name: Coding Interview
temperature: 0.3
top_p: 1.0
frequency_penalty: 0.2
presence_penalty: 0.0
max_tokens: 1200
---
```

These settings are applied automatically when a prompt is selected.
CLI default behavior: if you do not pass model-setting override flags, prepper-cli uses these prompt-file values.
The bundled interview prompts currently default `max_tokens` to 1200; lower it when running local models with small context windows.
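Loading such a prompt file boils down to splitting the front matter from the prompt body; a minimal sketch assuming PyYAML, with a hypothetical filename (the actual loader in app/ may differ):

```python
from pathlib import Path

import yaml  # PyYAML

def load_prompt(path):
    # Split "---\n<yaml front matter>\n---\n<prompt body>".
    _, front_matter, body = Path(path).read_text(encoding="utf-8").split("---", 2)
    return yaml.safe_load(front_matter), body.strip()

# "coding_focus.md" is a hypothetical filename for illustration.
settings, prompt = load_prompt("app/src/prepper_cli/prompts/coding_focus.md")
print(settings["temperature"], settings["max_tokens"])
```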
```bash
cp .env.example .env                # if not already done
cd app
python -m venv .venv
source .venv/bin/activate
pip install -e .
```

Interactive mode is the default when you run the installed `prepper-cli` command directly:
```bash
prepper-cli
```

From the project root, you can run the same CLI without activating `app/.venv` manually:
```bash
./prepper.sh --interactive
./prepper.sh -i
```

Help and all other flags are passed through unchanged:
```bash
./prepper.sh --interactive --help
./prepper.sh -i --interview-style coding_focus --difficulty hard
```

Pick a specific interviewer style:
```bash
prepper-cli --interview-style coding_focus
```

List available prompts:
```bash
prepper-cli --list-interview-styles
```

Interview tuning:
```bash
prepper-cli --interview-style coding_focus --difficulty hard --question-limit 4 --pass-threshold 7.5
```

Model settings overrides:
```bash
prepper-cli --interview-style coding_focus --temperature 0.2 --top-p 0.9 --frequency-penalty 0.3 --presence-penalty -0.2 --max-tokens 500
```

Color + language:
```bash
./prepper.sh -i --color --language de --interview-style behavioral_focus
./prepper.sh -i --color --language fr --interview-style behavioral_focus
```

Benchmark mode runs a full simulated interview between:

- an interviewer style (`--interview-style`)
- a simulated candidate (`strong` by default, or `--weak-candidate`)
Use it to compare prompt quality and interviewer strictness.
Run a default benchmark (strong candidate):
```bash
./prepper.sh --benchmark --interview-style behavioral_focus
./prepper.sh -b --interview-style behavioral_focus
```

Simulate a weak candidate:
```bash
./prepper.sh -b --interview-style behavioral_focus --weak-candidate
```

Short, strict coding benchmark:
```bash
./prepper.sh -b --interview-style coding_focus --difficulty hard --question-limit 3 --pass-threshold 8.0
```

Use one model for interview runtime and another for final interviewer scoring:
```bash
./prepper.sh -b --interview-style behavioral_focus --model openai/gpt-5.4 --benchmark-model openai/gpt-4.1
```

Print only comparable benchmark result JSON:
```bash
./prepper.sh --benchmark-json --interview-style behavioral_focus
./prepper.sh -i --benchmark-json --interview-style behavioral_focus
```

The JSON result includes the runtime model, the benchmark scoring model, and the resolved runtime model settings.
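That makes it easy to script comparisons across styles; a minimal sketch that only assumes the command prints a single JSON document to stdout:

```python
import json
import subprocess

# Hypothetical comparison harness: run benchmark-json per style and collect
# the parsed results. No JSON field names are assumed.
results = {}
for style in ["behavioral_focus", "coding_focus"]:
    out = subprocess.run(
        ["./prepper.sh", "--benchmark-json", "--interview-style", style],
        capture_output=True, text=True, check=True,
    ).stdout
    results[style] = json.loads(out)

print(json.dumps(results, indent=2))
```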
German and French benchmark runs:
```bash
./prepper.sh -b --interview-style behavioral_focus --language de --question-limit 2
./prepper.sh -b --interview-style behavioral_focus --language fr --question-limit 2
```

Notes:

- `--benchmark` prints the live transcript and bottom evaluation summary
- `-b` is the short alias for `--benchmark`
- `--benchmark-json` runs benchmark mode without transcript output and prints interviewer result JSON
- Use either `--benchmark` or `--benchmark-json`, not both
- `--strong-candidate` and `--weak-candidate` only work in benchmark mode
- If you omit both, benchmark uses the strong candidate profile
- `--temperature`, `--top-p`, `--frequency-penalty`, `--presence-penalty`, and `--max-tokens` override runtime model settings
```bash
cd frontend
npm install
cp .env.local.example .env.local    # defaults to http://127.0.0.1:5000
npm run dev
```

Frontend URL: http://localhost:3000
If localhost:5000 is unreliable on macOS, keep the backend URL as 127.0.0.1:5000.