26 changes: 26 additions & 0 deletions CHANGELOG.md
@@ -5,6 +5,32 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.7.1] - 2026-05-05

A focused pass on provider error handling, addressing issues surfaced by a 5-persona pre-release review.

### Fixed

- **Provider 4xx errors now show the inner error message instead of a raw JSON dump.** When any provider returned the standard `{error: {message, type, code}}` shape (OpenAI, Azure OpenAI, OpenRouter, etc.), `parseAPICallError`'s extraction chain short-circuited on the truthy parent `error` object, the `typeof errMsg === "string"` guard rejected it, and the parser fell through to dumping the raw response body — which appeared as `APIError: Bad Request: {?:?}` after telemetry redaction collapsed string values to `?`. Telemetry caught users retrying broken model selections 3+ times in the same session because the surfaced error gave no clue about the cause. Users now see actionable text such as `APIError: Bad Request: The model 'gpt-5-codex' does not exist or you do not have access to it.` The OR-chain is replaced with explicit-typeof ternaries that mirror `parseStreamError`'s pattern, so a truthy non-string at any tier cannot block a valid string further down the chain. (#789, closes #788)
- **Bedrock / AWS Lambda `errorMessage` shape is now extracted.** AWS APIs that return `{errorMessage: "...", errorType: "..."}` (Lambda style) previously fell through the OpenAI/Anthropic-shaped chain to a raw-body dump. Added `body.errorMessage` to the extraction ladder in both `parseAPICallError` and `parseStreamError`.
- **Streaming error path no longer dumps `Unknown: {"type":"error",...}` for non-OpenAI codes.** `parseStreamError` previously handled only 4 OpenAI error codes (`context_length_exceeded`, `insufficient_quota`, `usage_not_included`, `invalid_prompt`); everything else fell through to `JSON.stringify(e)`. Added a default fallback that runs the same string-typeof chain as `parseAPICallError`, so any extractable provider message becomes a clean api_error.
- **`model_not_found` no longer triggers a silent retry storm.** OpenAI 404s are forced retryable in general (some legitimate models 404 transiently), but `error.code === "model_not_found"` now short-circuits to `isRetryable: false` — the user sees the actionable error on attempt 1 instead of after 5 silent retries.

### Added

- **`altimate models` discoverability hint on model-not-found errors.** When `error.code === "model_not_found"`, the surfaced message now ends with `Run \`altimate models\` to see available models.` so the next step is one command away.
- **Provider-API-Errors troubleshooting reference** at `docs/docs/reference/troubleshooting.md` covering model-not-found, unauthorized, rate-limited, context-overflow, and HTML-page error classes.

### Privacy

- **`Telemetry.maskString` now redacts email addresses and internal hostnames.** Pre-fix, the JSON-quote masking rule incidentally collapsed everything inside provider error JSON to `?`. The provider-error fix unwraps that JSON, which means provider-side identifiers (caller emails; internal `*.local` / `*.internal` / RFC1918 / IPv6 loopback / ULA / link-local / AWS IMDS endpoints) would now flow through as plain text. Explicit redaction patterns mask them before they reach telemetry, the share backend, or local session storage. The masker is kept in sync with `parseAPICallError`'s `maskInternalHost` (same internal-endpoint coverage); query-string and fragment characters (`+`, `#`, `,`, `;`) are inside the trailing char class so secrets past the `<internal-host>` marker don't survive. `sk-…` and `Bearer …` token redaction is unchanged.
- **`metadata.url` on `MessageV2.APIError` masks internal hosts and strips basic-auth userinfo.** When `error.url` points at `localhost`, `*.local`, `*.internal`, an RFC1918 IPv4, IPv6 loopback / ULA / link-local, or the AWS IMDS address (`169.254.169.254`), the host is rewritten to `internal-host.redacted` before the URL lands on the parsed error. Basic-auth userinfo (`user:pass@…`) is stripped on **every** URL — internal or public — since a credential in a public-host URL is at least as risky as one in an internal proxy. Public-host URLs are otherwise preserved verbatim for debugging.
- **`responseBody` is capped at 4KB** at the `parseAPICallError` boundary. Without this, a hostile or verbose gateway could persist a 100KB+ body into local storage and (for shared sessions) the share backend.

### Testing

- 46 adversarial tests covering JSON-scalar bodies, prototype-pollution attempts, 100KB error messages, malformed JSON, every-tier null/numeric extraction, Bedrock `errorMessage` precedence, the `parseStreamError` fallback for unknown codes, the `model_not_found` retry-storm carve-out, the `altimate models` hint, the responseBody cap, the metadata.url internal-host masking (incl. IPv6 loopback/ULA/link-local, AWS IMDS, public-host basic-auth userinfo strip, RFC1918 boundary checks, lookalike-hostname guards), and the new email / internal-host `maskString` patterns (incl. IMDS, IPv6, and query-fragment leak guards).

## [0.7.0] - 2026-05-03

### Changed
24 changes: 24 additions & 0 deletions docs/docs/reference/troubleshooting.md
@@ -30,6 +30,30 @@ altimate --print-logs --log-level DEBUG
3. If behind a proxy, set `HTTPS_PROXY` (see [Network](network.md))
4. Try a different provider to isolate the issue

### Provider API Errors

**Symptoms:** `APIError: <status>: <message>` shown in chat output. Common forms:

- `APIError: Bad Request: The model 'foo' does not exist or you do not have access to it.`
- `APIError: Unauthorized: Invalid API key`
- `APIError: Rate limit exceeded`

As of v0.7.1, altimate-code surfaces the **inner provider message** instead of dumping the raw JSON body. The status prefix (`Bad Request:`, `Unauthorized:`, etc.) comes from the provider's HTTP status code; everything after the colon is the provider's text verbatim.

**Solutions by error class:**

1. **Model not found** (`APIError: Bad Request: The model '<name>' does not exist...`) — list the models your provider currently exposes and re-run with one of them:
```bash
altimate models <provider>
```
`model_not_found` errors no longer auto-retry; the message you see is the first attempt, not the fifth.
2. **Unauthorized / 401** — re-run `altimate auth login <provider>` and re-issue the request.
3. **Rate limited / 429** — altimate-code automatically retries on rate-limit responses (including plain-text 429s from Alibaba/DashScope). If you keep hitting rate limits, lower `parallel_tool_calls` or switch to a less-saturated model.
4. **Context overflow** — switch to a larger-context model or trim earlier turns with `/compact`. Detection covers Anthropic, Bedrock, OpenAI, Gemini, xAI, Groq, OpenRouter, DeepSeek, Copilot, llama.cpp, LM Studio, MiniMax, Kimi, Moonshot, Azure OpenAI, and HTTP 413.
5. **HTML page returned** — usually a gateway/proxy error. The CLI returns a friendly hint pointing at `altimate auth login` rather than dumping the raw HTML.

**Privacy note:** error messages flow through the same redaction layer as everything else (`sk-…`, `Bearer …`, email addresses, and `*.local` / `*.internal` / RFC1918 / IPv6 loopback / ULA / link-local / AWS IMDS hostnames are masked before reaching telemetry). Internal-host URLs in `metadata.url` are also redacted before they reach local storage or shared sessions, and basic-auth userinfo (`user:pass@…`) is stripped from every URL regardless of whether the host is internal.

### Tool Execution Errors

**Symptoms:** "No native handler" or tool execution failures for data engineering tools.
20 changes: 20 additions & 0 deletions packages/opencode/src/altimate/telemetry/index.ts
@@ -1060,6 +1060,26 @@ export namespace Telemetry {
return s
.replace(/sk-(?:ant-)?[A-Za-z0-9_-]{20,}/g, "sk-***")
.replace(/Bearer\s+[A-Za-z0-9._-]{20,}/gi, "Bearer ***")
// Email addresses — providers occasionally echo caller identity in error text.
.replace(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "<email>")
// Internal hostnames in URLs — keeps parity with `parseAPICallError`'s
// `maskInternalHost` so an error message containing the same URL doesn't
// leak through telemetry while metadata.url is masked. Covers:
// *.local / *.internal / *.localhost
// RFC1918 IPv4: 10/8, 172.16/12, 192.168/16, plus 127/8 loopback
// AWS IMDS / link-local IPv4: 169.254/16
// IPv6 in brackets: [::1] loopback, [fc??::/[fd??:: ULA, [fe80:: link-local
// Char class includes `+`, `#`, `,`, `;` so secrets in query/fragment
// don't survive past the redaction marker. Over-masking is the correct
// failure mode here.
.replace(
// `(?:[^\/\s@]+@)?` allows optional basic-auth userinfo
// (`user:pass@`) before the host so URLs like
// `https://admin:hunter2@10.0.0.5/x` are still recognized as internal
// and redacted whole. The credential goes with the host into <internal-host>.
/\bhttps?:\/\/(?:[^\/\s@]+@)?(?:localhost|127\.\d+\.\d+\.\d+|10\.\d+\.\d+\.\d+|192\.168\.\d+\.\d+|172\.(?:1[6-9]|2\d|3[01])\.\d+\.\d+|169\.254\.\d+\.\d+|0\.0\.0\.0|\[(?:::1|fc[0-9a-f]{2}:[^\]]*|fd[0-9a-f]{2}:[^\]]*|fe80:[^\]]*)\]|[A-Za-z0-9.-]+\.(?:local|internal|localhost))(?::\d+)?[\w/.?=&%+#,;~!*'()@:-]*/gi,
"<internal-host>",
)
.replace(/'(?:[^'\\]|\\.)*'/g, "?")
.replace(/"(?:[^"\\]|\\.)*"/g, "?")
.replace(/\s+/g, " ")
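The two new patterns can be exercised in isolation. A sketch with the regexes copied from the chain above (`maskSketch` is an illustrative local name; the `sk-` / `Bearer` / quote rules are intentionally omitted to keep the focus on the additions):

```typescript
// Email + internal-host redaction only, in the same order as the chain
// above: emails first, then internal-endpoint URLs (including optional
// basic-auth userinfo and an IPv6-in-brackets alternative).
function maskSketch(s: string): string {
  return s
    .replace(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "<email>")
    .replace(
      /\bhttps?:\/\/(?:[^\/\s@]+@)?(?:localhost|127\.\d+\.\d+\.\d+|10\.\d+\.\d+\.\d+|192\.168\.\d+\.\d+|172\.(?:1[6-9]|2\d|3[01])\.\d+\.\d+|169\.254\.\d+\.\d+|0\.0\.0\.0|\[(?:::1|fc[0-9a-f]{2}:[^\]]*|fd[0-9a-f]{2}:[^\]]*|fe80:[^\]]*)\]|[A-Za-z0-9.-]+\.(?:local|internal|localhost))(?::\d+)?[\w/.?=&%+#,;~!*'()@:-]*/gi,
      "<internal-host>",
    )
}
```

Public provider hosts pass through untouched; an internal URL is swallowed whole, including its port, path, and query string.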
152 changes: 140 additions & 12 deletions packages/opencode/src/provider/error.ts
@@ -28,6 +28,21 @@ export namespace ProviderError {
function isOpenAiErrorRetryable(e: APICallError) {
const status = e.statusCode
if (!status) return e.isRetryable
// altimate_change start — upstream_fix: don't retry-storm on model_not_found.
// OpenAI 404s are forced retryable below because some legitimate models 404
// transiently, but `model_not_found` will never recover; retrying 5x just
// delays the user seeing the (now-readable) error message.
if (status === 404) {
try {
const body = e.responseBody ? JSON.parse(e.responseBody) : null
if (body?.error?.code === "model_not_found") return false
} catch {
// Malformed JSON on a 404 falls through to "force retryable" below —
// intentional; some providers emit non-JSON 404 bodies for transient
// model availability blips and those should still retry.
}
}
// altimate_change end
// openai sometimes returns 404 for models that are actually available
return status === 404 || e.isRetryable
}
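The carve-out can be exercised in isolation. A sketch with a minimal stand-in for the SDK's `APICallError` (the interface and function names here are illustrative, not part of the module):

```typescript
// Minimal stand-in for the fields the retry decision reads.
interface APICallErrorLike {
  statusCode?: number
  isRetryable: boolean
  responseBody?: string
}

// Mirror of isOpenAiErrorRetryable: 404s stay force-retryable except the
// terminal model_not_found code; malformed 404 bodies still retry.
function isRetryableSketch(e: APICallErrorLike): boolean {
  const status = e.statusCode
  if (!status) return e.isRetryable
  if (status === 404) {
    try {
      const body = e.responseBody ? JSON.parse(e.responseBody) : null
      if (body?.error?.code === "model_not_found") return false
    } catch {
      // non-JSON 404 body: fall through to force-retryable below
    }
  }
  return status === 404 || e.isRetryable
}
```

Note the asymmetry: only the exact `model_not_found` code is terminal; every other 404, including unparseable bodies, keeps the transient-404 retry behavior.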
@@ -61,19 +76,27 @@

try {
const body = JSON.parse(e.responseBody)
// altimate_change start — upstream_fix: OpenAI errors use {error: {message}} shape;
// the original `body.message || body.error || body.error?.message` short-circuits on
// the parent object, fails the typeof string guard, and dumps the raw body. Use an
// explicit-typeof ternary so a truthy non-string at any level can't block a valid
// string further down the chain (matches parseStreamError's pattern below).
// altimate_change start — upstream_fix: extract provider error messages
// across the four shapes in the wild:
// 1. {error: {message: "..."}} — OpenAI / Azure OpenAI / OpenRouter
// 2. {message: "..."} — Anthropic-style top-level
// 3. {errorMessage: "..."} — Bedrock / AWS Lambda
// 4. {error: "..."} — legacy plain-string shape
// The original `body.message || body.error || body.error?.message` short-
// circuited on a truthy parent object, failed the `typeof === "string"`
// guard, and dumped the raw body. Use an explicit-typeof ternary so a
// truthy non-string at any tier can't block a valid string further down
// the chain (matches parseStreamError's pattern below).
const errMsg =
typeof body.error?.message === "string"
? body.error.message
: typeof body.message === "string"
? body.message
: typeof body.error === "string"
? body.error
: undefined
: typeof body.errorMessage === "string"
? body.errorMessage
: typeof body.error === "string"
? body.error
: undefined
if (errMsg) return `${msg}: ${errMsg}`
// altimate_change end
} catch {}
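The new ladder reduces to a small pure function. Restated as a standalone sketch (`extractProviderMessage` is an illustrative name, not an export of this module):

```typescript
// First string wins; truthy non-strings at any tier are skipped
// instead of short-circuiting the whole chain.
function extractProviderMessage(body: unknown): string | undefined {
  const b = body as Record<string, any> | null
  return typeof b?.error?.message === "string"
    ? b.error.message
    : typeof b?.message === "string"
      ? b.message
      : typeof b?.errorMessage === "string"
        ? b.errorMessage
        : typeof b?.error === "string"
          ? b.error
          : undefined
}
```

With the old OR-chain, `{error: {type: "x"}, message: "top"}` short-circuited on the truthy `error` object and then failed the string guard; here it yields `"top"`.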
@@ -161,6 +184,32 @@
responseBody,
}
}

// altimate_change start — upstream_fix: extend extraction to non-OpenAI error
// codes. The switch above only handles 4 OpenAI shapes; everything else fell
// through to `JSON.stringify(e)` in the caller (session/message-v2.ts), which
// showed users `Unknown: {"type":"error",...}`. Apply the same string-typeof
// chain we use in parseAPICallError so any extractable provider message lands
// as a clean api_error.
const fallbackMsg =
typeof body?.error?.message === "string"
? body.error.message
: typeof body?.message === "string"
? body.message
: typeof body?.errorMessage === "string"
? body.errorMessage
: typeof body?.error === "string"
? body.error
: undefined
if (fallbackMsg) {
return {
type: "api_error",
message: fallbackMsg,
isRetryable: false,
responseBody,
}
}
// altimate_change end
}

export type ParsedAPICallError =
@@ -179,6 +228,67 @@
metadata?: Record<string, string>
}

// altimate_change start — cap responseBody at 4KB before it lands on a
// MessageV2.APIError. Without this cap, a hostile gateway returning a 100KB
// body (or just verbose providers like LiteLLM) would inflate local storage,
// share-backend uploads, and diagnostic dumps.
const RESPONSE_BODY_CAP = 4096
function capResponseBody(body: string | undefined): string | undefined {
if (!body) return body
if (body.length <= RESPONSE_BODY_CAP) return body
return body.slice(0, RESPONSE_BODY_CAP) + `…[truncated ${body.length - RESPONSE_BODY_CAP} chars]`
}
// altimate_change end
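Truncation at the boundary, restated as a runnable sketch (the cap value mirrors `RESPONSE_BODY_CAP` above; `capSketch` is a local illustrative name):

```typescript
const RESPONSE_BODY_CAP = 4096

// Same shape as capResponseBody above: pass-through at or under the cap,
// slice plus an explicit truncation marker over it.
function capSketch(body: string | undefined): string | undefined {
  if (!body) return body
  if (body.length <= RESPONSE_BODY_CAP) return body
  return body.slice(0, RESPONSE_BODY_CAP) + `…[truncated ${body.length - RESPONSE_BODY_CAP} chars]`
}
```

A hostile 100KB body persists as 4096 chars plus the marker; short bodies and `undefined` round-trip untouched, so normal error payloads are unaffected.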

// altimate_change start — sanitize metadata.url before it lands on the
// parsed error. Two transforms are applied:
// (1) basic-auth userinfo (`user:pass@…`) is stripped on every URL,
// internal or public — a credential in a misconfigured proxy URL
// must not flow into telemetry / local storage / share regardless
// of where the URL points.
// (2) the hostname is rewritten to `internal-host.redacted` if it
// matches an internal endpoint (RFC1918, *.local, *.internal,
// localhost, *.localhost, IPv6 loopback / ULA / link-local, or
// the AWS IMDS address 169.254.169.254). Public provider URLs
// are otherwise preserved for debugging.
function maskInternalHost(url: string): string {
try {
const u = new URL(url)
// u.hostname keeps IPv6 brackets (e.g. "[::1]"); strip for regex match.
const host = u.hostname.replace(/^\[|\]$/g, "")
const hadCredentials = u.username !== "" || u.password !== ""
// Always clear userinfo — the credential is the riskier part of the URL.
u.username = ""
u.password = ""
const isInternal =
host === "localhost" ||
host === "0.0.0.0" || // any-interface bind, often misconfigured proxy
host.endsWith(".local") ||
host.endsWith(".internal") ||
host.endsWith(".localhost") ||
/^127\./.test(host) ||
/^10\./.test(host) ||
/^192\.168\./.test(host) ||
/^172\.(1[6-9]|2\d|3[01])\./.test(host) ||
/^169\.254\./.test(host) || // AWS IMDS / link-local IPv4
host === "::1" || // IPv6 loopback
/^fc[0-9a-f]{2}:/i.test(host) || // IPv6 ULA (RFC4193 fc00::/8)
/^fd[0-9a-f]{2}:/i.test(host) || // IPv6 ULA (RFC4193 fd00::/8)
/^fe80:/i.test(host) // IPv6 link-local
if (isInternal) {
u.hostname = "internal-host.redacted"
return u.toString()
}
// No host change but we may have removed credentials — re-serialize
// only if userinfo was present, otherwise return the original string
// so URLs round-trip untouched (preserves trailing slashes, casing).
return hadCredentials ? u.toString() : url
} catch {
return url
}
}
// altimate_change end
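The masking round-trip can be checked standalone. This condensed sketch mirrors `maskInternalHost` above (the function name is illustrative; the host checks are abbreviated into regexes but cover the same classes):

```typescript
// Condensed mirror of maskInternalHost: strip userinfo everywhere,
// rewrite internal hostnames, leave public URLs byte-identical.
function maskInternalHostSketch(url: string): string {
  try {
    const u = new URL(url)
    const host = u.hostname.replace(/^\[|\]$/g, "") // drop IPv6 brackets
    const hadCredentials = u.username !== "" || u.password !== ""
    u.username = ""
    u.password = ""
    const isInternal =
      host === "localhost" ||
      host === "0.0.0.0" ||
      /\.(?:local|internal|localhost)$/.test(host) ||
      /^(?:127|10)\./.test(host) ||
      /^192\.168\./.test(host) ||
      /^172\.(?:1[6-9]|2\d|3[01])\./.test(host) ||
      /^169\.254\./.test(host) ||
      host === "::1" ||
      /^f[cd][0-9a-f]{2}:/i.test(host) || // IPv6 ULA fc00::/7
      /^fe80:/i.test(host) // IPv6 link-local
    if (isInternal) {
      u.hostname = "internal-host.redacted"
      return u.toString()
    }
    // Re-serialize only if we stripped credentials; otherwise return the
    // original string so public URLs round-trip untouched.
    return hadCredentials ? u.toString() : url
  } catch {
    return url
  }
}
```

Credentials never survive, regardless of host; only internal hosts are rewritten; public URLs without userinfo are returned verbatim.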

export function parseAPICallError(input: { providerID: ProviderID; error: APICallError }): ParsedAPICallError {
const m = message(input.providerID, input.error)
// Check responseBody for context_length_exceeded code (e.g., OpenAI-style errors)
Expand All @@ -188,20 +298,38 @@ export namespace ProviderError {
return {
type: "context_overflow",
message: m,
responseBody: input.error.responseBody,
// altimate_change start — cap responseBody on context_overflow path
responseBody: capResponseBody(input.error.responseBody),
// altimate_change end
}
}

const metadata = input.error.url ? { url: input.error.url } : undefined
// altimate_change start — append a `models` discoverability hint when the
// error code is model_not_found. Pairs with the retry-storm carve-out in
// isOpenAiErrorRetryable so the user sees the hint on the first attempt
// instead of after 5 silent retries.
let finalMessage = m
if (codeFromBody === "model_not_found") {
finalMessage = `${m} Run \`altimate models\` to see available models.`
}
Comment on lines +308 to +314

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Hint text doesn't match the docs or the changelog.

Three different spellings ship in this PR for the same string:

  • provider/error.ts:290 → Run `altimate models` to see available models.
  • docs/docs/reference/troubleshooting.md:47 → altimate-code models <provider>
  • CHANGELOG.md:21 → Run `altimate-code models` to see available models.

Per troubleshooting.md line 31 / error.ts line 99 (altimate auth login), the binary name in this repo is altimate, but the changelog/docs explanatory text uses altimate-code. Pick a single canonical form (likely altimate models to match the actual installed binary referenced elsewhere in the same docs), then align the changelog blurb and the doc step to the exact string this code emits — otherwise users who copy-paste from the doc will hit "command not found".

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@packages/opencode/src/provider/error.ts` around lines 285-291, the hint
text for model_not_found is inconsistent across the repo; standardize it to the
canonical binary name "altimate models". Update the string in provider/error.ts
(where finalMessage is built when codeFromBody === "model_not_found") to use
"Run `altimate models` to see available models.", and then update the matching
explanatory strings in docs/docs/reference/troubleshooting.md and CHANGELOG.md
to exactly the same text so copy-pasting the command works for users.

// altimate_change end

// altimate_change start — mask internal hostnames in metadata.url
const metadata = input.error.url ? { url: maskInternalHost(input.error.url) } : undefined
// altimate_change end
return {
type: "api_error",
message: m,
// altimate_change start — finalMessage carries the optional /models hint
message: finalMessage,
// altimate_change end
statusCode: input.error.statusCode,
isRetryable: input.providerID.startsWith("openai")
? isOpenAiErrorRetryable(input.error)
: input.error.isRetryable,
responseHeaders: input.error.responseHeaders,
responseBody: input.error.responseBody,
// altimate_change start — cap responseBody on api_error path
responseBody: capResponseBody(input.error.responseBody),
// altimate_change end
metadata,
}
}
22 changes: 20 additions & 2 deletions packages/opencode/test/provider/error.test.ts
@@ -76,11 +76,29 @@ describe("ProviderError.parseStreamError: SSE error classification", () => {
expect(ProviderError.parseStreamError({ type: "content", text: "hello" })).toBeUndefined()
})

test("returns undefined for unknown error codes", () => {
test("falls back to api_error with extracted message for unknown error codes (v0.7.1+)", () => {
// Behavior change in v0.7.1: previously returned undefined, which caused the
// caller to fall through to JSON.stringify(e). Now extracts the message via
// the same string-typeof chain used in parseAPICallError so users see a
// clean api_error instead of `Unknown: {"type":"error",...}`.
const result = ProviderError.parseStreamError({
type: "error",
error: { code: "unknown_code", message: "weird" },
})
expect(result?.type).toBe("api_error")
if (result && result.type === "api_error") {
expect(result.message).toBe("weird")
expect(result.isRetryable).toBe(false)
}
})

test("returns undefined when no extractable message exists for unknown code", () => {
// Last-resort behavior: extractor finds no string anywhere — caller falls
// back to JSON.stringify(e), which is at least visible if not friendly.
expect(
ProviderError.parseStreamError({
type: "error",
error: { code: "unknown_code", message: "weird" },
error: { code: "unknown_code" },
}),
).toBeUndefined()
})