
codec: expand AnnotatedLlmRequest/Response extraction for OpenAI + Anthropic + hybrid payloads #76

Draft

afourniernv wants to merge 14 commits into NVIDIA:main from afourniernv:feat/codec-ir-extraction

Conversation

@afourniernv (Contributor)

DO NOT MERGE UNTIL LIVE TESTS ARE COMPLETED

Live/integration tests against running providers have NOT been performed yet. Please do not merge this PR until live validation is completed and explicitly signed off.


Summary

This PR expands normalized codec extraction around AnnotatedLlmRequest and AnnotatedLlmResponse for:

  • OpenAI Chat Completions (/v1/chat/completions)
  • OpenAI Responses (/v1/responses)
  • Anthropic Messages (/v1/messages)
  • Hybrid payload variants observed in inference gateways/provider bridges (vLLM, LiteLLM, SGLang patterns)

The goal is to extract more meaningful normalized state while preserving unmodeled provider-specific fields losslessly via extra.

Additive Request IR State (AnnotatedLlmRequest)

Added normalized optional fields (additive, backward-compatible at the payload level):

  • store: Option<bool>
  • previous_response_id: Option<String>
  • truncation: Option<Json>
  • reasoning: Option<Json>
  • include: Option<Json>
  • user: Option<String>
  • metadata: Option<Json>
  • service_tier: Option<String>
  • parallel_tool_calls: Option<bool>
  • max_output_tokens: Option<u64>
  • max_tool_calls: Option<u64>
  • top_logprobs: Option<u64>
  • stream: Option<bool>

Multimodal expansion in request content parts:

  • ContentPart::ImageUrl { image_url: OpenAiImageUrl }
  • OpenAiImageUrl { url, detail }
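The additive shape above can be sketched as follows. This is an illustrative sketch only, not the crate's actual definitions: the `Json` alias stands in for whatever JSON value type the IR uses, and the derives and `Text` variant are assumptions. The key property is that every new field is an `Option` defaulting to `None`, so payloads that omit them keep decoding unchanged.

```rust
// Sketch of the additive request IR state (field names from the PR
// description; `Json`, derives, and the `Text` variant are assumptions).
type Json = String; // stand-in for a serde_json::Value-like type

#[allow(dead_code)]
#[derive(Default, Debug)]
struct AnnotatedLlmRequestAdditions {
    store: Option<bool>,
    previous_response_id: Option<String>,
    truncation: Option<Json>,
    reasoning: Option<Json>,
    include: Option<Json>,
    user: Option<String>,
    metadata: Option<Json>,
    service_tier: Option<String>,
    parallel_tool_calls: Option<bool>,
    max_output_tokens: Option<u64>,
    max_tool_calls: Option<u64>,
    top_logprobs: Option<u64>,
    stream: Option<bool>,
}

#[derive(Debug, PartialEq)]
struct OpenAiImageUrl {
    url: String,
    detail: Option<String>,
}

#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum ContentPart {
    Text { text: String },
    ImageUrl { image_url: OpenAiImageUrl },
}

fn main() {
    // All additive fields default to None, so existing payloads are unaffected.
    let req = AnnotatedLlmRequestAdditions::default();
    assert!(req.store.is_none() && req.stream.is_none());

    let part = ContentPart::ImageUrl {
        image_url: OpenAiImageUrl {
            url: "https://example.com/cat.png".into(),
            detail: Some("low".into()),
        },
    };
    match part {
        ContentPart::ImageUrl { image_url } => {
            assert_eq!(image_url.detail.as_deref(), Some("low"));
        }
        ContentPart::Text { .. } => unreachable!(),
    }
}
```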

Additive Response IR State (ApiSpecificResponse)

OpenAI Responses variant expanded with:

  • previous_response_id
  • store
  • service_tier
  • truncation
  • reasoning
  • input_tokens_details
  • output_tokens_details

Anthropic Messages variant expanded with:

  • service_tier
  • container
  • content_blocks

OpenAI Responses request-side hardening

  • Added strict-first decode behavior for heterogeneous input arrays.
  • Removed silent lossy fallback behavior.
  • Preserves unparsed mixed input items in extra (_openai_responses_unparsed_input_items) for round-trip safety.
  • Handles Anthropic-style tool hint combinations when present in mixed gateway payloads.
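The strict-first policy above can be sketched roughly as below. All names here (`strict_parse`, `decode_input`, the string-based item encoding) are illustrative stand-ins, not the crate's real API; only the `_openai_responses_unparsed_input_items` key comes from the PR. The point of the sketch is the shape of the fallback: items that fail strict decoding are preserved verbatim in `extra` rather than silently dropped.

```rust
use std::collections::HashMap;

// Illustrative item type; the real IR models richer Responses input items.
#[derive(Debug, PartialEq)]
enum InputItem {
    Message(String),
}

// Stand-in for a real strict decoder: only `message:`-prefixed items parse.
fn strict_parse(raw: &str) -> Option<InputItem> {
    raw.strip_prefix("message:").map(|m| InputItem::Message(m.to_string()))
}

// Strict-first decode over a heterogeneous input array: parse what we can,
// and preserve everything else losslessly in `extra` for round-tripping.
fn decode_input(
    raw_items: &[&str],
    extra: &mut HashMap<String, Vec<String>>,
) -> Vec<InputItem> {
    let mut parsed = Vec::new();
    for raw in raw_items {
        match strict_parse(raw) {
            Some(item) => parsed.push(item),
            None => extra
                .entry("_openai_responses_unparsed_input_items".to_string())
                .or_default()
                .push((*raw).to_string()),
        }
    }
    parsed
}

fn main() {
    let mut extra = HashMap::new();
    let items = decode_input(&["message:hi", "tool_hint:{...}"], &mut extra);
    assert_eq!(items, vec![InputItem::Message("hi".into())]);
    // The mixed item was not silently lost; it round-trips via `extra`.
    assert_eq!(
        extra["_openai_responses_unparsed_input_items"],
        vec!["tool_hint:{...}".to_string()]
    );
}
```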

Anthropic request-side updates

  • Expanded extraction for metadata/service-tier/tool parallelism semantics.
  • Added explicit tool_choice.type == "none" parity in decode/encode.
  • Preserves bridge/runtime extension fields in extra.
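The `tool_choice.type == "none"` parity point can be illustrated with a minimal round-trip sketch. The enum and function names below are assumptions, not the crate's actual types; the sketch just shows decode/encode agreeing on the explicit `"none"` case instead of leaving it unhandled.

```rust
// Illustrative tool-choice model (names are assumptions, not the real API).
#[derive(Debug, PartialEq)]
enum ToolChoice {
    Auto,
    Any,
    None,
    Tool(String),
}

fn decode_tool_choice(ty: &str, name: Option<&str>) -> Option<ToolChoice> {
    match (ty, name) {
        ("auto", _) => Some(ToolChoice::Auto),
        ("any", _) => Some(ToolChoice::Any),
        // Explicit "none" handling: previously a gap, now decoded/encoded.
        ("none", _) => Some(ToolChoice::None),
        ("tool", Some(n)) => Some(ToolChoice::Tool(n.to_string())),
        _ => None,
    }
}

fn encode_tool_choice(tc: &ToolChoice) -> (&'static str, Option<&str>) {
    match tc {
        ToolChoice::Auto => ("auto", None),
        ToolChoice::Any => ("any", None),
        ToolChoice::None => ("none", None),
        ToolChoice::Tool(n) => ("tool", Some(n.as_str())),
    }
}

fn main() {
    // Decode/encode parity: "none" survives a round trip unchanged.
    let tc = decode_tool_choice("none", None).unwrap();
    assert_eq!(encode_tool_choice(&tc), ("none", None));
}
```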

Hybrid payload coverage added

Fixture/test coverage for mixed/provider patterns:

  • vLLM-style Anthropic and OpenAI Responses hybrids
  • LiteLLM hybrid patterns for Anthropic + Responses
  • SGLang Responses extension payloads

Consumer blast-radius updates

Because the request IR gained new fields and a new ContentPart variant, downstream consumers were updated:

  • crates/adaptive: ContentPart match handling + request initializer updates
  • crates/ffi tests: request initializer updates
  • crates/wasm tests: request initializer updates
  • crates/python: constructor/coverage request initializer updates
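The blast radius comes from Rust's exhaustiveness checking: adding a `ContentPart` variant breaks every exhaustive `match` in consuming crates until a new arm is added. A minimal sketch (the flattening function and its behavior are illustrative, not the actual `crates/adaptive` code):

```rust
#[allow(dead_code)]
struct OpenAiImageUrl {
    url: String,
    detail: Option<String>,
}

#[allow(dead_code)]
enum ContentPart {
    Text { text: String },
    ImageUrl { image_url: OpenAiImageUrl },
}

// Hypothetical consumer-side helper: without the new arm, this match no
// longer compiles once `ImageUrl` is added to the enum.
fn flatten_for_prompt(part: &ContentPart) -> String {
    match part {
        ContentPart::Text { text } => text.clone(),
        // New arm required by the added variant:
        ContentPart::ImageUrl { image_url } => format!("[image: {}]", image_url.url),
    }
}

fn main() {
    let p = ContentPart::ImageUrl {
        image_url: OpenAiImageUrl {
            url: "https://example.com/a.png".into(),
            detail: None,
        },
    };
    assert_eq!(flatten_for_prompt(&p), "[image: https://example.com/a.png]");
}
```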

Scope note

This PR intentionally avoids a larger architectural shift (e.g., a provider sidecar wrapper or a unified request abstraction rewrite). It keeps the current AnnotatedLlmRequest / AnnotatedLlmResponse IR approach and expands extraction additively.

Validation performed

  • uv run pre-commit run --all-files (passed)
  • cargo test -p nemo-flow-adaptive (passed)
  • cargo test -p nemo-flow-ffi (passed)
  • cargo test -p nemo-flow-wasm (passed)
  • cargo test -p nemo-flow-python (passed)
  • cargo test -p nemo-flow codec:: (passed)

Validation pending (required before merge)

  • Live provider/inference tests against running endpoints (OpenAI-compatible + Anthropic-compatible + hybrid gateway paths)
  • End-to-end verification of round-trip behavior on production-like payload captures

Commit stack highlights

  • OpenAI request IR expansion + multimodal parts
  • OpenAI Responses response extraction expansion
  • Anthropic request + response extraction expansion
  • Hybrid fixture/test additions (vLLM/LiteLLM/SGLang)
  • Consumer fallout fixes (adaptive, ffi, wasm, python)
  • Formatting-only follow-up after pre-commit (rustfmt)

@copy-pr-bot

copy-pr-bot Bot commented May 11, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai

coderabbitai Bot commented May 11, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Enterprise

Run ID: a471f64f-c2bf-4cbd-a227-27842f3c51d4

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@github-actions github-actions Bot added size:XL PR is extra large lang:rust PR changes/introduces Rust code labels May 11, 2026