🐛 Bugfix: Enhance OpenAIModel error handling and chunk processing#2887

Open
Zhi-a wants to merge 1 commit into develop from develop_no_choice2

Conversation

Contributor

@Zhi-a Zhi-a commented Apr 28, 2026

  • Added validation for API response types to raise ValueError for unexpected string or dictionary responses.
  • Implemented safety checks to skip non-standard chunks that lack expected attributes, logging warnings for such cases.
  • Introduced unit tests to cover new error handling scenarios and ensure robust processing of API responses.

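The response-type validation in the first bullet can be sketched as follows. This is a minimal illustration, not the actual code in `openai_llm.py`; the function name and error message are assumptions:

```python
# Hedged sketch of the validation described above: a completion call that
# is expected to return a chunk stream should fail fast if the provider
# hands back a plain string or an error dictionary instead.
def validate_streaming_response(response):
    """Raise ValueError for unexpected str/dict API responses."""
    if isinstance(response, (str, dict)):
        raise ValueError(
            f"Unexpected API response type: {type(response).__name__}"
        )
    return response
```

Any iterator-like object passes through unchanged, so the normal streaming path is unaffected.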
Copilot AI review requested due to automatic review settings April 28, 2026 07:14
@Zhi-a Zhi-a requested review from Dallas98 and WMC001 as code owners April 28, 2026 07:14

Copilot AI left a comment


Pull request overview

This PR strengthens OpenAIModel.__call__ robustness by validating unexpected API response types and tolerating non-standard streaming chunks, with new unit tests covering these scenarios.

Changes:

  • Add early validation to raise ValueError when the completion API returns a str or dict instead of a stream/iterator.
  • Skip and warn on streamed “chunks” that don’t expose the expected choices attribute, continuing processing for remaining chunks.
  • Add unit tests for the new response-type validation and non-standard chunk handling.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

Files changed:

  • sdk/nexent/core/models/openai_llm.py: Adds response-type validation and non-standard chunk skipping/warning during streaming.
  • test/sdk/core/models/test_openai_llm.py: Adds tests covering string/dict API responses and streaming chunks missing choices / with empty choices.


Comment on lines +155 to +159
```python
                # Log warning and continue processing
                if hasattr(chunk, '__str__'):
                    chunk_str = str(chunk)
                    logger.warning(f"Received non-standard chunk (no 'choices'): {chunk_str[:200]}")
                chunk_list.append(chunk)
```

Copilot AI Apr 28, 2026


Non-standard chunks (no choices) are appended to chunk_list. Later the code assumes the last item has a .usage attribute (chunk_list[-1].usage), which will raise AttributeError if the stream ends with a non-standard chunk (or any chunk lacking .usage). Consider either (a) not appending non-standard chunks to chunk_list, or (b) updating the usage-extraction logic to find the last chunk that actually has a non-None .usage attribute before reading token counts.

Suggested change

```diff
-# Log warning and continue processing
-if hasattr(chunk, '__str__'):
-    chunk_str = str(chunk)
-    logger.warning(f"Received non-standard chunk (no 'choices'): {chunk_str[:200]}")
-chunk_list.append(chunk)
+# Log warning and continue processing, but do not add unsupported
+# objects to chunk_list because later logic expects standard
+# completion chunks with the usual attributes.
+if hasattr(chunk, '__str__'):
+    chunk_str = str(chunk)
+    logger.warning(f"Received non-standard chunk (no 'choices'): {chunk_str[:200]}")
```
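Option (b) from the comment above, scanning backwards for a chunk that actually carries usage data, could look like this minimal sketch. `extract_usage` is a hypothetical helper name, and the chunk objects are stand-ins for the SDK's streaming chunk type:

```python
def extract_usage(chunk_list):
    """Return the usage of the last chunk that has a non-None `.usage`,
    instead of assuming chunk_list[-1] exposes one. Chunks appended from
    the non-standard path may lack the attribute entirely, so getattr
    with a default avoids the AttributeError the reviewer describes."""
    for chunk in reversed(chunk_list):
        usage = getattr(chunk, "usage", None)
        if usage is not None:
            return usage
    return None
```

Compared with option (a), this keeps the full chunk history in `chunk_list` (useful for debugging the non-standard chunks) at the cost of a slightly more defensive usage lookup.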

```python
# ---------------------------------------------------------------------------


def test_call_chunk_without_choices_attribute_continues_processing(openai_model_instance, caplog):
```

Copilot AI Apr 28, 2026


caplog is included as a fixture argument but not used, and the test docstring mentions a warning. Either assert that the expected warning was logged (via caplog) or remove the unused fixture argument to keep the test intent clear.

Suggested change

```diff
-def test_call_chunk_without_choices_attribute_continues_processing(openai_model_instance, caplog):
+def test_call_chunk_without_choices_attribute_continues_processing(openai_model_instance):
```
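The reviewer's first option, actually asserting the warning via `caplog`, might look like the sketch below. The logger name and the `warn_on_bad_chunk` stand-in (which mimics the warning path from the diff above, since the real model call is elided) are assumptions:

```python
import logging

def warn_on_bad_chunk(chunk, logger):
    # Stand-in for the code path under test: warn when 'choices' is missing.
    if not hasattr(chunk, "choices"):
        logger.warning(f"Received non-standard chunk (no 'choices'): {str(chunk)[:200]}")

def test_chunk_without_choices_logs_warning(caplog):
    logger = logging.getLogger("nexent.sketch")
    with caplog.at_level(logging.WARNING, logger="nexent.sketch"):
        warn_on_bad_chunk(object(), logger)
    # pytest's caplog.text aggregates captured log output for assertions.
    assert "non-standard chunk" in caplog.text
```

Either resolution works; the point is that an unused fixture makes the test's intent ambiguous, while an explicit log assertion documents the expected warning.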

@Zhi-a Zhi-a changed the title ✨ Enhance OpenAIModel error handling and chunk processing 🐛 Bugfix: Enhance OpenAIModel error handling and chunk processing Apr 28, 2026