🐛 Bugfix: Enhance OpenAIModel error handling and chunk processing#2887
Conversation
Zhi-a
commented
Apr 28, 2026
- Added validation for API response types to raise ValueError for unexpected string or dictionary responses.
- Implemented safety checks to skip non-standard chunks that lack expected attributes, logging warnings for such cases.
- Introduced unit tests to cover new error handling scenarios and ensure robust processing of API responses.
Pull request overview
This PR strengthens OpenAIModel.__call__ robustness by validating unexpected API response types and tolerating non-standard streaming chunks, with new unit tests covering these scenarios.
Changes:
- Add early validation to raise `ValueError` when the completion API returns a `str` or `dict` instead of a stream/iterator.
- Skip and warn on streamed "chunks" that don't expose the expected `choices` attribute, continuing processing for remaining chunks.
- Add unit tests for the new response-type validation and non-standard chunk handling.
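The early response-type check could look roughly like the following. This is a minimal sketch with a hypothetical helper name (`validate_response`); in the PR the guard lives inline in `OpenAIModel.__call__`:

```python
# Hedged sketch of the response-type validation described above.
# `validate_response` is a hypothetical name, not the PR's actual code.
def validate_response(response):
    if isinstance(response, (str, dict)):
        raise ValueError(
            f"Expected a streaming response, got {type(response).__name__}: "
            f"{str(response)[:200]}"
        )
    return response
```

A plain `str` or `dict` here usually indicates an error payload or a non-streaming reply, so failing fast with `ValueError` surfaces the problem before chunk iteration begins.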
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `sdk/nexent/core/models/openai_llm.py` | Adds response-type validation and non-standard chunk skipping/warning during streaming. |
| `test/sdk/core/models/test_openai_llm.py` | Adds tests covering string/dict API responses and streaming chunks missing `choices` / with empty `choices`. |
```python
# Log warning and continue processing
if hasattr(chunk, '__str__'):
    chunk_str = str(chunk)
    logger.warning(f"Received non-standard chunk (no 'choices'): {chunk_str[:200]}")
chunk_list.append(chunk)
```
Non-standard chunks (no `choices`) are appended to `chunk_list`. Later the code assumes the last item has a `.usage` attribute (`chunk_list[-1].usage`), which will raise `AttributeError` if the stream ends with a non-standard chunk (or any chunk lacking `.usage`). Consider either (a) not appending non-standard chunks to `chunk_list`, or (b) updating the usage-extraction logic to find the last chunk that actually has a non-None `.usage` attribute before reading token counts.
Suggested change:

```diff
-# Log warning and continue processing
+# Log warning and continue processing, but do not add unsupported
+# objects to chunk_list because later logic expects standard
+# completion chunks with the usual attributes.
 if hasattr(chunk, '__str__'):
     chunk_str = str(chunk)
     logger.warning(f"Received non-standard chunk (no 'choices'): {chunk_str[:200]}")
-chunk_list.append(chunk)
```
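Option (b) from the review comment could be sketched like this. `extract_usage` is a hypothetical helper for illustration; the PR's actual code reads `chunk_list[-1].usage` inline:

```python
# Hedged sketch of option (b): scan backwards for the last chunk that
# actually carries usage data, instead of assuming chunk_list[-1] has it.
def extract_usage(chunk_list):
    for chunk in reversed(chunk_list):
        usage = getattr(chunk, "usage", None)
        if usage is not None:
            return usage
    return None  # no chunk reported token counts
```

This variant tolerates a non-standard chunk at the tail of the stream, at the cost of a (typically very short) backwards scan.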
```python
# ---------------------------------------------------------------------------
def test_call_chunk_without_choices_attribute_continues_processing(openai_model_instance, caplog):
```
`caplog` is included as a fixture argument but not used, and the test docstring mentions a warning. Either assert that the expected warning was logged (via `caplog`) or remove the unused fixture argument to keep the test intent clear.
Suggested change:

```diff
-def test_call_chunk_without_choices_attribute_continues_processing(openai_model_instance, caplog):
+def test_call_chunk_without_choices_attribute_continues_processing(openai_model_instance):
```
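If the test instead keeps `caplog` and asserts on the warning, the behaviour can be exercised standalone roughly as follows. This is a hedged sketch: `skip_non_standard` and the logger name are stand-ins for the PR's actual code, and `ListHandler` plays the role pytest's `caplog` fixture would:

```python
import logging

logger = logging.getLogger("openai_llm_demo")  # hypothetical logger name

class ListHandler(logging.Handler):
    """Collects log records so assertions can inspect them (caplog stand-in)."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

def skip_non_standard(chunk, chunk_list):
    # Mirrors the PR's behaviour: warn on and skip chunks lacking 'choices'.
    if not hasattr(chunk, "choices"):
        logger.warning(f"Received non-standard chunk (no 'choices'): {str(chunk)[:200]}")
        return
    chunk_list.append(chunk)

handler = ListHandler()
logger.addHandler(handler)
logger.setLevel(logging.WARNING)

chunks = []
skip_non_standard("unexpected-string-chunk", chunks)
assert chunks == []  # the bad chunk was skipped, not appended
assert any("non-standard chunk" in r.getMessage() for r in handler.records)
```

In a real pytest test the same assertion would read `assert "non-standard chunk" in caplog.text` inside a `caplog.at_level(logging.WARNING)` block.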