Enhance prompt caching and clean up LinkedIn route and documentation #97
Merged
goldlabelapps merged 3 commits into master on Apr 23, 2026
Conversation
Bump version to 2.3.0, remove the LinkedIn-specific prompt endpoint and its router, and update routing/root listings accordingly. Rewrite /prompt handler to add SHA-256 prompt hashing, prefer exact-hash/text cache hits, and fall back to a tsvector rank-based match; store prompt_hash in the record data and populate search_vector on insert when supported. Improve response payloads to include cached/duration/model fields and ensure DB cursors/connections are closed in finally. Add tests for prompt behavior with mocked DB and GenAI client (tests/test_prompt.py) and update queue test expectations to the new filters structure (tests/test_queue.py).
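The exact-hash branch of the caching flow described above might look like the following minimal sketch. The function names and the in-memory dict standing in for the database table are illustrative only; the real `/prompt` handler queries the database, falls back to a tsvector rank match on a hash miss, and closes its cursors in `finally`.

```python
import hashlib

def prompt_hash(prompt: str) -> str:
    """Hex SHA-256 digest of the normalized prompt text."""
    return hashlib.sha256(prompt.strip().encode("utf-8")).hexdigest()

# In-memory stand-in for the stored prompt records (hash -> record).
_cache: dict[str, dict] = {}

def complete(prompt: str, generate) -> dict:
    """Return a cached completion on an exact-hash hit; otherwise call
    the LLM via `generate` and store the result keyed by prompt hash."""
    h = prompt_hash(prompt)
    hit = _cache.get(h)
    if hit is not None:
        # Exact-hash cache hit: no LLM call, response marked as cached.
        return {**hit, "cached": True}
    text = generate(prompt)  # LLM is invoked only on a cache miss
    record = {"prompt_hash": h, "response": text}
    _cache[h] = record
    return {**record, "cached": False}
```

Keying the lookup on a SHA-256 digest rather than the raw text keeps the index small and makes the exact-match check a single equality comparison, which is why the handler tries it before the more expensive full-text ranking.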
Tidy documentation and repository artifacts: update README image alt text and normalize several section headings to H4 for consistency; remove committed output logs (pytest_output.txt, queue_output.txt) to avoid tracked generated artifacts; relocate/rename the Postman collection into tests (Python° file moved to tests/Python°.json).
This pull request removes the /prompt/linkedin endpoint and its related code, and significantly improves the /prompt endpoint by adding robust database-backed caching for prompt completions. It also updates documentation formatting and makes a versioning change.

API Endpoint Changes:
Removed the /prompt/linkedin endpoint and all related code, including the linkedin.py file, its router imports/registrations, and references in the API root listing.

Prompt Endpoint Improvements:
Rewrote the /prompt endpoint (prompt.py) to implement database-backed caching using a SHA-256 hash of the prompt and, if available, full-text search with ranking. This reduces redundant calls to the LLM and speeds up repeated prompt requests.

Documentation and Versioning:
Updated README.md for clarity and consistency. Bumped the version in app/__init__.py from 2.2.9 to 2.3.0.
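The rank-based fallback mentioned above could be expressed as a query like the following sketch. Table and column names here (`records`, `data`, `search_vector`) are assumptions for illustration; the PR only states that prompt_hash is stored in the record data and that search_vector is populated on insert when supported.

```sql
-- Hypothetical fallback query: when no exact hash/text match exists,
-- rank stored records against the incoming prompt and take the best hit.
SELECT id,
       data,
       ts_rank(search_vector, plainto_tsquery('english', :prompt)) AS rank
FROM records
WHERE search_vector @@ plainto_tsquery('english', :prompt)
ORDER BY rank DESC
LIMIT 1;
```

A `LIMIT 1` on the rank-ordered result keeps the fallback cheap: the handler either reuses the top-ranked cached response or proceeds to call the model.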