Research code for claim-level correctness probes on Llama activations.
Updated Apr 26, 2026 · Python
Replication package for “Illocutionary Explanation Planning for Source-Faithful Explanations in Retrieval-Augmented Language Models” (xAI 2026): code, data, prompts, and evaluation for chain-of-illocution prompting in textbook-grounded RAG.