# ai-interpretability

Here are 17 public repositories matching this topic...

Framework for evaluating and steering generative image systems using geometry-first metrics, structural stress testing, and constraint-based analysis. Designed to expose compositional collapse, spatial priors, and model failure modes without accessing training data or model internals.

  • Updated May 3, 2026
  • Jupyter Notebook

Extends Conformal Geometric Algebra (CGA) with efficient sequence modeling by introducing a recurrent rotor mechanism and a novel bit-masked hardware kernel that addresses the computational bottleneck of Clifford products.

  • Updated Feb 6, 2026
  • Python
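To make the rotor idea concrete: in the simplest nontrivial case, a 2D rotor is isomorphic to a unit complex number, and applying it via the geometric product reduces to complex multiplication. The sketch below is a minimal illustration of a "recurrent rotor" state update under that simplification; the function names are hypothetical and this is not the repository's actual kernel or API.

```python
import math

def make_rotor(theta):
    """A 2D rotor (even subalgebra of Cl(2)) is isomorphic to a unit
    complex number; applying it rotates a vector by theta."""
    return complex(math.cos(theta), math.sin(theta))

def recurrent_rotor_step(state, rotor):
    """One recurrence step: rotate the hidden state by the rotor.
    In this 2D toy case the Clifford product is complex multiplication."""
    return rotor * state

state = complex(1.0, 0.0)
rotor = make_rotor(math.pi / 4)  # rotate 45 degrees per step
for _ in range(8):
    state = recurrent_rotor_step(state, rotor)
# after eight 45-degree steps the state returns to its starting point
```

The appeal of a rotor as recurrent state is that its action is norm-preserving, which avoids the exploding/vanishing dynamics of unconstrained recurrent weight matrices.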

Do LLMs think like brains? We test GPT-2, BERT, Mistral, DeepSeek & Qwen+SAE against EEG data. Sparse features yield a 4.3× alignment jump. Working paper included.

  • Updated May 2, 2026
  • Jupyter Notebook
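Model-to-brain alignment of this kind is commonly measured with representational similarity analysis (RSA): build a dissimilarity vector over stimulus pairs for each system, then correlate the two vectors. The sketch below shows that pattern on toy data; the data and function names are hypothetical, and the repository's actual metric may differ.

```python
import math
from itertools import combinations

def rdm(reps):
    """Representational dissimilarity: pairwise Euclidean distances
    between stimulus representations (each a list of floats)."""
    return [math.dist(a, b) for a, b in combinations(reps, 2)]

def pearson(x, y):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# toy model features vs. toy EEG responses for the same four stimuli
model = [[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.2]]
eeg   = [[2.0, 0.1], [1.8, 0.2], [0.1, 2.0], [0.3, 1.9]]
alignment = pearson(rdm(model), rdm(eeg))  # high when geometries match
```

Comparing dissimilarity structure rather than raw activations is what lets EEG channels and transformer (or SAE) features be compared at all, since the two spaces have different dimensionality.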

A NeuroAI project using a Bernoulli-inspired fluid-flow analogy to explore how information moves through neural networks. Signal strength in the network is treated as the "pressure" from Bernoulli's equation, the speed of information propagation as the "flow speed of the fluid", and the activation level as the "opening and closing of valves".

  • Updated Nov 15, 2025
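The analogy can be sketched directly from Bernoulli's equation, p + ½ρv² = const: holding the total head fixed, "pressure" falls as "flow speed" rises, and a sigmoid gate plays the valve. This is a minimal toy illustration of the mapping described above, with hypothetical names and constants, not the project's actual implementation.

```python
import math

def bernoulli_pressure(flow_speed, total_head=1.0, rho=1.0):
    """Bernoulli analogy: with total head fixed, signal 'pressure'
    falls as information 'flow speed' rises:
    p = total_head - 0.5 * rho * v**2."""
    return total_head - 0.5 * rho * flow_speed ** 2

def valve_opening(pre_activation):
    """Activation as a valve: a sigmoid in (0, 1) controlling how
    much signal passes through the unit."""
    return 1.0 / (1.0 + math.exp(-pre_activation))

# signal passing through one hypothetical unit
v = 0.8                       # 'flow speed' of information
p = bernoulli_pressure(v)     # remaining signal 'pressure'
gate = valve_opening(2.0)     # valve mostly open
out = p * gate                # signal that actually propagates
```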

Toy 6. An interactive phase-space instrument mapping Ψ = S/D — the ratio of capability to modeling depth that determines whether a system is in the viable, transitional, or failure-mode-dominant regime. Includes the Inner Crossing animation. Companion simulation for The Inner Crossing — Series 2, Part 3.

  • Updated May 16, 2026
  • HTML
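The Ψ = S/D ratio above lends itself to a tiny regime classifier. The thresholds and the ordering of regimes below are illustrative assumptions for the sketch, not the simulation's actual cutoffs, and the function name is hypothetical.

```python
def regime(S, D, lower=0.5, upper=2.0):
    """Classify a system by Psi = S / D, the ratio of capability (S)
    to modeling depth (D). Thresholds and regime ordering here are
    illustrative, not the companion simulation's actual values."""
    psi = S / D
    if psi < lower:
        return "viable"
    if psi <= upper:
        return "transitional"
    return "failure-mode-dominant"

# capability far below modeling depth -> viable
print(regime(1.0, 4.0))
# capability far outrunning modeling depth -> failure-mode-dominant
print(regime(5.0, 1.0))
```

Making Ψ an explicit function of two inputs is what lets the instrument render a full phase plane: sweep S and D over a grid and color each cell by the returned regime.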

Toy 7. An elimination-filter landscape applying two structural constraints simultaneously to map which objective classes can persist under sustained optimization pressure — and which cannot. Includes a four-stage scenario engine and open-question frontier. Companion simulation for The Shape of What Does Not End — Series 2, Part 4.

  • Updated May 16, 2026
  • HTML
