Privacy-preserving, AI-powered text simplification for neurodivergent readers
A desktop application that uses fine-tuned T5 transformer models to simplify text for dyslexia, ADHD, and autism accessibility needs. Runs completely offline on your local CPU: no cloud, no data sharing, no internet required.
```bash
# Install dependencies
npm install

# Start the application
npm run start
```

This launches the Electron app with the Vite dev server. For a production build, use `npm run build`.
```bash
# Install Python dependencies
pip install transformers torch spacy
python -m spacy download en_core_web_sm

# Run simplification
python simplify.py --input sample.txt --mode dyslexia --metrics
```

| Mode | Best For | Key Features |
|---|---|---|
| Dyslexia | Readers with dyslexia | Short sentences, visual spacing, optional hyphenation, text-to-speech |
| ADHD Focus | Attention maintenance | Progress markers [1/N], bolded key nouns, focus navigation |
| Literal Clarity | Autism spectrum | Idiom replacement, jargon simplification, literal interpretations |
- 🛡️ 100% Offline: All processing happens locally on your CPU
- ⚡ Fast: The T5-small model runs efficiently on modern CPUs (~1-2 seconds per paragraph)
- 🔒 No Data Leaves Your Device: Well suited to sensitive documents
- 📦 Self-contained: No external API keys or cloud dependencies
- Themes: Light, Dark, High Contrast (WCAG AAA)
- Typography: Choose from Lexend, OpenDyslexic, Merriweather, or Monospace
- Layout: Adjustable font size (14-28px), line spacing (1.5-2.0)
- Motion: Reduce motion toggle for animations
- Hyphenation: Optional for dyslexia (per BDA guidelines)
```
Input Text → Preprocessing → Neural Simplification → Mode Formatting → Output
    ↓           (Cleanup)        (T5 + ACCESS)        (Rules-based)       ↓
 Raw text   (Idioms/Jargon/   Sentence-by-sentence  Dyslexia/ADHD/  Ready-to-read
              Homophone)                                Autism
```
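The flow above can be sketched in Python. The function bodies here are simplified stand-ins (the real engine calls the T5 model and the full preprocessing pipeline), and the names are illustrative rather than the project's actual API:

```python
import re

def preprocess(text: str) -> str:
    # Cleanup stage (stub): normalize whitespace; the real pipeline also
    # handles idiom/jargon replacement and homophone correction
    return " ".join(text.split())

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; the real project handles abbreviations, decimals, etc.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def t5_simplify(sentence: str, mode: str) -> str:
    # Stand-in for the neural step: the real engine prepends ACCESS control
    # tokens and runs the sentence through the fine-tuned T5 model
    return sentence

def format_for_mode(sentences: list[str], mode: str) -> str:
    # Rule-based formatting (stub): one sentence per line for dyslexia mode
    if mode == "dyslexia":
        return "\n".join(sentences)
    return " ".join(sentences)

def simplify(text: str, mode: str) -> str:
    cleaned = preprocess(text)
    simplified = [t5_simplify(s, mode) for s in split_sentences(cleaned)]
    return format_for_mode(simplified, mode)
```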
- **Neural Simplification** (T5 transformer with ACCESS control tokens)
  - Fine-tuned model (`t5/`) on accessibility-focused parallel corpora
  - Preprocessing: idiom/jargon replacement and homophone correction
  - Sentence-level processing with ACCESS control tokens for mode-specific guidance
  - Dynamic length constraints prevent over-summarization
  - Handles spelling, grammar, and vocabulary simplification
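ACCESS-style guidance works by prepending control tokens to the model input before generation. The token names and per-mode ratios below are assumptions for illustration, not the project's actual profiles:

```python
# Illustrative ACCESS-style control-token prefixing. The token names and
# per-mode ratios are assumptions, not the project's exact profiles.
MODE_PROFILES = {
    "dyslexia": {"NbChars": 0.8, "LevSim": 0.75},   # shorter, close to source
    "adhd":     {"NbChars": 0.9, "LevSim": 0.8},
    "autism":   {"NbChars": 0.95, "LevSim": 0.7},   # allow more rewording
}

def with_control_tokens(sentence: str, mode: str) -> str:
    """Prepend ACCESS-style control tokens before feeding a sentence to T5."""
    profile = MODE_PROFILES[mode]
    prefix = " ".join(f"<{name}_{ratio}>" for name, ratio in profile.items())
    return f"{prefix} {sentence}"
```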
- **Rule-Based Post-Processing**
  - Dyslexia: split on conjunctions, ensure punctuation, optional hyphenation
  - ADHD: bold the first noun in each sentence, add [1/N] markers
  - Autism: replace idioms with literal meanings, simplify jargon
- **Readability Metrics**
  - Word count, sentence length, Flesch Reading Ease
  - Before/after comparison for measurable improvements
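Flesch Reading Ease is a standard formula over words, sentences, and syllables. The syllable counter below is a common vowel-group heuristic, not necessarily the project's exact implementation:

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count vowel groups, with a minimum of one syllable per word
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Standard Flesch Reading Ease: higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Running the formula on the original and simplified text gives the before/after comparison shown in the UI.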
- Breaks complex sentences into short, single-idea statements
- One sentence per line with extra spacing
- Optional hyphenation for long words (disabled by default per BDA guidelines)
- Built-in text-to-speech with word boundary tracking
- Highlights paragraph as it reads aloud
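The conjunction-splitting idea can be sketched as follows; this is a minimal stand-in for the real formatter in `dyslexia_mode.py`, which handles more conjunctions and edge cases:

```python
import re

# Small illustrative set; the real formatter covers more conjunctions
COORDINATING = r"\b(and|but|so|because)\b"

def split_on_conjunctions(sentence: str) -> list[str]:
    """Split a long sentence at conjunctions into short, single-idea statements."""
    parts = re.split(COORDINATING, sentence)
    # re.split keeps the conjunctions; attach each to the start of the next clause
    clauses = [parts[0].strip().rstrip(",")]
    for i in range(1, len(parts), 2):
        clauses.append(f"{parts[i].capitalize()} {parts[i + 1].strip()}")
    # Ensure every resulting statement ends with punctuation
    return [c if c.endswith((".", "!", "?")) else c + "." for c in clauses]
```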
- Converts paragraphs into a numbered bullet list: `[1/15] Sentence...`
- Visual emphasis: the first significant noun in each sentence is bolded
- Uses spaCy POS tagging for accurate noun detection (with fallback heuristics)
- Keyboard navigation: arrow keys to cycle through sentences
- Active sentence highlighting in the center of the viewport
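A rough sketch of the ADHD formatting step: progress markers plus a bolded focus word. The real code uses spaCy POS tagging to find the first noun; this sketch uses only the fallback-style heuristic (first non-stopword), so names and word lists here are illustrative:

```python
# Crude stand-in for spaCy-based noun detection (fallback heuristic only)
STOPWORDS = {"the", "a", "an", "this", "that", "my", "your", "is", "was",
             "i", "we", "he", "she", "it", "they", "to", "of", "in", "on"}

def bold_first_noun(sentence: str) -> str:
    """Bold the first significant word as a visual focus anchor."""
    words = sentence.split()
    for i, word in enumerate(words):
        if word.strip(".,!?").lower() not in STOPWORDS:
            words[i] = f"**{word}**"
            break
    return " ".join(words)

def adhd_format(sentences: list[str]) -> list[str]:
    # Prefix each sentence with a [i/N] progress marker
    total = len(sentences)
    return [f"[{i}/{total}] {bold_first_noun(s)}"
            for i, s in enumerate(sentences, 1)]
```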
- Idiom replacement: "break a leg" → "good luck"
- Jargon simplification: "PTSD" → "Post-Traumatic Stress Disorder"
- Context awareness: Prevents over-replacement
- Example: "He broke his leg" (literal injury) is NOT replaced with "good luck"
- 5,200+ idioms and 2,000+ jargon terms (morphologically expanded)
- Bolded replacements for easy identification
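The core lookup-and-bold step can be sketched with a tiny excerpt of the map (the full `idiom_map.json` holds 5,200+ entries). Note this sketch shows only whole-phrase matching; the context-awareness that prevents replacing literal uses like "He broke his leg" is additional logic in the real pipeline:

```python
import re

# Tiny illustrative excerpt of the idiom map
IDIOMS = {
    "break a leg": "good luck",
    "hit the sack": "go to sleep",
}

def replace_idioms(text: str) -> str:
    """Replace idioms with bolded literal meanings (whole-phrase matches only)."""
    for idiom, literal in IDIOMS.items():
        text = re.sub(rf"\b{re.escape(idiom)}\b", f"**{literal}**", text,
                      flags=re.IGNORECASE)
    return text
```

Since "He broke his leg" does not match the phrase "break a leg" verbatim, morphological expansion and context checks are what make the real maps both broad and safe.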
```
offline-text-accessibility/
├── simplify.py          # Core simplification engine (CLI)
├── simplify_server.py   # Persistent server for Electron
├── dyslexia_mode.py     # Dyslexia formatter
├── adhd_mode.py         # ADHD focus formatter
├── autism_mode.py       # Literal clarity formatter
├── utils.py             # Utilities: sentence splitting, metrics, homophones
├── nlp_utils.py         # spaCy integration (POS tagging, noun extraction)
├── idiom_map.json       # 5,207 idiom → literal mappings
├── jargon_map.json      # 2,054 jargon → plain language mappings
├── electron-app/        # Desktop UI (React + Electron)
│   ├── main.js
│   ├── preload.js
│   └── src/
│       ├── App.jsx
│       ├── context/
│       └── components/
├── t5/                  # Fine-tuned T5 model (local)
├── expand_maps.py       # Utility: expand maps with morphological variants
├── __tests__/           # Unit tests
└── docs/                # Additional documentation
```
```bash
# Clone the repository
git clone https://github.com/yourusername/offline-text-accessibility.git
cd offline-text-accessibility

# Install Node.js dependencies
npm install

# (Optional) Install Python dependencies for backend
pip install transformers torch spacy
python -m spacy download en_core_web_sm

# Run the app
npm run start
```

- Paste or type your text in the left panel
- Select a mode: Dyslexia, ADHD Focus, or Literal Clarity
- Click "Simplify Text"
- View results in the right panel, customized for the selected mode
- Adjust settings (gear icon):
- Change theme (light/dark/high-contrast)
- Select font (Lexend, OpenDyslexic, etc.)
- Adjust font size and line spacing
- Enable/disable hyphenation (dyslexia)
- Reduce motion (accessibility)
The tool calculates and displays:
- Word count: before and after simplification
- Average sentence length: shorter is better
- Flesch Reading Ease: a 0-100 score (higher = easier to read)
All changes are color-coded: green for improvement, red for regression.
The application consists of three layers:

- **Frontend**: React UI with Electron shell
  - State managed via React Context (AppContext, ThemeContext)
  - IPC communication with the Python backend
  - TailwindCSS for styling with CSS custom properties
- **Backend**: Python simplification server
  - T5 transformer model (HuggingFace) with ACCESS control token guidance
  - Preprocessing pipeline: idiom/jargon replacement and homophone correction
  - Sentence-level processing with dynamic length constraints to prevent over-summarization
  - LRU model cache (up to 3 models)
  - JSON-over-stdin/stdout protocol
- **Models**: Fine-tuned T5 on accessibility data
  - Location: `t5/` (local directory)
  - ACCESS control token profiles for mode-specific guidance
  - Dynamic length constraints and penalties for optimal output
  - Tuned generation parameters (beam search, repetition penalty)
  - Preprocessing-ready design for clean input
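An exchange over the JSON-over-stdin/stdout protocol might look like the following; the field names are illustrative, and the actual schema is defined in `simplify_server.py`. The Electron shell writes a request to the server's stdin:

```json
{"text": "The cat sat on the mat.", "mode": "dyslexia", "metrics": true}
```

and reads a response from its stdout:

```json
{"output": "The cat sat on the mat.", "metrics": {"flesch": 116.1, "words": 6}}
```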
```python
# In simplify.py
model_choice = "small"        # ./t5 (default, fastest)
model_choice = "medium"       # t5-medium (higher quality, more RAM)
model_choice = "auto-task"    # Auto-select based on text complexity
model_choice = "auto-device"  # Auto-select based on available RAM
```

- Expanded lexical resources: idiom map grown from 220 to 5,207 entries; jargon map from 100 to 2,054 entries via morphological expansion
- Full mode implementation: All three accessibility modes now fully functional
- spaCy integration: Accurate part-of-speech tagging for ADHD noun bolding
- Improved sentence splitting: Better handling of abbreviations, decimals, and titles
- Context-aware homophone correction: "hole" vs "whole", "knot" vs "not", etc.
- Comprehensive test suite: Unit tests for mode formatters and map integrity
- Detailed documentation: Architecture deep dive and developer guide
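Context-aware homophone correction can be illustrated with one rule; the real `utils.py` covers many more pairs and context cues, and the word list below is a hypothetical example:

```python
import re

def fix_homophones(sentence: str) -> str:
    """Correct 'hole' to 'whole' only when context suggests adjectival use."""
    # "the hole thing" -> "the whole thing": replace only before nouns that
    # commonly follow 'whole' (illustrative list, not the project's actual cues)
    return re.sub(r"\bhole\b(?=\s+(thing|time|day|story)\b)", "whole",
                  sentence, flags=re.IGNORECASE)
```

The lookahead keeps literal uses ("He dug a hole.") untouched, which is the same over-replacement concern the idiom pipeline guards against.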
- English text only (models trained on English)
- Requires ~2GB RAM for T5-medium (smaller model uses less)
- Not a clinical tool: designed for general accessibility support
- No user studies yet β feedback welcome
MIT
- T5 model and HuggingFace transformers library
- spaCy for natural language processing
- TailwindCSS for utility-first styling
- The neurodiversity community for feedback and inspiration
- Architecture Deep Dive: comprehensive technical overview
- CLAUDE.md: development guide and project instructions
- docs/: additional evaluation and planning documents
Made with ❤️ for neurodivergent readers. All processing happens on your device. Your text, your privacy.