Run a complete private AI ecosystem locally using Ollama, Open WebUI, and purpose-built tools for email, finance, 3D generation, image generation, speech-to-text, and social media marketing. No cloud. No subscriptions. No data leaving your machine.
| Tool | Purpose | Port |
|---|---|---|
| 💬 Open WebUI + SearXNG | Chat UI with private web search | :3000 / :8080 |
| 📈 Quant AI | Stock & crypto analysis + portfolio tracker | :8000 |
| 📧 Mail AI Manager | AI email classifier for any IMAP provider (Gmail, iCloud, Outlook, ProtonMail) – 100% local, no Gmail API | :5051 |
| 🎨 Stable Diffusion | Local text-to-image generation · Apple Silicon MPS | :5050 |
| 🧊 TripoSR 3D Pipeline | Image → 3D mesh (Apple Silicon) | :5050 |
| 🤖 Tax AI Social | AI social media content engine for tax/accounting firms | :5055 |
| 🎙️ Whisper STT | 100% local speech-to-text for Open WebUI | :9000 |
| 🧠 AI Model Training Studio | Fine-tune and train AI models on custom data: LoRA training, data processing, export to Ollama | :8501 |
```
              🦙 Ollama (localhost:11434)
          mistral:7b · deepseek-coder:6.7b
           Apple Silicon MPS · ~4-10GB RAM
                        │
      ┌─────────────────┼─────────────────┐
      ▼                 ▼                 ▼
💬 Open WebUI      📈 Quant AI        📧 Mail AI
   :3000              :8000              :5051
 + SearXNG      FastAPI+ChromaDB      IMAP + LLM
   :8080            vectorbt         Auto-triage
      │                 │                 │
      ▼                 ▼                 ▼
🧊 TripoSR      🤖 Tax AI Social    🧠 AI Training
   :5053              :5055              :8501
 Image→3D          Instagram/       LoRA/Fine-tune
  OBJ/GLB       Facebook/TikTok     Ollama Export
Apple Silicon      Compliance      Data Processing
```
- macOS (Apple Silicon M1/M2/M3/M4 recommended) or Linux
- 16 GB RAM recommended (8GB minimum for basic use)
- Docker Desktop – https://www.docker.com/products/docker-desktop/
- 15+ GB free disk space (for models + tools)
- Internet connection (for initial setup only)
```bash
git clone https://github.com/jup313/MYLLM.git
cd MYLLM
chmod +x setup.sh install-tools.sh
./setup.sh
```

This will:
- Install Ollama
- Download the `mistral:7b` and `deepseek-coder:6.7b` models
- Start Open WebUI on http://localhost:3000
- Start SearXNG on http://localhost:8080
- Open http://localhost:3000
- Click Sign Up and create your admin account
- The first account is automatically admin
Chat with local AI models + private web search.
| URL | Description |
|---|---|
| http://localhost:3000 | Chat interface |
| http://localhost:8080 | SearXNG search engine |
- Click the ✨ sparkle icon in the chat input
- Toggle Web Search ON
- Ask any question – the AI searches privately via SearXNG
```bash
./install-tools.sh your@email.com yourpassword
```

AI-powered stock and crypto analysis running 100% locally.

```bash
cd quant_api
pip install -r requirements.txt
python main.py
```

- FastAPI + ChromaDB vector database
- vectorbt for backtesting
- curl-cffi for market data
- Ask natural language questions about stocks and crypto
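To get a feel for what the backtesting layer does, here is a minimal pure-Python sketch of a moving-average crossover strategy, the kind of signal logic vectorbt runs at scale. The prices, window sizes, and function names below are made up for illustration and are not from the repo.

```python
# Illustrative moving-average crossover signals (not the repo's code).

def sma(prices, window):
    """Simple moving average; None until enough data points exist."""
    return [
        sum(prices[i - window + 1:i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def crossover_signals(prices, fast=3, slow=5):
    """Buy when the fast SMA crosses above the slow SMA, sell on the reverse."""
    f, s = sma(prices, fast), sma(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if None in (f[i], s[i], f[i - 1], s[i - 1]):
            continue
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append((i, "buy"))
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append((i, "sell"))
    return signals

prices = [100, 99, 98, 97, 99, 102, 105, 104, 103, 101, 98, 96]
print(crossover_signals(prices))  # [(5, 'buy'), (9, 'sell')]
```

vectorbt vectorizes this same idea across thousands of parameter combinations at once, which is why it is used here instead of a hand-rolled loop.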
🎉 Now works with ANY IMAP email provider (Gmail, iCloud, Outlook, ProtonMail) – no Gmail API restrictions!
Local AI that classifies, summarizes, categorizes, and drafts replies to your emails.
```bash
cd mail-ai-manager
python3 app.py
# Open: http://localhost:5051
```

Migration from Gmail Manager to the universal Mail AI Manager:
- ✅ Works with any IMAP provider – Gmail, iCloud, Outlook, ProtonMail, etc.
- ✅ No OAuth complexity – just IMAP credentials (app-specific password)
- ✅ 100% local – zero cloud dependencies, zero API restrictions
- ✅ Hybrid IMAP + AppleScript – reliable, with native macOS Mail.app fallback
- ✅ Feature parity – all Gmail manager features ported to the Mail system
- ✅ Same AI classification – work, spam, personal, urgent categorization
- ✅ Approval queue – all actions require review before execution
- ✅ Draft replies – LLM-powered reply suggestions
- ✅ Unsubscribe handling – RFC 2369 compliant automatic unsubscribe
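The RFC 2369 handling boils down to reading the `List-Unsubscribe` header, which may carry several angle-bracketed URI targets. A minimal sketch (the function name is illustrative, not the repo's):

```python
# Hedged sketch of RFC 2369 List-Unsubscribe parsing; illustrative only.
import re
from email.message import EmailMessage

def unsubscribe_targets(msg):
    """Extract mailto:/https: targets from a List-Unsubscribe header."""
    header = msg.get("List-Unsubscribe", "")
    # RFC 2369 allows a comma-separated list of <URI> entries.
    return re.findall(r"<([^>]+)>", header)

msg = EmailMessage()
msg["List-Unsubscribe"] = "<mailto:leave@example.com>, <https://example.com/unsub?id=42>"
print(unsubscribe_targets(msg))
# ['mailto:leave@example.com', 'https://example.com/unsub?id=42']
```

A real handler would then issue the HTTPS request or send the mailto message, ideally only after the approval queue signs off.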
```bash
cd mail-ai-manager
python3 app.py   # Start Flask server at http://localhost:5051

# In another terminal:
curl -X POST http://localhost:5051/api/mail/test-connection \
  -H "Content-Type: application/json" \
  -d '{
    "mail_imap_host": "imap.gmail.com",
    "mail_imap_port": "993",
    "mail_imap_username": "your-email@gmail.com",
    "mail_imap_password": "your-app-password",
    "mail_account_name": "Gmail"
  }'
```

| Provider | IMAP Host | Port | Notes |
|---|---|---|---|
| Gmail | imap.gmail.com | 993 | Requires 16-char app-specific password |
| iCloud | imap.mail.me.com | 993 | Requires app-specific password |
| Outlook | outlook.office365.com | 993 | Use main password, or app password if 2FA |
| ProtonMail | 127.0.0.1 | 1143 | Via IMAP Bridge (localhost) |
| Custom | Any IMAP server | Any | Any IMAP-compatible email service |
| Endpoint | Method | Purpose |
|---|---|---|
| `/api/status` | GET | Check system status (Ollama, mail connection) |
| `/api/mail/test-connection` | POST | Test IMAP credentials |
| `/api/mail/status` | GET | Check Mail connection status |
| `/api/setup` | POST | Save IMAP config |
| `/api/pipeline/run` | POST | Start email classification (max 30 emails) |
| `/api/pipeline/status` | GET | Check pipeline progress |
| `/api/emails` | GET | List classified emails |
| `/api/actions` | GET | List pending actions (archive, trash, flag) |
| `/api/actions/{id}/approve` | POST | Approve action |
| `/api/actions/{id}/reject` | POST | Reject action |
- PHASE4_QUICKSTART.md – 5-minute setup guide with curl examples
- PHASE4_TEST_REPORT.md – full integration test results + troubleshooting
- MAIL_AI_SETUP.md – provider-specific setup guides
- MAIL_AI_MIGRATION_GUIDE.md – Phase 3 migration details
```
Mail AI Manager (Phase 4)
├── mail_client.py
│   ├── IMAPMailClient (primary)
│   ├── AppleScriptMailClient (fallback)
│   └── HybridMailClient (orchestrator)
├── mail_action_engine.py
│   ├── fetch_unread_mail() – get emails from INBOX
│   ├── run_pipeline() – fetch → classify → route
│   ├── execute_action() – archive, trash, flag, read
│   └── approval queue system
├── app.py (Flask API)
├── database.py (SQLite)
└── llm_engine.py (Ollama integration)
```
```json
{
  "mail_imap_host": "imap.gmail.com",
  "mail_imap_port": "993",
  "mail_imap_username": "your-email@gmail.com",
  "mail_imap_password": "your-app-password",
  "mail_account_name": "Gmail",
  "ollama_model": "mistral",
  "auto_archive_spam": "true",
  "require_approval": "true",
  "auto_threshold": "0.90"
}
```

- IMAP connect: ~500ms
- Fetch 30 emails: ~2-3s
- Classify 30 emails: ~10-30s (depends on LLM model)
- Full pipeline (30 emails): ~15-40s
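How the `auto_threshold` and `require_approval` settings interact can be pictured with a small routing sketch. This is illustrative logic, not the repo's actual code:

```python
# Illustrative routing of classified emails using the config fields above.

def route(classification, confidence, auto_threshold=0.90, require_approval=True):
    """Auto-execute only high-confidence calls, and only when approval is off."""
    if confidence >= auto_threshold and not require_approval:
        return "auto-execute"
    return "approval-queue"

print(route("spam", 0.97, require_approval=False))  # auto-execute
print(route("spam", 0.97))                          # approval-queue (default)
print(route("work", 0.60, require_approval=False))  # approval-queue (low confidence)
```

With the defaults shown in the config (`require_approval: true`), everything lands in the approval queue regardless of confidence, which matches the "all actions require review" guarantee above.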
- ✅ Passwords stored locally in SQLite (no cloud)
- ✅ Use app-specific passwords (not your main account password)
- ✅ 100% local processing – no data leaves your Mac
- ✅ Ollama runs locally – no API calls to external services

Limitations:
- IMAP-compatible provider required
- Unsubscribe requires a `List-Unsubscribe` header in the email
- AppleScript fallback limited to macOS Mail.app
- Local Ollama must be running separately
**IMAP connection failed?**

```bash
python3 << 'EOF'
import imaplib
try:
    imap = imaplib.IMAP4_SSL('imap.gmail.com', 993)
    imap.login('user@gmail.com', 'app-password')
    print("✅ Success!")
except Exception as e:
    print(f"❌ Error: {e}")
EOF
```

**Ollama not running?**

```bash
curl http://localhost:11434/api/tags   # Check status
ollama serve &                         # Start if needed
```

The folder has been renamed from `gmail-ai-manager` to `mail-ai-manager` to better reflect its universal IMAP provider support. All functionality remains the same.
Generate 3D meshes from images using Apple Silicon GPU (MPS).
```bash
cd triposr-pipeline
bash setup.sh      # First time only – installs TripoSR
bash start-ui.sh
# Open: http://localhost:5050
```

Pipeline: your prompt → Stable Diffusion image → TripoSR → OBJ/GLB mesh
- Uses Apple Silicon MPS (Metal Performance Shaders)
- Memory usage: ~6–10GB
- Output: OBJ + GLB files ready for Blender, Unity, and the web
- Among the best open-source image-to-3D options for a 16 GB Mac
AI-powered social media content engine for tax preparation, tax resolution, and bookkeeping firms.
- Automatically generates Instagram, Facebook, and TikTok posts daily at 6 AM
- Uses your local Ollama LLM (`mistral:7b`) – no OpenAI API needed
- 35+ compliance rules block misleading tax claims automatically
- Human review dashboard – Approve ✅, Edit ✏️, or Reject ❌ before posting
- Auto-posts to Facebook/Instagram via the Meta Graph API on approval
- Business contact info (phone, WhatsApp, email, website) auto-appended to every post
```bash
cd tax-ai-social
bash setup.sh
cp .env.example .env
# Edit .env – add your firm info and Meta API credentials
bash start.sh
# Open: http://localhost:5055
```

- ⚡ Generate Now – single post or full daily batch (5 posts)
- 📝 Posts tab – review drafts, approve, edit, or reject
- ⚙️ Business Settings tab – update firm name, phone, email, website, WhatsApp, fax, etc. without touching files
- 6 AM auto-generation – every morning, 5 posts ready for review
- Compliance checker – flags and blocks prohibited phrases before you see them
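At its core, the compliance check is a phrase scan over each draft. A minimal sketch with made-up example phrases (the real rule set has 35+ entries and is not reproduced here):

```python
# Illustrative prohibited-phrase compliance check; phrases are examples only.

PROHIBITED = [
    "guaranteed refund",
    "eliminate your irs debt",
    "100% approval",
]

def compliance_violations(post_text):
    """Return the prohibited phrases found in a draft post (case-insensitive)."""
    text = post_text.lower()
    return [phrase for phrase in PROHIBITED if phrase in text]

draft = "We can get you a GUARANTEED REFUND this tax season!"
print(compliance_violations(draft))  # ['guaranteed refund']
```

Any draft with a non-empty violation list would be blocked before it ever reaches the review dashboard.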
| Platform | Type | Auto-posts? |
|---|---|---|
| Instagram | Image caption | ✅ (with image URL) |
| Facebook | Text post | ✅ |
| TikTok | Video script | ❌ Manual (record yourself) |
- Tax Preparation
- Tax Resolution (IRS debt, payment plans, OIC)
- Bookkeeping
| Model | Size | Best For |
|---|---|---|
| `mistral:7b` | 4.4 GB | General chat, posts, email, social media |
| `deepseek-coder:6.7b` | 3.8 GB | Code generation, debugging, quant analysis |

```bash
ollama pull llama3.2        # Meta's Llama 3.2 (3B)
ollama pull phi4            # Microsoft Phi-4 (14B)
ollama pull qwen2.5-coder   # Qwen coding model
ollama list                 # See all installed models
```

```bash
# Stop Docker services (Open WebUI + SearXNG)
docker compose down

# Start Docker services
docker compose up -d

# View logs
docker compose logs -f

# Restart a specific service
docker compose restart open-webui
```

```
MYLLM/
├── README.md                  # This file
├── docker-compose.yml         # Open WebUI + SearXNG
├── setup.sh                   # One-command setup
├── install-tools.sh           # Install Open WebUI tools
├── shopping_search_tool.py    # Amazon/eBay search tool
├── quant_tool.py              # Quant analysis Open WebUI tool
├── searxng/                   # SearXNG config
├── quant_api/                 # Quant AI FastAPI backend
├── mail-ai-manager/           # Mail AI email manager (IMAP)
├── triposr-pipeline/          # Image → 3D mesh pipeline
│   ├── architecture.svg       # Full stack diagram
│   └── ...
├── tax-ai-social/             # Tax AI Social Media Engine
│   ├── app/
│   │   ├── main.py            # Flask API + routes
│   │   ├── generator.py       # Post generation engine
│   │   ├── compliance.py      # 35+ tax compliance rules
│   │   ├── poster.py          # Meta Graph API posting
│   │   ├── scheduler.py       # 6 AM daily auto-generation
│   │   └── database.py        # SQLite post tracking
│   ├── prompts/               # 7 platform-specific prompts
│   ├── templates/index.html   # Dark-mode dashboard
│   ├── .env.example           # Config template
│   ├── requirements.txt
│   ├── setup.sh
│   └── start.sh
└── whisper-stt/               # Local Voice / Speech-to-Text
    ├── server.py              # OpenAI-compatible STT API
    ├── requirements.txt
    ├── setup.sh
    ├── start.sh
    └── README.md
```
```bash
ollama list    # Check if running
ollama serve   # Start if not running
```

```bash
docker compose down
docker compose up -d
docker compose logs
```

```bash
ollama pull mistral:7b   # Download the model
ollama serve             # Make sure Ollama is running
```

Edit docker-compose.yml or the relevant .env file to change ports.
Everything runs 100% locally:
- No data sent to OpenAI, Anthropic, or any cloud service
- SearXNG proxies web searches anonymously
- Mail AI only reads emails locally over IMAP – nothing uploaded
- Tax AI Social posts never leave until you click Approve
- All AI inference runs on your Mac's Apple Silicon chip
100% local speech-to-text for Open WebUI – talk to your AI instead of typing.

```bash
cd whisper-stt
bash start.sh
# Server starts at http://localhost:9000
```

The first run downloads the Whisper small model (~460 MB, one time).

- Open http://localhost:3000
- Profile icon → Settings → Audio
- Speech to Text:
  - Engine: OpenAI API
  - Base URL: `http://localhost:9000/v1`
  - API Key: `local`
- Save → click the 🎤 microphone in chat to speak!
| Model | Size | Speed | Best For |
|---|---|---|---|
| `tiny` | ~75 MB | ~0.3s | Quick testing |
| `base` | ~145 MB | ~0.5s | English-only |
| `small` | ~460 MB | ~1s | Recommended ✅ |
| `medium` | ~1.5 GB | ~3s | Best accuracy |

```bash
WHISPER_MODEL=medium bash start.sh   # Use a larger model
```

- ✅ 100% private – audio never leaves your Mac
- ✅ Works offline after first setup
- ✅ Multi-language support (auto-detects)
- ✅ OpenAI-compatible API format
Track your stock/crypto holdings with live prices, gain/loss, and allocation chart.
Open: http://localhost:8000/portfolio
| Feature | Details |
|---|---|
| 📋 Positions table | Ticker, shares, avg cost, current price, gain/loss, day change |
| 🍩 Allocation donut chart | Visual portfolio breakdown (Chart.js) |
| 👁️ Watchlist | Track tickers without holding them |
| ⏳ Live prices | Auto-fetches via Yahoo Finance on page load |
| ➕ Add / remove | Simple form – no file editing needed |
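The gain/loss and allocation columns come down to simple arithmetic over your positions. A rough sketch with invented tickers and prices (not the repo's code):

```python
# Back-of-the-envelope gain/loss and allocation math; data is made up.

positions = [
    {"ticker": "AAPL", "shares": 10, "avg_cost": 150.0, "price": 180.0},
    {"ticker": "BTC-USD", "shares": 0.5, "avg_cost": 40000.0, "price": 60000.0},
]

def summarize(positions):
    """Return (ticker, market value, gain %, allocation %) per position."""
    total = sum(p["shares"] * p["price"] for p in positions)
    rows = []
    for p in positions:
        value = p["shares"] * p["price"]
        gain_pct = (p["price"] - p["avg_cost"]) / p["avg_cost"] * 100
        rows.append((p["ticker"], round(value, 2), round(gain_pct, 1),
                     round(value / total * 100, 1)))
    return rows

for row in summarize(positions):
    print(row)
```

The allocation percentages are exactly what the donut chart visualizes; the live-price fetch just replaces the hard-coded `price` values here.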
```bash
cd quant_api
python main.py
# Open: http://localhost:8000/portfolio
```

Detect meetings in emails and add them to Google Calendar automatically.
Open: http://localhost:5051 → click 📅 Calendar in the sidebar

| Feature | Details |
|---|---|
| 🔍 Meeting detection | Scans emails for Zoom links, time mentions, meeting keywords |
| 🤖 LLM extraction | Uses Ollama to extract date/time/location from the email body |
| 📅 One-click add | Click 📅 Add to Calendar on any detected meeting email |
| ➕ Manual events | Create events directly from the dashboard |
| 📆 Upcoming view | See the next 7 days of Google Calendar events |

Requires: Google Calendar API scope (enabled automatically when you re-connect Gmail OAuth)

⚠️ If you already connected Gmail, click 🔄 Re-connect Gmail in Settings to add the Calendar scope.
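The meeting-detection pre-filter can be approximated with keyword and pattern matching before the LLM ever sees the body. The patterns below are illustrative, not the actual rule set:

```python
# Illustrative keyword/pattern pre-filter for meeting detection.
import re

MEETING_PATTERNS = [
    r"zoom\.us/j/\d+",               # Zoom join links
    r"\b\d{1,2}:\d{2}\s?(am|pm)\b",  # time mentions like "3:30 pm"
    r"\b(meeting|call|sync|standup)\b",
]

def looks_like_meeting(body):
    """True if any meeting-ish pattern appears in the email body."""
    body = body.lower()
    return any(re.search(p, body) for p in MEETING_PATTERNS)

print(looks_like_meeting("Join us at https://zoom.us/j/123456 at 3:30 pm"))  # True
print(looks_like_meeting("Here is your receipt for order #991"))             # False
```

Only emails that pass this cheap filter would need the more expensive Ollama extraction step.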
100% local text-to-image generation on your Mac. No API key. No cloud. No limits.
```bash
cd stable-diffusion
bash setup.sh   # First time only (~2-3 min)
bash start.sh
# Open: http://localhost:5050
```

| Feature | Details |
|---|---|
| ⚡ SDXL-Turbo | Default model – ~3–8s per image on M1/M2/M3 |
| 🎨 6 Models | SDXL-Turbo, SD 2.1, SD 1.5, DreamShaper, OpenJourney, Realistic Vision |
| 📐 Any size | 512×512 to 1280×1280 with 1:1, 16:9, 9:16, 4:3 presets |
| 🎲 Seed control | Reproduce or vary any image |
| 🗂️ History | Browse all generated images |
| 📱 Social Post tab | Auto-generate images for Tax AI Social posts |
When Stable Diffusion is running (:5050), Tax AI Social will automatically generate images for Instagram and Facebook posts. Just start both services:
```bash
# Terminal 1
cd stable-diffusion && bash start.sh

# Terminal 2
cd tax-ai-social && bash start.sh
```

Images are auto-selected based on post content (IRS/debt → resolution style, family → family style, etc.).
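That content-to-style mapping can be pictured as a keyword lookup. The keyword lists and style names below are assumptions for illustration, not the repo's exact tables:

```python
# Illustrative post-content → image-style mapping; keywords are assumed.

STYLE_KEYWORDS = {
    "resolution": ["irs", "debt", "payment plan", "offer in compromise"],
    "family": ["family", "child", "dependent"],
}

def pick_style(post_text, default="office"):
    """Return the first style whose keywords appear in the post, else a default."""
    text = post_text.lower()
    for style, keywords in STYLE_KEYWORDS.items():
        if any(k in text for k in keywords):
            return style
    return default

print(pick_style("Struggling with IRS debt? Ask about a payment plan."))  # resolution
print(pick_style("Claim the child tax credit for every dependent."))      # family
print(pick_style("Tax deadline is April 15th!"))                          # office
```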
```bash
# Generate an image via the API
curl -X POST http://localhost:5050/api/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "professional tax accountant, modern office, 4K", "model": "sdxl-turbo", "steps": 4}'
```

```python
# Use in Python
from stable_diffusion.sd_client import generate_for_post
img_url = generate_for_post("Tax deadline April 15th", specialty="tax_preparation")
```

Roadmap:
- ~~Stable Diffusion image generation~~ → ✅ Done! (`stable-diffusion/` + Tax AI Social integration)
- LinkedIn support for Tax AI Social
- ~~Quant AI portfolio tracker dashboard~~ → ✅ Done! (`/portfolio`)
- ~~Gmail AI calendar integration~~ → ✅ Done! (Calendar tab)
- ~~Voice interface for Open WebUI~~ → ✅ Done! (Whisper STT)
Fine-tune and train AI models on your custom data. LoRA training, data processing, export to Ollama. Built with Streamlit.
```bash
cd ai-model-training
streamlit run app.py
# Open: http://localhost:8501
```

- 📄 Data import – load training data from PDF, TXT, or CSV files
- 🔄 LoRA training – fine-tune models without retraining from scratch
- 🧠 Model selection – train on any Ollama model (mistral:7b, llama2, etc.)
- 📊 Training dashboard – monitor loss, accuracy, and training progress in real time
- 💾 Export to Ollama – save trained models in Ollama-compatible formats
- ⚙️ Hyperparameter tuning – control learning rate, batch size, epochs, etc.
- ✅ Data validation – automatic data cleaning and quality checks
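The data-processing stage's chunking step might look roughly like this; the chunk size and overlap values are illustrative defaults, not the app's actual settings:

```python
# Illustrative overlapping-chunk splitter for preparing training examples.

def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping character chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "x" * 120
chunks = chunk_text(doc)
print(len(chunks), [len(c) for c in chunks])  # 3 [50, 50, 40]
```

Real pipelines usually chunk by tokens rather than characters, but the overlap idea, keeping shared context between adjacent training examples, is the same.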
```bash
cd ai-model-training
pip install -r requirements.txt
streamlit run app.py
# Open: http://localhost:8501
```

| Feature | Details |
|---|---|
| 📄 Multi-format input | PDF, TXT, JSON, CSV training data |
| 🎯 LoRA training | Low-rank adaptation – fast fine-tuning |
| 🧠 Model selection | Any Ollama model supported |
| 📈 Real-time monitoring | Loss curves, accuracy tracking |
| 💾 Export formats | GGUF, SafeTensors, PyTorch |
| 🚀 One-click deploy | Export directly to Ollama |
| 🧹 Data processing | Auto-tokenization, chunking, filtering |
Your Data (PDF/TXT) → Data Processing → LoRA Training → Model Export → Ollama
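The export step typically produces an Ollama Modelfile; a hypothetical minimal example applying a trained LoRA adapter might look like this (the paths are placeholders, not files the studio actually writes):

```
# Hypothetical Modelfile – base model plus a locally trained LoRA adapter
FROM mistral:7b
ADAPTER ./lora-adapter.safetensors
PARAMETER temperature 0.7
```

You would then register it with something like `ollama create my-custom-model -f Modelfile` and chat with it like any other local model.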
- Specialized domain knowledge – train on your company docs, legal contracts, industry papers
- Custom business logic – fine-tune to your specific use cases (tax terms, medical jargon, code style)
- Faster inference – create smaller, specialized models for specific tasks
- Privacy-first – all training happens locally on your Mac
Built with ❤️ using Ollama · Open WebUI · SearXNG · TripoSR · Streamlit · Flask · Apple Silicon 🍎