Production-grade predictive maintenance scaffold with:
- full ML prediction for failure risk (24-72h), fault class, and remaining useful life
- custom hybrid RAG over PostgreSQL + pgvector
- agentic orchestration in LangGraph
- FastAPI backend and Next.js frontend
- uv-based Python 3.12 project management
The backend is organized into three layers:
- ml: synthetic-first model training and online scoring
- rag: hybrid retrieval abstractions, seeded knowledge corpus, PostgreSQL schema
- workflow: LangGraph pipeline for prediction, retrieval, diagnosis, recommendation, guardrails, and escalation
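A minimal sketch of how these stages could be wired in LangGraph; node names, state fields, and stub bodies are illustrative stand-ins, not the actual workflow/ code:

```python
# Sketch only: WorkflowState fields and stub bodies are illustrative
# stand-ins for the real workflow/ module.
from typing import TypedDict

from langgraph.graph import END, StateGraph


class WorkflowState(TypedDict, total=False):
    asset_id: str
    prediction: dict      # failure risk, fault class, remaining useful life
    context: list[str]    # retrieved knowledge snippets
    diagnosis: str
    recommendation: str
    escalate: bool


def predict(state: WorkflowState) -> dict:
    return {"prediction": {"failure_risk": 0.12}}  # ml layer scorer goes here

def retrieve(state: WorkflowState) -> dict:
    return {"context": ["..."]}                    # hybrid pgvector retrieval

def diagnose(state: WorkflowState) -> dict:
    return {"diagnosis": "..."}                    # LLM diagnosis over context

def recommend(state: WorkflowState) -> dict:
    return {"recommendation": "..."}

def guardrails(state: WorkflowState) -> dict:
    return {"escalate": False}                     # validate, flag for review

def escalate(state: WorkflowState) -> dict:
    return {}                                      # hand off to a human approver


graph = StateGraph(WorkflowState)
for name, fn in [("predict", predict), ("retrieve", retrieve),
                 ("diagnose", diagnose), ("recommend", recommend),
                 ("guardrails", guardrails), ("escalate", escalate)]:
    graph.add_node(name, fn)

graph.set_entry_point("predict")
for a, b in [("predict", "retrieve"), ("retrieve", "diagnose"),
             ("diagnose", "recommend"), ("recommend", "guardrails")]:
    graph.add_edge(a, b)
# Escalate only when guardrails flag the result; otherwise finish.
graph.add_conditional_edges("guardrails",
                            lambda s: "escalate" if s.get("escalate") else END)
graph.add_edge("escalate", END)
pipeline = graph.compile()
```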
The app now targets local PostgreSQL + pgvector first, with in-memory mode kept only as a fallback for bootstrap and tests. Model integration is shaped for Azure AI Foundry using the OpenAI-compatible endpoint pattern.
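For orientation, "hybrid" retrieval here blends pgvector similarity with PostgreSQL full-text rank. A minimal sketch, assuming a hypothetical documents table and illustrative 0.7/0.3 weights (see rag/ for the real retriever and infra/ for the schema):

```python
# Sketch only: the "documents" table, its columns, and the score weighting
# are assumptions, not the actual schema.
import psycopg

HYBRID_SQL = """
SELECT id, content,
       1 - (embedding <=> %(query_vec)s::vector)              AS vec_score,
       ts_rank(to_tsvector('english', content),
               plainto_tsquery('english', %(query_text)s))    AS text_score
FROM documents
ORDER BY 0.7 * (1 - (embedding <=> %(query_vec)s::vector))
       + 0.3 * ts_rank(to_tsvector('english', content),
                       plainto_tsquery('english', %(query_text)s)) DESC
LIMIT %(k)s;
"""

def hybrid_search(conn: psycopg.Connection, query_text: str,
                  query_vec: list[float], k: int = 5) -> list[tuple]:
    """Blend cosine similarity (pgvector) with keyword rank (tsvector)."""
    params = {"query_text": query_text, "query_vec": str(query_vec), "k": k}
    with conn.cursor() as cur:
        cur.execute(HYBRID_SQL, params)
        return cur.fetchall()
```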
Project layout:

```
src/ragpredictivemaintenance/
  api/            FastAPI routes
  core/           settings, logging, app lifecycle
  domain/         shared enums and schemas
  ml/             feature engineering, synthetic training, scoring
  observability/  Prometheus metrics
  rag/            hybrid retriever and corpus loader
  storage/        repositories and app container
  workflow/       LangGraph orchestration
frontend/         Next.js dashboard scaffold
infra/            PostgreSQL bootstrap and AKS manifests
scripts/          utility entrypoints
tests/            API and workflow tests
```
Prerequisites:
- Python 3.12
- uv
- Node.js 20+ for the frontend
- Docker for local PostgreSQL
Setup:
```
uv sync
uv run ragpm-migrate
uv run uvicorn ragpredictivemaintenance.main:app --reload --app-dir src
```

The backend seeds the knowledge corpus into PostgreSQL on startup and trains a bootstrap ML model on synthetic data if no model artifact exists.
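That startup behavior is roughly a FastAPI lifespan hook; a sketch with hypothetical helper names and artifact path (the real hook lives in core/):

```python
# Sketch only: seed_corpus / train_bootstrap_model and the artifact path are
# hypothetical names standing in for the real core/ lifecycle code.
from contextlib import asynccontextmanager
from pathlib import Path

from fastapi import FastAPI

MODEL_ARTIFACT = Path("artifacts/model.joblib")  # assumed location


async def seed_corpus() -> None:
    """Placeholder: the real loader upserts rag/ corpus docs into PostgreSQL."""


def train_bootstrap_model(path: Path) -> None:
    """Placeholder: the real trainer fits on synthetic data and saves to path."""


@asynccontextmanager
async def lifespan(app: FastAPI):
    await seed_corpus()              # idempotent insert of the knowledge docs
    if not MODEL_ARTIFACT.exists():  # no artifact -> train on synthetic data
        train_bootstrap_model(MODEL_ARTIFACT)
    yield                            # app serves requests here


app = FastAPI(lifespan=lifespan)
```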
Tests and lint:

```
uv run pytest
uv run ruff check .
```

Utility scripts:

```
uv run ragpm-migrate
uv run python scripts/train_models.py
uv run python scripts/e2e_smoke.py
```
Docker Compose:

```
docker compose up --build
```

Windows cmd.exe shortcut:

```
run_local_e2e.cmd
```

That script:
- starts Docker Desktop if needed
- waits for the Docker daemon
- runs `docker compose up --build -d`
- waits for `http://localhost:8000/health` (see the wait-loop sketch after this list)
- runs `scripts\e2e_smoke.py`
- prints the frontend and API URLs
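The wait-for-health step amounts to a retry loop; a Python sketch of the equivalent logic, with illustrative timeouts:

```python
# Sketch only: a Python equivalent of the script's health wait;
# timeout values are illustrative.
import time

import httpx


def wait_for_health(url: str = "http://localhost:8000/health",
                    timeout_s: float = 120.0) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            if httpx.get(url, timeout=2.0).status_code == 200:
                return
        except httpx.TransportError:
            pass  # API not accepting connections yet
        time.sleep(2.0)
    raise TimeoutError(f"{url} not healthy after {timeout_s}s")
```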
API endpoints:
- `POST /api/v1/sensors/ingest`
- `POST /api/v1/predictions/score`
- `GET /api/v1/assets/{asset_id}/health`
- `POST /api/v1/recommendations/generate`
- `POST /api/v1/approvals`
- `POST /api/v1/feedback`
- `GET /health`
- `GET /metrics`
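As a usage illustration, an ingest-then-score round trip might look like the following; the payload fields are guesses at the schema, not the documented contract (see domain/ for the actual schemas):

```python
# Sketch only: payload shapes are assumptions; check domain/ for the
# real request/response contracts.
import httpx

BASE = "http://localhost:8000"

reading = {
    "asset_id": "pump-017",
    "timestamp": "2024-05-01T12:00:00Z",
    "metrics": {"vibration_mm_s": 4.2, "temperature_c": 71.5},
}
httpx.post(f"{BASE}/api/v1/sensors/ingest", json=reading).raise_for_status()

score = httpx.post(f"{BASE}/api/v1/predictions/score",
                   json={"asset_id": "pump-017"})
score.raise_for_status()
print(score.json())  # e.g. failure risk, fault class, remaining useful life

health = httpx.get(f"{BASE}/api/v1/assets/pump-017/health")
print(health.json())
```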
Copy `.env.example` to `.env` and fill in the production settings for PostgreSQL and Azure AI Foundry.
Primary Foundry variables:
```
OPENAI_BASE_URL=https://<resource>.services.ai.azure.com/openai/v1/
OPENAI_API_KEY=<resource-key>
FOUNDRY_DEPLOYMENT_NAME=<deployment-name>
FOUNDRY_PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>  # optional
```
Compatibility aliases are still supported for one transition cycle:
- AZURE_OPENAI_ENDPOINT
- AZURE_OPENAI_API_KEY
- AZURE_OPENAI_CHAT_DEPLOYMENT
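With those variables set, model calls follow the standard OpenAI-compatible client pattern. A sketch, with a simplified alias fallback (the real resolution lives in the settings layer and may map the Azure endpoint differently):

```python
# Sketch only: illustrates the OpenAI-compatible pattern; the alias
# fallback shown here is a simplification of the real settings logic.
import os

from openai import OpenAI

base_url = os.getenv("OPENAI_BASE_URL") or os.getenv("AZURE_OPENAI_ENDPOINT")
api_key = os.getenv("OPENAI_API_KEY") or os.getenv("AZURE_OPENAI_API_KEY")
deployment = (os.getenv("FOUNDRY_DEPLOYMENT_NAME")
              or os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT"))

client = OpenAI(base_url=base_url, api_key=api_key)
resp = client.chat.completions.create(
    model=deployment,  # Foundry deployments are addressed as the model name
    messages=[{"role": "user", "content": "Summarize pump-017 fault history."}],
)
print(resp.choices[0].message.content)
```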
Runtime split:
- host-run backend: `DATABASE_URL=postgresql+psycopg://postgres:postgres@localhost:5432/ragpm`
- containerized API: Compose overrides the host to `postgres:5432`
- containerized frontend: Compose uses `INTERNAL_API_BASE_URL=http://api:8000` and `NEXT_PUBLIC_API_BASE_URL=http://localhost:8000`
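A pydantic-settings sketch of how the backend side of this split might be expressed (the real Settings class in core/ may differ; the frontend variables are consumed by Next.js, not by Python):

```python
# Sketch only: the real Settings class in core/ may name and validate
# fields differently. INTERNAL_API_BASE_URL / NEXT_PUBLIC_API_BASE_URL
# belong to the Next.js frontend, so they are not modeled here.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Host-run default; Compose overrides the host part to postgres:5432.
    database_url: str = (
        "postgresql+psycopg://postgres:postgres@localhost:5432/ragpm"
    )


settings = Settings()  # DATABASE_URL from the environment wins over the default
```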