Predictive Maintenance MVP

Production-grade predictive maintenance scaffold with:

  • ML models predicting 24-72h failure risk, fault class, and remaining useful life (RUL)
  • custom hybrid RAG over PostgreSQL + pgvector
  • agentic orchestration in LangGraph
  • FastAPI backend and Next.js frontend
  • uv-based Python 3.12 project management

Architecture

The backend is organized into three layers:

  1. ml: synthetic-first model training and online scoring
  2. rag: hybrid retrieval abstractions, seeded knowledge corpus, PostgreSQL schema
  3. workflow: LangGraph pipeline for prediction, retrieval, diagnosis, recommendation, guardrails, and escalation (a minimal wiring sketch follows this list)
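
A minimal sketch of that wiring, using illustrative node names and state fields rather than the project's actual identifiers:

# Hypothetical LangGraph wiring; the real node logic lives in workflow/.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AssetState(TypedDict, total=False):
    asset_id: str
    prediction: dict          # failure risk, fault class, RUL
    evidence: list[str]       # retrieved corpus chunks
    recommendation: str
    needs_escalation: bool

def predict(state: AssetState) -> dict:
    return {"prediction": {"risk_24_72h": 0.12}}      # placeholder score

def retrieve(state: AssetState) -> dict:
    return {"evidence": ["manual section 4.2"]}       # placeholder chunks

def diagnose(state: AssetState) -> dict:
    return {}

def recommend(state: AssetState) -> dict:
    return {"recommendation": "inspect bearing"}

def guardrails(state: AssetState) -> dict:
    return {"needs_escalation": state["prediction"]["risk_24_72h"] > 0.8}

def escalate(state: AssetState) -> dict:
    return {}

graph = StateGraph(AssetState)
for name, fn in [("predict", predict), ("retrieve", retrieve),
                 ("diagnose", diagnose), ("recommend", recommend),
                 ("guardrails", guardrails), ("escalate", escalate)]:
    graph.add_node(name, fn)
graph.set_entry_point("predict")
graph.add_edge("predict", "retrieve")
graph.add_edge("retrieve", "diagnose")
graph.add_edge("diagnose", "recommend")
graph.add_edge("recommend", "guardrails")
graph.add_conditional_edges(
    "guardrails",
    lambda s: "escalate" if s.get("needs_escalation") else "done",
    {"escalate": "escalate", "done": END},
)
graph.add_edge("escalate", END)
app = graph.compile()

app.invoke({"asset_id": "pump-7"}) then runs the full chain and takes the escalation branch only when the guardrails node flags the result.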

The app now targets local PostgreSQL + pgvector first, with in-memory mode kept only as a fallback for bootstrap and tests. Model integration is shaped for Azure AI Foundry using the OpenAI-compatible endpoint pattern.
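
The hybrid retriever's core idea, blending pgvector similarity with PostgreSQL full-text rank, can be sketched as below; the table name, column names, and 0.7/0.3 weights are assumptions, not the shipped schema:

# Hypothetical hybrid query; the real schema lives in infra/ and rag/.
import psycopg

def hybrid_search(dsn: str, query_text: str, query_vec: list[float], k: int = 5):
    # Serialize the embedding as a pgvector literal.
    vec_literal = "[" + ",".join(f"{x:.6f}" for x in query_vec) + "]"
    sql = """
        SELECT id, content,
               0.7 * (1 - (embedding <=> %(vec)s::vector))
             + 0.3 * ts_rank(to_tsvector('english', content),
                             plainto_tsquery('english', %(q)s)) AS score
        FROM corpus_chunks  -- assumed table name
        ORDER BY score DESC
        LIMIT %(k)s
    """
    with psycopg.connect(dsn) as conn:
        return conn.execute(sql, {"vec": vec_literal, "q": query_text, "k": k}).fetchall()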

Repository Layout

src/ragpredictivemaintenance/
  api/            FastAPI routes
  core/           settings, logging, app lifecycle
  domain/         shared enums and schemas
  ml/             feature engineering, synthetic training, scoring
  observability/  Prometheus metrics
  rag/            hybrid retriever and corpus loader
  storage/        repositories and app container
  workflow/       LangGraph orchestration
frontend/         Next.js dashboard scaffold
infra/            PostgreSQL bootstrap and AKS manifests
scripts/          utility entrypoints
tests/            API and workflow tests

Local Development

Prerequisites:

  • Python 3.12
  • uv
  • Node.js 20+ for the frontend
  • Docker for local PostgreSQL

Setup:

uv sync
uv run ragpm-migrate
uv run uvicorn ragpredictivemaintenance.main:app --reload --app-dir src

The backend seeds the knowledge corpus into PostgreSQL on startup and trains a bootstrap ML model on synthetic data if no model artifact exists.

Useful Commands

uv run pytest
uv run ruff check .
uv run ragpm-migrate
uv run python scripts/train_models.py
uv run python scripts/e2e_smoke.py
docker compose up --build

Windows cmd.exe shortcut:

run_local_e2e.cmd

That script:

  • starts Docker Desktop if needed
  • waits for the Docker daemon
  • runs docker compose up --build -d
  • waits for http://localhost:8000/health
  • runs scripts\e2e_smoke.py
  • prints the frontend and API URLs

API Overview

  • POST /api/v1/sensors/ingest
  • POST /api/v1/predictions/score
  • GET /api/v1/assets/{asset_id}/health
  • POST /api/v1/recommendations/generate
  • POST /api/v1/approvals
  • POST /api/v1/feedback
  • GET /health
  • GET /metrics
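
As a quick illustration, a scoring request might look like the following; the payload fields here are assumptions, so check the schemas in domain/ for the real contract:

# Hypothetical client call; field names are illustrative, not the actual schema.
import httpx

payload = {
    "asset_id": "pump-7",
    "readings": [{"sensor": "vibration_rms", "value": 4.2, "ts": "2024-01-01T00:00:00Z"}],
}
resp = httpx.post("http://localhost:8000/api/v1/predictions/score", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g. failure risk, fault class, RUL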

Environment

Copy .env.example to .env and fill in the production settings for PostgreSQL and Azure AI Foundry.

Primary Foundry variables:

  • OPENAI_BASE_URL=https://<resource>.services.ai.azure.com/openai/v1/
  • OPENAI_API_KEY=<resource-key>
  • FOUNDRY_DEPLOYMENT_NAME=<deployment-name>
  • FOUNDRY_PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project> (optional)
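
With those variables set, the deployment is reachable through the standard OpenAI SDK; a minimal sketch:

import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["OPENAI_BASE_URL"],  # OpenAI-compatible Foundry endpoint
    api_key=os.environ["OPENAI_API_KEY"],
)
resp = client.chat.completions.create(
    model=os.environ["FOUNDRY_DEPLOYMENT_NAME"],
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)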

Compatibility aliases are still supported for one transition cycle:

  • AZURE_OPENAI_ENDPOINT
  • AZURE_OPENAI_API_KEY
  • AZURE_OPENAI_CHAT_DEPLOYMENT

Runtime split:

  • host-run backend: DATABASE_URL=postgresql+psycopg://postgres:postgres@localhost:5432/ragpm
  • containerized API: Compose overrides the database host to postgres:5432
  • containerized frontend: Compose uses INTERNAL_API_BASE_URL=http://api:8000 and NEXT_PUBLIC_API_BASE_URL=http://localhost:8000
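
For reference, a host-run process would typically pick up DATABASE_URL along these lines (a minimal SQLAlchemy sketch, not the app's actual wiring):

import os
from sqlalchemy import create_engine, text

engine = create_engine(
    os.getenv("DATABASE_URL",
              "postgresql+psycopg://postgres:postgres@localhost:5432/ragpm")
)
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))  # cheap connectivity check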
