AI Engineer · Building production-grade LLM systems
RAG pipelines · Agentic AI · LangGraph · AWS · FastAPI
🏥 AI Medical Chatbot — Production RAG System
Answers medical questions grounded in verified clinical documents using retrieval-augmented generation
- RAG pipeline — PDF ingestion → chunking → Pinecone vector search → Gemini LLM generation
- Deployed on AWS EC2 with full CI/CD via GitHub Actions → Amazon ECR → self-hosted runner
- Stack: LangChain · Pinecone · Google Gemini · FastAPI · Docker · AWS EC2/ECR
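The ingestion stage above (PDF text → chunks → vector store) can be sketched with a minimal fixed-size chunker. This is an illustrative stand-in only: the function name, chunk size, and overlap are assumptions, and the real pipeline would use LangChain text splitters before embedding and upserting to Pinecone.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split raw document text into overlapping chunks.

    Overlap keeps context that straddles a chunk boundary retrievable,
    so the vector search stage doesn't miss answers split across chunks.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back by `overlap` each time
    return chunks
```

Each chunk would then be embedded and upserted into the Pinecone index, with the query-time top-k matches passed to Gemini as grounding context.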
✍️ AI Blog Generator — Agentic LangGraph Pipeline · Live Demo →
Generates high-quality blog posts using a multi-node agentic graph with self-evaluation and revision loops
- LangGraph agentic graph — title → outline → content → quality check → auto-revision loop
- Conditional routing — multilingual translation (Hindi, Marathi, French) via graph edges
- Tone control — professional / casual / academic / humorous writing styles
- Streaming API — real-time node-by-node progress via FastAPI StreamingResponse
- Stack: LangGraph · Groq LLaMA 3.1 · FastAPI · Streamlit · Render · LangSmith
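The node flow with a self-correcting revision loop can be sketched in plain Python, mimicking how LangGraph routes between nodes via a shared state and a conditional edge. Node names and the revision cap are illustrative assumptions, not the project's actual graph definition.

```python
# Each "node" is a function that updates a shared state dict; the quality
# node returns a routing decision, like a LangGraph conditional edge.

def title_node(state: dict) -> dict:
    state["title"] = f"Blog: {state['topic']}"
    return state

def content_node(state: dict) -> dict:
    state["content"] = f"{state['title']} body (rev {state['revisions']})"
    return state

def quality_node(state: dict) -> str:
    # Conditional routing: loop back to revision until the cap, then finish.
    return "revise" if state["revisions"] < state["max_revisions"] else "done"

def run_graph(topic: str, max_revisions: int = 2) -> dict:
    state = {"topic": topic, "revisions": 0, "max_revisions": max_revisions}
    state = title_node(state)
    state = content_node(state)
    while quality_node(state) == "revise":   # auto-revision loop
        state["revisions"] += 1
        state = content_node(state)
    return state
```

In the real pipeline each node would call the LLM (e.g. Groq LLaMA 3.1) and the quality check would score the draft rather than count revisions.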
AI / LLM
Backend & APIs
Cloud & DevOps
I focus on the gap between AI research and production systems — turning LLM capabilities into reliable, observable, maintainable services:
- RAG systems that retrieve grounded context instead of hallucinating
- Agentic pipelines using LangGraph that loop, evaluate, and self-correct
- REST APIs with proper schemas, error handling, and streaming
- CI/CD pipelines that build, push to ECR, and deploy to EC2 automatically
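The streaming pattern mentioned above can be sketched as a generator producing Server-Sent Events lines — the shape a FastAPI StreamingResponse would consume to push node-by-node progress. The stage names and `sse_events` helper are hypothetical, shown only to illustrate the wire format.

```python
from typing import Iterable, Iterator

def sse_events(stages: Iterable[str]) -> Iterator[str]:
    """Yield pipeline progress as Server-Sent Events frames.

    A FastAPI endpoint could wrap this generator in a StreamingResponse
    (media_type="text/event-stream") so clients see each node complete live.
    """
    for stage in stages:
        yield f"data: {stage}\n\n"  # blank line terminates each SSE frame

events = list(sse_events(["title", "outline", "content"]))
```

Pushing progress per node, instead of waiting for the full graph to finish, keeps long agentic runs responsive on the client side.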
I'm actively looking for AI Engineer / ML Engineer roles where I can work on production LLM systems.
- 💼 Currently at General Mills India
- 📍 Based in Mumbai, India
"Ship it, then improve it."