Personal Data Science Projects
Updated Feb 15, 2023 · Jupyter Notebook
Assess fairness of machine learning models and choose an appropriate fairness metric for your use case with Fairlearn
Demos on teaching your models to play fair with Fairlearn.
Learn different techniques for mitigating fairness-related harms using Fairlearn.
An end-to-end MLOps pipeline for a production-grade fraud detection model. This project demonstrates best practices including data versioning (DVC), experiment tracking (MLflow), CI/CD (GitHub Actions), containerization (Docker), deployment on GKE, and advanced model analysis (poisoning attacks, drift, fairness, explainability).
Drop-in encrypted Fairlearn metrics over CKKS. Same API surface; ciphertext arithmetic via TenSEAL or OpenFHE.
Microsoft Ignite - Getting started on your health-tech journey using responsible AI
AI-powered bias detection for datasets and ML models — with fairness metrics, natural language reports, and explainability tools.
Demos of Fairlearn and InterpretML, as described in my article on responsible AI.
An ethically-aware deep learning project to predict credit card offer acceptance while mitigating income-based bias using SHAP, Fairlearn, and AIF360.
A platform developed with Cash App to help ML engineers detect and visualize biases in models using Fairlearn. Features include a collaborative and interactive dashboard (React, Chart.js), a Flask backend, and a secure MySQL database for data storage and analysis.
🔬 Drop in any ML model → get SHAP explainability, fairness audit & drift detection in seconds
Practical, implementation-ready AI governance framework aligned to NIST AI RMF — automated risk scoring, data lineage validation, bias detection, model cards, and a governance dashboard. Built for enterprise architects deploying AI in regulated environments.
A comprehensive bias analysis on bank loan data, examining potential unfairness in credit quality predictions across age groups
This repository accompanies my thesis. The goal was to find a biased dataset and mitigate its bias; this work lives under the patients directory. See the README for details.
Student Success Model (SSM)
A demonstration of detecting and mitigating bias in AI.
Mitigating bias in our model for identifying Glioblastoma Multiforme (brain tumors) using XGBoost.
Fairness-aware predictive modeling using Random Forest, Equalized Odds constraints, and fairness–performance tradeoff analysis on the UCI Adult Income dataset.