
ChurnFlow: End-to-End MLOps Pipeline

A production-grade MLOps pipeline for telecom customer churn prediction — integrating FastAPI model serving, Docker Compose multi-service orchestration, and GitHub Actions CI/CD in a single end-to-end system.

Architecture MLOps · Full Lifecycle Pipeline
Tech Stack
FastAPI EvidentlyAI MLflow Docker GitHub Actions scikit-learn Python

80%

Baseline Model Accuracy

AUC 84%

Predictive Power

CI/CD

Fully Automated GitHub Actions Pipeline

The Problem

Most ML models never make it to production — and those that do degrade silently as live data drifts away from the training data

The gap between a trained ML model and a production ML system is one of the most underestimated challenges in applied machine learning. A model that performs well in a notebook faces a gauntlet of production realities: it needs a serving endpoint that handles real-time requests, a monitoring layer that detects when input data drifts away from the training distribution, an automated retraining trigger that responds to that drift, an experiment tracking system that maintains a full audit trail of every model version, and a CI/CD pipeline that tests, builds, and deploys changes without manual intervention. Without all of these components working together, the model either never ships or ships and silently degrades as the real world diverges from its training data — either outcome a complete failure to deliver ML value.

The Solution

A full MLOps lifecycle system — from training to serving to drift detection to CI/CD — built as a production-ready multi-service platform

ChurnFlow is an end-to-end MLOps pipeline for telecom customer churn prediction that demonstrates the full production ML lifecycle in a single integrated system. A scikit-learn Logistic Regression model is trained on telecom churn data, serialized with joblib, and served via a FastAPI REST endpoint at /predict for real-time inference. MLflow tracks every experiment — parameters, metrics, and model artifacts — maintaining a complete versioned audit trail. EvidentlyAI monitors incoming prediction data for drift and automatically triggers model retraining when distribution shifts are detected. A Docker Compose configuration orchestrates three separate containers — API, monitoring, and MLflow — as an isolated, reproducible multi-service environment. GitHub Actions drives the CI/CD pipeline: on every merge to main, automated tests run via pytest, a Docker image is built and pushed to Docker Hub, and the updated system is ready for deployment — all without manual intervention.

Key Outcome

A production-ready MLOps system that demonstrates the full ML engineering lifecycle — real-time serving, drift-triggered retraining, experiment tracking, multi-service containerization, and automated CI/CD — built around a telecom churn prediction model that achieves 80% accuracy and AUC of 84%.

Technical Deep Dive

Architecture & Design

MLOps Pipeline

Model Development

Step 1

Data Ingestion

data_ingestion.py · Raw telecom churn data loading

Step 2

Feature Engineering

feature_engineering.py · pandas preprocessing pipeline

Step 3

Model Training

train_model.py · Logistic Regression · 80% acc · AUC 84%

Experiment Tracking · MLflow

Versioned Model Registry

Logs parameters, metrics, and artifacts for every run · Full audit trail across model versions · Stored in mlruns/ · UI accessible at /mlflow

Model Serving

REST API · FastAPI

Real-Time Inference Endpoint

app.py · Serves on-demand predictions at localhost:8000/predict · JSON request/response · joblib model loading · Swagger UI docs

Monitoring & Automated Retraining

Drift Detection · EvidentlyAI

Data & Prediction Monitoring

monitor.py · Monitors feature distributions vs. training baseline · Live dashboard at localhost:8001/view_report · Exports report.html

Automated Retraining

Drift-Triggered Pipeline

retrain.py · Triggered automatically on detected drift · New model versioned in MLflow · Endpoint updated

Containerization · Docker Compose

Container 1

API Service

api.Dockerfile · FastAPI inference container

Container 2

Monitoring Service

monitor.Dockerfile · EvidentlyAI drift monitoring

Container 3

MLflow Service

mlflow.Dockerfile · Experiment tracking UI + artifact store

CI/CD · GitHub Actions

Step 1 · On Push/PR

Automated Tests

pytest suite runs on every push and pull request to main

Step 2 · On Pass

Docker Build & Push

All 3 images built and pushed to Docker Hub automatically

Step 3 · On Image Push

Ready for Deployment

Updated system available — zero manual intervention

Model Development

Data Pipeline & Training

Three modular scripts handle the development pipeline sequentially — data_ingestion.py loads and partitions the raw telecom churn dataset, feature_engineering.py applies preprocessing and feature transformations via pandas, and train_model.py trains a scikit-learn Logistic Regression model achieving 80% accuracy and AUC of 84%. The trained model is serialized to the models/ directory via joblib for downstream serving.
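The core of a train_model.py-style script can be sketched as follows — a minimal, hedged stand-in that uses synthetic data in place of the telecom churn dataset loaded by data_ingestion.py, with illustrative feature counts:

```python
# Sketch of the training step (synthetic stand-in data; the real pipeline
# consumes the telecom churn dataset prepared by feature_engineering.py).
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))  # 5 engineered features (illustrative)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"accuracy={acc:.2f} auc={auc:.2f}")

# Serialize for downstream serving, as app.py expects
joblib.dump(model, "churn_model.joblib")
```

The joblib dump at the end is what decouples training from serving: app.py only ever sees the serialized artifact, never the training code.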

Experiment Tracking

MLflow Model Registry

Every training and retraining run is logged to MLflow — parameters, metrics, and model artifacts are stored in the mlruns/ directory and accessible through the MLflow UI. This creates a complete, versioned audit trail across all model iterations, making it possible to compare runs, roll back to a previous version, and trace any production model to the exact data and code that produced it.

Model Serving

FastAPI REST Endpoint

app.py exposes the trained model as a REST API at localhost:8000/predict, accepting JSON feature payloads and returning churn probability scores in real time. The endpoint loads the serialized model via joblib and is documented through an auto-generated Swagger UI — making it immediately consumable by downstream applications, dashboards, or internal tooling without any additional integration work.

Monitoring

EvidentlyAI Drift Detection

monitor.py runs EvidentlyAI against incoming prediction data, comparing live feature distributions against the training baseline to detect data drift and prediction drift. Results are visualized in a live dashboard at localhost:8001/view_report and exported as report.html. When drift is detected, retrain.py is triggered automatically — retraining the model, versioning it in MLflow, and updating the serving endpoint without manual intervention.

Containerization

Docker Compose Multi-Service

Three dedicated Dockerfiles — api.Dockerfile, monitor.Dockerfile, mlflow.Dockerfile — define isolated containers for each service. docker-compose.yml orchestrates all three together, enabling the entire system to be spun up with a single docker compose up --build command. This ensures full environment reproducibility across development, testing, and deployment contexts.
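An illustrative docker-compose.yml for this three-container layout might look like the following — service names, ports, and the volume mount are assumptions matching the endpoints described above, not the repo's exact file:

```yaml
# Illustrative Compose file for the three-service layout (names/ports assumed).
services:
  api:
    build:
      context: .
      dockerfile: api.Dockerfile
    ports:
      - "8000:8000"       # FastAPI /predict endpoint
    depends_on:
      - mlflow
  monitoring:
    build:
      context: .
      dockerfile: monitor.Dockerfile
    ports:
      - "8001:8001"       # EvidentlyAI /view_report dashboard
  mlflow:
    build:
      context: .
      dockerfile: mlflow.Dockerfile
    ports:
      - "5000:5000"       # MLflow tracking UI
    volumes:
      - ./mlruns:/mlflow/mlruns   # persist the experiment store across restarts
```

Mounting mlruns/ as a volume is what lets the MLflow container be rebuilt or replaced without losing the versioned audit trail.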

CI/CD

GitHub Actions Automation

Every push or pull request to main triggers the GitHub Actions pipeline — pytest runs the full test suite, and on a successful pass, all three Docker images are built and pushed to Docker Hub automatically. Artifacts are stored in both GitHub Actions and MLflow. The result is a system where code changes are tested, containerized, and deployment-ready without any manual build or push steps.
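A workflow in this shape might be expressed as the following .github/workflows sketch — job names, secret names, and the use of `docker compose` for the build/push are assumptions, not the repo's exact workflow file:

```yaml
# Illustrative CI/CD workflow (job structure and secret names assumed).
name: ci
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest            # gate: nothing is built unless tests pass

  build-and-push:
    needs: test                # only runs after the test job succeeds
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - run: docker compose build
      - run: docker compose push   # assumes image names are tagged in the Compose file
```

The `needs: test` dependency plus the branch guard is what encodes the "on pass, on main only" deployment policy described above.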

Key Design Decisions

Three isolated containers prevent service coupling

Running the API, monitoring, and MLflow as separate Docker containers rather than a monolithic service means each component can be updated, scaled, or replaced independently. A change to the monitoring logic does not require rebuilding the API container. A new model version does not interrupt the drift detection service. This separation of concerns is a foundational production ML engineering principle that ChurnFlow demonstrates in practice.

Drift detection closes the retraining loop automatically

Without automated drift detection, a deployed model silently degrades as the real-world data distribution shifts away from the training data — the most common failure mode for production ML systems. By connecting EvidentlyAI's drift reports directly to retrain.py, ChurnFlow ensures that model degradation triggers a response rather than just a report. The retraining loop closes automatically — new data, new model, new version in MLflow, updated endpoint.
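The glue between the drift flag and the retraining script can be as small as the hypothetical snippet below — ChurnFlow's exact trigger logic may differ; this only shows the closed-loop pattern:

```python
# Hypothetical drift-to-retrain trigger; the real wiring in ChurnFlow may differ.
import subprocess

def maybe_retrain(drift_detected: bool) -> bool:
    """Kick off retrain.py when the drift report flags dataset drift."""
    if drift_detected:
        # retrain.py retrains, logs the new version to MLflow, updates the endpoint
        subprocess.run(["python", "retrain.py"], check=True)
        return True
    return False
```

The important property is that the decision is made by code, not by a human reading a dashboard — the report becomes an input, not an endpoint.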

CI/CD eliminates manual deployment as a production bottleneck

Manual deployment steps — building images, running tests, pushing to a registry — are a common source of deployment errors and a significant drag on iteration speed. By automating the entire sequence through GitHub Actions, ChurnFlow ensures that every code change that passes tests is automatically containerized and pushed to Docker Hub. This removes the human bottleneck from the deployment path and makes the system's release cycle as fast as the test suite.

Tech Stack

Technology Purpose
scikit-learn Logistic Regression model training and evaluation
FastAPI REST API for real-time churn prediction inference
MLflow Experiment tracking, model versioning, and artifact storage
EvidentlyAI Data and prediction drift detection with automated retraining trigger
Docker / Docker Compose Multi-container orchestration — API, monitoring, and MLflow services
GitHub Actions CI/CD pipeline — automated testing, Docker build, and Docker Hub push
joblib Model serialization and loading for serving
pandas Data preprocessing and feature engineering pipeline
pytest Automated unit and integration testing within CI/CD pipeline
Python Core language and system orchestration

Results & Metrics

What the system delivers

80%

Model Accuracy

Baseline Logistic Regression on telecom churn dataset — validated on held-out test set

AUC 84%

Predictive Power

Area under the ROC curve — strong discriminative power between churn and non-churn customers

CI/CD

Fully Automated Pipeline

GitHub Actions — automated testing, Docker build, and Docker Hub push on every main branch merge

🔁

Closed-loop drift detection and automated retraining

EvidentlyAI continuously monitors incoming feature distributions against the training baseline. When drift is detected, retrain.py triggers automatically — retraining the model, versioning it in MLflow, and updating the serving endpoint without any manual intervention. The full retraining loop closes without human involvement, keeping the production model aligned with live data distributions.

🐳

Full system up in one command — docker compose up --build

Three dedicated Dockerfiles define isolated, reproducible environments for the API, monitoring, and MLflow services. Docker Compose orchestrates all three together — the entire production stack can be spun up from scratch with a single command in any environment. This makes the system portable across development, staging, and production without environment-specific configuration changes.

Zero-touch deployment on every successful merge

Every push to main triggers the GitHub Actions pipeline — pytest runs the full test suite, and on a successful pass all three Docker images are built and pushed to Docker Hub automatically. No manual build steps, no manual pushes, no deployment bottlenecks. Code changes move from commit to deployment-ready container without any developer intervention beyond the merge itself.

📊

Complete versioned audit trail across every model iteration

MLflow logs parameters, metrics, and model artifacts for every training and retraining run — creating a complete, searchable history of every model version the system has ever produced. Any production model can be traced back to the exact data, code, and hyperparameters that generated it, and any previous version can be restored and redeployed from the registry.

🔌

Real-time inference via documented REST API

The FastAPI endpoint at localhost:8000/predict accepts JSON feature payloads and returns churn probability scores in real time, with auto-generated Swagger UI documentation making the API immediately consumable by downstream applications. Any tool or service that can make an HTTP POST request can integrate with the churn prediction system — no SDK, no custom client library required.