ML experiment tracking with MLflow
While developing a machine learning or deep learning model, you might train several architectures on different datasets, each tuned with various hyperparameters. Keeping track of so many moving parts is extremely challenging, and this creates problems for reproducibility, efficient organization, compliance with requirements, and pipeline management.
Experiment tracking tools bring sanity to this complexity, but most of them are proprietary. MLflow is a free, open-source, and popular platform for AI experiment tracking. Very flexible, it can:
- be combined with any machine learning or deep learning framework (see the sketch after this list),
- work with any hyperparameter optimization tool,
- run a server anywhere.
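As a quick illustration of the first point, here is a minimal sketch using MLflow's framework-agnostic autologging, assuming MLflow and scikit-learn are installed (autologging also covers PyTorch, TensorFlow/Keras, XGBoost, and other frameworks):

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Framework-agnostic autologging: parameters, metrics, and the
# fitted model are captured without explicit logging calls.
mlflow.autolog()

X, y = load_diabetes(return_X_y=True)

with mlflow.start_run():
    RandomForestRegressor(n_estimators=100, max_depth=6).fit(X, y)
```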
While tracking models, datasets, and tuning experiments, it logs metrics, saves checkpoints, and provides an interactive user interface to visualize model performance and system usage. It can also display SHAP (SHapley Additive exPlanations) charts, and it provides a model registry to store and organize models.
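For finer control, runs can also be logged by hand. A minimal sketch, where the experiment name and metric values are placeholders:

```python
import mlflow

mlflow.set_experiment("demo")  # hypothetical experiment name

with mlflow.start_run():
    # Hyperparameters are logged once; metrics can be logged per step.
    mlflow.log_param("learning_rate", 0.01)
    for step, val_loss in enumerate([0.9, 0.5, 0.3]):  # placeholder values
        mlflow.log_metric("val_loss", val_loss, step=step)
```

Running `mlflow ui` from the same directory then serves the interactive tracking interface locally in a browser.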
In this webinar, I will show you how to get started with MLflow and demo some of its most useful experiment tracking features.
MLflow also provides tools for deployment, LLM and agent evaluation, prompt management, and AI application tracking, but these go beyond the scope of classic usage in data science and will not be covered here.
Slides (Click and wait: this reveal.js presentation is heavy and takes some time to load.)
Slides content for easier browsing.