Build a Culture of ML Testing and Model Quality

A Talk by Mohamed Elgendy
CEO & Co-founder, Kolena


About this talk

Machine learning engineers and data scientists spend most of their time testing and validating their models’ performance. But as machine learning products become more integral to our daily lives, the importance of building rigorous testing pipelines for model predictions will only increase.

Current ML evaluation techniques fall short of describing the full picture of model performance. Evaluating ML models with only global metrics (like accuracy or F1 score) produces a low-resolution view of a model's performance and fails to describe how the model behaves across case types, attributes, and scenarios.
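
To make the contrast concrete, here is a minimal, hypothetical sketch (not code from the talk) that computes one global F1 score and then the same metric broken down per scenario, assuming each test case carries a `scenario` tag as metadata:

```python
# Hypothetical sketch: global metric vs. per-scenario breakdown.
# Assumes a test set with true labels, predictions, and a "scenario" tag per case.
import pandas as pd
from sklearn.metrics import f1_score

test = pd.DataFrame({
    "y_true":   [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred":   [1, 0, 1, 0, 0, 0, 0, 1],
    "scenario": ["daytime", "daytime", "daytime", "night",
                 "night", "night", "rain", "rain"],
})

# A single global metric hides where the model actually fails.
print("global F1:", f1_score(test.y_true, test.y_pred))

# Breaking the same metric down by scenario exposes weak slices.
for scenario, group in test.groupby("scenario"):
    print(scenario, "F1:", f1_score(group.y_true, group.y_pred, zero_division=0))
```

In this toy example the global score looks reasonable while the "night" slice scores zero, which is exactly the kind of failure a single aggregate number conceals.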

It is rapidly becoming vital for ML teams to understand exactly when and how their models fail, and to track these cases across model versions so they can identify regressions. We've seen great results from teams that apply unit and functional testing techniques to their models. In this talk, we'll cover why systematic unit testing is important and how to effectively test ML system behavior.
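
As a rough illustration of the idea, the sketch below frames a handful of curated failure cases as pytest unit tests, so a candidate model version fails the suite if it regresses on cases a previous version was fixed on. The `load_model` loader, the `predict` interface, and the example cases are assumptions for illustration, not APIs from the talk or from Kolena:

```python
# Hypothetical sketch of a unit-test-style behavioral check for an ML model,
# written with pytest. All model-specific names below are assumed for illustration.
import pytest

# Curated cases an earlier model version got wrong; a new version must handle
# them, otherwise the suite flags a regression.
KNOWN_HARD_CASES = [
    ("pedestrian partially occluded by a parked car", "pedestrian"),
    ("stop sign covered in snow", "stop_sign"),
]

@pytest.fixture(scope="module")
def model():
    from my_project.models import load_model  # assumed project-specific loader
    return load_model(version="candidate")

@pytest.mark.parametrize("case, expected_label", KNOWN_HARD_CASES)
def test_known_hard_cases(model, case, expected_label):
    # Each curated case acts like a unit test on model behavior.
    assert model.predict(case) == expected_label
```

Run against each new model version, a suite like this turns "we think the new model is better" into a concrete pass/fail check on the scenarios that matter.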



