Interpreting model predictions in a human-readable form is key to building a great machine learning system. Understanding how input features affect predictions helps with debugging and with explaining the system's behavior to stakeholders. Today Nikita shows how to use SHAP values to understand model predictions.
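
As a taste of what follows, here is a minimal sketch of computing SHAP values for a tree-based model with the `shap` package. The dataset and model here (scikit-learn's diabetes data and a random forest regressor) are illustrative assumptions, not necessarily what Nikita uses:

```python
# Minimal SHAP example: explain a random forest's predictions.
# Assumes `shap` and `scikit-learn` are installed; the dataset and
# model are placeholders chosen for illustration.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one contribution per (sample, feature)

# Summary plot: how each feature pushes predictions up or down across samples.
shap.summary_plot(shap_values, X)
```

Each SHAP value is a per-sample, per-feature contribution, so the same array drives both global summaries (as above) and explanations of individual predictions.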