Metrics in the wild! How to deal with biases when building auto-classification systems

Paola Oliva-Altamirano, Principal Data Scientist, Innovation Lab, Our Community/SmartyGrants

When designing auto-classification systems we often trust the standard metrics built into machine learning models. How much do you trust them? In other words, to what extent do these scores reflect the success of your algorithm? Knowing your data, the decisions your model will influence, and the role that scoring plays in amplifying or mitigating biases should be essential for statistics practitioners and product builders. In this talk I will share the lessons we learned while building classifiers in the social sector, the biases we have encountered in multilabel text classification, and our constant battle to design ethical, human-centred products.
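As a minimal sketch of why an aggregate score can mislead (using hypothetical data, not the SmartyGrants corpus): on an imbalanced label, a degenerate model that always predicts the majority class still earns a high accuracy, while the per-class recall reveals that the minority class is never found.

```python
# Hypothetical imbalanced dataset: 95 majority-class examples, 5 minority.
y_true = [0] * 95 + [1] * 5   # 1 = the rare class (e.g. an uncommon grant category)
y_pred = [0] * 100            # degenerate model: always predicts the majority class

# Aggregate accuracy looks excellent.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, cls):
    """Fraction of true `cls` examples the model actually labelled `cls`."""
    preds_for_cls = [p for t, p in zip(y_true, y_pred) if t == cls]
    return sum(p == cls for p in preds_for_cls) / len(preds_for_cls)

print(accuracy)                    # 0.95 -- looks like a strong model
print(recall(y_true, y_pred, 1))   # 0.0  -- the minority class is never detected
```

The same effect appears per label in multilabel text classification, which is why per-class (macro) metrics, rather than a single aggregate score, are a first line of defence against hidden bias.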