Handling Bias While Building ML Systems
At Eightfold.ai, our mission is to enable the right career for everyone using the power of AI. We employ deep learning algorithms that leverage career data from more than 1 billion profiles; these algorithms help organizations find the most relevant talent and help individuals identify the best career options for themselves.
When building deep learning algorithms at this scale, it is important to ensure that biases of any form, both systemic and inadvertent, do not creep into the algorithms. In the context of employment, these biases could involve attributes such as race, gender, religion, caste, or nationality.
In this talk, we will discuss how to build such large-scale ML pipelines while ensuring that the outcomes of the resulting models are free of bias.
Specifically, we cover the practices to follow at each stage of the ML pipeline, from the choice of algorithms to training datasets to evaluation metrics and monitoring.
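As one illustration of the kind of monitoring practice covered in the talk, a pipeline can compare selection rates across candidate groups, a check often framed via the "four-fifths rule" in employment contexts. The sketch below is a minimal, self-contained example; the group labels, data, and 0.8 threshold are illustrative assumptions, not Eightfold's actual metrics.

```python
def selection_rate(decisions):
    """Fraction of candidates selected (decision == 1)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(decisions, groups):
    """Min group selection rate divided by max; 1.0 means parity."""
    rates = {}
    for g in set(groups):
        rates[g] = selection_rate(
            [d for d, gg in zip(decisions, groups) if gg == g]
        )
    max_rate = max(rates.values())
    ratio = min(rates.values()) / max_rate if max_rate else 1.0
    return ratio, rates

# Hypothetical screening decisions for two groups A and B
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A"] * 5 + ["B"] * 5

ratio, rates = disparate_impact_ratio(decisions, groups)
print(rates)          # per-group selection rates
print(ratio < 0.8)    # flag for review under the four-fifths rule
```

A monitoring job might run a check like this periodically over recent model decisions and raise an alert whenever the ratio drops below the chosen threshold.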
We will begin by motivating the problem with the classic courtroom algorithm game, discuss equal opportunity algorithms, and dive into AI explainability concepts (e.g., SHAP feature explainers) and various monitoring metrics.
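The equal opportunity criterion mentioned above asks that a classifier's true positive rate be the same across groups. Below is a minimal sketch of how that gap can be measured; the group labels, data, and helper names are illustrative assumptions, not a specific production implementation.

```python
def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN) over binary labels."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def equal_opportunity_gap(y_true, y_pred, groups):
    """Difference between max and min per-group TPR (0 = parity)."""
    tprs = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        tprs[g] = true_positive_rate(
            [y_true[i] for i in idx], [y_pred[i] for i in idx]
        )
    return max(tprs.values()) - min(tprs.values()), tprs

# Hypothetical binary outcomes for two groups A and B
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, tprs = equal_opportunity_gap(y_true, y_pred, groups)
print(tprs)  # per-group true positive rates
print(gap)   # a large gap suggests unequal opportunity across groups
```

In practice such a gap would be computed on a held-out evaluation set and tracked over time alongside the other monitoring metrics the talk discusses.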