Submissions for MLOps November edition

On ML workflows, tools, automation and running ML in production

Ramjee Ganti

@rganti

ML Fairness 2.0 - Intersectional Group Fairness

Submitted Jul 14, 2021

Topic:
As more companies adopt AI, more people are questioning its impact on society, especially around algorithmic fairness. However, most metrics used to measure the fairness of AI algorithms today don't capture the critical nuance of intersectionality. Instead, they take a binary view of fairness, e.g., protected vs. unprotected groups. In this talk, we'll discuss the latest research on intersectional group fairness using worst-case comparisons.
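
To make the worst-case comparison idea concrete, here is a minimal sketch (not from the talk, and not Fiddler's API): it computes the selection rate for every intersection of the protected attributes and reports the min-max ratio across subgroups. The function name, column arguments, and the min_size cutoff are all illustrative assumptions.

```python
from itertools import product

import numpy as np
import pandas as pd

def worst_case_disparity(df: pd.DataFrame, pred_col: str,
                         protected_cols: list, min_size: int = 30) -> float:
    """Min-max ratio of selection rates across all intersectional
    subgroups; 1.0 means perfect parity, lower means more disparity.
    (Illustrative sketch; not the speaker's or Fiddler's implementation.)"""
    rates = []
    # Enumerate every intersection of the protected attributes,
    # e.g. (gender, race) -> (female, Black), (female, White), ...
    for values in product(*(df[c].unique() for c in protected_cols)):
        mask = np.ones(len(df), dtype=bool)
        for col, val in zip(protected_cols, values):
            mask &= (df[col] == val).to_numpy()
        # Skip tiny subgroups whose rates would be statistically noisy.
        if mask.sum() < min_size:
            continue
        rates.append(df.loc[mask, pred_col].mean())
    # Worst-case comparison: the least-favored subgroup's selection
    # rate relative to the most-favored one.
    return min(rates) / max(rates)
```

A binary metric would compare, say, men vs. women in aggregate; the worst-case version above surfaces disparities that only appear at intersections such as gender and race combined.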

3 Key Takeaways:
The importance of fairness in AI
Why AI fairness matters even more today
Why intersectional group fairness is critical to improving AI fairness

Slides: https://docs.google.com/presentation/d/1sqxhzOw3cItJc7lxrpJwgfBZqIgnHB4VSVVB9CAkb8U/edit?usp=sharing

Speaker:
I am Ramjee, and I work at Fiddler Labs Inc (http://fiddler.ai/). We believe that every AI solution should be responsible and ethical. With the mission to help every organization build trust into AI, we've developed a model performance monitoring platform powered by explainable AI. Our platform empowers data science and engineering teams to track and troubleshoot ML issues, such as drift and integrity, faster with our unique XAI technology. The XAI not only pinpoints the causes of performance drops but also enables our users to quickly and easily validate and compare different models and detect bias.

