Low rate loans for ladies, stags pay extra: The Role of Ethics in AI/ML
Fairness in AI is the intersection of analytical philosophy and mathematics; the goal is to specify an ethical system so accurately and completely that it can be implemented in computer code. In this talk, I’ll discuss what an AI/ML system will and won’t do, and dispel a number of myths that seem to be floating around.
I’ll then discuss the moral dilemmas that arise when the AI/ML system starts behaving in an “unfair” or “unethical” manner. I will not attempt to resolve these dilemmas; I will simply demonstrate that it is mathematically impossible (for humans or AIs) to resolve them in a manner satisfying to everyone.
- Fairness, Accountability and Transparency for ML in transaction data
- Preventing bias in data-driven decision making
- Algorithms do not behave like a biased human. Their biases are entirely their own, and intuitions derived from racist humans do not translate.
- Fairness and bias prevention is a problem of analytical philosophy as much as one of mathematics. You need to address both.
- Because the morals of bias are different in India than in Silicon Valley, we need an Indian approach to these topics. We can’t simply copy a technique from a Google or Microsoft paper and apply it directly.
Direction of talk:
Algo Fairness is NOT about fixing incorrect predictions. It’s about correct predictions/decision processes that are considered in some sense “unfair”.
Most research in algo fairness is based on American/British moral intuitions around racism/sexism. Specifically:
- The (western) virtue ethics of not noticing. “I’m a good person because I never noticed that Tamils are pretty smart.”
- Racist preferences, i.e., it’s bad to like Tamils more than one likes Punjabis.
- Individual equality, ignore group membership, evaluate people on their own merits.
- Group rights.
Most of this does not apply to the Indian context. Algo fairness will be different here.
Explain what algorithms do and don’t do. Dispel a few myths:
- If you train your algorithm on your 100% group X set of employees, it does NOT mean your algorithm will be biased against non-X people.
- Algorithms will NOT automatically use protected class information even if it’s available.
- If your input data is biased, algorithms will often fix the bias, and they may use protected class information to do this.
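To make the first myth above concrete, here is a hypothetical sketch (all numbers and the income–repayment rule are my own invention, not data from the talk): a simple threshold rule is fit only on group X applicants, then evaluated on unseen group Y applicants. Because the sketch assumes the feature–outcome relationship is identical across groups, accuracy comes out essentially the same for both.

```python
import random

random.seed(0)

def make_applicants(n, group):
    # Synthetic data: repayment depends on income in the SAME way for
    # every group. This is an assumption of the sketch, not a claim
    # about any real dataset.
    people = []
    for _ in range(n):
        income = random.uniform(0, 100)
        repaid = income + random.gauss(0, 10) > 50
        people.append({"group": group, "income": income, "repaid": repaid})
    return people

def accuracy(people, t):
    # Fraction of applicants the rule "income > t" classifies correctly.
    return sum((p["income"] > t) == p["repaid"] for p in people) / len(people)

def fit_threshold(train):
    # Pick the income cutoff with the best training accuracy -- a
    # stand-in for fitting a real model.
    return max(range(101), key=lambda t: accuracy(train, t))

group_x = make_applicants(5000, "X")  # training set: 100% group X
group_y = make_applicants(5000, "Y")  # never seen during training

cutoff = fit_threshold(group_x)
print(accuracy(group_x, cutoff), accuracy(group_y, cutoff))
```

Both accuracies land around 0.92 here; the point is only that training on one group does not, by itself, produce bias against another.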
Explain certain proposed concrete implementations of algorithmic fairness. Illustrate how these implementations are inconsistent with each other, i.e., it’s mathematically impossible to satisfy everyone.
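The impossibility claim here echoes well-known results (Chouldechova; Kleinberg et al.) that calibration and equal error rates cannot coexist when base rates differ. A toy simulation, entirely my own construction rather than material from the talk, shows one face of the conflict: scores that are calibrated for both groups, combined with a single group-blind threshold, still produce very different false positive rates.

```python
import random

random.seed(0)

def simulate_group(n, a, b):
    # Scores drawn from Beta(a, b); each true outcome is Bernoulli(score),
    # so the scores are calibrated by construction.
    scores = [random.betavariate(a, b) for _ in range(n)]
    return [(s, random.random() < s) for s in scores]

def false_positive_rate(people, threshold):
    # Fraction of non-defaulters the rule would nevertheless reject.
    negatives = [s for s, defaulted in people if not defaulted]
    return sum(s >= threshold for s in negatives) / len(negatives)

# Group A has a low default base rate (mean score 0.2), group B a high
# one (mean score 0.5). Both distributions are invented for illustration.
group_a = simulate_group(20000, 2, 8)
group_b = simulate_group(20000, 5, 5)

threshold = 0.5  # one shared, group-blind rejection cutoff
print(false_positive_rate(group_a, threshold),
      false_positive_rate(group_b, threshold))
```

Group B's non-defaulters get rejected far more often than group A's, even though neither group's scores are miscalibrated; fixing the false positive gap would break calibration.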
Explain that non-algorithmic decision making (e.g., humans in the loop) does not resolve any of these problems.
Illustrate with examples how the person implementing the algorithm cannot punt or handwave on hard ethical questions. The algorithm must output “issue loan” or “reject loan”; a choice must be made.
Bring up the need for involvement of people who understand the math; otherwise journalists/activists will pick up the slack.
Chris Stucchio is a former physicist, high frequency trader, and software developer. He’s currently the head of data science at Simpl. He’s been working in decision theory and Bayesian optimization for the past 5 years, and has been teaching statistics to novices for much longer.
- Profile: https://chrisstucchio.com
- Previous talk (on a different topic): https://www.chrisstucchio.com/pubs/slides/pydelhi_2017/slides.html#1
- Other writing on this topic: https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what-journalists-want-you-to-think-it-means/