Anthill Inside 2019

A conference on AI and Deep Learning

Make a submission

Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore


## About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks, in the main conference hall
  2. Poster sessions, featuring novel ideas and projects
  3. Birds of a Feather (BOF) sessions, for practitioners who want to use the Anthill Inside forum to discuss:
  • Myths and realities of labelling datasets for Deep Learning.
  • Practical experience with using Knowledge Graphs for different use cases.
  • Interpretability and its applications in different contexts; challenges with GDPR and interpreting datasets.
  • Pros and cons of using custom versus open source tooling for AI/DL/ML.

## Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies that build tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision that want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com.


## Sponsors:

Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.


Anthill Inside 2019 sponsors:


### Bronze Sponsors

  • iMerit
  • Impetus

### Community Sponsors

  • GO-JEK
  • iPropal
  • LightSpeed
  • Semantics3
  • Google
  • Tact.AI
  • Amex

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including tools, techniques and approaches for integrating AI and Deep Learning into products and businesses, and engineering for AI.

Divij Joshi

Model Interpretability, Explainable AI and the Right to Information

Submitted Oct 18, 2019

Issues of ‘explainability in AI’ have emerged as an important theme in the development of machine learning and statistical modelling. Most studies approach explainability through the lens of model interpretability: understanding the underlying machine learning models better in order to improve and optimise them. However, this notion of interpretability has limited relevance to applied machine learning ‘in the wild’, that is, to real-world applications and interactions with end-users. In the context of consequential automated decisions (particularly administrative or governmental decisions), citizens turn to robust tools like the Right to Information to achieve openness and accountability in decision-making systems. This poster session will attempt to locate points of tension between the three concepts of interpretability, explainability and the right to information, and will build a case for why and how machine decision-making systems can incorporate elements of the right to information and due process.

Outline

Consequential machine decision making is now pervasive. Automated decisions (with varying degrees of automation) are applied in welfare allocation, policing and criminal justice, finance and insurance, and online content moderation, among other fields. Many of these tools use complex algorithmic systems, including machine learning techniques, which are conventionally difficult to interpret.

Efforts at interpretation have traditionally focused on explaining the ‘black box’ of algorithmic systems, for example through local linear explanations or surrogate models (a minimal sketch follows this outline). However, these techniques of interpretability have limited significance for end-users, for a number of reasons: a lay citizen has limited ability to parse technical models, and such explanations provide limited information for instrumental purposes (for example, using an explanation to overturn a decision). Some techniques instead pursue explainability without opening the black box, for instance through counterfactual explanations.

However, limited work exists on how the non-interpretability of machine decisions affects constitutional concepts of due process and the right to information, as well as legal mechanisms like the RTI Act which actualise these rights. The RTI Act, in particular, places positive obligations upon the state to explain certain decisions, including administrative decisions that impact individuals. The extent to which techniques of explainability in AI can be incorporated to ensure that the RTI remains a robust instrument for holding government systems accountable will be the focus of this session.
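To make the ‘local linear explanations’ mentioned above concrete, here is a minimal sketch in the spirit of LIME (Ribeiro et al., 2016): fit an interpretable linear surrogate to a black-box model's behaviour in the neighbourhood of a single decision. The dataset, models and kernel width below are illustrative assumptions, not anything used by the systems this session discusses.

```python
# Hypothetical sketch: a local linear surrogate explanation for one
# prediction of a black-box model (LIME-style). All choices here are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A black-box classifier standing in for an opaque decision-making system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The individual decision we want to explain.
x0 = X[0]

# Perturb the instance, query the black box on the perturbations, and
# weight each perturbed sample by its proximity to x0 (Gaussian kernel).
Z = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))
p = black_box.predict_proba(Z)[:, 1]
weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2 / 2.0)

# The weighted linear surrogate's coefficients are the "explanation":
# roughly how each feature pushes this one prediction up or down.
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local effect {coef:+.3f}")
```

Note that the coefficients describe only the model's local behaviour around x0; as the abstract argues, this is a long way from an explanation a lay citizen could act on or use to contest a decision.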

Speaker bio

I am a lawyer and a legal researcher, working in the field of technology policy. I have researched and written extensively on issues of internet openness and digital rights. In my role as a technology policy fellow at the Mozilla Foundation, I am focussing on creating policy for improving machine decision making systems in India.

Slides

https://speakerdeck.com/divijjoshi/machine-interpretability-explainability-and-the-right-to-information

