In today’s rapidly evolving healthcare landscape, ensuring the safety and reliability of digital health solutions is paramount. In this fireside chat, Sharika Varma will delve into the complexities of this critical issue, exploring best practices, regulatory considerations, and innovative approaches to maintaining high standards of safety in digital health products.
Sharika will be joined by Dr Anand Philip and Dr Hannah Thomas, consulting editors for this project, for the discussion.
Have questions for the speaker? Leave a comment.
Sharika Varma, Engineering Manager at Ada Health, has more than a decade of experience in software engineering and project management. She will share insights from her work at Ada on ensuring the safety of digital health products.
This session will be held online on the 6th of March from 11 am to 12 noon. You can RSVP to participate via Zoom or watch the livestream on YouTube.
This session is part of the AI and Risk Mitigation series, which will host meet-ups and talks on agritech, healthtech, fintech, edtech, and public services. The aims of this project are to:
- Build a community that engages with issues of AI security, privacy, and related concerns, and with best practices to mitigate them.
- Develop a knowledge repository in the form of practical guidelines and a self-regulated charter for Ethical AI based on these sessions.
You can participate in the following ways:
- Post a comment here to suggest a topic you’d like to discuss, including a brief outline of the use cases and challenges involved in AI implementation.
- Moderate or discuss a topic someone else has proposed.
- Spread the word among colleagues and friends.
- Join The Fifth Elephant Telegram group or WhatsApp group.
Anthill Inside is a community for discussing topics in AI and deep learning, including tools and technologies, methodologies and strategies for incorporating AI and deep learning into applications and businesses, and AI engineering. The community also places a strong emphasis on exploring and addressing ethical concerns, privacy, and bias, both in practice and within AI products.
Follow us on Twitter at @anthillin. For queries, write to editorial@hasgeek.com or call +91-7676332020.