Nov 2024
18 Mon
19 Tue
20 Wed
21 Thu
22 Fri 09:00 AM – 05:10 PM IST
23 Sat
24 Sun
Nov 2024
18 Mon
19 Tue
20 Wed
21 Thu
22 Fri 09:00 AM – 05:10 PM IST
23 Sat
24 Sun
This video is for members only
Sachin
Large language models (LLMs) are known to generate unintended inaccurate responses, often called hallucinations. Most of these are harmless mistakes, like Google AI Overview suggesting to eat a rock a day. There’s a more concerning possibility: what if an attacker could deliberately cause specific hallucinations? This could allow the stealthy spread of targeted disinformation.
Our talk will introduce the concept of indirect prompt injections and show how malicious documents can be crafted to trigger particular hallucinations when added to a vector database. We’ll demonstrate case studies of proof-of-concept attacks on popular LLM chatbots like Notion AI, Microsoft Copilot, and Slack AI. Finally, we’ll explore secure design principles and tools to defend against these hidden threats.
This talk is designed for a diverse audience in the AI field. It will be particularly valuable for AI engineers working on LLM applications, AI security engineers focused on protecting these systems, product managers overseeing AI-powered products, and C-level executives making strategic decisions about AI implementation and security. Whether you’re hands-on with the technology or guiding its use at a high level, you’ll gain crucial insights into this emerging threat and its implications.
Nov 2024
18 Mon
19 Tue
20 Wed
21 Thu
22 Fri 09:00 AM – 05:10 PM IST
23 Sat
24 Sun
Hosted by
Supported by
Platinum Sponsor
Platinum Sponsor
Community sponsor
Venue host - Rootconf workshops
Community Partner
Community Partner
{{ gettext('Login to leave a comment') }}
{{ gettext('Post a comment…') }}{{ errorMsg }}
{{ gettext('No comments posted yet') }}