Nov 2024
4 Mon
5 Tue
6 Wed
7 Thu
8 Fri
9 Sat
10 Sun
Nov 2024
11 Mon
12 Tue
13 Wed
14 Thu
15 Fri
16 Sat
17 Sun
Nov 2024
18 Mon
19 Tue
20 Wed
21 Thu
22 Fri 09:00 AM – 05:15 PM IST
23 Sat 09:00 AM – 06:15 PM IST
24 Sun
Sachin
Large language models (LLMs) are known to generate unintended inaccurate responses, often called hallucinations. Most of these are harmless mistakes, like Google AI Overview suggesting to eat a rock a day. There’s a more concerning possibility: what if an attacker could deliberately cause specific hallucinations? This could allow the stealthy spread of targeted disinformation.
Our talk will introduce the concept of indirect prompt injections and show how malicious documents can be crafted to trigger particular hallucinations when added to a vector database. We’ll demonstrate case studies of proof-of-concept attacks on popular LLM chatbots like Notion AI, Microsoft Copilot, and Slack AI. Finally, we’ll explore secure design principles and tools to defend against these hidden threats.
This talk is designed for a diverse audience in the AI field. It will be particularly valuable for AI engineers working on LLM applications, AI security engineers focused on protecting these systems, product managers overseeing AI-powered products, and C-level executives making strategic decisions about AI implementation and security. Whether you’re hands-on with the technology or guiding its use at a high level, you’ll gain crucial insights into this emerging threat and its implications.
Hosted by
Supported by
Platinum Sponsor
Platinum Sponsor
Venue host - Rootconf workshops
{{ gettext('Login to leave a comment') }}
{{ gettext('Post a comment…') }}{{ errorMsg }}
{{ gettext('No comments posted yet') }}