Nov 2024
4 Mon
5 Tue
6 Wed
7 Thu
8 Fri
9 Sat
10 Sun
Nov 2024
11 Mon
12 Tue
13 Wed
14 Thu
15 Fri
16 Sat
17 Sun
Nov 2024
18 Mon
19 Tue
20 Wed
21 Thu
22 Fri 09:00 AM – 03:20 PM IST
23 Sat 09:30 AM – 06:15 PM IST
24 Sun
Sanjaykumar S
Large Language Models (LLMs) are reshaping industries by powering advancements in customer interactions, content generation, and critical business operations. However, these advancements come with significant security challenges, such as data leakage, prompt injection, model inversion, data poisoning, and ethical concerns related to accountability and transparency. These vulnerabilities highlight the need for stringent safeguards and governance frameworks to ensure secure deployment and responsible use of LLMs. In this talk, we will explore the various ways LLMs can be exploited, such as through prompt injection and model inversion. To illustrate these risks and their potential mitigation, we will share real-world examples, including instances where ChatGPT have been manipulated to provide unauthorized information, such as suggesting methods to bypass security measures or revealing sensitive product keys. We’ll also address how biases in AI models, including Google’s image models, underscore broader ethical and security challenges in AI deployment.
Through the lens of security engineers and technical leads, we will present actionable insights for secure LLM deployment, including improved detection methods, specific mitigation tools, and best practices for reducing vulnerabilities. Attendees will gain an understanding of proactive measures such as the OWASP Top 10 for LLM security, MLSecOps, Red Teaming exercises, and continuous vulnerability management through our experience of building such applications. By highlighting both vulnerabilities and effective solutions, we aim to empower organizations to adopt LLMs safely and responsibly.
The story arc of this talk will follow the journey of LLM adoption across critical industries—from early excitement, through unexpected vulnerabilities, to the development of innovative security measures. By highlighting both vulnerabilities and solutions, we aim to provide attendees with a balanced perspective, equipping them to navigate the evolving landscape of LLM security effectively.
Hosted by
Supported by
Platinum Sponsor
Platinum Sponsor
Venue host - Rootconf workshops
Community Partner
Community Partner
{{ gettext('Login to leave a comment') }}
{{ gettext('Post a comment…') }}{{ errorMsg }}
{{ gettext('No comments posted yet') }}