Nov 2024
18 Mon
19 Tue
20 Wed
21 Thu
22 Fri 09:00 AM – 05:10 PM IST
23 Sat
24 Sun
Sanjaykumar S
Large Language Models (LLMs) are reshaping industries by powering advancements in customer interactions, content generation, and critical business operations. However, these advancements come with significant security challenges, such as data leakage, prompt injection, model inversion, data poisoning, and ethical concerns related to accountability and transparency. These vulnerabilities highlight the need for stringent safeguards and governance frameworks to ensure secure deployment and responsible use of LLMs. In this talk, we will explore the various ways LLMs can be exploited, such as through prompt injection and model inversion. To illustrate these risks and their potential mitigation, we will share real-world examples, including instances where ChatGPT have been manipulated to provide unauthorized information, such as suggesting methods to bypass security measures or revealing sensitive product keys. We’ll also address how biases in AI models, including Google’s image models, underscore broader ethical and security challenges in AI deployment.
Through the lens of security engineers and technical leads, we will present actionable insights for secure LLM deployment, including improved detection methods, specific mitigation tools, and best practices for reducing vulnerabilities. Attendees will gain an understanding of proactive measures such as the OWASP Top 10 for LLM security, MLSecOps, Red Teaming exercises, and continuous vulnerability management through our experience of building such applications. By highlighting both vulnerabilities and effective solutions, we aim to empower organizations to adopt LLMs safely and responsibly.
The story arc of this talk will follow the journey of LLM adoption across critical industries—from early excitement, through unexpected vulnerabilities, to the development of innovative security measures. By highlighting both vulnerabilities and solutions, we aim to provide attendees with a balanced perspective, equipping them to navigate the evolving landscape of LLM security effectively.
Hosted by
Supported by
Platinum Sponsor
Platinum Sponsor
Community sponsor
Venue host - Rootconf workshops
Community Partner
Community Partner
{{ gettext('Login to leave a comment') }}
{{ gettext('Post a comment…') }}{{ errorMsg }}
{{ gettext('No comments posted yet') }}