Rootconf Mini 2024

Geeking out on systems and security since 2012

Tickets

Loading…

Sanjaykumar S

@ssk14

Mitigating Emerging Threats in LLM Security

Submitted Oct 17, 2024

Abstract

Large Language Models (LLMs) are reshaping industries by powering advancements in customer interactions, content generation, and critical business operations. However, these advancements come with significant security challenges, such as data leakage, prompt injection, model inversion, data poisoning, and ethical concerns related to accountability and transparency. These vulnerabilities highlight the need for stringent safeguards and governance frameworks to ensure secure deployment and responsible use of LLMs. In this talk, we will explore the various ways LLMs can be exploited, such as through prompt injection and model inversion. To illustrate these risks and their potential mitigation, we will share real-world examples, including instances where ChatGPT have been manipulated to provide unauthorized information, such as suggesting methods to bypass security measures or revealing sensitive product keys. We’ll also address how biases in AI models, including Google’s image models, underscore broader ethical and security challenges in AI deployment.

Through the lens of security engineers and technical leads, we will present actionable insights for secure LLM deployment, including improved detection methods, specific mitigation tools, and best practices for reducing vulnerabilities. Attendees will gain an understanding of proactive measures such as the OWASP Top 10 for LLM security, MLSecOps, Red Teaming exercises, and continuous vulnerability management through our experience of building such applications. By highlighting both vulnerabilities and effective solutions, we aim to empower organizations to adopt LLMs safely and responsibly.

The story arc of this talk will follow the journey of LLM adoption across critical industries—from early excitement, through unexpected vulnerabilities, to the development of innovative security measures. By highlighting both vulnerabilities and solutions, we aim to provide attendees with a balanced perspective, equipping them to navigate the evolving landscape of LLM security effectively.

Key Takeaways

  • Identify LLM-Specific Security Risks: Understand the unique security challenges introduced by LLMs, such as prompt injection, data poisoning, model inversion attacks, and automated social engineering.
  • Learn Best Practices for Mitigation: Gain practical insights into mitigation strategies, such as comprehensive data protection, proactive monitoring, incident response, MLSecOps, and continuous vulnerability management.
  • Strategic Security Frameworks: Explore how organizations can leverage the OWASP Top 10 for LLM Security and ethical considerations, such as accountability and transparency, to enhance the safe adoption of LLMs.

Target Audience

  • Security Engineers: Focused on securing LLM applications and infrastructure.
  • Core Systems Developers: Handling infrastructure, networks, and cloud platforms for LLM deployments.
  • DevSecOps Teams: Integrating security into LLM development and operations.
  • AI/ML Engineers: Building and securing LLM-driven applications.
  • Tech Enthusiasts and IT Architects: Those curious about the operational and strategic impact of LLMs on enterprise security and responsible AI governance.

Comments

{{ gettext('Login to leave a comment') }}

{{ gettext('Post a comment…') }}
{{ gettext('New comment') }}
{{ formTitle }}

{{ errorMsg }}

{{ gettext('No comments posted yet') }}

Hybrid Access Ticket

Hosted by

We care about site reliability, cloud costs, security and data privacy