Rootconf 2025 Annual Conference - 16th and 17th May
On platforms, distributed data systems & security
May 2025
12 Mon
13 Tue
14 Wed
15 Thu
16 Fri 09:15 AM – 07:15 PM IST
17 Sat 09:15 AM – 05:35 PM IST
18 Sun
Deepam Kanjani
@deepamkanjani
Submitted Apr 1, 2025
In the session, we’ll delve into the critical topic of AI-based threat modelling. You’ll gain valuable insights into how seemingly minor semantic drifts and manipulated inputs can drastically alter the effectiveness of your cybersecurity defences. This knowledge will be instrumental in protecting your systems from sophisticated attacks.
You’ll leave this talk with practical strategies, such as contextual validation layers and adversarial prompt testing (a technique to test the robustness of AI models against adversarial attacks), to secure your AI threat modelling environments from these sophisticated attacks. Learn to continuously calibrate and validate your AI models to maintain trust and accuracy.
Key Takeaways:
This session will equip you with the knowledge and tools to proactively detect and mitigate these subtle but impactful threats, empowering you to safeguard your software delivery pipelines.
Audience Beneficial For: Your role as a security architect, threat modeling specialist, AI security engineer, or cybersecurity leader is crucial in securely integrating AI-driven solutions into your security workflows. This session is designed to provide you with the knowledge and tools you need to succeed in this important task.
Deepam Kanjani is a cybersecurity leader and author, currently a Senior Product Security Manager at Atlassian. He specializes in building secure and scalable cybersecurity programs, and actively researches and speaks on AI security and threat modeling best practices.
Hosted by
Supported by
Gold Sponsor
Gold Sponsor
Sponsor
Sponsor
{{ gettext('Login to leave a comment') }}
{{ gettext('Post a comment…') }}{{ errorMsg }}
{{ gettext('No comments posted yet') }}