Special Webinar Event Model Red-Teaming: Dynamic Security Analysis for LLMs
Featuring
REGISTER NOW & YOU COULD WIN A $250 Amazon.com Gift Card!
Must be in live attendance to qualify. Duplicate or fraudulent entries will be disqualified automatically.
About This Webinar
The rise of Large Language Models has many organizations rushing to integrate AI-powered tools into existing products, but they introduce significant new risk. OWASP has recently introduced the LLM Top 10 to highlight these novel threat vectors, including prompt injection and data exfiltration. However, existing AppSec tools are not designed to detect and remediate these vulnerabilities. In particular, static analysis (SAST), one of the most common tools, cannot be used since there is no code: machine-learning models are effectively "black boxes."
LLM red-teaming is emerging as a technique to minimize the vulnerabilities associated with LLM adoption, ensure data confidentiality, and verify that safety and ethical guardrails are being applied. It applies tactics associated with penetration testing and dynamic analysis (DAST) of traditional software to the new world of machine-learning models.
-
Host Mackenzie Putici Webinar Moderator, Future B2B
-
Featuring Clinton Herget Field CTO, Snyk
Join this session for an overview of LLM red-teaming principles including:
- What are some of the novel threat vectors associated with large language models, and how are these attacks carried out?
- Why are traditional vulnerability-detection tools (such as SAST and SCA) incapable of detecting the most serious risks in LLMs?
- How can the principles of traditional dynamic analysis be applied to machine learning models, and what types of new tools are needed?
- How should organizations begin to approach building an effective program for LLM red-teaming?