Securing the AI SDLC: AI Strategy & Governance and Secure AI Design | The AI TrustOps Masterclass Ch 2
About This Webinar
As AI technologies become a core part of your software, you need a robust strategy and a secure design methodology to manage the new risks they introduce. This chapter focuses on the first two pillars of the AI TrustOps framework: AI Strategy & Governance and Secure AI Design. We will show you how to establish a clear accountability model for AI initiatives, document your risk posture, and build a cross-functional governance team. You'll also learn how to integrate AI-native risk indicators—such as bias, explainability, and hallucinations—into your systems architecture and proactively model the new threat vectors introduced by AI and ML models.
Featuring
- Host: Scott Bekker, Webinar Moderator, Future B2B
- Speaker: Clinton Herget, Field CTO, Snyk
What You'll Learn
- Understand how to align AI goals with business objectives and create a cross-functional governance team to manage new risks.
- Learn to treat AI-native risks such as bias and hallucinations as first-class concerns in your architecture.
- Discover how to proactively identify and manage AI threats, including conducting threat modeling for new AI/ML assets and ensuring data integrity by design.