Special Webinar Event
Securing AI-Native Systems: Why AI Security Is Still an API Security Problem
REGISTER NOW & YOU COULD WIN A $250 Amazon.com Gift Card!
Must be in live attendance to qualify. Duplicate or fraudulent entries will be disqualified automatically.
About This Webinar
Modern systems are AI-native and no longer bounded by a single web or mobile interface. Large language models are invoked through APIs, chained together via agents, enriched with contextual data through training and grounding, and connected to third-party services across cloud providers. Users invoke models, models invoke tools, agents call APIs, and sometimes models call other models.
For security teams, this means the attack surface has expanded, but not in an entirely new direction.
In this session, we'll break down why AI security is fundamentally an API security problem, and how AI-native architectures introduce new risks around data exposure, identity, and behavior that traditional application security tooling wasn't designed to handle.
We'll examine how threats like prompt injection, model misuse, shadow AI, and supply-chain poisoning emerge from the same underlying challenge: limited visibility and control over APIs and data flows. You'll learn how security teams can evolve their strategy and adapt proven API security practices (discovery, testing, and runtime protection) to secure AI-native systems without slowing development.
- Host: Lacey Alexander, Webinar Moderator, Future B2B
- Featuring: Mike Isbitski, Principal Product Marketing Manager, Harness
Why You Should Join
- AI Systems Expand the API Attack Surface - LLMs, agents, RAG pipelines, and model-to-model workflows are all built on APIs. The fundamentals of API security still apply, but they must also account for a growing share of non-human callers, dynamic execution paths, and novel abuse patterns.
- AI-Native Architectures Introduce New Failure Modes - AI systems blend structured and unstructured data, often pulling sensitive information from internal systems and third-party providers. This creates risks around data leakage, over-permissioned agents, prompt injection, and unpredictable model behavior that traditional request validation and access control techniques can't catch.
- Visibility Is Critical for AI Security - Most organizations already have AI usage they didn't intentionally build or procure. Without continuous discovery of APIs, models, and data flows across cloud services and operating environments, AI governance efforts fail and security teams are left fighting fires.
- API Security Must Evolve for Runtime and Agentic Behavior - Shift-left approaches remain critical for secure design and reasonably secure code, particularly as AI-generated code gains adoption. But AI systems also demand greater runtime observability and protection. AI agents act autonomously, models behave probabilistically, and AI threats evolve quickly, making real-time AI- and API-focused threat detection and response essential.