Securing APIs in the Age of AI: New Threats, Smarter Defenses
“AI is changing the rules of cybersecurity—and your APIs are now both the target and the enabler. Discover how AI agents, LLMs, and generative models are redefining API security in this must-read session from APIsec|CON 2025.”

By Anubhav Sharma, Senior Security Engineer at Cashfree Payments
Introduction: AI Is Powering Innovation—and a New Wave of API Threats
APIs are the nervous system of modern applications—and in the age of artificial intelligence, they’re more critical than ever. But this dependence brings an escalating risk. With AI agents making autonomous decisions and APIs delivering real-time data at scale, attackers now have powerful tools at their disposal to exploit both.
In this in-depth session from APIsec|CON 2025, Anubhav Sharma draws from nearly a decade in AppSec to explore how AI reshapes the API threat landscape—and how defenders can respond with layered, intelligent security strategies.
Why APIs Are the Backbone of AI
AI systems don't operate in isolation. They rely on APIs to:
- Access real-time data
- Interface with third-party tools
- Execute autonomous decisions
- Power LLMs, chatbots, and machine-learning workflows
Whether it’s a customer support chatbot, a fraud detection model, or an autonomous financial assistant—APIs are the invisible rails enabling it all.
A 2024 Postman report found a 73% increase in AI-related API traffic, with API usage on platforms like Gemini growing 41x year over year. As AI adoption surges, so does the attack surface.
The New API Threat Landscape in the AI Era
With great power comes a broader attack surface. Anubhav highlighted how AI introduces new threats that traditional API security strategies aren’t equipped to handle:
🚨 Key Threats Enabled by AI:
- Automated Brute Force & Token Abuse: LLMs can script token brute-force attacks, fuzz input combinations, and simulate human behavior to bypass rate limits.
- Prompt Injection & Jailbreaking: Attackers craft inputs that manipulate LLMs into leaking sensitive information or executing unintended logic.
- Model Poisoning & Data Leakage: Malicious users poison inputs or reverse-engineer responses to extract proprietary data via APIs.
- AI-Generated Phishing & Malware: Generative AI can spin up realistic phishing emails, polymorphic malware, or executive impersonations via voice synthesis.
- M2M API Threats: Machine-to-machine (M2M) communication between AI agents introduces new risks of lateral movement and context exploitation.
Evolving Threat Models for AI-Driven Environments
Traditional threat models—based on static payloads and predictable behavior—don’t map well to AI-powered systems. In the new threat model, defenders must account for:
| Category | Traditional Threats | AI-Era Threats |
| --- | --- | --- |
| Input Handling | XSS, SQLi | Prompt Injection, Jailbreaking |
| Auth & Access | Brute Force, Broken Auth | Token Misuse via AI Scripts, MFA Proxy Bypass |
| Recon & Exploits | Manual Discovery | Automated Enumeration via LLMs |
| Social Engineering | Email Phishing | Deepfakes, AI-Generated Phishing |
| Malware | File-Based Payloads | Polymorphic AI-Generated Malware |
| Supply Chain | Code Injections | AI-Suggested Vulnerable Snippets |
Securing APIs in the AI Age: Defensive Strategies
Anubhav laid out a layered defense-in-depth approach tailored to today’s AI-powered threat landscape:
🔒 API Security Best Practices:
- Prompt & Output Sanitization: Validate both input prompts and generated responses for malicious patterns or data leakage.
- Adaptive Rate Limiting: Move beyond IP-based limits; implement behavior-based detection using ML models.
- RBAC for AI Agents: Apply fine-grained access control not only to users, but to machine actors and AI tools.
- AI Behavior Monitoring: Log and analyze AI prompts, context switches, hallucinations, and potential exploits.
- API Discovery & Shadow API Management: Inventory all machine-facing APIs, including internal tools leveraged by LLMs.
- Context-Aware Output Filtering: Use post-processing filters to prevent sensitive data from appearing in generative AI responses.
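To make the first and last of these practices concrete, here is a minimal, hypothetical sketch of prompt sanitization and output filtering. The pattern lists and function names are illustrative assumptions, not part of the talk; a real deployment would pair such deny-lists with a maintained classifier or guardrail service rather than rely on regexes alone.

```python
import re

# Hypothetical injection deny-list (illustrative only; real systems need
# a trained classifier or vendor guardrail, since regexes are easy to evade).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your |the )?system prompt", re.I),
]

# Simple patterns for sensitive-looking output (illustrative only).
LEAK_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                   # card-number-like digits
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), # apparent API keys
]

def sanitize_prompt(prompt: str) -> str:
    """Reject prompts matching known injection patterns before the LLM sees them."""
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            raise ValueError("prompt rejected: possible injection attempt")
    return prompt

def filter_output(response: str) -> str:
    """Redact sensitive-looking substrings from a generated response."""
    for pat in LEAK_PATTERNS:
        response = pat.sub("[REDACTED]", response)
    return response
```

Running both checks at the API gateway, rather than inside the model service, keeps the controls uniform across every LLM-backed endpoint.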
AI for API Security: The Defender’s Edge
The good news? AI isn’t just for attackers. Anubhav emphasized how defenders can fight fire with fire:
🛡️ AI-Powered Defense Use Cases
- Anomaly Detection: Spot abnormal API usage or prompt patterns.
- Bot Classification: Identify malicious vs. benign bots using ML classifiers.
- Rate Limiting Intelligence: Adapt thresholds based on real user behavior.
- Threat Classification: Automatically categorize incidents like injection, scraping, or abuse.
- Automated Runbooks: Generate and execute incident response steps using LLMs.
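As a sketch of what "rate limiting intelligence" can mean in practice, the toy limiter below flags a client whose current request rate deviates sharply from its own recent baseline, instead of enforcing one fixed IP-based threshold. The class name, window size, and z-score cutoff are assumptions for illustration; production systems would use a proper ML model and distributed state.

```python
import statistics
from collections import defaultdict, deque

class BehavioralRateLimiter:
    """Illustrative behavior-based limiter: each client (human or AI agent)
    is compared against its own rolling baseline of per-minute request counts."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.z_threshold = z_threshold
        # Rolling history of per-minute request counts, per client.
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, client_id: str, requests_this_minute: int) -> bool:
        """Record a per-minute request count; return True if it looks anomalous."""
        hist = self.history[client_id]
        anomalous = False
        if len(hist) >= 5:  # require a minimal baseline before judging
            mean = statistics.fmean(hist)
            stdev = statistics.pstdev(hist) or 1.0  # avoid divide-by-zero
            z = (requests_this_minute - mean) / stdev
            anomalous = z > self.z_threshold
        hist.append(requests_this_minute)
        return anomalous
```

Because the baseline is per-client, an AI agent that legitimately runs hot is judged against its own history, while a scripted token-abuse burst from a normally quiet client stands out immediately.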
As of 2025, only 20% of enterprises feel fully prepared to leverage AI for defense. But by 2026, security budgets for AI-powered tools are projected to grow by 27%.
Final Thoughts: Adapt or Be Outpaced
The future of API security isn’t just about shifting left—it’s about shifting smarter. As AI evolves, so must our security posture.
✅ Understand how AI reshapes attack surfaces
✅ Modernize threat models to include prompt exploits and agent behavior
✅ Embed intelligent defense mechanisms into the API lifecycle
APIs are the connective tissue of AI—and if left unprotected, they’re the weakest link.