The Internet as we know it is undergoing a massive change. Applications have evolved from the Web to APIs to AI-powered experiences, and are now transforming into AI Agents and autonomous AI applications. The Web economy, in turn, is becoming an Agentic Economy.
For the past decade or so, application security teams mainly had to grapple with one challenge: distinguishing bad bots from genuine users accessing their applications. Bad bots scraped content, attempted credential stuffing, and created fake accounts, and application security products were designed to detect and mitigate these bot attacks.
Now, with the advent of the Agentic Economy, the task is no longer simply deciding whether a request comes from a human or a bot, or from a good bot or a bad bot. Requests must be classified as coming from a bot, a human, or an AI Agent. And for AI Agents, the question becomes whether a genuine user is employing the agent for legitimate purposes or an attacker is using it to carry out malicious activities.
In this era of the Agentic web, AI Agents are not just fetching data; they act like actual users. They can log into accounts, fill out forms, navigate multi-step workflows, compare pricing, make purchases, and thereby make decisions autonomously. This agentic behaviour is possible either through pure-play AI Agents or, more recently, through fully autonomous agentic browsers.
Why are these AI Agents different from traditional bad bots?
Bad bots evolved over time from basic scripts to headless browsers to sophisticated bots that could mimic human-like behaviour with shallow interaction patterns. They typically relied on known automation frameworks, originated from datacenter IPs, and showed predictable timing patterns, making them detectable by a robust bot management solution.
AI Agents are fundamentally different. They run inside full browser stacks, often headful or genuine browser-type environments. They execute JavaScript, manage cookies, maintain sessions, and can take autonomous actions. Many of these AI Agents, such as ChatGPT Agent and Browser Use, open ephemeral browsers and can perform actions such as browsing the application, comparing prices, and making purchases. Agentic browsers such as Perplexity Comet, ChatGPT Atlas, GenSpark, and Gemini in Chrome work within the browser context and can act fully autonomously.
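To make the contrast concrete, here is a minimal sketch of a three-way classifier over hypothetical request signals. The signal names, dataclass, and decision rules are illustrative assumptions, not any vendor's actual detection logic: the idea is only that a legacy bad bot fails the browser-stack checks that an AI Agent passes, while the AI Agent still shows automation traits a human does not.

```python
from dataclasses import dataclass

# Hypothetical request signals; real products use far richer telemetry.
@dataclass
class RequestSignals:
    executes_js: bool           # full JavaScript execution observed
    datacenter_ip: bool         # request originates from a known datacenter range
    human_timing: bool          # inter-action timing resembles a human
    automation_framework: bool  # known automation-framework fingerprint detected

def classify(sig: RequestSignals) -> str:
    """Toy three-way classification: legacy bots fail the browser-stack
    checks, while AI Agents pass them but still expose automation traits."""
    if not sig.executes_js or (sig.datacenter_ip and sig.automation_framework):
        return "bot"
    if sig.automation_framework or not sig.human_timing:
        return "ai_agent"
    return "human"
```

The point of the sketch is that a single boolean outcome is no longer enough: the same session signals that once flagged a bad bot are now shared by legitimate agentic traffic.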
Application owners want to allow genuine AI Agents to access their applications and take actions, because that is where the Agentic Economy is heading. But they cannot blindly trust requests coming from these AI Agents, so they need a way to identify malicious use of AI Agents.
Automation in the form of these AI Agents is no longer inherently malicious. But it is no longer harmless either.
The challenge for the application security vendor is multi-fold:
- Identify and classify requests to the customer application that come from an AI Agent
- Understand the intent of the AI Agent and, based on that, derive a trust score for it, so customers can decide what actions to allow for that AI Agent
Not All AI Agents are the Same
Applications are now seeing a new kind of traffic from AI Agents. But not all AI Agents are the same, and the action taken on them needs to differ accordingly. A helpful AI Agent acting as a shopping assistant or booking travel, initiated by a real user, needs to be allowed. At the same time, an attacker can launch a malicious AI Agent to abuse the system with ATO attacks, fake registrations, or inventory hoarding, and that agent needs to be blocked.
The challenge with AI Agents, compared with bad bots, is first to identify the AI Agent, but then also to know who is behind its use and what the actual intent is. Blocking all AI Agents breaks legitimate user experiences; allowing all of them enables large-scale abuse.
Traditional bot mitigation decisions were binary: allow or block. Different mitigation options existed, such as CAPTCHA, block, crypto challenge, or JavaScript challenge, but the decision itself was binary. AI Agents and agentic traffic, by contrast, require policy decisions. Moreover, security systems designed to detect anomalies based on traffic volume can easily miss attacks that happen inside normal-looking sessions driven by these AI Agents.
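As a sketch of what a policy decision can mean in practice, the snippet below replaces the binary allow/block with a graduated policy table keyed on agent class and trust band. The class names, trust thresholds, and actions are hypothetical placeholders, assumed only for illustration.

```python
# Hypothetical policy table: each (agent class, trust band) pair maps to a
# graduated action instead of a binary allow/block.
POLICY = {
    ("verified_agent", "high"):     "allow",
    ("verified_agent", "medium"):   "allow_read_only",
    ("verified_agent", "low"):      "challenge",
    ("unverified_agent", "high"):   "allow_read_only",
    ("unverified_agent", "medium"): "challenge",
    ("unverified_agent", "low"):    "block",
}

def decide(agent_class: str, trust: float) -> str:
    """Bucket the trust score into a band, then look up the policy action.
    Unknown (class, band) combinations fail closed to 'block'."""
    band = "high" if trust >= 0.8 else "medium" if trust >= 0.5 else "low"
    return POLICY.get((agent_class, band), "block")
```

For example, `decide("verified_agent", 0.9)` returns `"allow"`, while an unverified agent with low trust falls through to `"block"`. The structure, not the specific values, is the point: the same agent can receive different treatment as its trust changes within a session.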
The Ideal Approach to AI Agent Management
The right approach towards AI Agent Management needs to be holistic and multi-layered. This should include:
- Real-time AI Agent Discovery & Classification: Uniquely identify and classify AI Agents, which may be a mix of verified agents such as ChatGPT Agent, AI browsers such as Perplexity Comet, and other AI Agents such as Manus and Genspark. This is where Radware’s solution, with its advanced AI Agent identification capability, can classify these new-age AI Agents accessing customer applications
- Behavioral Intent Analysis: Understand agent intent, patterns, and anomalies in real time
- Adaptive Trust Scoring: Evaluate agent interactions in real-time to enable risk-based decisioning
- Granular Permission Control: Based on the agent classification, intent analysis, and trust scoring, give customers the option to decide what level of permissions they want to grant to the AI Agent
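The four layers above can be tied together in a simple sketch: weighted trust signals feed a score, and the score maps to a graduated permission set. The weights, signal names, and thresholds are invented for illustration and do not reflect Radware's actual model.

```python
# Illustrative trust signals, one per layer described above, with assumed weights.
WEIGHTS = {
    "verified_identity": 0.4,  # classification layer: agent identity is verified
    "benign_intent":     0.3,  # behavioural layer: no abuse pattern detected
    "session_normal":    0.2,  # adaptive layer: no in-session anomalies
    "user_initiated":    0.1,  # evidence a human triggered the agent
}

def trust_score(signals: dict) -> float:
    """Sum the weights of the signals that are present (True)."""
    return round(sum(w for name, w in WEIGHTS.items() if signals.get(name)), 2)

def permissions(score: float) -> list:
    """Map the trust score to a graduated permission set."""
    if score >= 0.8:
        return ["browse", "search", "transact"]
    if score >= 0.5:
        return ["browse", "search"]
    if score >= 0.3:
        return ["browse"]
    return []
```

Under this toy model, a verified agent with benign intent but an anomalous session scores 0.7 and keeps read-style permissions but loses the ability to transact, which is exactly the kind of graduated outcome the layered approach is meant to enable.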
A holistic approach combining all of the above is essential to manage AI Agents effectively in this era of agentic commerce. As this space continues to evolve, the strategy for AI Agent management should adapt with it; there will be no one-size-fits-all solution in this context going forward.
The Web as We Know It Is Changing
Within the next few years, a significant share of application traffic will originate from AI Agents, much of it triggered by humans. AI Agents will browse, search, and transact on behalf of humans, and the internet as we know it will evolve into a more agentic web.
The rise of these AI Agents represents the biggest shift in web interaction in recent years. From a security perspective, the question is no longer whether traffic comes from a human or a bot, but whether the intent behind the interaction looks genuine, and how to control access in an intelligent and robust manner.
For more information on how this space continues to evolve, reach out to us at https://www.radware.com/contactus/