The Secret to Hacking APIs Is Context

Ask most security teams how well their APIs are protected, and they'll point to the scanner results. Green across the board. No critical findings. Compliance checked.

Then ask an attacker.

An attacker doesn't just look at your OpenAPI spec. They learn the application's behavior from your traffic and then manipulate it.

TL;DR: The real API attack surface isn't what you documented - it's what appears at runtime, across real identities, stateful sequences, and multi-step flows. Spec-based scans give you false confidence. The bugs that matter - BOLA, broken auth, chained business-logic bypasses - only appear when you test with the context attackers actually have.

The map is not the territory

There's a concept in security that keeps proving itself: you can only defend what you actually understand. Not what you think you have. What you actually have.

In API security, the gap between those two things is bigger than most teams realize.

OpenAPI/Swagger files are documentation artifacts. They describe intent. However, runtime systems are living entities - endpoints are added without spec updates, internal service-to-service routes are never documented, and old versions linger long after they're officially deprecated. The spec says you have 47 endpoints. Traffic says you have 112.

Shadow APIs - the ones that exist in runtime but nowhere in your documentation - are not edge cases. They're the norm. And they're not just blind spots in your scan coverage. They're soft targets because the teams who built them often assumed no one would find them.
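At its core, shadow-API discovery is a diff between the documented surface and what actually answers traffic. A toy sketch (all paths here are invented for illustration):

```python
# Documented surface, e.g. parsed from an OpenAPI file.
spec_endpoints = {"/v2/users", "/v2/orders", "/v2/invoices"}

# Endpoints actually observed answering requests in runtime traffic.
observed_in_traffic = {
    "/v2/users", "/v2/orders", "/v2/invoices",
    "/v1/orders",            # deprecated version still answering
    "/v2/internal/export",   # service-to-service route, never documented
}

# Anything in traffic but not in the spec is a shadow endpoint.
shadow = observed_in_traffic - spec_endpoints
print(sorted(shadow))  # ['/v1/orders', '/v2/internal/export']
```

Real discovery pipelines pull from many more sources (gateway logs, CI artifacts, test collections), but the set difference is the essential operation.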

When a scanner uses your Swagger file as its source of truth, it's not scanning your API. It's scanning a best-case approximation of it from sometime last quarter.

Why stateless scanning misses the real bugs

The vulnerabilities that end up on breach reports - broken object-level authorization, privilege escalation, auth logic gaps - share a trait. They're contextual. You can't find them by firing single, isolated HTTP requests at documented endpoints.

Take BOLA, which has topped the OWASP API Security Top 10 for years running. To find it, you need to: authenticate as User A, create a resource, note the resource ID, authenticate as User B, and attempt to access User A's resource. That's four steps across two identities. A scanner that doesn't maintain session state, doesn't understand role boundaries, and doesn't replay multi-step flows will never surface it.
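The cross-identity check above can be sketched against a deliberately vulnerable in-memory stand-in (the FakeAPI class and its note endpoints are hypothetical, purely to show the shape of the test):

```python
class FakeAPI:
    """In-memory stand-in for a service with a classic BOLA bug."""

    def __init__(self):
        self._notes = {}    # note_id -> owning user
        self._next_id = 0

    def create_note(self, user):
        self._next_id += 1
        self._notes[self._next_id] = user
        return self._next_id

    def get_note(self, user, note_id):
        # BUG: ownership is never checked against the caller's identity.
        if note_id in self._notes:
            return {"status": 200, "owner": self._notes[note_id]}
        return {"status": 404}

def bola_check(api):
    """Four steps, two identities: create as A, then read as B."""
    note_id = api.create_note("user_a")     # authenticate as A, create, note ID
    resp = api.get_note("user_b", note_id)  # authenticate as B, access A's resource
    return resp["status"] == 200            # True => vulnerable

print(bola_check(FakeAPI()))  # True: User B read User A's resource
```

A stateless scanner never gets here: it has no second identity to switch to and no memory of the resource ID created in the first step.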

Business-logic flaws are worse. They require understanding sequences that only exist in real usage patterns: a checkout flow where a discount code can be reused if you time the requests right, or an account transfer that bypasses a daily limit if you chain three smaller requests through different endpoints. These aren't injection vulnerabilities. They don't trigger on malformed input. They trigger on the right combination of legitimate-looking requests in the right order.
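The daily-limit bypass mentioned above reduces to per-request validation against state that only accumulates across requests. A minimal sketch, assuming a hypothetical transfer endpoint:

```python
class TransferAPI:
    """Stand-in for a transfer endpoint with a business-logic flaw."""

    DAILY_LIMIT = 1000

    def __init__(self):
        self.sent_today = 0  # server-side state the check never consults

    def transfer(self, amount):
        # BUG: each request is validated in isolation against the limit;
        # the running daily total is ignored.
        if amount > self.DAILY_LIMIT:
            return False
        self.sent_today += amount
        return True

api = TransferAPI()
# Three legitimate-looking requests chain past the daily limit.
results = [api.transfer(400) for _ in range(3)]
print(results, api.sent_today)  # [True, True, True] 1200
```

Every individual request here is well-formed and under the limit; only a tester that executes the sequence and inspects the cumulative effect sees the flaw.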

Static analysis of specs cannot see this. Single-request fuzzing cannot see this. Only something that watches and understands the full sequence - identity, state, timing - can.

Context is what attackers actually have

Here's the uncomfortable truth: a skilled attacker probing your APIs starts with more context than most security tools ever acquire.

They create an account. They use the application the way a legitimate user would. They watch how IDs are structured. They notice when the same resource ID appears in a URL after they create an object, and they wonder what happens when they substitute someone else's. They chain requests deliberately, looking for the moments where server-side state diverges from what the API expects to validate.

They don't need your Swagger file. They're building context from traffic, the same way your runtime systems actually operate.

This is the fundamental mismatch. Security testing anchored to specifications operates on one model of your API. Attackers operate on another - the live one. And in API security, the live one always wins.

What 'testing with context' actually looks like

Closing this gap means rethinking the foundation of API security - from 'scan the documented surface' to 'understand and test what runs.'

That requires three things working together:

  • Complete discovery. Not just your OpenAPI files - traffic analysis, CI artifacts, test collections. Every source that reveals endpoints that actually exist, including the ones nobody documented. Shadow routes surface here. Deprecated endpoints that never got turned off show up here.
  • Semantic understanding. Knowing that a field called account_id maps to a sensitive object boundary is different from knowing it's a string parameter. Inferring which parameters carry resource IDs, which carry role or tenant claims, which affect privilege - that's the difference between generating realistic test scenarios and generating noise.
  • Stateful scenario execution. Testing must preserve the full context of a real interaction: cookies, tokens, CSRF state, and session side effects. It must be able to run multi-step flows and mutate specific steps - switch identities mid-sequence, replay with different scopes, probe the moments where business logic assumes one thing and runtime allows another.
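The second item, semantic understanding, can start from something as rough as name-based heuristics. A sketch (the categories and patterns are illustrative; real systems would also correlate observed values across requests):

```python
import re

def classify_param(name: str) -> str:
    """Rough, name-based guess at what a parameter controls."""
    n = name.lower()
    if re.search(r"(^|_)(id|uuid|key)s?$", n):
        return "resource-id"   # maps to an object boundary (BOLA candidate)
    if re.search(r"(role|scope|tenant|admin|perm)", n):
        return "privilege"     # affects authorization decisions
    if re.search(r"(token|session|csrf|auth)", n):
        return "auth-state"    # carries identity or session state
    return "opaque"            # no security semantics inferred

for p in ["account_id", "tenant_role", "csrf_token", "page_size"]:
    print(p, "->", classify_param(p))
```

Even this crude classification is enough to decide which parameters deserve identity-swap tests versus plain fuzzing - the difference between realistic scenarios and noise.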

Without all three, you're back to testing the spec - the imaginary API - instead of the one your users and your attackers actually interact with.

The runtime layer already has the context

There's an insight here worth sitting with: the context that makes real API testing possible already exists somewhere in your stack. It flows through every request your API handles. It's encoded in the tokens, the session state, and the sequence of calls that real users and real attackers make.

The question isn't whether context is available; it's whether your security architecture is positioned to use it.

A security layer that sits in the traffic path - seeing real requests as they happen, understanding the behavioral patterns across identities and sessions, watching multi-step flows unfold in real time - doesn't need to reconstruct context from a documentation file. It's already there. That positional advantage is the difference between finding a BOLA vulnerability at runtime before an attacker does and finding out about it in a breach notification.

Runtime has the full picture. And when it's intelligent enough to use that picture to model normal behavior, to detect the sequence anomalies that indicate an attacker is building context of their own, it can catch the hard bugs that testing missed.

The practical implication

If you're evaluating your API security posture, the question to ask isn't 'what percentage of our OpenAPI spec is covered?' It's: 'what percentage of our actual running API surface is understood, inventoried, and tested against realistic attack scenarios?'

Those are very different questions. Most organizations find the answer to the second one is uncomfortably small.

The path forward starts with discovery - real discovery, not spec-crawling. Then the semantic inference that makes your parameters meaningful rather than opaque. Then, stateful testing that mirrors how attackers actually probe.

And finally, continuous testing backed by a security layer that doesn't just block known-bad signatures but understands the behavioral context of what normal looks like - so that when an attacker starts chaining requests across identities, the anomaly is visible before the damage is done.

Where most solutions still leave you exposed

The market has responded to the API security problem - but not always in the right direction.

One category of tools doubled down on traffic inspection at the gateway: rate limiting, schema validation, and basic anomaly detection. These catch the obvious stuff - malformed requests, known attack signatures, blatant volumetric abuse. What they don't catch is the attacker who looks completely legitimate: authenticated, well-formed requests, realistic rates - but systematically traversing resource IDs that don't belong to them. Without understanding behavioral context across sessions and identities, a gateway sees normal traffic right up until a breach.
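The behavioral context a gateway lacks can be as simple as per-session breadth: how many distinct resource IDs a single session touches. A sketch (the log format and threshold are invented for illustration):

```python
from collections import defaultdict

def flag_enumeration(access_log, threshold=10):
    """Flag sessions touching unusually many distinct resources."""
    ids_per_session = defaultdict(set)
    for session_id, resource_id in access_log:
        ids_per_session[session_id].add(resource_id)
    # Sessions traversing many distinct objects look like enumeration,
    # even when every individual request is authenticated and well-formed.
    return {s for s, ids in ids_per_session.items() if len(ids) >= threshold}

log = [("sess-a", f"order-{i}") for i in range(25)]     # systematic traversal
log += [("sess-b", "order-3"), ("sess-b", "order-3")]   # normal repeat access
print(flag_enumeration(log))  # {'sess-a'}
```

No single request in sess-a trips schema validation or rate limits; only the cross-request pattern gives the attacker away.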

Another category focused on API discovery and posture management: cataloguing endpoints, flagging sensitive data exposure, and mapping your attack surface. Valuable - but static. Knowing you have a BOLA-prone endpoint in your inventory is not the same as detecting when someone is actively exploiting it. Posture management tells you what to worry about. It doesn't stop the attack in motion.

Finally, a growing category of dedicated API security testing tools promises shift-left coverage - but most deliver spec-crawling dressed up as security. They ingest your OpenAPI file, fire parameterized requests at documented endpoints, and report back on injection vulnerabilities and schema mismatches. For a narrow class of issues, that has value. But they're inherently stateless: each request is isolated, with no identity continuity, no multi-step sequencing, and no understanding of what a parameter means. The result is scanning coverage that looks comprehensive on a dashboard and leaves BOLA, privilege escalation, and business-logic flaws completely untouched. Shallow testing against a partial spec isn't shift-left security - it's the illusion of it.

The gap none of these solve cleanly: there's no synergy between what happens in testing and what gets enforced at runtime.

How Radware closes the loop

Radware's approach to API security is built around the premise that context has to be continuous - from early development to the last request in production. That means the discovery and testing phase can't be disconnected from the enforcement phase.

Following the Pynt acquisition, we bring Pynt's contextual, stateful testing earlier in the SDLC - the kind that discovers shadow endpoints, infers parameter semantics, and runs multi-step scenarios that replicate real attack chains. Pynt doesn't scan your Swagger file. It learns your API from how it behaves.

For more information, contact us.

The secret isn't complicated

Attackers don't win because they're smarter than defenders. They win because they operate with more relevant context. They test against the real thing. They chain steps. They switch identities. They probe sequences.

The security teams that close the gap are the ones who stop testing imaginary APIs and start securing the ones that actually run, with the context that runtime traffic makes available.

That's the secret. Context is the attack surface. And whoever has it, wins.

Ofer Hakimi
