What Is Regular Expression Denial of Service (ReDoS)?
Regular expression denial of service (ReDoS) is a class of denial-of-service attacks that exploit the way certain regular expression engines process patterns. Attackers craft input that forces the regex engine to spend exponential time processing a single request. Instead of failing or matching quickly, the system gets bogged down in excessive backtracking, leading to degraded performance or system failure.
ReDoS attacks are particularly concerning in web services and APIs where user-provided input is commonly validated or parsed by regular expressions. Since these attacks can be triggered with relatively small requests, even a single user can cause significant disruption. If left unchecked, ReDoS makes an application a soft target for service outages.
A successful ReDoS attack can lead to high CPU utilization, increased memory consumption, and severe degradations in application responsiveness. As the service tries to process a maliciously crafted input, other users see unresponsive applications, timeouts, or outright downtime. In shared or multi-tenant environments, this can cascade to impact multiple components or customers at once.
For organizations, the results range from service-level agreement violations to brand damage and financial loss. Security teams may need to triage the issue under emergency circumstances, while attackers require minimal resources or technical depth to launch the attack. Even strong infrastructure can't compensate for underlying vulnerable regex patterns.
Regex engines interpret expressions based on defined algorithms, but not all engines process patterns in the same way. Some are deterministic and compile regexes into state machines that evaluate in linear time, while others are non-deterministic and analyze multiple paths, leading to potential backtracking.
Deterministic Regex Engines
Deterministic regex engines process input in a single, predictable path using a finite state machine. Each input character leads to one and only one state transition, which eliminates the need for backtracking. Because of this design, deterministic engines evaluate regex patterns in linear time with respect to input size.
This linear-time behavior makes them effectively immune to catastrophic backtracking, the mechanism behind ReDoS attacks. Even when processing complex patterns, these engines don’t revisit previous states or re-evaluate input against multiple pattern paths. For example, when matching ^abc$, the engine checks each character (a, b, c) in order and quickly accepts or rejects the input based on a straightforward sequence of steps.
Deterministic engines are often preferred for their predictability and performance stability, regardless of input complexity. However, they typically support a smaller subset of regex features compared to their non-deterministic counterparts.
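The single-path evaluation described above can be sketched as a tiny hand-rolled deterministic finite automaton in Python. The transition table below is an illustrative assumption for the pattern ^abc$, not any engine’s actual internals:

```python
# Minimal sketch of a DFA for the pattern ^abc$.
# Each (state, character) pair maps to exactly one next state, so matching
# costs one table lookup per input character: linear time, no backtracking.
TRANSITIONS = {(0, 'a'): 1, (1, 'b'): 2, (2, 'c'): 3}
ACCEPTING = {3}

def dfa_match(text: str) -> bool:
    state = 0
    for ch in text:
        state = TRANSITIONS.get((state, ch))
        if state is None:          # no transition defined: reject immediately
            return False
    return state in ACCEPTING

print(dfa_match('abc'))   # True
print(dfa_match('abx'))   # False
```

Because there is never more than one possible transition, the worst-case cost is fixed by the input length alone, regardless of how the input is constructed.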
Non-Deterministic Regex Engines
Non-deterministic regex engines evaluate patterns by exploring all possible paths through the pattern simultaneously. When a pattern includes alternation (|), optional elements (?), or nested quantifiers like (a|ab)+, the engine attempts multiple match paths in parallel. If a path fails, the engine backtracks and tries another, often recursively.
This backtracking behavior is what makes non-deterministic engines vulnerable to ReDoS. Malicious input can force the engine to explore an exponential number of paths, consuming excessive CPU time. For example, with the pattern (a|ab)+b$ and input like “aaaaaaaaaaaaaaaaaaaaa!”, the engine may retry a large number of alternative match paths before rejecting the input, significantly slowing down processing.
While non-deterministic engines support a wider range of regex features, including more expressive patterns, this flexibility comes at the cost of potential performance issues under crafted input. Developers using these engines must be especially careful to avoid vulnerable regex constructs.
Eva Abergel
Eva Abergel is a solution expert in Radware’s security group. Her domain of expertise is DDoS protection, where she leads positioning, messaging and product launches. Prior to joining Radware, Eva led a Product Marketing and Sales Enablement team at a global robotics company acquired by Bosch and worked as an Engineer at Intel. Eva holds a B.Sc. degree in Mechatronics Engineering from Ariel University and an Entrepreneurship Development certificate from the York Entrepreneurship Development Institute of Canada.
Tips from the Expert:
In my experience, here are tips that can help you better defend your systems against ReDoS vulnerabilities and future-proof them:
Use grammar-based parsers instead of regex for complex inputs: When parsing structured or nested data (e.g., JSON, URLs, custom formats), prefer dedicated parsers or grammar-based tools (like ANTLR or PEG parsers) instead of regex. These tools offer deterministic parsing and eliminate the ambiguity and backtracking risks inherent in regex.
Profile regex performance with fuzzed inputs in CI pipelines: Add fuzz testing to the CI/CD pipeline that specifically targets regex-based functions. Generate edge-case inputs designed to stress the pattern engine, and monitor execution time. Regexes that show spikes in processing time can be flagged automatically before they reach production.
Instrument regex usage with observability hooks: Instrument all regex evaluations (especially in user input paths) with logging and performance telemetry. Measure match duration, input size, and success/failure. This allows real-time detection of anomalous regex behavior and early signs of ReDoS attempts under normal traffic.
Adopt static analysis tools that detect vulnerable regex constructs: Go beyond basic linters and use static analyzers that deeply understand regex syntax trees and can identify dangerous constructs like nested quantifiers or unbounded alternations. Tools such as safe-regex (for JavaScript) or regexploit (for multiple languages) can automate large-scale auditing.
Decompose large regexes into smaller deterministic sub-patterns: Break complex regex patterns into simpler, linear-time sub-patterns when feasible. Apply them sequentially or in a decision tree rather than relying on a single monolithic pattern. This reduces the risk of catastrophic backtracking and improves maintainability.
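The fuzz-profiling tip above can be sketched as a small Python harness. The alphabet, trial count, and time budget here are arbitrary assumptions for illustration:

```python
import random
import re
import time

def fuzz_regex(compiled, alphabet='ABCX', trials=200, max_len=25, budget=0.05):
    """Throw random short strings at a compiled regex and collect any input
    whose evaluation exceeds the per-call time budget (in seconds)."""
    random.seed(0)                      # reproducible runs in CI
    slow_inputs = []
    for _ in range(trials):
        s = ''.join(random.choice(alphabet)
                    for _ in range(random.randint(1, max_len)))
        start = time.perf_counter()
        compiled.search(s)
        if time.perf_counter() - start > budget:
            slow_inputs.append(s)
    return slow_inputs

# A pattern with no ambiguous quantifiers should produce no slow cases:
print(fuzz_regex(re.compile(r'A[BC]+D')))   # []
```

In a CI pipeline, a non-empty result list would fail the build and flag the offending pattern for review before it reaches production.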
A simple example of a ReDoS vulnerability can be seen in the regex pattern /A(B|C+)+D/. At first glance, this pattern appears harmless: it matches strings that start with “A”, followed by one or more sequences of either “B” or one or more “C”s, and ends with “D”. Valid examples include “ABBD” and “ACCCCCD”, which are matched quickly.
The problem arises when an attacker submits a near-matching string that fails at the end, such as “ACCCCCCCCCCCCCCCCCCCCCCCCCCCCX”. This input causes catastrophic backtracking. The regex engine explores all possible combinations of grouping and quantifier applications for the long sequence of “C”s, trying every way to match the inner (B|C+) group repeatedly. Since the string ends with an unexpected “X” instead of the expected “D”, every attempt eventually fails after exhausting all possibilities.
This exponential explosion in processing time can be demonstrated with input strings of increasing length. For example, with 5 “C”s followed by an “X”, the engine takes around 100 steps to determine a match failure. With 14 “C”s, the engine may perform more than 65,000 steps (depending on exact implementation). The input size grows linearly, but the processing time grows exponentially.
These excessive steps consume CPU resources, slowing down or freezing the application. Attackers can exploit this by sending multiple such inputs concurrently, leading to a denial-of-service condition. This example highlights the risk of using regex patterns that combine nested quantifiers with alternation, particularly in user-facing code.
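The growth described above is easy to reproduce. This sketch times Python’s backtracking re engine against the near-matching inputs from the example; exact timings will vary by machine:

```python
import re
import time

# The vulnerable pattern from the example above.
PATTERN = re.compile(r'A(B|C+)+D')

def time_failure(num_cs: int) -> float:
    """Time how long the engine takes to reject 'A' + 'C'*n + 'X'."""
    text = 'A' + 'C' * num_cs + 'X'    # near-match: fails on the final char
    start = time.perf_counter()
    assert PATTERN.search(text) is None
    return time.perf_counter() - start

for n in (5, 10, 15, 20):
    print(n, round(time_failure(n), 4))
# Input length grows linearly, but rejection time roughly doubles with
# each additional 'C' -- exponential growth in the number of paths tried.
```

Pushing the count much past 25 on a typical machine makes a single call take seconds to minutes, which is exactly the lever an attacker pulls.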
Here are some of the ways to prevent ReDoS attacks when using a regex engine.
1. Apply WAF Rules and API-Layer Filtering
Web application firewalls (WAFs) can stop malicious traffic before it impacts backend services. By defining WAF rules that filter or rate-limit suspicious patterns or large payloads likely to trigger regex backtracking, organizations can block many forms of ReDoS at the earliest point. WAF solutions commonly include built-in protections or allow for custom signatures matching inputs that trigger suspicious regex patterns.
API gateways and service meshes can also validate request payloads at the application layer, rejecting malformed or extremely large inputs before they reach internal logic. By combining WAF rules with API pre-validation, it’s possible to intercept problematic requests and secure vulnerable endpoints, reducing both the attack surface and response latency.
2. Use ReDoS-Safe Regex Libraries Whenever Possible
Libraries like RE2, Hyperscan, or PCRE2 in safe mode are designed to avoid catastrophic backtracking by implementing deterministic evaluation or limiting supported constructs. When possible, integrate these libraries into the validation and parsing workflow, replacing legacy or unsafe regex engines. Use language libraries and frameworks that natively adopt these safe engines for common string operations.
Switching to ReDoS-safe libraries often involves migration work, adjusting patterns to remove unsupported features and verifying the accuracy of matches. The investment pays off by eliminating an entire class of attack vectors, making day-to-day operations more predictable and secure, especially under heavy load or during targeted attacks.
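A sketch of such a swap in Python, assuming the third-party google-re2 package (which exposes a largely re-compatible API backed by the linear-time RE2 engine); the fallback import is only so the snippet runs without it installed:

```python
# Prefer a linear-time engine when available. Note that RE2 rejects
# backtracking-only constructs such as backreferences, so some legacy
# patterns may need adjusting during migration.
try:
    import re2 as regex_engine      # third-party: pip install google-re2
except ImportError:
    import re as regex_engine       # illustration-only fallback

pattern = regex_engine.compile(r'A[BC]+D')
print(pattern.search('xxABCBD') is not None)   # True with either engine
```

The migration cost is paid once per pattern; after that, no input can trigger super-linear matching time in the RE2-backed paths.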
3. Avoid or Rewrite Patterns with Nested or Ambiguous Quantifiers
Patterns with nested quantifiers (e.g., (a+)+) or ambiguous branching introduce significant risk of exponential backtracking. Audit all regular expressions for these constructs, especially those used with user input. Refactor where possible: replace ambiguous groups or nested quantifiers with single, non-overlapping alternatives. For example, replace (.+)+ with a direct quantifier when feasible.
Language-specific regex linters and security-focused code review tools can automate detection of these problematic patterns. By enforcing a policy to ban or rewrite ambiguous regexes, development teams systematically eliminate vulnerable code paths before deployment, reducing reliance on runtime mitigations.
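As a concrete sketch of such a refactor: the vulnerable pattern from the earlier example, A(B|C+)+D, accepts exactly an “A”, then a non-empty mix of “B”s and “C”s, then “D”, so it can be rewritten with an unambiguous character class:

```python
import re
import time

vulnerable = re.compile(r'A(B|C+)+D')
safe = re.compile(r'A[BC]+D')      # same language, no nested quantifiers

# Both patterns agree on ordinary inputs:
for text in ('ABBD', 'ACCCCCD', 'ABCBCD', 'AD', 'AX'):
    assert bool(vulnerable.fullmatch(text)) == bool(safe.fullmatch(text))

# On the malicious near-match, the rewritten pattern rejects in linear time
# (the same input would take the vulnerable pattern an astronomical number
# of backtracking steps):
attack = 'A' + 'C' * 40 + 'X'
start = time.perf_counter()
assert safe.fullmatch(attack) is None
print('rejected in', round(time.perf_counter() - start, 6), 'seconds')
```

The character class removes the ambiguity: each “B” or “C” can be consumed in exactly one way, so there is nothing for the engine to backtrack over.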
4. Limit Input Size and Apply Pre-Validation
Restricting the maximum input length before processing by a regex is a straightforward defense. Attackers often rely on extremely long payloads or edge-case strings to trigger catastrophic engine behavior. Set input length limits in both frontend and backend logic. For forms and API endpoints, return errors for data that exceeds reasonable boundaries.
Pre-validation steps such as checking for forbidden substrings or structure before invoking regex also reduce risk. For example, scan and reject obviously malformed email addresses or escape characters that might trigger ambiguities. This layered validation both accelerates response time and diminishes attack viability.
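A minimal sketch of this layered validation in Python; the length cap and the deliberately simplistic email pattern are illustrative assumptions, not production-grade rules:

```python
import re

MAX_LEN = 254                      # assumed cap for an email-like field
# Deliberately simple pattern; real email validation is more involved.
EMAIL_RE = re.compile(r'[^@\s]+@[^@\s]+\.[A-Za-z]{2,}')

def validate_email(value: str) -> bool:
    if len(value) > MAX_LEN:       # 1. reject oversized input before any regex
        return False
    if value.count('@') != 1:      # 2. cheap structural precheck
        return False
    return EMAIL_RE.fullmatch(value) is not None   # 3. regex runs last

print(validate_email('user@example.com'))           # True
print(validate_email('a' * 1000 + '@example.com'))  # False (too long)
print(validate_email('user@@example.com'))          # False (bad structure)
```

The cheap checks run first, so the regex only ever sees short, roughly well-formed strings, shrinking the space of inputs that could trigger pathological matching.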
5. Implement Execution Timeouts and Resource Limits
Execution timeouts can cap how long any regex operation runs on user input. Most web platforms and language runtimes let developers set per-request or per-operation time budgets. Configure the server or the applicable language runtime to terminate execution if a regex runs beyond a set threshold, logging the event for investigation.
Resource limits are also vital; isolate regex processing to prevent a single bad request from destabilizing the host. Use thread pools, process isolation, or sandboxing features so that abused endpoints cannot exhaust memory or processor time for all users. Bounding resource usage in this way contains even unanticipated attacks.
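Python’s built-in re module has no timeout parameter, so one portable sketch is to run the match in a child process and kill it when the budget expires. This is a hypothetical helper (process-per-match is heavyweight; a real service would pool workers):

```python
import multiprocessing
import re

def _search(pattern: str, text: str, queue) -> None:
    queue.put(re.search(pattern, text) is not None)

def search_with_timeout(pattern: str, text: str, seconds: float = 1.0) -> bool:
    """Run a regex in a separate process; terminate it if it overruns."""
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=_search, args=(pattern, text, queue))
    worker.start()
    worker.join(seconds)
    if worker.is_alive():          # still matching: assume runaway backtracking
        worker.terminate()
        worker.join()
        raise TimeoutError('regex exceeded its execution budget')
    return queue.get()
```

A quick exercise: search_with_timeout(r'abc', 'xxabcxx') returns promptly, while the vulnerable pattern from earlier against 'A' + 'C' * 64 + 'X' is killed at the budget and raises TimeoutError, which the caller can log and convert into a fast error response.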
6. Implement Circuit Breakers for Backend Services
A circuit breaker pattern detects repeated failures and temporarily disables or limits access to the affected endpoint or service, preventing wider outages. When a regex operation triggers slowdowns, the circuit breaker responds by serving error messages promptly, preventing resource starvation for other requests and giving operators space to respond.
Combine circuit breakers with alerting and monitoring for sustained spikes in regex execution time or error rates. These patterns are especially useful in distributed microservice environments, limiting the blast radius of a ReDoS attack while enabling targeted remediation or rollback of unsafe changes.
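A minimal illustration of the pattern: the breaker below opens after a threshold of consecutive failures and rejects calls instantly until a cooldown elapses. The threshold and cooldown values are arbitrary assumptions:

```python
import time

class CircuitOpenError(RuntimeError):
    """Raised when the breaker rejects a call without running it."""

class CircuitBreaker:
    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise CircuitOpenError('circuit open: failing fast')
            self.opened_at = None       # half-open: let one trial call through
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0               # any success resets the failure count
        return result
```

Wrapping a regex-heavy handler in call() (with timeouts from the previous section counted as failures) means that once slow evaluations pile up, subsequent requests get an immediate error instead of queuing behind a stuck regex.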
ReDoS attacks exploit inefficient pattern matching to exhaust CPU resources, often targeting application-layer input validation and API endpoints. Because these attacks frequently use syntactically valid requests, defending against them requires visibility into application behavior, request structure, and abnormal execution patterns rather than simple signature-based blocking.
Radware Cloud WAF Service provides application-layer protection that filters malicious or excessively large inputs before they reach vulnerable regex processing logic. By applying positive security models, input validation controls, and adaptive rate limiting, Cloud WAF reduces the likelihood that crafted payloads trigger catastrophic backtracking in back-end services.
For API-driven environments, Radware Cloud Application Protection Service extends these capabilities with API schema enforcement, behavioral analysis, and anomaly detection. This ensures that request structures conform to expected patterns and helps prevent abuse of endpoints that rely heavily on regex-based validation.
Where ReDoS attempts escalate into resource-exhaustion attacks, Radware DefensePro and Cloud DDoS Protection Service provide multi-layer mitigation to maintain availability during volumetric or application-layer floods. Behavioral detection mechanisms identify abnormal request rates and CPU-bound attack patterns, enabling automated mitigation before services degrade.
Together, these controls help organizations combine secure coding practices with runtime enforcement, ensuring that inefficient regex patterns cannot be weaponized into application-layer denial-of-service attacks.