MadeYouReset is a new HTTP/2 DDoS vector related to 2023’s Rapid Reset. By provoking the server to reset streams with malformed frames, an attacker keeps backend request processing alive while freeing the stream from HTTP/2 accounting. One TCP connection can drive an effectively unbounded number of in-flight requests through a proxy to origins. Patching affected implementations and enforcing request and control-frame rate limits are the fastest mitigations.
What’s new vs. Rapid Reset?
- Rapid Reset: the client floods RST_STREAM to cancel streams quickly, exhausting server resources via per-stream churn.
- MadeYouReset: the attacker sends invalid frames so that the server emits RST_STREAM (hence “made you reset”). The stream is considered closed for MAX_CONCURRENT_STREAMS accounting, but backend work continues. Result: a single connection can keep origins busy far beyond negotiated stream limits.
Imperva and Tel Aviv University disclosed the issue; CERT/CC described it as a mismatch between HTTP/2 stream accounting and actual backend activity. At the time of disclosure, no exploitation in the wild was reported. The attack also blends with normal traffic, so naïve RPS or connection counters may not light up.
Affected software and patch status
The underlying flaw (CVE-2025-8671) impacts a wide set of HTTP/2 stacks and proxies/projects, including AMPHP, Apache Tomcat, Eclipse Foundation projects, F5, Fastly, gRPC, Mozilla, Netty, SUSE Linux, Varnish Software, Wind River, and Zephyr Project.
Patches have been released by Apache Tomcat, F5, Fastly, and Varnish; others are investigating or preparing fixes. Some vendors track the issue under their own CVE IDs. (Mozilla noted Firefox itself is not affected; their hosted services are being patched.)
Root cause (why this works)
HTTP/2 allows either endpoint to cancel a stream at any time. Many implementations continue processing a request that has already been reset, because work has already been scheduled upstream (cache lookup, origin forward, buffering, etc.). Meanwhile the stream no longer counts against SETTINGS_MAX_CONCURRENT_STREAMS. The attacker exploits this gap:
- Send a request (HEADERS, often with END_STREAM).
- Send an invalid frame on that stream to trigger a stream error on the server (examples below).
- The server replies with RST_STREAM (stream closes in accounting) but keeps backend work alive.
- Repeat quickly to accumulate unbounded concurrent requests on one TCP connection.
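The accounting gap behind these steps can be illustrated with a minimal simulation (class and method names are hypothetical, not from any real HTTP/2 stack): the stream counter drops back down on every server-emitted RST_STREAM, while the backend work already scheduled for each request keeps running.

```python
# Illustrative sketch of the accounting gap MadeYouReset exploits.

MAX_CONCURRENT_STREAMS = 100

class Connection:
    def __init__(self):
        self.open_streams = 0        # what SETTINGS_MAX_CONCURRENT_STREAMS counts
        self.backend_in_flight = 0   # what the origin actually sees

    def open_stream(self):
        if self.open_streams >= MAX_CONCURRENT_STREAMS:
            raise RuntimeError("REFUSED_STREAM: limit reached")
        self.open_streams += 1
        self.backend_in_flight += 1  # request already forwarded upstream

    def server_reset(self):
        # Server emits RST_STREAM after an invalid frame: the stream leaves
        # HTTP/2 accounting, but the scheduled backend work is not cancelled.
        self.open_streams -= 1

conn = Connection()
for _ in range(1000):          # far beyond the 100-stream limit
    conn.open_stream()         # HEADERS (request starts)
    conn.server_reset()        # invalid frame -> server RST_STREAM

print(conn.open_streams)       # 0: the connection looks idle
print(conn.backend_in_flight)  # 1000: the origin is still doing the work
```

The stream limit is never hit because each reset frees a slot before the next request arrives, which is exactly why the negotiated limit stops being a meaningful bound.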
Frames that can trigger server-side resets:
- WINDOW_UPDATE with an increment of 0, or one that pushes the flow-control window past 2^31−1.
- HEADERS, DATA, CONTINUATION sent on a half-closed (remote) stream.
- PRIORITY with invalid length.
Frames that force a connection error are less useful to the attacker (they close the TCP session).
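As a wire-format illustration of the first trigger (constructing the frame only, not sending it), this sketch builds a WINDOW_UPDATE frame with an illegal increment of 0 under RFC 9113's framing layout; the helper name is ours, not from any library:

```python
import struct

# HTTP/2 frame layout (RFC 9113 §4.1): 24-bit length, 8-bit type,
# 8-bit flags, 1 reserved bit + 31-bit stream identifier, then payload.

def http2_frame(frame_type, flags, stream_id, payload):
    header = struct.pack(">I", len(payload))[1:]          # 3-byte length
    header += struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF)
    return header + payload

WINDOW_UPDATE = 0x8
# Increment of 0 is a PROTOCOL_ERROR; on an open stream, a compliant
# server answers with a stream error, i.e. RST_STREAM.
bad_frame = http2_frame(WINDOW_UPDATE, 0, stream_id=1,
                        payload=struct.pack(">I", 0))

print(bad_frame.hex())  # 9-byte header + 4-byte payload
```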
Attack walk-through
In normal operation, in-flight streams are bounded by MAX_CONCURRENT_STREAMS. Under attack, repeated invalid frames force the server to emit RST_STREAM: each stream drops out of accounting while its origin work continues, and the proxy/origin collapses under queued responses.
Why proxies and origins suffer
- Proxies do more per request than clients: decode, transform headers, cache lookups, origin selection, buffer management, re-encode, flow control.
- When the client-side stream disappears, the proxy still carries the cost of origin I/O and buffering, then discards results late.
- Concurrency is now bounded by backend capacity, not by HTTP/2 stream limits, so small attack rates can snowball into CPU, memory, and socket exhaustion.
What to monitor (practical signals)
Even if you don’t parse HTTP/2 frames on the wire, you can watch for these effects:
- Rising origin concurrency with flat client-side stream counts.
- High request starts vs. low response writes on the client side (lots of work thrown away late).
- Spike in server-emitted RST_STREAM, GOAWAY, or HTTP/2 stream errors in proxy logs/metrics.
- Upstream throughput and latency increase without a matching growth in successful responses.
- Per-connection anomalies: one IP keeping a single HTTP/2 connection busy for a long time while driving substantial upstream traffic.
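One of these signals can be reduced to a simple per-connection ratio, sketched below from generic log counters (field names and the threshold are illustrative, not from any product): flag connections whose server-emitted RST_STREAM count dwarfs their completed responses.

```python
from collections import defaultdict

RST_PER_RESPONSE_LIMIT = 5.0   # illustrative threshold; tune on your traffic

def suspicious_connections(events):
    """events: iterable of (conn_id, kind), kind in
    {'server_rst', 'response_written'}."""
    rst = defaultdict(int)
    resp = defaultdict(int)
    for conn_id, kind in events:
        if kind == "server_rst":
            rst[conn_id] += 1
        elif kind == "response_written":
            resp[conn_id] += 1
    return [c for c in rst
            if rst[c] / max(resp[c], 1) > RST_PER_RESPONSE_LIMIT]

events = ([("a", "server_rst")] * 50 + [("a", "response_written")] * 2
          + [("b", "server_rst")] * 2 + [("b", "response_written")] * 40)
print(suspicious_connections(events))  # only connection "a" is flagged
```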
From a FastNetMon perspective (flow/packet telemetry):
- Fewer new TCP connections than expected for the observed server CPU/throughput.
- Long-lived flows with steady upstream bytes to origins and comparatively low downstream payload to clients.
- Abrupt origin egress spikes without corresponding ingress from diverse clients (botnet distributing low request rates).
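The flow-level pattern above can be captured with a crude heuristic: long-lived flows pushing steady bytes toward origins while returning almost nothing downstream. All thresholds in this sketch are illustrative starting points, not FastNetMon defaults.

```python
def flags_flow(duration_s, upstream_bytes, downstream_bytes,
               min_duration_s=300, min_upstream=10_000_000, max_ratio=0.05):
    """True when a flow is long-lived, upstream-heavy, and returns
    almost no payload to the client."""
    if duration_s < min_duration_s or upstream_bytes < min_upstream:
        return False
    return downstream_bytes / upstream_bytes < max_ratio

print(flags_flow(600, 50_000_000, 1_000_000))   # 2% return ratio -> True
print(flags_flow(600, 50_000_000, 20_000_000))  # 40% return ratio -> False
```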
Mitigation checklist
1) Patch first
Upgrade affected servers, proxies, and libraries. Many vendors have shipped updates; others are in progress. Apply vendor-specific CVEs as they appear.
2) Drop on invalid frames (prefer connection errors)
Where policy allows, treat malformed WINDOW_UPDATE/PRIORITY/illegal frame sequences as connection-fatal instead of stream-fatal. This removes the accounting gap.
3) Request-rate and concurrency controls
- Enforce per-client request rate limits and per-connection concurrency caps.
- Be careful with shared forward proxies/VPNs to limit false positives; whitelist known egresses where needed.
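A per-client request limit like the one above is commonly built as a token bucket; here is a minimal sketch (rates are illustrative), where the bucket refills at `rate` requests per second up to a `burst` ceiling:

```python
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=100, burst=20)   # 100 req/s with a burst of 20
allowed = sum(bucket.allow() for _ in range(50))
print(allowed)  # roughly the burst size
```

For shared egresses (corporate proxies, VPN concentrators), key the bucket on more than the source IP where you can, or raise the limits for allow-listed addresses.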
4) Control-frame rate limiting
If your HTTP/2 stack supports it, enforce small sliding-window limits on control frames to blunt both Rapid Reset and MadeYouReset. Reasonable starting baselines reported by one vendor’s testing:
- PING: ~100/sec per connection
- SETTINGS: ~5/sec per connection
- PRIORITY: ~30/sec per connection
- RST_STREAM: ~5/sec per connection (apply to both directions)
Tune with production telemetry; browsers usually reconnect automatically if a connection is dropped by policy.
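The baselines above map directly onto a per-connection sliding-window counter; this sketch (names and structure are ours) returns False when a frame type exceeds its per-second budget, leaving the enforcement action to the caller:

```python
from collections import deque
import time

LIMITS = {"PING": 100, "SETTINGS": 5, "PRIORITY": 30, "RST_STREAM": 5}

class ControlFrameLimiter:
    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self.seen = {k: deque() for k in LIMITS}

    def on_frame(self, frame_type, now=None):
        """Return False when the connection should be torn down."""
        now = time.monotonic() if now is None else now
        q = self.seen[frame_type]
        while q and now - q[0] > self.window_s:   # expire old timestamps
            q.popleft()
        q.append(now)
        return len(q) <= LIMITS[frame_type]

lim = ControlFrameLimiter()
ok = all(lim.on_frame("RST_STREAM", now=i / 100) for i in range(6))
print(ok)  # 6 RST_STREAM in 60 ms exceeds the 5/sec baseline -> False
```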
5) WINDOW_UPDATE sanity
Aim to prevent abusive WINDOW_UPDATE patterns:
- Allow an initial burst up to MAX_CONCURRENT_STREAMS frames at connection start.
- Cap ongoing rate to ~6 WINDOW_UPDATE per DATA frame sent by the server.
- Require ≥128 bytes of DATA per WINDOW_UPDATE to avoid “data dribble”.
(If your platform exposes multipliers or counters for these, monitor and tune them under load testing.)
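The dribble rule above amounts to a budget: after an initial allowance, every further WINDOW_UPDATE must be "earned" by at least 128 bytes of DATA actually sent. A minimal sketch, with illustrative names and the constants from the list:

```python
MIN_DATA_PER_WINDOW_UPDATE = 128

class WindowUpdateBudget:
    def __init__(self, initial_burst):
        self.credit = initial_burst          # free WINDOW_UPDATEs at start
        self.data_bytes_sent = 0
        self.window_updates = 0

    def on_data_sent(self, n):
        self.data_bytes_sent += n

    def on_window_update(self):
        """Return False when the peer dribbles updates without consuming data."""
        self.window_updates += 1
        if self.window_updates <= self.credit:
            return True
        earned = self.data_bytes_sent // MIN_DATA_PER_WINDOW_UPDATE
        return self.window_updates - self.credit <= earned

budget = WindowUpdateBudget(initial_burst=100)
for _ in range(100):
    assert budget.on_window_update()         # initial burst is allowed
budget.on_data_sent(256)                     # earns exactly 2 more updates
print(budget.on_window_update(), budget.on_window_update(), budget.on_window_update())
```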
6) Backpressure and buffering hygiene
- Bound proxy buffering per stream and per connection.
- Prefer early cancel of upstream requests when the client-side stream goes away.
- Propagate client aborts to origin aggressively.
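In task-based servers, abort propagation is a matter of cancelling the upstream task as soon as the client-side stream disappears; a minimal asyncio sketch (all names illustrative):

```python
import asyncio

async def fetch_from_origin():
    try:
        await asyncio.sleep(10)          # stand-in for slow origin I/O
        return "response"
    except asyncio.CancelledError:
        # Release buffers / close the upstream socket here.
        raise

async def handle_stream():
    upstream = asyncio.create_task(fetch_from_origin())
    await asyncio.sleep(0.01)            # ... client stream is reset here ...
    upstream.cancel()                    # propagate the abort upstream
    try:
        await upstream
    except asyncio.CancelledError:
        return "cancelled"

print(asyncio.run(handle_stream()))  # cancelled
```

Without the `cancel()`, the origin work would run to completion and its result would be discarded, which is precisely the waste MadeYouReset multiplies.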
7) Detection → automation
- Alert on the signals listed above.
- Where possible, classify abusive client connections and auto-apply temporary blocks or stricter limits.
- For hybrid architectures, feed signals into upstream scrubbing/FastNetMon to trigger diversion or policy changes when L7 attacks escalate.
8) Consider HTTP/3 policies
Operators often fall back to HTTP/2 during HTTP/3 incidents. Track draft efforts like “Using HTTP/3 Stream Limits in HTTP/2” and prefer connection-fatal handling for invalid sequences where compliant.
Key takeaways
Visibility matters: correlate proxy/origin metrics with network telemetry to catch the pattern early and automate response.
MadeYouReset exploits a design/implementation gap: backend work continues after a stream reset while stream accounting drops to zero.
It can hide inside a single connection and look like normal traffic until the origin buckles.
Patching, tighter control-frame handling, and rate limits are the most effective immediate defenses.
About FastNetMon
FastNetMon is a leading solution for network security, offering advanced DDoS detection and mitigation. With real-time analytics and rapid response capabilities, FastNetMon helps organisations protect their infrastructure from evolving cyber threats. For more information, visit https://fastnetmon.com