I’m trying to limit the rate of requests to my site using Nginx’s limit_req (I know limit_conn is the directive for concurrent connections; here I’m after request rate). My goal is to defend the website against some users with a heavy hand on HTTrack, some aggressive bots and a couple of script kiddies. Nothing too hard, really.
I understood the whole leaky bucket analogy, but what I’m not so sure about is how deep the bucket should be.
I understand that a typical browser opens fewer than 10 concurrent connections to each host. So approx. 15 requests per second should cover that, erring on the side of caution:
limit_req_zone $binary_remote_addr zone=myzone:10m rate=15r/s;
Since these requests arrive more or less at once, I allow them to burst:
limit_req zone=myzone burst=30 nodelay;
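For context, here is roughly how the two directives sit in my config (the zone name is mine, the server name is a placeholder):

```nginx
http {
    # 10 MB of shared memory keyed by client IP; per the nginx docs,
    # 1 MB holds state for roughly 16k addresses, so ~160k IPs here.
    limit_req_zone $binary_remote_addr zone=myzone:10m rate=15r/s;

    server {
        server_name example.com;  # placeholder

        location / {
            # Up to 30 requests above the 15 r/s rate are served
            # immediately (nodelay); anything beyond that is rejected
            # with 503 by default.
            limit_req zone=myzone burst=30 nodelay;
        }
    }
}
```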
What do you think? Are these good values, or are they too limiting or too broad?