19

Say we have a load balancer that also does rate limiting. Rate limiting seems pretty straightforward for logged-in users - just look at the JWT and maybe use an in-memory data store to count how many requests that user made in the last 10 seconds.
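As a sketch, the per-user case could be a sliding-window counter keyed by the user ID taken from the JWT. The window and limit values below are arbitrary illustration values, not a recommendation:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100  # hypothetical per-user limit for the window

# user_id (e.g. the JWT's subject claim) -> timestamps of recent requests
_recent = defaultdict(deque)

def allow_request(user_id, now=None):
    """Sliding-window check: allow the request only if this user has made
    fewer than MAX_REQUESTS requests in the last WINDOW_SECONDS."""
    now = time.monotonic() if now is None else now
    window = _recent[user_id]
    # Evict timestamps that have fallen out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # over the limit: reject (e.g. with HTTP 429)
    window.append(now)
    return True
```

In a real deployment behind multiple load balancer instances the counts would live in a shared store rather than process memory, but the windowing logic is the same.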

However, what about non-logged-in (unauthenticated) users? We don't know for sure who they are or exactly where the request is coming from, so we can't easily rate-limit those requests... or can we?

Are there built-in solutions to this on AWS and other hosting platforms, or is it something we need to worry about ourselves? It seems we need to handle the rate-limiting logic for logged-in users manually, but what about non-logged-in users?

My guess (and hope) is that there is some built-in mechanism for rate-limiting unauthenticated requests on hosting platforms; please inform us all.

  • [This page](https://www.nginx.com/blog/rate-limiting-nginx/) never mentions logged-in users. In fact, the techniques described there are cited as a mitigation for brute-force attacks on passwords, which implies users that are not logged in. – Robert Harvey Dec 24 '18 at 04:50
  • Why do you want to use rate limiting? Is it to counter denial-of-service attacks, to prevent users from exceeding their payment plan, something else? The use case affects the methods you can effectively use. – Bart van Ingen Schenau Dec 24 '18 at 08:15
  • This question may be _more_ suited for https://security.stackexchange.com/, though I'm not saying it's off-topic – Peeyush Kushwaha Dec 24 '18 at 08:37
  • @BartvanIngenSchenau all of the above? –  Dec 24 '18 at 09:55
  • Why should you have two different rate limits? Are you selling any sort of "plans" with different constraints/features? – Laiv Dec 24 '18 at 18:55
  • @Laiv because a DoS attack could come from an unauthenticated user? – Alexander Mills Dec 29 '18 at 01:07
  • And? Why should the solution be different for authenticated ones? In the end, both are IPs. Rate limits work by blocking remote addresses, not "users". – Laiv Dec 29 '18 at 09:03
  • @Laiv that's where you're wrong hombre. IP addresses do not uniquely identify users, whereas username/passwords or tokens *do* uniquely identify users. You don't want to rate limit the wrong person b/c that would make for a bad experience. –  Dec 30 '18 at 04:08
  • Rate limits are intended to protect your system from being taken down. Whether the request is authorized or not is irrelevant. The only thing you have for blocking requests is the remote address. Otherwise, if you let the requests hit the system just to check authorization, it could be too late. – Laiv Dec 30 '18 at 09:54
  • Plus, DoS is relatively easy to deal with. DDoS is not. Against distributed attacks, you cannot wait to differentiate between authorized and unauthorized requests. By the time your system has checked 1 request, 1000 more are killing you "softly". – Laiv Dec 30 '18 at 09:59
  • Exactly, by dealing with unauthenticated requests we are handling DoS and DDoS. –  Dec 31 '18 at 03:10
  • To me it seems like the OP is a good question – Alexander Mills Jan 22 '19 at 20:15

4 Answers

14

However, what about non-logged-in (unauthenticated) users? We don't know for sure who they are or exactly where the request is coming from, so we can't easily rate-limit those requests... or can we?

There are a couple of approaches you can take. One is to use a reasonably reliable origin identifier, such as the IP address. You can rate-limit by IP address, so that attacks from a single compromised machine will be contained. This is a pretty simple approach, but it has a drawback: some large network providers hide a very large number of users behind a NAT with only a few outgoing IP addresses.
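A minimal sketch of the first approach: a token-bucket limiter keyed by client IP. The rate and capacity values here are arbitrary illustration values:

```python
import time

class IpTokenBucket:
    """Token-bucket rate limiter keyed by client IP address.
    The default rate and capacity are arbitrary illustration values."""

    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self._state = {}          # ip -> (tokens, last_seen)

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        tokens, last = self._state.get(ip, (self.capacity, now))
        # Refill in proportion to the time elapsed since the last request.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens < 1.0:
            self._state[ip] = (tokens, now)
            return False  # bucket empty: reject the request
        self._state[ip] = (tokens - 1.0, now)
        return True
```

Each IP gets its own bucket, so a single flooding machine exhausts only its own tokens while other clients are unaffected - subject to the NAT caveat above.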

Another approach is to require a proof of work for any unauthenticated request. Your server issues a challenge code, and any client making an unauthenticated request (e.g. a login request) has to compute a resource-intensive response before the request is processed. A common implementation of this idea requires the client to calculate a partial hash inversion.
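A toy illustration of that idea, assuming SHA-256 and a made-up difficulty of 16 leading zero bits: the client brute-forces a nonce (the expensive part), while the server verifies it with a single hash (the cheap part):

```python
import hashlib
import os

DIFFICULTY_BITS = 16  # arbitrary; higher means more client-side work

def issue_challenge():
    """Server side: hand the client a random challenge string."""
    return os.urandom(16).hex()

def leading_zero_bits(digest):
    """Count the leading zero bits of a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def verify(challenge, nonce):
    """Server side: a single cheap hash checks the client's work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

def solve(challenge):
    """Client side: brute-force a nonce (the expensive step)."""
    nonce = 0
    while not verify(challenge, nonce):
        nonce += 1
    return nonce
```

The asymmetry is the point: verification costs one hash, while solving costs on average 2^DIFFICULTY_BITS hashes, so flooding the server with valid requests becomes expensive for the attacker.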

Lie Ryan
  • I fail to see how "proof of work" can prevent DoS attacks. The client could ignore the challenge and keep sending requests regardless. Is there a third process handling the challenge? – Laiv Dec 24 '18 at 19:07
  • @Laiv POW can be reliably issued and checked in a distributed fashion, without connecting to a central database, which is where most other rate-limiting schemes fail. It increases the cost of an attack for the attacker: scaling out your defence and raising the difficulty is cheaper for you and legitimate users than scaling out the attack is for the attacker. It creates an economic disincentive to attack the system, as it also effectively excludes low-powered devices (e.g. compromised printers, IoT devices, routers) from being used as an effective attack platform. – Lie Ryan Dec 25 '18 at 00:29
6

To know whether a request comes from an authenticated user or an anonymous one, you necessarily have to process the request (albeit quickly). This still means your application is vulnerable to a denial-of-service attack.

You should be checking overall requests per second, and if a certain number is exceeded, simply ignore the rest. That number should be high enough not to cause problems during normal operation, but low enough to protect against such attacks.
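As a sketch, such a global cap could be a fixed-window counter in front of the application. The threshold below is an arbitrary illustration value:

```python
import time

class GlobalLimiter:
    """Fixed-window global cap: once max_rps requests arrive within the
    current one-second window, everything else is dropped.
    The default threshold is an arbitrary illustration value."""

    def __init__(self, max_rps=1000):
        self.max_rps = max_rps
        self._window_start = 0.0
        self._count = 0

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._window_start >= 1.0:
            self._window_start, self._count = now, 0  # start a new window
        if self._count >= self.max_rps:
            return False  # shed the excess load
        self._count += 1
        return True
```

Because this check involves no per-user lookup, it stays cheap even while the system is being flooded.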

Also, as a general rule, you should probably not assume that an attack would not come from an authenticated user, at least where DoS attacks are concerned. A weak password would easily allow someone to assume the identity of an existing user. And even supposing you could do such a check, individual (human) users should never need to perform requests at such rates, notwithstanding that your aggregate traffic is high simply because you have many users.

Neil
  • I suppose you could use IP addresses and set a high rate limit for each one. I guess a well-orchestrated DoS attack could use thousands of IP addresses, maybe more? I am aware that the same IP address could be used by multiple different clients, but I would say there is a strong likelihood that it's the same user, right? – Alexander Mills Dec 29 '18 at 01:10
  • @AlexanderMills Well, suppose you decided the algorithm would check for repeated requests from the same IP address. Even with thousands of attacking addresses, each would repeat many times, but your server logs the first request from a given IP address and the flooding begins: the server is already backlogged with requests and cannot even process enough of them to reach the second repetition from the same IP (which could still be a legitimate request, by the way). It would only protect against DoS attacks where the same IP is used. Better to use both, if anything. :P – Neil Jan 29 '19 at 06:33
1

One of Cloudflare's main offerings is protection against Denial of Service attacks by providing an intelligent proxy for your API/web server. The basic service is free; they make money off of other related services like CDN services and load balancing. They also provide more sophisticated and controllable Rate Limiting services, currently at the rate of US $.05 per 10k good requests (no charge for rejected requests), but you have to upgrade to paid plans to get more than one global rule.

You can use Cloudflare's service with AWS or any other platform so long as you have control over your domain's name servers (as in, you can change the name servers registered for your domain).

You can provide separate rate limits for anonymous and logged-in users by directing logged-in users to different URLs. For example, you might mirror your anonymously available URL paths under a '/u' prefix to create endpoints that always require authentication and carry a different rate limit.

Note that Cloudflare's rate limiting (like all commercial rate limiting for anonymous users I am aware of) defines a client by its IP address. This can cause problems for people using commercial VPNs or Tor, since they tend to hide a large number of clients behind 1 IP address for added privacy.

Old Pro
1

In AWS, there are the related services AWS Shield and AWS WAF. They are primarily intended for preventing DDoS attacks, but they also support rate limiting based on IP addresses.

In WAF, the concept is called Rate-Based Rules. Preventing brute-force based login attempts is mentioned as a use case in the original announcement:

This new rule type protects customer websites and APIs from threats such as web-layer DDoS attacks, brute force login attempts and bad bots. Rate Based Rules are automatically triggered when web requests from a client exceed a certain configurable threshold.
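In AWS WAFv2 terms, such a rate-based rule looks roughly like the fragment below. The rule name, priority, and the limit of 2000 requests per five-minute period are placeholder values:

```json
{
  "Name": "ip-rate-limit",
  "Priority": 0,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 2000,
      "AggregateKeyType": "IP"
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "ip-rate-limit"
  }
}
```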

Other cloud providers should have similar offerings. Here is a tabular comparison: Google Cloud Armor vs. AWS WAF vs. Cloudflare WAF.

As you are already using Nginx, using the built-in IP-based rate limiting might also be a simple option. The module is called ngx_http_limit_req_module. This blog post describes how it can be used.
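For illustration, a minimal nginx configuration using that module might look like this. The zone name, rate, burst, and location path are placeholder values:

```nginx
http {
    # One shared-memory zone keyed by client IP, allowing 10 requests/second.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    server {
        location /api/ {
            # Queue up to 20 excess requests, reject the rest immediately.
            limit_req zone=per_ip burst=20 nodelay;
        }
    }
}
```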

Please note that IP based rate limiting is a relatively simple concept but it is not perfect:

  • IP addresses might be shared (people working in the same office) leading to false positives
  • An attacker might have easy access to multiple IP addresses and use them to bypass the limits (distributed brute-force login attack)

In general, IP addresses are a good start. But if you need stronger protection, your best choices will depend on your threat model (which kinds of attacks you want to prevent).

Philipp Claßen