Terminology for Rate Limiting

Brezal Campio
For some function which does the following:
  - begin with the request count at 0
  - check the current request count
  - if the current count is less than the allowed number of requests
    - spin up a new process
    - wait for some given time and reset the count to 0

This is a naive approach to rate limiting, but is there common terminology for something like this?
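
Roughly, in Erlang, I mean something like this sketch (the module and function names are made up, purely for illustration):

-module(fixed_window).
-export([start/2, request/1]).

%% Start a limiter allowing MaxRequests per WindowMs milliseconds.
start(MaxRequests, WindowMs) ->
    spawn(fun() ->
              erlang:send_after(WindowMs, self(), reset),
              loop(0, MaxRequests, WindowMs)
          end).

%% Ask the limiter whether this request may proceed.
request(Limiter) ->
    Ref = make_ref(),
    Limiter ! {request, self(), Ref},
    receive
        {Ref, Answer} -> Answer
    after 5000 -> {error, timeout}
    end.

loop(Count, Max, WindowMs) ->
    receive
        reset ->
            %% Window elapsed: reset the count and schedule the next reset.
            erlang:send_after(WindowMs, self(), reset),
            loop(0, Max, WindowMs);
        {request, From, Ref} when Count < Max ->
            From ! {Ref, ok},
            loop(Count + 1, Max, WindowMs);
        {request, From, Ref} ->
            From ! {Ref, {error, rate_limited}},
            loop(Count, Max, WindowMs)
    end.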

I am also interested in less naive approaches to rate limiting requests, if anyone is willing to point me in the right direction.

Ciao!

Re: Terminology for Rate Limiting

pablo platt-3
Maybe the leaky bucket algorithm is relevant?
https://en.wikipedia.org/wiki/Leaky_bucket
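
Roughly the idea, as an illustrative sketch (not a real library API): incoming requests are queued into a bucket of fixed capacity and "leak out" at a constant rate; when the bucket is full, new requests are rejected.

-module(leaky_bucket).
-export([start/2, submit/2]).

%% Capacity: max queued requests; LeakIntervalMs: one request leaks per interval.
start(Capacity, LeakIntervalMs) ->
    spawn(fun() ->
              erlang:send_after(LeakIntervalMs, self(), leak),
              loop(queue:new(), 0, Capacity, LeakIntervalMs)
          end).

%% Queue a fun to be run when it leaks out; rejected if the bucket is full.
submit(Bucket, Fun) ->
    Ref = make_ref(),
    Bucket ! {submit, self(), Ref, Fun},
    receive
        {Ref, Reply} -> Reply
    after 5000 -> {error, timeout}
    end.

loop(Q, Len, Capacity, Interval) ->
    receive
        {submit, From, Ref, _Fun} when Len >= Capacity ->
            From ! {Ref, {error, overflow}},
            loop(Q, Len, Capacity, Interval);
        {submit, From, Ref, Fun} ->
            From ! {Ref, queued},
            loop(queue:in(Fun, Q), Len + 1, Capacity, Interval);
        leak ->
            erlang:send_after(Interval, self(), leak),
            case queue:out(Q) of
                {{value, Fun}, Q2} ->
                    Fun(),                  %% run one queued request
                    loop(Q2, Len - 1, Capacity, Interval);
                {empty, _} ->
                    loop(Q, 0, Capacity, Interval)
            end
    end.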

Re: Terminology for Rate Limiting

Dmitry Kolesnikov-2
In reply to this post by Brezal Campio
Hello,

I believe request (bandwidth) throttling is the applicable terminology.
https://en.wikipedia.org/wiki/Throttling_process_(computing)

Many folks are using exponential backoff as a less naive solution:
https://en.wikipedia.org/wiki/Exponential_backoff
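
For instance, a tiny client-side sketch (illustrative only, names are arbitrary): retry the call, doubling the delay after each failed attempt.

-module(backoff_retry).
-export([call/1]).

%% Retry Fun with exponentially growing delays.
call(Fun) ->
    call(Fun, 100, 5).                      %% start at 100 ms, at most 5 attempts

call(_Fun, _Delay, 0) ->
    {error, gave_up};
call(Fun, Delay, AttemptsLeft) ->
    case Fun() of
        {ok, Result} ->
            {ok, Result};
        {error, _Reason} ->
            timer:sleep(Delay),             %% back off before the next attempt
            call(Fun, Delay * 2, AttemptsLeft - 1)
    end.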

The token bucket is another approach:
https://en.m.wikipedia.org/wiki/Token_bucket
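
A minimal sketch of the idea (the module and functions here are made up for illustration, not a real API): tokens accumulate at a fixed rate up to a burst capacity, and each accepted request consumes one token.

-module(token_bucket).
-export([start/2, acquire/1]).

%% Rate: tokens added per second; Burst: maximum number of stored tokens.
start(Rate, Burst) ->
    spawn(fun() ->
              loop(Burst, Rate, Burst, erlang:monotonic_time(millisecond))
          end).

acquire(Bucket) ->
    Ref = make_ref(),
    Bucket ! {acquire, self(), Ref},
    receive
        {Ref, Answer} -> Answer
    after 5000 -> {error, timeout}
    end.

loop(Tokens, Rate, Burst, LastSeen) ->
    receive
        {acquire, From, Ref} ->
            Now = erlang:monotonic_time(millisecond),
            %% Refill tokens for the time elapsed since the last request.
            Refilled = min(Burst, Tokens + (Now - LastSeen) * Rate / 1000),
            case Refilled >= 1 of
                true ->
                    From ! {Ref, ok},
                    loop(Refilled - 1, Rate, Burst, Now);
                false ->
                    From ! {Ref, {error, rejected}},
                    loop(Refilled, Rate, Burst, Now)
            end
    end.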

Best Regards,
Dmitry


Re: Terminology for Rate Limiting

Danil Zagoskin-2
In reply to this post by Brezal Campio
Hi!

Some time ago I implemented a rate-limiter microservice for clustered services.
It's undocumented (no time for that), but maybe you will find it useful.

The HTTP API for it (just in case): https://github.com/stolen/erateserver

The limiting logic is located in the erater_counter module.
Counter behavior is configured with two options: rps and burst. Burst is how many requests are allowed just after start or after a long period of inactivity.
There is also a ttl option, which is used to terminate unneeded counter processes.

Usage examples:
63> f(C), {ok, C} = erater_counter:start_link(none, none, [{mode, adhoc}, {rps, 0.3}, {burst, 3}, {ttl, 60000}]).
{ok,<0.130.0>}
64> erater_counter:acquire(C, 0).
{ok,0}
65> erater_counter:acquire(C, 0).
{ok,0}
66> erater_counter:acquire(C, 0).
{ok,0}
67> erater_counter:acquire(C, 0).
{error,overflow}
68> erater_counter:acquire(C, 0).
{error,overflow}
69> erater_counter:acquire(C, 0).
{error,overflow}
70> erater_counter:acquire(C, 0).
{ok,0}
71> erater_counter:acquire(C, 0).
{error,overflow}

The second argument of acquire is used to register a future hit, in case the client wants to wait a little instead of retrying the request later:
59> erater_counter:acquire(C, 5000).
{ok,0}
60> erater_counter:acquire(C, 5000).
{ok,1311}
61> erater_counter:acquire(C, 5000).
{error,overflow}
62> erater_counter:acquire(C, 5000).
{ok,2059}

--
Danil Zagoskin | [hidden email]

_______________________________________________
erlang-questions mailing list
[hidden email]
http://erlang.org/mailman/listinfo/erlang-questions