A token-bucket-based rate limiter implemented using goroutines.
See the godoc example for `RateLimiter.WaitMaxDuration`:
```go
// 2 requests per second, refilling 2 tokens of capacity every second
r := ratelimit.NewRateLimiter(time.Second, 2, 2)
maxDuration := 500 * time.Millisecond
s := r.WaitMaxDuration(maxDuration)
fmt.Printf("r.Wait() success[%t]\n", s)
s = r.WaitMaxDuration(maxDuration)
fmt.Printf("r.Wait() success[%t]\n", s)
s = r.WaitMaxDuration(maxDuration)
fmt.Printf("r.Wait() success[%t]\n", s)
// Output:
// r.Wait() success[true]
// r.Wait() success[true]
// r.Wait() success[false]
```
See the godoc example for `RateLimiter.Wait`:
```go
// 2 requests per second, refilling 2 tokens of capacity every second
r := ratelimit.NewRateLimiter(time.Second, 2, 2)
start := time.Now()
r.Wait()
fmt.Printf("r.Wait() elapsed less than 500 ms [%t]\n", time.Since(start) < 500*time.Millisecond)
r.Wait()
fmt.Printf("r.Wait() elapsed less than 500 ms [%t]\n", time.Since(start) < 500*time.Millisecond)
r.Wait()
fmt.Printf("r.Wait() elapsed greater than 500 ms [%t]\n", time.Since(start) > 500*time.Millisecond)
// Output:
// r.Wait() elapsed less than 500 ms [true]
// r.Wait() elapsed less than 500 ms [true]
// r.Wait() elapsed greater than 500 ms [true]
```
See Contributing. TODO:
- godoc documentation
- write examples
- benchmark vs non-goroutine implementations like https://github.com/beefsack/go-rate
- configure rate limit policy
  - by default the rate limit interval starts only when the limiter is actually used (e.g. following Twitter API rate limit semantics)
  - more commonly, as in a traditional token bucket algorithm, tokens are replenished continuously over the interval (see Token Bucket)