Commit e919bad4 authored by Nitin Garg, committed by Facebook Github Bot

Add APIs in DynamicTokenBucket to return excess tokens and to borrow from the future.

Summary:
The common use case for token buckets, implementing resource usage throttling/smoothing, implies that callers need to implement their own backoff+retry mechanism. The simple exponential backoff approach is prone to starvation and over-throttling (an under-utilized resource). Many callers would be fine with simply being told at what point in the future their allocation can be met, which effectively gives them starvation-free FIFO scheduling. Added a consumeWithBorrow API to enable this behavior.

Added checks in the existing APIs to account for the possibility of the bucket's internal clock being 'in the future' with respect to the 'now' argument passed in the call.

Also added an API to return previously allocated tokens, effectively moving the bucket's clock back. This is largely for completeness; there isn't an immediate use case in mind for it.
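As an illustration (not part of this diff), a caller pacing work against a DynamicTokenBucket could use the new borrow-and-wait API roughly as follows. The function, work stub, rate, and burst size are hypothetical example values:

```cpp
#include <folly/TokenBucket.h>

// Hypothetical unit of work; stands in for whatever the caller is throttling.
void doOneUnitOfWork();

// Sketch of a pacing loop: each unit of work costs one token. Instead of
// retrying with exponential backoff when the bucket runs dry, the caller
// borrows from the future and sleeps until its reservation becomes valid,
// which yields starvation-free, FIFO-like smoothing.
void paceWork(folly::DynamicTokenBucket& bucket, int units) {
  constexpr double kRate = 100.0; // tokens per second (assumed)
  constexpr double kBurst = 10.0; // burst size in tokens (assumed)
  for (int i = 0; i < units; ++i) {
    // Sleeps inline until the borrowed token's start time; returns false
    // only when a single request exceeds the burst size.
    if (!bucket.consumeWithBorrowAndWait(1.0, kRate, kBurst)) {
      break;
    }
    doOneUnitOfWork();
  }
}
```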

Reviewed By: yfeldblum, usumeet

Differential Revision: D13166472

fbshipit-source-id: 4d6f93eedcc75eb07c1250026e57806c96974c96
parent 97d612a3
@@ -19,8 +19,10 @@
#include <algorithm>
#include <atomic>
#include <chrono>
#include <thread>
#include <folly/Likely.h>
#include <folly/Optional.h>
#include <folly/concurrency/CacheLocality.h>
namespace folly {
@@ -34,7 +36,14 @@ namespace folly {
* bytes per second and the bytes come in finite packets (bursts). A token
* bucket stores up to a fixed number of tokens (the burst size). Some number
* of tokens are removed when an event occurs. The tokens are replenished at a
* fixed rate.
* fixed rate. Failure to allocate tokens implies the resource is unavailable,
* and the caller needs to implement its own retry mechanism. For simple cases
* where the caller is okay with FIFO, starvation-free scheduling behavior,
* there are also APIs to 'borrow' from the future, effectively assigning the
* caller a start time at which it should proceed with using the resource. It
* is also possible to 'return' previously allocated tokens to make them
* available to other users. Returns in excess of burstSize are considered
* expired and will not be available to later callers.
*
* This implementation records the last time it was updated. This allows the
* token bucket to add tokens "just in time" when tokens are requested.
@@ -126,6 +135,10 @@ class BasicDynamicTokenBucket {
assert(rate > 0);
assert(burstSize > 0);
if (nowInSeconds <= zeroTime_.load()) {
return 0;
}
return consumeImpl(
rate, burstSize, nowInSeconds, [toConsume](double& tokens) {
if (tokens < toConsume) {
@@ -159,6 +172,10 @@ class BasicDynamicTokenBucket {
assert(rate > 0);
assert(burstSize > 0);
if (nowInSeconds <= zeroTime_.load()) {
return 0;
}
double consumed;
consumeImpl(
rate, burstSize, nowInSeconds, [&consumed, toConsume](double& tokens) {
@@ -174,6 +191,83 @@ class BasicDynamicTokenBucket {
return consumed;
}
/**
* Return extra tokens back to the bucket. This will move the zeroTime_
* value back based on the rate.
*
* Thread-safe.
*/
void returnTokens(double tokensToReturn, double rate) {
assert(rate > 0);
assert(tokensToReturn > 0);
returnTokensImpl(tokensToReturn, rate);
}
/**
* Like consumeOrDrain, but the call will always satisfy the requested count.
* It does so by borrowing tokens from the future (zeroTime_ will move
* forward) if the currently available count isn't sufficient.
*
* Returns a folly::Optional<double>. The optional won't be set if the request
* cannot be satisfied; the only such case is when it is larger than burstSize.
* The value of the optional is the time in seconds that the caller needs to
* wait before the reservation becomes valid. The caller could simply sleep
* for the returned duration to smooth out the allocation to match the rate
* limiter, or do some other computation in the meantime. In any case, regular
* consume or consumeOrDrain calls will fail to allocate any tokens until the
* future time is reached.
*
* Note: It is assumed the caller will not ask for a very large count, nor use
* it immediately (if not waiting inline), as that would defeat the burst
* prevention the limiter is meant to provide.
*
* Thread-safe.
*/
Optional<double> consumeWithBorrowNonBlocking(
double toConsume,
double rate,
double burstSize,
double nowInSeconds = defaultClockNow()) {
assert(rate > 0);
assert(burstSize > 0);
if (burstSize < toConsume) {
return folly::none;
}
while (toConsume > 0) {
double consumed =
consumeOrDrain(toConsume, rate, burstSize, nowInSeconds);
if (consumed > 0) {
toConsume -= consumed;
} else {
double zeroTimeNew = returnTokensImpl(-toConsume, rate);
double napTime = std::max(0.0, zeroTimeNew - nowInSeconds);
return napTime;
}
}
return 0;
}
/**
* Convenience wrapper around the non-blocking borrow that sleeps inline until
* the reservation is valid.
*/
bool consumeWithBorrowAndWait(
double toConsume,
double rate,
double burstSize,
double nowInSeconds = defaultClockNow()) {
auto res =
consumeWithBorrowNonBlocking(toConsume, rate, burstSize, nowInSeconds);
if (res.value_or(0) > 0) {
int64_t napUSec = res.value() * 1000000;
std::this_thread::sleep_for(std::chrono::microseconds(napUSec));
}
return res.has_value();
}
/**
* Returns the number of tokens currently available.
*
@@ -186,7 +280,11 @@ class BasicDynamicTokenBucket {
assert(rate > 0);
assert(burstSize > 0);
return std::min((nowInSeconds - this->zeroTime_) * rate, burstSize);
double zt = this->zeroTime_.load();
if (nowInSeconds <= zt) {
return 0;
}
return std::min((nowInSeconds - zt) * rate, burstSize);
}
private:
@@ -210,6 +308,21 @@ class BasicDynamicTokenBucket {
return true;
}
/**
* Adjust zeroTime_ based on rate and tokenCount, and return the new value of
* zeroTime_. Note: tokenCount can be negative to move the zeroTime_ value
* into the future.
*/
double returnTokensImpl(double tokenCount, double rate) {
auto zeroTimeOld = zeroTime_.load();
double zeroTimeNew;
do {
zeroTimeNew = zeroTimeOld - tokenCount / rate;
} while (
UNLIKELY(!zeroTime_.compare_exchange_weak(zeroTimeOld, zeroTimeNew)));
return zeroTimeNew;
}
alignas(hardware_destructive_interference_size) std::atomic<double> zeroTime_;
};
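For intuition (an editorial aside, not part of the diff): the bookkeeping above reduces to a single zeroTime value. Available tokens are min((now - zeroTime) * rate, burstSize), returning t tokens moves zeroTime back by t / rate, and borrowing t tokens moves it forward by t / rate. A minimal single-threaded model of that arithmetic, with no atomics or CAS loop:

```cpp
#include <algorithm>

// Toy model of the zeroTime_ bookkeeping; it mirrors the arithmetic only and
// is not the folly implementation.
struct ZeroTimeModel {
  double zeroTime = 0.0;

  // Tokens accrue at 'rate' since zeroTime, capped at burstSize.
  double available(double now, double rate, double burstSize) const {
    return now <= zeroTime ? 0.0 : std::min((now - zeroTime) * rate, burstSize);
  }

  // Positive tokenCount returns tokens (zeroTime moves back); negative
  // tokenCount borrows them (zeroTime moves forward).
  double returnTokens(double tokenCount, double rate) {
    zeroTime -= tokenCount / rate;
    return zeroTime;
  }
};
```

With rate = 10 and zeroTime = 1, borrowing 10 tokens (tokenCount = -10) moves zeroTime to 2, so a caller at now = 1 waits 1 second; that is exactly the progression consumeOrBorrowTest below checks.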
@@ -339,6 +452,34 @@ class BasicTokenBucket {
toConsume, rate_, burstSize_, nowInSeconds);
}
/**
* Returns extra tokens back to the bucket.
*/
void returnTokens(double tokensToReturn) {
return tokenBucket_.returnTokens(tokensToReturn, rate_);
}
/**
* Reserve tokens and return the time to wait for the reservation to become
* compatible with the bucket configuration.
*/
Optional<double> consumeWithBorrowNonBlocking(
double toConsume,
double nowInSeconds = defaultClockNow()) {
return tokenBucket_.consumeWithBorrowNonBlocking(
toConsume, rate_, burstSize_, nowInSeconds);
}
/**
* Reserve tokens. Blocks if need be until the reservation is satisfied.
*/
bool consumeWithBorrowAndWait(
double toConsume,
double nowInSeconds = defaultClockNow()) {
return tokenBucket_.consumeWithBorrowAndWait(
toConsume, rate_, burstSize_, nowInSeconds);
}
/**
* Returns the number of tokens currently available.
*
@@ -134,3 +134,41 @@ TEST(TokenBucket, drainOnFail) {
EXPECT_DOUBLE_EQ(1.0, tokenBucket.consumeOrDrain(5, 10, 10, 1));
EXPECT_DOUBLE_EQ(0.0, tokenBucket.consumeOrDrain(1, 10, 10, 1));
}
TEST(TokenBucket, returnTokensTest) {
DynamicTokenBucket tokenBucket;
// Empty the bucket.
EXPECT_TRUE(tokenBucket.consume(10, 10, 10, 5));
// consume should fail now.
EXPECT_FALSE(tokenBucket.consume(1, 10, 10, 5));
EXPECT_DOUBLE_EQ(0.0, tokenBucket.consumeOrDrain(1, 10, 10, 5));
// Return 50 tokens; the 40 'excess' tokens beyond burstSize won't be
// available to later callers.
tokenBucket.returnTokens(50, 10);
// Should be able to allocate 10 tokens again, but the extra 40 returned in
// the previous call are gone.
EXPECT_TRUE(tokenBucket.consume(10, 10, 10, 5));
EXPECT_FALSE(tokenBucket.consume(1, 10, 10, 5));
}
TEST(TokenBucket, consumeOrBorrowTest) {
DynamicTokenBucket tokenBucket;
// Empty the bucket.
EXPECT_TRUE(tokenBucket.consume(10, 10, 10, 1));
// consume should fail now.
EXPECT_FALSE(tokenBucket.consume(1, 10, 10, 1));
// Now borrow from future allocations. Each call asks for 1s worth of
// allocations, so the ith iteration should return (i+1)*1s as the time the
// caller needs to wait.
for (int i = 0; i < 10; ++i) {
auto waitTime = tokenBucket.consumeWithBorrowNonBlocking(10, 10, 10, 1);
EXPECT_TRUE(waitTime.has_value());
EXPECT_DOUBLE_EQ((i + 1) * 1.0, *waitTime);
}
// No allocation will succeed until nowInSeconds goes higher than 11s.
EXPECT_FALSE(tokenBucket.consume(1, 10, 10, 11));
}
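As a final illustrative sketch (not part of the diff), the new returnTokens API would also support a hypothetical reserve-then-refund pattern: drain whatever is available up front, use part of it, and hand back the remainder. As returnTokensTest above shows, returns beyond burstSize simply expire.

```cpp
#include <folly/TokenBucket.h>

// Hypothetical helper: drain the bucket, use an amount of work that is only
// known afterwards, then refund the unused tokens so other callers can claim
// them.
double runThrottledBatch(
    folly::DynamicTokenBucket& bucket, double rate, double burstSize) {
  double reserved = bucket.consumeOrDrain(burstSize, rate, burstSize);
  double used = reserved / 2; // stand-in for the amount actually consumed
  if (reserved > used) {
    bucket.returnTokens(reserved - used, rate); // refund the rest
  }
  return used;
}
```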