Commit b927c055 authored by Nathan Bronson, committed by Viswanath Sivakumar

SharedMutex potential lost wakeup with exactly 3 or 4 contending writers

Summary:
SharedMutex used a saturating counter to record the number of waiting
lock() calls, but an ABA problem on futexWait could lead to a lost
wakeup when there were exactly 3 or 4 threads contending on the RW lock
in W mode.  This diff changes the kWaitingE count to be heuristic (the
count may say 1 when there are actually two waiters), saturates it at
2 instead of 3 (there is no benefit in differentiating those two cases),
and no longer decrements the count on a successful wakeup.
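
To make the new encoding concrete, here is a minimal standalone sketch.
The constants mirror the ones introduced in the diff below, while
registerWriterWait is a hypothetical helper for illustration only, not
part of SharedMutex's API:

  #include <cstdint>

  // Heuristic wait bits for exclusive (W) waiters: "single" means at
  // least one lock() is waiting, "multiple" means we have observed at
  // least two.  ABA on futexWait means "single" can undercount.
  constexpr uint32_t kWaitingESingle   = 1 << 2;
  constexpr uint32_t kWaitingEMultiple = 1 << 3;
  constexpr uint32_t kWaitingE = kWaitingESingle | kWaitingEMultiple;

  // Registering one more exclusive waiter saturates at "multiple"
  // instead of counting, because the wake policy only cares about
  // one-versus-many.
  inline uint32_t registerWriterWait(uint32_t state) {
    return (state & kWaitingESingle) != 0 ? (state | kWaitingEMultiple)
                                          : (state | kWaitingESingle);
  }

  int main() {
    uint32_t s = 0;
    s = registerWriterWait(s);  // -> kWaitingESingle
    s = registerWriterWait(s);  // -> kWaitingESingle | kWaitingEMultiple
    return s == kWaitingE ? 0 : 1;
  }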

Also, I noticed while debugging this that boost::noncopyable was causing
SharedMutex to be 8 bytes when it should only be 4.
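
One way an empty base can add size (shown with stand-in types, not
folly's actual layout, so the mechanism here is only an assumption): if
the same empty base type also appears inside a member, the two empty
subobjects must get distinct addresses, which defeats the empty base
optimization; explicitly deleted copy/move operations never have that
problem.

  #include <cstdint>
  #include <cstdio>

  struct noncopyable {                  // stand-in for boost::noncopyable
   protected:
    noncopyable() = default;
    noncopyable(const noncopyable&) = delete;
    noncopyable& operator=(const noncopyable&) = delete;
  };

  struct FutexLike : noncopyable {      // hypothetical 4-byte member that
    uint32_t value{0};                  // shares the same empty base
  };

  struct MutexWithBase : noncopyable {  // EBO defeated: typically 8 bytes
    FutexLike state_;
  };

  struct MutexWithDeletedOps {          // stays the size of its one member
    MutexWithDeletedOps() = default;
    MutexWithDeletedOps(const MutexWithDeletedOps&) = delete;
    MutexWithDeletedOps& operator=(const MutexWithDeletedOps&) = delete;
    FutexLike state_;
  };

  int main() {
    std::printf("with base: %zu bytes, with deleted ops: %zu bytes\n",
                sizeof(MutexWithBase), sizeof(MutexWithDeletedOps));
    return 0;
  }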

One way the wakeup could be lost in the old code:

1. A calls lock()
2. A updates state <- kHasE
3. A returns
4. B calls lock()
5. B spins
6. B updates state <- kHasE + 1 * kIncrWaitingE
7. A calls unlock()
8. A updates state <- 0
9. A calls futexWake(), which returns 0
10. A calls lock()
11. A updates state <- kHasE
12. A returns
13. C calls lock()
14. C spins
15. C updates state <- kHasE + 1 * kIncrWaitingE
16. C calls futexWait, expecting kHasE + 1 * kIncrWaitingE
17. B calls futexWait, expecting kHasE + 1 * kIncrWaitingE
18. A calls unlock()
19. A updates state <- 0
20. A calls futexWake(), which returns 1
21. C receives the wakeup
22. C updates state <- kHasE
23. C returns
24. C calls unlock()
25. C updates state <- 0

B missed the wakeup that was intended for it (sent at step 9, while its
futexWait did not start until step 17), but it went to sleep anyway
because the state it observed at step 17 matched its expected value.
Now there are two waiters but only one recorded in the SharedMutex, at
which point failure is inevitable.
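
With the fix, a successful futexWake(1) deliberately leaves the wait
bits set ("leave state_ as is and clear it later" in the new code), so a
later unlock() still sees a waiter recorded and issues the wakeup that B
needs.  A rough standalone sketch of that decision, simplified from
wakeRegisteredWaitersImpl; the wakeOne callback is an assumed stand-in
for state_.futexWake(1, kWaitingE):

  #include <cstdint>
  #include <functional>

  constexpr uint32_t kWaitingEBits = (1 << 2) | (1 << 3);  // single | multiple

  // Returns true if a single wake sufficed.  wakeOne() returns how many
  // threads actually woke.  Crucially, a successful single wake does NOT
  // clear or decrement the wait bits; the next unlock() still sees a
  // waiter recorded.
  inline bool tryWakeOneWriter(uint32_t state,
                               const std::function<int()>& wakeOne) {
    if ((state & kWaitingEBits) == 0) {
      return false;                // no writer registered
    }
    if (wakeOne() > 0) {
      return true;                 // somebody woke; leave the bits alone
    }
    // Nobody was actually parked yet (the ABA window); the caller falls
    // back to clearing the bits and issuing a wake-all.
    return false;
  }

  int main() {
    uint32_t state = kWaitingEBits;                       // a waiter is recorded
    bool ok = tryWakeOneWriter(state, [] { return 1; });  // pretend one woke
    return ok ? 0 : 1;
  }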

Test Plan:
1. DeterministicSchedule test using uniformSubset that can repro the problem
2. Test in production scenario that produced occasional deadlocks under high stress
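
For reference, the rough shape of a W-mode stress workload like the one
in item 2 (a hypothetical standalone sketch, not the production repro
and not the DeterministicSchedule test):

  #include <folly/SharedMutex.h>
  #include <thread>
  #include <vector>

  int main() {
    folly::SharedMutexWritePriority mu;
    long counter = 0;
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t) {          // 3-4 writers is the bad case
      threads.emplace_back([&] {
        for (int i = 0; i < 1000000; ++i) {
          mu.lock();                       // exclusive (W) mode only
          ++counter;
          mu.unlock();
        }
      });
    }
    for (auto& th : threads) {
      th.join();
    }
    return counter == 4L * 1000000 ? 0 : 1;
  }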

Reviewed By: yfeldblum@fb.com

Subscribers: folly-diffs@, yfeldblum, chalfant

FB internal diff: D1980210

Tasks: 6720328

Signature: t1:1980210:1428623932:ef1c00c3f88154578b2b253ac0cfdbadf9f31d8c
parent e426c673
@@ -226,7 +226,7 @@ template <bool ReaderPriority,
           typename Tag_ = void,
           template <typename> class Atom = std::atomic,
           bool BlockImmediately = false>
-class SharedMutexImpl : boost::noncopyable {
+class SharedMutexImpl {
  public:
   static constexpr bool kReaderPriority = ReaderPriority;
   typedef Tag_ Tag;
@@ -239,6 +239,11 @@ class SharedMutexImpl : boost::noncopyable {
   SharedMutexImpl() : state_(0) {}
 
+  SharedMutexImpl(const SharedMutexImpl&) = delete;
+  SharedMutexImpl(SharedMutexImpl&&) = delete;
+  SharedMutexImpl& operator = (const SharedMutexImpl&) = delete;
+  SharedMutexImpl& operator = (SharedMutexImpl&&) = delete;
+
   // It is an error to destroy an SharedMutex that still has
   // any outstanding locks. This is checked if NDEBUG isn't defined.
   // SharedMutex's exclusive mode can be safely used to guard the lock's
@@ -591,18 +596,18 @@ class SharedMutexImpl : boost::noncopyable {
   // one instead of wake all).
   static constexpr uint32_t kWaitingNotS = 1 << 4;
 
-  // If there are multiple pending waiters, then waking them all can
-  // lead to a thundering herd on the lock. To avoid this, we keep
-  // a 2 bit saturating counter of the number of exclusive waiters
-  // (0, 1, 2, 3+), and if the value is >= 2 we perform futexWake(1)
-  // instead of futexWakeAll. See wakeRegisteredWaiters for more.
-  // It isn't actually useful to make the counter bigger, because
-  // whenever a futexWait fails with EAGAIN the counter becomes higher
-  // than the actual number of waiters, and hence effectively saturated.
-  // Bigger counters just lead to more changes in state_, which increase
-  // contention and failed futexWait-s.
-  static constexpr uint32_t kIncrWaitingE = 1 << 2;
-  static constexpr uint32_t kWaitingE = 0x3 * kIncrWaitingE;
+  // When waking writers we can either wake them all, in which case we
+  // can clear kWaitingE, or we can call futexWake(1). futexWake tells
+  // us if anybody woke up, but even if we detect that nobody woke up we
+  // can't clear the bit after the fact without issuing another wakeup.
+  // To avoid thundering herds when there are lots of pending lock()
+  // without needing to call futexWake twice when there is only one
+  // waiter, kWaitingE actually encodes if we have observed multiple
+  // concurrent waiters. Tricky: ABA issues on futexWait mean that when
+  // we see kWaitingESingle we can't assume that there is only one.
+  static constexpr uint32_t kWaitingESingle = 1 << 2;
+  static constexpr uint32_t kWaitingEMultiple = 1 << 3;
+  static constexpr uint32_t kWaitingE = kWaitingESingle | kWaitingEMultiple;
 
   // kWaitingU is essentially a 1 bit saturating counter. It always
   // requires a wakeAll.
@@ -857,9 +862,11 @@
     auto after = state;
     if (waitMask == kWaitingE) {
-      if ((state & kWaitingE) != kWaitingE) {
-        after += kIncrWaitingE;
-      } // else counter is saturated
+      if ((state & kWaitingESingle) != 0) {
+        after |= kWaitingEMultiple;
+      } else {
+        after |= kWaitingESingle;
+      }
     } else {
       after |= waitMask;
     }
@@ -887,50 +894,25 @@
   }
 
   void wakeRegisteredWaitersImpl(uint32_t& state, uint32_t wakeMask) {
-    if ((wakeMask & kWaitingE) != 0) {
-      // If there are multiple lock() pending only one of them will
-      // actually get to wake up, so issuing futexWakeAll will make
-      // a thundering herd. There's nothing stopping us from issuing
-      // futexWake(1) instead, so long as the wait bits are still an
-      // accurate reflection of the waiters. If our pending lock() counter
-      // hasn't saturated we can decrement it. If it has saturated,
-      // then we can clear it by noticing that futexWake(1) returns 0
-      // (indicating no actual waiters) and then retrying via the normal
-      // clear+futexWakeAll path.
-      //
-      // It is possible that we wake an E waiter but an outside S grabs
-      // the lock instead, at which point we should wake pending U and
-      // S waiters. Rather than tracking state to make the failing E
-      // regenerate the wakeup, we just disable the optimization in the
-      // case that there are waiting U or S that we are eligible to wake.
-      //
-      // Note that in the contended scenario it is quite likely that the
-      // waiter's futexWait call will fail with EAGAIN (expected value
-      // mismatch), at which point the awaiting-exclusive count will be
-      // larger than the actual number of waiters. At this point the
-      // counter is effectively saturated. Since this is likely, it is
-      // actually less efficient to have a larger counter. 2 bits seems
-      // to be the best.
-      while ((state & kWaitingE) != 0 &&
-             (state & wakeMask & (kWaitingU | kWaitingS)) == 0) {
-        if ((state & kWaitingE) != kWaitingE) {
-          // not saturated
-          if (!state_.compare_exchange_strong(state, state - kIncrWaitingE)) {
-            continue;
-          }
-          state -= kIncrWaitingE;
-        }
-
-        if (state_.futexWake(1, kWaitingE) > 0) {
-          return;
-        }
-
-        // Despite the non-zero awaiting-exclusive count, there aren't
-        // actually any pending writers. Fall through to the logic below
-        // to wake up other classes of locks and to clear the saturated
-        // counter (if necessary).
-        break;
-      }
-    }
+    // If there are multiple lock() pending only one of them will actually
+    // get to wake up, so issuing futexWakeAll will make a thundering herd.
+    // There's nothing stopping us from issuing futexWake(1) instead,
+    // so long as the wait bits are still an accurate reflection of
+    // the waiters. If we notice (via futexWake's return value) that
+    // nobody woke up then we can try again with the normal wake-all path.
+    // Note that we can't just clear the bits at that point; we need to
+    // clear the bits and then issue another wakeup.
+    //
+    // It is possible that we wake an E waiter but an outside S grabs the
+    // lock instead, at which point we should wake pending U and S waiters.
+    // Rather than tracking state to make the failing E regenerate the
+    // wakeup, we just disable the optimization in the case that there
+    // are waiting U or S that we are eligible to wake.
+    if ((wakeMask & kWaitingE) == kWaitingE &&
+        (state & wakeMask) == kWaitingE &&
+        state_.futexWake(1, kWaitingE) > 0) {
+      // somebody woke up, so leave state_ as is and clear it later
+      return;
+    }
 
     if ((state & wakeMask) != 0) {
...