Commit b927c055 authored by Nathan Bronson, committed by Viswanath Sivakumar

SharedMutex potential lost wakeup with exactly 3 or 4 contending writers

Summary:
SharedMutex used a saturating counter to record the number of waiting
lock() calls, but an ABA problem on futexWait could lead to a lost wakeup
when there were exactly 3 or 4 threads contending on the RW lock in W
mode.  This diff changes the kWaitingE count to be heuristic (the count
may say 1 when there are actually two waiters), makes it saturate at
2 instead of 3 (there is no benefit to differentiating those two cases),
and no longer decrements the count on a successful wakeup.

Also, I noticed while debugging this that boost::noncopyable was causing
SharedMutex to be 8 bytes when it should only be 4.

One way the wakeup could be lost in the old code:

1. A calls lock()
2. A updates state <- kHasE
3. A returns
4. B calls lock()
5. B spins
6. B updates state <- kHasE + 1 * kIncrWaitingE
7. A calls unlock()
8. A updates state <- 0
9. A calls futexWake(), which returns 0
10. A calls lock()
11. A updates state <- kHasE
12. A returns
13. C calls lock()
14. C spins
15. C updates state <- kHasE + 1 * kIncrWaitingE
16. C calls futexWait, expecting kHasE + 1 * kIncrWaitingE
17. B calls futexWait, expecting kHasE + 1 * kIncrWaitingE
18. A calls unlock()
19. A updates state <- 0
20. A calls futexWake(), which returns 1
21. C receives the wakeup
22. C updates state <- kHasE
23. C returns
24. C calls unlock()
25. C updates state <- 0

B missed the wakeup that was intended for it (sent at step 9, before its
wait started at step 17), but went to sleep anyway because the state it
observed at step 17 still matched its expected value.  Now there are two
waiters but only one recorded in the SharedMutex, at which point failure
is inevitable.

Test Plan:
1. DeterministicSchedule test using uniformSubset that can repro the problem
2. Test in production scenario that produced occasional deadlocks under high stress

Reviewed By: yfeldblum@fb.com

Subscribers: folly-diffs@, yfeldblum, chalfant

FB internal diff: D1980210

Tasks: 6720328

Signature: t1:1980210:1428623932:ef1c00c3f88154578b2b253ac0cfdbadf9f31d8c
parent e426c673
@@ -226,7 +226,7 @@ template <bool ReaderPriority,
           typename Tag_ = void,
           template <typename> class Atom = std::atomic,
           bool BlockImmediately = false>
-class SharedMutexImpl : boost::noncopyable {
+class SharedMutexImpl {
  public:
   static constexpr bool kReaderPriority = ReaderPriority;
   typedef Tag_ Tag;
@@ -239,6 +239,11 @@ class SharedMutexImpl : boost::noncopyable {
   SharedMutexImpl() : state_(0) {}
 
+  SharedMutexImpl(const SharedMutexImpl&) = delete;
+  SharedMutexImpl(SharedMutexImpl&&) = delete;
+  SharedMutexImpl& operator = (const SharedMutexImpl&) = delete;
+  SharedMutexImpl& operator = (SharedMutexImpl&&) = delete;
+
   // It is an error to destroy an SharedMutex that still has
   // any outstanding locks.  This is checked if NDEBUG isn't defined.
   // SharedMutex's exclusive mode can be safely used to guard the lock's
@@ -591,18 +596,18 @@ class SharedMutexImpl : boost::noncopyable {
   // one instead of wake all).
   static constexpr uint32_t kWaitingNotS = 1 << 4;
 
-  // If there are multiple pending waiters, then waking them all can
-  // lead to a thundering herd on the lock.  To avoid this, we keep
-  // a 2 bit saturating counter of the number of exclusive waiters
-  // (0, 1, 2, 3+), and if the value is >= 2 we perform futexWake(1)
-  // instead of futexWakeAll.  See wakeRegisteredWaiters for more.
-  // It isn't actually useful to make the counter bigger, because
-  // whenever a futexWait fails with EAGAIN the counter becomes higher
-  // than the actual number of waiters, and hence effectively saturated.
-  // Bigger counters just lead to more changes in state_, which increase
-  // contention and failed futexWait-s.
-  static constexpr uint32_t kIncrWaitingE = 1 << 2;
-  static constexpr uint32_t kWaitingE = 0x3 * kIncrWaitingE;
+  // When waking writers we can either wake them all, in which case we
+  // can clear kWaitingE, or we can call futexWake(1).  futexWake tells
+  // us if anybody woke up, but even if we detect that nobody woke up we
+  // can't clear the bit after the fact without issuing another wakeup.
+  // To avoid thundering herds when there are lots of pending lock()
+  // without needing to call futexWake twice when there is only one
+  // waiter, kWaitingE actually encodes if we have observed multiple
+  // concurrent waiters.  Tricky: ABA issues on futexWait mean that when
+  // we see kWaitingESingle we can't assume that there is only one.
+  static constexpr uint32_t kWaitingESingle = 1 << 2;
+  static constexpr uint32_t kWaitingEMultiple = 1 << 3;
+  static constexpr uint32_t kWaitingE = kWaitingESingle | kWaitingEMultiple;
 
   // kWaitingU is essentially a 1 bit saturating counter.  It always
   // requires a wakeAll.
@@ -857,9 +862,11 @@ class SharedMutexImpl : boost::noncopyable {
     auto after = state;
     if (waitMask == kWaitingE) {
-      if ((state & kWaitingE) != kWaitingE) {
-        after += kIncrWaitingE;
-      } // else counter is saturated
+      if ((state & kWaitingESingle) != 0) {
+        after |= kWaitingEMultiple;
+      } else {
+        after |= kWaitingESingle;
+      }
     } else {
       after |= waitMask;
     }
@@ -887,52 +894,27 @@ class SharedMutexImpl : boost::noncopyable {
   }
 
   void wakeRegisteredWaitersImpl(uint32_t& state, uint32_t wakeMask) {
-    if ((wakeMask & kWaitingE) != 0) {
-      // If there are multiple lock() pending only one of them will
-      // actually get to wake up, so issuing futexWakeAll will make
-      // a thundering herd.  There's nothing stopping us from issuing
-      // futexWake(1) instead, so long as the wait bits are still an
-      // accurate reflection of the waiters.  If our pending lock() counter
-      // hasn't saturated we can decrement it.  If it has saturated,
-      // then we can clear it by noticing that futexWake(1) returns 0
-      // (indicating no actual waiters) and then retrying via the normal
-      // clear+futexWakeAll path.
-      //
-      // It is possible that we wake an E waiter but an outside S grabs
-      // the lock instead, at which point we should wake pending U and
-      // S waiters.  Rather than tracking state to make the failing E
-      // regenerate the wakeup, we just disable the optimization in the
-      // case that there are waiting U or S that we are eligible to wake.
-      //
-      // Note that in the contended scenario it is quite likely that the
-      // waiter's futexWait call will fail with EAGAIN (expected value
-      // mismatch), at which point the awaiting-exclusive count will be
-      // larger than the actual number of waiters.  At this point the
-      // counter is effectively saturated.  Since this is likely, it is
-      // actually less efficient to have a larger counter.  2 bits seems
-      // to be the best.
-      while ((state & kWaitingE) != 0 &&
-             (state & wakeMask & (kWaitingU | kWaitingS)) == 0) {
-        if ((state & kWaitingE) != kWaitingE) {
-          // not saturated
-          if (!state_.compare_exchange_strong(state, state - kIncrWaitingE)) {
-            continue;
-          }
-          state -= kIncrWaitingE;
-        }
-        if (state_.futexWake(1, kWaitingE) > 0) {
-          return;
-        }
-        // Despite the non-zero awaiting-exclusive count, there aren't
-        // actually any pending writers.  Fall through to the logic below
-        // to wake up other classes of locks and to clear the saturated
-        // counter (if necessary).
-        break;
-      }
-    }
+    // If there are multiple lock() pending only one of them will actually
+    // get to wake up, so issuing futexWakeAll will make a thundering herd.
+    // There's nothing stopping us from issuing futexWake(1) instead,
+    // so long as the wait bits are still an accurate reflection of
+    // the waiters.  If we notice (via futexWake's return value) that
+    // nobody woke up then we can try again with the normal wake-all path.
+    // Note that we can't just clear the bits at that point; we need to
+    // clear the bits and then issue another wakeup.
+    //
+    // It is possible that we wake an E waiter but an outside S grabs the
+    // lock instead, at which point we should wake pending U and S waiters.
+    // Rather than tracking state to make the failing E regenerate the
+    // wakeup, we just disable the optimization in the case that there
+    // are waiting U or S that we are eligible to wake.
+    if ((wakeMask & kWaitingE) == kWaitingE &&
+        (state & wakeMask) == kWaitingE &&
+        state_.futexWake(1, kWaitingE) > 0) {
+      // somebody woke up, so leave state_ as is and clear it later
+      return;
+    }
     if ((state & wakeMask) != 0) {
       auto prev = state_.fetch_and(~wakeMask);
       if ((prev & wakeMask) != 0) {