Commit 3692ff51 authored by Yang Ni, committed by Jordan DeLong

Fixed a potential deadlock in folly::RWSpinLock

Summary:
The current UpgradedHolder implementation may deadlock when upgrading from a read lock, because the upgrade path blocks while the read lock is still held.

A reader that has failed to set the UPGRADED bit may block the
winner (the upgrader) that successfully set it: the loser spins in
an infinite loop waiting to upgrade without releasing its own read
lock, while the upgrader must wait for all read locks to be
released before it can proceed.

This is the code pattern:

{
ReadHolder reader(lock);
UpgradedHolder upgrader(std::move(reader));
WriteHolder writer(std::move(upgrader));
}

To prevent this, this diff removes the UpgradedHolder(ReadHolder&&)
constructor and the function that implements it,
unlock_shared_and_lock_upgrade. A programmer must now explicitly
release a read lock before acquiring an upgrade lock, which avoids
the deadlock described above.

In addition, the current folly::RWSpinLock::try_lock_shared()
implementation does not check the UPGRADED bit at all, even though that bit exists precisely to block new readers and keep writers from starving. This diff fixes try_lock_shared() to check the UPGRADED bit as well as the WRITER bit.

Test Plan:
Passed folly unit tests.
Tested in a Facebook service that uses the
folly::RWSpinLock::UpgradedHolder.

Reviewed By: xliux@fb.com

FB internal diff: D659875
parent dca19fc4
@@ -203,11 +203,6 @@ class RWSpinLock : boost::noncopyable {
     bits_.fetch_add(READER - UPGRADED, std::memory_order_acq_rel);
   }
 
-  void unlock_shared_and_lock_upgrade() {
-    lock_upgrade();
-    unlock_shared();
-  }
-
   // write unlock and upgrade lock atomically
   void unlock_and_lock_upgrade() {
     // need to do it in two steps here -- as the UPGRADED bit might be OR-ed at
@@ -225,12 +220,16 @@ class RWSpinLock : boost::noncopyable {
   }
 
   // Try to get reader permission on the lock. This can fail if we
-  // find out someone is a writer.
+  // find out someone is a writer or upgrader.
+  // Setting the UPGRADED bit would allow a writer-to-be to indicate
+  // its intention to write and block any new readers while waiting
+  // for existing readers to finish and release their read locks. This
+  // helps avoid starving writers (promoted from upgraders).
   bool try_lock_shared() {
     // fetch_add is considerably (100%) faster than compare_exchange,
     // so here we are optimizing for the common (lock success) case.
     int32_t value = bits_.fetch_add(READER, std::memory_order_acquire);
-    if (UNLIKELY(value & WRITER)) {
+    if (UNLIKELY(value & (WRITER|UPGRADED))) {
       bits_.fetch_add(-READER, std::memory_order_release);
       return false;
     }
......@@ -325,12 +324,6 @@ class RWSpinLock : boost::noncopyable {
lock_->lock_upgrade();
}
explicit UpgradedHolder(ReadHolder&& reader) {
lock_ = reader.lock_;
reader.lock_ = nullptr;
if (lock_) lock_->unlock_shared_and_lock_upgrade();
}
explicit UpgradedHolder(WriteHolder&& writer) {
lock_ = writer.lock_;
writer.lock_ = nullptr;
@@ -169,10 +169,11 @@ TYPED_TEST(RWSpinLockTest, ConcurrentTests) {
 TEST(RWSpinLock, lock_unlock_tests) {
   folly::RWSpinLock lock;
   EXPECT_TRUE(lock.try_lock_upgrade());
-  EXPECT_TRUE(lock.try_lock_shared());
+  EXPECT_FALSE(lock.try_lock_shared());
   EXPECT_FALSE(lock.try_lock());
   EXPECT_FALSE(lock.try_lock_upgrade());
   lock.unlock_upgrade();
+  lock.lock_shared();
   EXPECT_FALSE(lock.try_lock());
   EXPECT_TRUE(lock.try_lock_upgrade());
   lock.unlock_upgrade();
@@ -180,8 +181,7 @@ TEST(RWSpinLock, lock_unlock_tests) {
   EXPECT_TRUE(lock.try_lock());
   EXPECT_FALSE(lock.try_lock_upgrade());
   lock.unlock_and_lock_upgrade();
-  EXPECT_TRUE(lock.try_lock_shared());
-  lock.unlock_shared();
+  EXPECT_FALSE(lock.try_lock_shared());
   lock.unlock_upgrade_and_lock_shared();
   lock.unlock_shared();
   EXPECT_EQ(0, lock.bits());