Commit ca49b1fe authored by Robin Cheng, committed by Facebook GitHub Bot

Fix a TSAN-detected race condition in TLRefCount.

Summary:
See RefCountTest.cpp for a newly added test. When one thread is decrementing a local refcount while another thread is decrementing a global refcount, and the global decrement brings the global count to zero, the second thread can reasonably expect that it is now safe to destroy the refcount object. However, a memory ordering issue in LocalRefCount::collect() causes TSAN to report a write-vs-delete race on the inUpdate_ variable (even though that variable is not itself the root cause of the issue).

This diff changes collect() to load inUpdate_ with memory_order_acquire, so that upon reading false, collect() may exit knowing that all memory accesses in update() are complete. This way, every memory access in update() (such as the read of refCount_.state_) happens before any delete of the whole TLRefCount, via the following chain:

 - A memory access in update() is sequenced before the release store of false into inUpdate_ at the SCOPE_EXIT of update(),
 - which synchronizes with the acquire load at the end of collect(),
 - which is sequenced before the deletion, which can only happen after collect() returns.

Without this diff, the second relationship above is broken (a relaxed load does not synchronize with the release store), and TSAN complains.
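
To make that chain concrete, here is a minimal, self-contained sketch (not folly's actual classes; Widget, inUpdate_, and data_ are placeholder names) of the release/acquire pairing described above. The acquire spin plays the role of collect()'s wait, the release store of false plays the role of the SCOPE_EXIT store in update(), and the initial "wait until the flag goes up" step stands in for the fact that global zero can only be observed after update() has already published its count:

#include <atomic>
#include <thread>

struct Widget {
  std::atomic<bool> inUpdate_{false};
  int data_ = 0; // stands in for update()'s plain accesses, e.g. refCount_.state_

  void update() {
    inUpdate_.store(true, std::memory_order_relaxed);
    data_ += 1;
    // Release store: everything above happens-before any acquire load that
    // reads this false value (mirrors the SCOPE_EXIT store in update()).
    inUpdate_.store(false, std::memory_order_release);
  }
};

int main() {
  auto* w = new Widget();

  std::thread updater([w] { w->update(); });

  std::thread deleter([w] {
    // Stand-in for "we observed global zero": once the flag has been seen
    // to go up, the updater has definitely started, so the only remaining
    // question is whether it has finished.
    while (!w->inUpdate_.load(std::memory_order_relaxed)) {
    }
    // Acquire spin, as in the fixed collect(): reading false here
    // synchronizes with the release store above, so all of update()'s
    // accesses happen-before the delete. With only a relaxed load, that
    // edge is missing and TSAN reports a race between update()'s accesses
    // and the delete.
    while (w->inUpdate_.load(std::memory_order_acquire)) {
    }
    delete w;
  });

  updater.join();
  deleter.join();
  return 0;
}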

I also updated the comment on top of inUpdate_ to better reflect its purpose. I couldn't understand the original comment (it seemed to suggest the wait is about refcount *correctness* rather than about a write-delete race), but maybe I missed something there.

Reviewed By: yfeldblum

Differential Revision: D22805717

fbshipit-source-id: 34cf72676760526ba457f939e307ed03dc722528
parent 030a288b
@@ -159,15 +159,20 @@ class TLRefCount {
         refCount_.globalCount_.fetch_add(collectCount_);
         collectGuard_.reset();
       }
-      // We only care about seeing inUpdate if we've observed the new count_
-      // value set by the update() call, so memory_order_relaxed is enough.
-      if (inUpdate_.load(std::memory_order_relaxed)) {
+      // Once we exit collect(), it's possible TLRefCount may be deleted by our
+      // user since the global count may reach zero. We must therefore ensure
+      // that the thread corresponding to this LocalRefCount is not still
+      // executing the update() function. We wait on inUpdate_ to ensure this.
+      // We won't have to worry about further update() calls beyond this point,
+      // because the state is already non-LOCAL. We also don't need to worry
+      // about a thread that is in an update() call but has not gotten around
+      // to setting inUpdate_ to true yet, because then count_ has also not
+      // been updated and we couldn't have reached global zero in that case.
       folly::detail::Sleeper sleeper;
       while (inUpdate_.load(std::memory_order_acquire)) {
         sleeper.wait();
       }
-      }
     }

     bool operator++() {
       return update(1);
@@ -19,6 +19,7 @@
 #include <folly/experimental/TLRefCount.h>
 #include <folly/portability/GTest.h>
 #include <folly/synchronization/Baton.h>
+#include <folly/synchronization/test/Barrier.h>

 namespace folly {
@@ -123,4 +124,45 @@ TEST(TLRefCount, Stress) {
   // do it that many times.
   stressTest<TLRefCount>(500);
 }
+
+TEST(TLRefCount, SafeToDeleteWhenReachingZero) {
+  // Tests that it is safe to delete a TLRefCount when it is already in global
+  // state and its ref count reaches zero. This is a reasonable assumption
+  // since data structures typically embed a TLRefCount object and delete
+  // themselves when the refcount reaches 0.
+  TLRefCount* count = new TLRefCount();
+  folly::test::Barrier done(2); // give time for TSAN to catch issues
+  folly::Baton<> batonUnref;
+  int times_deleted = 0;
+
+  std::thread t1([&] {
+    ++*count;
+    batonUnref.post();
+    if (--*count == 0) {
+      times_deleted++;
+      delete count;
+    }
+    done.wait();
+  });
+
+  std::thread t2([&] {
+    // Make sure thread 1 already grabbed a reference first, otherwise we might
+    // destroy it before thread 1 had a chance.
+    batonUnref.wait();
+    count->useGlobal();
+    if (--*count == 0) {
+      times_deleted++;
+      delete count;
+    }
+    done.wait();
+  });
+
+  t1.join();
+  t2.join();
+  EXPECT_EQ(times_deleted, 1);
+}
 } // namespace folly