Commit baef20f9 authored by Dave Watson, committed by Facebook GitHub Bot

Fix dynamic thread destruction race

Summary:
There is a subtle race between thread destruction and task addition (ensureActiveThreads).

For an executor with a single thread, the race is (numbers are thread ids):

1: cpu thread returns from try_take_for() by timing out.
2: someone from a different thread calls add().
2: add() finds the LifoSem empty and calls ensureActiveThreads().
2: ensureActiveThreads() grabs the lock, finds active=1, total=1, and returns without doing anything, assuming the thread is still running.
1: cpu thread decrements activeThreads_ to 0 in taskShouldStop().

There are now *no* threads running until the next call to add(), and one task waiting.

Fix: Grab the lock in taskShouldStop (as the documentation says we should be doing). Also double-check that there are no currently pending tasks before stopping.

This probably only affects pools with a size of 1; otherwise a second thread would have been added, and there would only temporarily be one fewer thread running.

Reviewed By: davidtgoldblatt

Differential Revision: D7943241

fbshipit-source-id: 502e5809ccf4ecca85205c14c4d97b508897de9b
parent f6db2cea
@@ -157,16 +157,26 @@ bool CPUThreadPoolExecutor::taskShouldStop(folly::Optional<CPUTask>& task) {
       return false;
     }
   } else {
-    // Try to stop based on idle thread timeout (try_take_for),
-    // if there are at least minThreads running.
-    if (!minActive()) {
-      return false;
-    }
-    // If this is based on idle thread timeout, then
-    // adjust vars appropriately (otherwise stop() or join()
-    // does this).
-    activeThreads_.fetch_sub(1, std::memory_order_relaxed);
-    threadsToJoin_.fetch_add(1, std::memory_order_relaxed);
+    {
+      SharedMutex::WriteHolder w{&threadListLock_};
+      // Try to stop based on idle thread timeout (try_take_for),
+      // if there are at least minThreads running.
+      if (!minActive()) {
+        return false;
+      }
+      // If this is based on idle thread timeout, then
+      // adjust vars appropriately (otherwise stop() or join()
+      // does this).
+      if (getPendingTaskCountImpl() > 0) {
+        return false;
+      }
+      activeThreads_.store(
+          activeThreads_.load(std::memory_order_relaxed) - 1,
+          std::memory_order_relaxed);
+      threadsToJoin_.store(
+          threadsToJoin_.load(std::memory_order_relaxed) + 1,
+          std::memory_order_relaxed);
+    }
   }
   return true;
 }
@@ -708,3 +708,19 @@ TEST(ThreadPoolExecutorTest, DynamicThreadsTest) {
   stats = e.getPoolStats();
   EXPECT_LE(stats.activeThreadCount, 0);
 }
+
+TEST(ThreadPoolExecutorTest, DynamicThreadAddRemoveRace) {
+  CPUThreadPoolExecutor e(1);
+  e.setThreadDeathTimeout(std::chrono::milliseconds(0));
+  std::atomic<uint64_t> count{0};
+  for (int i = 0; i < 10000; i++) {
+    Baton<> b;
+    e.add([&]() {
+      count.fetch_add(1, std::memory_order_relaxed);
+      b.post();
+    });
+    b.wait();
+  }
+  e.join();
+  EXPECT_EQ(count, 10000);
+}