Commit e5722ec4 authored by Nick Terrell, committed by Facebook GitHub Bot

Add F14Table::prefetch() to speculatively prefetch

Summary:
`F14Table::prehash()` prefetches the first cache line of the chunk, which F14 is guaranteed to need in order to find the item. However, more cache lines may be needed. `F14Table::prefetch()` speculatively prefetches the next two cache lines of the chunk, up to the chunk size.

I chose to add this prefetching in a new function, `prefetch()`, instead of putting it in `prehash()` because these loads are speculative: if the item is in the first cache line, they pollute the L1 cache with useless cache lines. So I think it makes the most sense for this to be explicitly opt-in, for latency-sensitive use cases that know they have cold maps.

Lastly, I disable the prefetching in `findImpl()` when using an `F14HashToken`, assuming that when it matters the caller will use `prefetch()`.

Reviewed By: shixiao

Differential Revision: D31751971

fbshipit-source-id: 62043702bb026143cf32c129ac8ba8246d4883bc
parent 29ea4cac
@@ -615,6 +615,37 @@ class F14BasicMap {
// Hash tokens are not hints -- it is a bug to call any method on this
// class with a token t and key k where t isn't the result of a call
// to prehash(k2) with k2 == k.
//
// Example Scenario: Loading 2 values from a cold map.
// You have a map that is cold, meaning it is out of the local CPU cache,
// and you want to load two values from the map. This can be extended to
// load N values, but we're loading 2 for simplicity.
//
// When the map is cold the dominating factor in the latency is loading
// the cache line of the entry into the local CPU cache. Using prehash()
// and optionally prefetch() will issue these cache line fetches in parallel.
// That means that by the time we finish map.find(token1, key1) the cache
// lines needed by map.find(token2, key2) may already be in the local CPU
// cache. In the best case this will halve the latency.
//
// It is always okay to call prehash(). It only prefetches cache lines that
// are guaranteed to be needed by find(). However, prefetch() will
// speculatively load cache lines that may be needed by find(), but may be
// superfluous. This may help local performance but hurt overall application
// performance, because it may evict another cache line that is useful.
// So prefetch() should only be used when benchmarks show benefits.
//
// std::pair<iterator, iterator> find2(
// auto& map, key_type const& key1, key_type const& key2,
// bool prefetch) {
// auto const token1 = map.prehash(key1);
// auto const token2 = map.prehash(key2);
// if (prefetch) {
// map.prefetch(token1);
// map.prefetch(token2);
// }
// return std::make_pair(map.find(token1, key1), map.find(token2, key2));
// }
F14HashToken prehash(key_type const& key) const {
return table_.prehash(key);
}
@@ -624,6 +655,37 @@ class F14BasicMap {
return table_.prehash(key);
}
// prehash() only prefetches cache lines it is guaranteed to need,
// so as not to pollute the local CPU cache with speculative loads.
// But when the load factor is high, as it is expected to be, one
// cache line may not be enough to find the item. prefetch(token)
// will more aggressively prefetch the chunk by adding speculative
// prefetches.
//
// Note: This function should only be used when benchmarks show that
// it is useful. Since it introduces speculative prefetches, it may
// improve local performance while hurting overall application
// performance.
//
// Example scenario: Finding a value in a cold map.
// When loading from a cold map the latency of loading the entry's
// cache line into the local CPU cache dominates. F14 uses linear
// probing, so the entry is likely to be within a few cache lines
// of the hash location. prehash() will fetch the first cache line,
// but when more than one cache line is needed, it won't be enough,
// and each additional load introduces latency, which can be
// significant for cold maps. prefetch() will speculatively prefetch
// the first few cache lines to ensure that we are very likely to
// find the key within a prefetched cache line.
//
// iterator find_cold(auto& map, key_type const& key) {
// auto const token = map.prehash(key);
// map.prefetch(token);
// return map.find(token, key);
// }
void prefetch(F14HashToken const& token) const { table_.prefetch(token); }
FOLLY_ALWAYS_INLINE iterator find(key_type const& key) {
return table_.makeIter(table_.find(key));
}
@@ -403,7 +403,7 @@ class F14BasicSet {
// prehash(key) does the work of evaluating hash_function()(key)
// (including additional bit-mixing for non-avalanching hash functions),
// wraps the result of that work in a token for later reuse, and
// begins prefetching of the first steps of looking for key into the
// begins prefetching the first steps of looking for key into the
// local CPU cache.
//
// The returned token may be used at any time, may be used more than
@@ -414,6 +414,37 @@ class F14BasicSet {
// Hash tokens are not hints -- it is a bug to call any method on this
// class with a token t and key k where t isn't the result of a call
// to prehash(k2) with k2 == k.
//
// Example Scenario: Loading 2 values from a cold set.
// You have a set that is cold, meaning it is out of the local CPU cache,
// and you want to load two values from the set. This can be extended to
// load N values, but we're loading 2 for simplicity.
//
// When the set is cold the dominating factor in the latency is loading
// the cache line of the entry into the local CPU cache. Using prehash()
// and optionally prefetch() will issue these cache line fetches in parallel.
// That means that by the time we finish set.find(token1, key1) the cache
// lines needed by set.find(token2, key2) may already be in the local CPU
// cache. In the best case this will halve the latency.
//
// It is always okay to call prehash(). It only prefetches cache lines that
// are guaranteed to be needed by find(). However, prefetch() will
// speculatively load cache lines that may be needed by find(), but may be
// superfluous. This may help local performance but hurt overall application
// performance, because it may evict another cache line that is useful.
// So prefetch() should only be used when benchmarks show benefits.
//
// std::pair<iterator, iterator> find2(
// auto& set, key_type const& key1, key_type const& key2,
// bool prefetch) {
// auto const token1 = set.prehash(key1);
// auto const token2 = set.prehash(key2);
// if (prefetch) {
// set.prefetch(token1);
// set.prefetch(token2);
// }
// return std::make_pair(set.find(token1, key1), set.find(token2, key2));
// }
F14HashToken prehash(key_type const& key) const {
return table_.prehash(key);
}
@@ -423,6 +454,37 @@ class F14BasicSet {
return table_.prehash(key);
}
// prehash() only prefetches cache lines it is guaranteed to need,
// so as not to pollute the local CPU cache with speculative loads.
// But when the load factor is high, as it is expected to be, one
// cache line may not be enough to find the item. prefetch(token)
// will more aggressively prefetch the chunk by adding speculative
// prefetches.
//
// Note: This function should only be used when benchmarks show that
// it is useful. Since it introduces speculative prefetches, it may
// improve local performance while hurting overall application
// performance.
//
// Example scenario: Finding a value in a cold set.
// When loading from a cold set the latency of loading the entry's
// cache line into the local CPU cache dominates. F14 uses linear
// probing, so the entry is likely to be within a few cache lines
// of the hash location. prehash() will fetch the first cache line,
// but when more than one cache line is needed, it won't be enough,
// and each additional load introduces latency, which can be
// significant for cold sets. prefetch() will speculatively prefetch
// the first few cache lines to ensure that we are very likely to
// find the key within a prefetched cache line.
//
// iterator find_cold(auto& set, key_type const& key) {
// auto const token = set.prehash(key);
// set.prefetch(token);
// return set.find(token, key);
// }
void prefetch(F14HashToken const& token) const { table_.prefetch(token); }
FOLLY_ALWAYS_INLINE iterator find(key_type const& key) {
return const_cast<F14BasicSet const*>(this)->find(key);
}
@@ -240,6 +240,7 @@ using Defaulted =
////////////////
/// Prefetch the first cache line of the object at ptr.
template <typename T>
FOLLY_ALWAYS_INLINE static void prefetchAddr(T const* ptr) {
#ifndef _WIN32
@@ -252,6 +253,22 @@ FOLLY_ALWAYS_INLINE static void prefetchAddr(T const* ptr) {
#endif
}
/// Prefetch the object at ptr, starting at the cacheLineOffset cache line,
/// and prefetching at most maxCacheLines cache lines.
template <typename T>
FOLLY_ALWAYS_INLINE static void prefetchAddr(
T const* ptr, size_t cacheLineOffset, size_t maxCacheLines) {
size_t constexpr kCacheLineSize = hardware_constructive_interference_size;
auto constexpr kObjectCacheLines =
(sizeof(T) + kCacheLineSize - 1) / kCacheLineSize;
size_t const cacheLines = std::min(kObjectCacheLines, maxCacheLines);
auto const bytes = static_cast<char const*>(static_cast<void const*>(ptr));
for (size_t i = cacheLineOffset; i < cacheLines; ++i) {
prefetchAddr(bytes + i * kCacheLineSize);
}
}
#if FOLLY_NEON
using TagVector = uint8x16_t;
#else // SSE2
@@ -1265,13 +1282,16 @@ class F14Table : public Policy {
std::size_t probeDelta(HashPair hp) const { return 2 * hp.second + 1; }
enum class Prefetch { DISABLED, ENABLED };
template <typename K>
FOLLY_ALWAYS_INLINE ItemIter findImpl(HashPair hp, K const& key) const {
FOLLY_ALWAYS_INLINE ItemIter
findImpl(HashPair hp, K const& key, Prefetch prefetch) const {
std::size_t index = hp.first;
std::size_t step = probeDelta(hp);
for (std::size_t tries = 0; tries <= chunkMask_; ++tries) {
ChunkPtr chunk = chunks_ + (index & chunkMask_);
if (sizeof(Chunk) > 64) {
if (prefetch == Prefetch::ENABLED && sizeof(Chunk) > 64) {
prefetchAddr(chunk->itemAddr(8));
}
auto hits = chunk->tagMatchIter(hp.second);
@@ -1312,10 +1332,28 @@ class F14Table : public Policy {
return F14HashToken(std::move(hp));
}
// prefetch() fetches the next two cache lines of the chunk, up to the
// chunk size. This is not included in prehash() because these loads are
// speculative. find() will always need to load the first cache line of
// the chunk, but it won't always need to load further cache lines. If
// they aren't needed, then prefetching them will hurt application
// performance, by polluting the local CPU cache. So prefetch() should
// only be called when benchmarks show performance improvements.
void prefetch(F14HashToken const& token) const {
auto hp = static_cast<HashPair>(token);
ChunkPtr firstChunk = chunks_ + (hp.first & chunkMask_);
// The first cache line was already prefetched in prehash().
// Prefetch the next two cache lines up to the chunk size. We only
// fetch at most 2 more cache lines to avoid thrashing the local
// CPU cache when the chunk size is large. The item we're looking for
// is more likely to be near the beginning of the chunk.
prefetchAddr(firstChunk, /* offset */ 1, /* maxCacheLines */ 3);
}
template <typename K>
FOLLY_ALWAYS_INLINE ItemIter find(K const& key) const {
auto hp = splitHash(this->computeKeyHash(key));
return findImpl(hp, key);
return findImpl(hp, key, Prefetch::ENABLED);
}
template <typename K>
@@ -1324,7 +1362,7 @@ class F14Table : public Policy {
FOLLY_SAFE_DCHECK(
splitHash(this->computeKeyHash(key)) == static_cast<HashPair>(token),
"");
return findImpl(static_cast<HashPair>(token), key);
return findImpl(static_cast<HashPair>(token), key, Prefetch::DISABLED);
}
// Searches for a key using a key predicate that is a refinement
@@ -1961,7 +1999,7 @@ class F14Table : public Policy {
const auto hp = splitHash(this->computeKeyHash(key));
if (size() > 0) {
auto existing = findImpl(hp, key);
auto existing = findImpl(hp, key, Prefetch::ENABLED);
if (!existing.atEnd()) {
return std::make_pair(existing, false);
}
@@ -2092,7 +2130,7 @@ class F14Table : public Policy {
return 0;
}
auto hp = splitHash(this->computeKeyHash(key));
auto iter = findImpl(hp, key);
auto iter = findImpl(hp, key, Prefetch::ENABLED);
if (!iter.atEnd()) {
beforeDestroy(this->valueAtItemForExtract(iter.item()));
eraseImpl(iter, hp);
@@ -886,8 +886,10 @@ void runPrehash() {
auto t1 = h.prehash(s("def"));
F14HashToken t2;
t2 = h.prehash(s("abc"));
h.prefetch(t2);
EXPECT_TRUE(h.find(t1, s("def")) == h.end());
EXPECT_FALSE(h.find(t2, s("abc")) == h.end());
h.prefetch(t1);
}
TEST(F14ValueMap, prehash) {
runPrehash<F14ValueMap<std::string, std::string>>();
@@ -2077,6 +2079,9 @@ void testContainsWithPrecomputedHash() {
const auto otherKey{2};
const auto hashTokenNotFound = m.prehash(otherKey);
EXPECT_FALSE(m.contains(hashTokenNotFound, otherKey));
m.prefetch(hashToken);
m.prefetch(hashTokenNotFound);
}
TEST(F14Map, containsWithPrecomputedHash) {
@@ -1557,6 +1557,8 @@ void testContainsWithPrecomputedHash() {
const auto otherKey{2};
const auto hashTokenNotFound = m.prehash(otherKey);
EXPECT_FALSE(m.contains(hashTokenNotFound, otherKey));
m.prefetch(hashToken);
m.prefetch(hashTokenNotFound);
}
TEST(F14Set, containsWithPrecomputedHash) {