Commit 8ad6b845 authored by Doron Roberts-Kedes, committed by Facebook Github Bot

Introduce SIMDTable for ConcurrentHashMapSegment

Summary:
As an alternative backend to BucketTable, introduce SIMDTable, which mimics the use of SSE intrinsics to filter tags, as found in the F14 code.
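
For context, the tag-filtering idea works roughly as follows (a minimal illustrative sketch, not folly's actual SIMDTable code; the function name and layout are assumptions): each slot keeps a one-byte tag derived from the key's hash, one SSE2 comparison screens 16 tags at a time, and only slots whose tag matches need a full key comparison.

  #include <emmintrin.h>  // SSE2 intrinsics
  #include <cstdint>

  // Sketch of F14-style tag filtering: compare 16 one-byte tags against the
  // probe tag in one instruction and return a bitmask of candidate slots.
  inline unsigned matchTags(const uint8_t* tags /* 16 contiguous tags */,
                            uint8_t needle) {
    __m128i chunk = _mm_loadu_si128(reinterpret_cast<const __m128i*>(tags));
    __m128i probe = _mm_set1_epi8(static_cast<char>(needle));
    __m128i eq = _mm_cmpeq_epi8(chunk, probe);
    // Bit i of the result is set iff tags[i] == needle.
    return static_cast<unsigned>(_mm_movemask_epi8(eq));
  }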

On synthetic benchmarks, SIMDTable outperforms BucketTable by 1.1x to 2.6x in operations per second when the map does not fit in cache. However, when the map fits in cache, SIMDTable executes around 10% fewer operations per second than BucketTable.

BucketTable, the existing backend, remains the default.

Reviewed By: djwatson

Differential Revision: D14458269

fbshipit-source-id: 5b6b01db5eb2430bdfc6f3500458f25971a6ad3d
parent 1060fb28
@@ -79,7 +79,16 @@ template <
     typename Allocator = std::allocator<uint8_t>,
     uint8_t ShardBits = 8,
     template <typename> class Atom = std::atomic,
-    class Mutex = std::mutex>
+    class Mutex = std::mutex,
+    template <
+        typename,
+        typename,
+        uint8_t,
+        typename,
+        typename,
+        typename,
+        template <typename> class,
+        class> class Impl = detail::concurrenthashmap::bucket::BucketTable>
 class ConcurrentHashMap {
   using SegmentT = detail::ConcurrentHashMapSegment<
       KeyType,
@@ -89,12 +98,12 @@ class ConcurrentHashMap {
       KeyEqual,
       Allocator,
       Atom,
-      Mutex>;
+      Mutex,
+      Impl>;
+  float load_factor_ = SegmentT::kDefaultLoadFactor;
   static constexpr uint64_t NumShards = (1 << ShardBits);
-  // Slightly higher than 1.0, in case hashing to shards isn't
-  // perfectly balanced, reserve(size) will still work without
-  // rehashing.
-  float load_factor_ = 1.05;
  public:
   class ConstIterator;
...
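
As a usage sketch of the new trailing Impl parameter: the alias below opts into the SIMD backend by spelling out every preceding template parameter. The detail::concurrenthashmap::simd::SIMDTable name is an assumption mirroring the BucketTable default shown in the diff, and the alias itself is hypothetical, not part of this commit.

  #include <folly/concurrency/ConcurrentHashMap.h>

  #include <atomic>
  #include <cstdint>
  #include <functional>
  #include <memory>
  #include <mutex>

  // Hypothetical alias selecting the SIMD backend via the trailing Impl slot.
  using SIMDMap = folly::ConcurrentHashMap<
      uint64_t,                        // KeyType
      uint64_t,                        // ValueType
      std::hash<uint64_t>,             // HashFn
      std::equal_to<uint64_t>,         // KeyEqual
      std::allocator<uint8_t>,         // Allocator
      8,                               // ShardBits
      std::atomic,                     // Atom
      std::mutex,                      // Mutex
      folly::detail::concurrenthashmap::simd::SIMDTable>; // Impl (assumed name)

  int main() {
    SIMDMap map;
    map.insert(1, 100);                // returns {iterator, inserted}
    auto it = map.find(1);
    return (it != map.cend() && it->second == 100) ? 0 : 1;
  }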