Commit 93d49bcf authored by Nathan Bronson's avatar Nathan Bronson Committed by Facebook Github Bot

F14 hash table in folly

Summary:
F14 is a 14-way probing hash table that resolves collisions by double
hashing.  Up to 14 keys are stored in a chunk at a single hash table
position.  SSE2 vector instructions are used to filter within a chunk;
intra-chunk search takes only a handful of instructions.  "F14" refers
to the fact that the algorithm "F"ilters up to "14" keys at a time.
This strategy allows the hash table to be operated at a high maximum
load factor (12/14) while still keeping probe chains very short.

F14 provides compelling replacements for most of the hash tables we use in
production at Facebook.  Switching to it can improve memory efficiency
and performance at the same time.  The hash table implementations
widely deployed in C++ at Facebook exist along a spectrum of space/time
tradeoffs.  The fastest is the least memory efficient, and the most
memory efficient is much slower than the rest.  F14 moves the curve,
simultaneously improving memory efficiency and performance when compared
to the existing algorithms, especially for complex keys and large maps.

Reviewed By: yfeldblum

Differential Revision: D7154343

fbshipit-source-id: 42ebd11b353285855c0fed5dd4b3af4620d39e98
parent bf814b82
...@@ -407,6 +407,8 @@ if (BUILD_TESTS)
# EnumerateTest.cpp since it uses macros to define tests.
#TEST enumerate_test SOURCES EnumerateTest.cpp
TEST evicting_cache_map_test SOURCES EvictingCacheMapTest.cpp
TEST f14_map_test SOURCES F14MapTest.cpp
TEST f14_set_test SOURCES F14SetTest.cpp
TEST foreach_test SOURCES ForeachTest.cpp
TEST merge_test SOURCES MergeTest.cpp
TEST sparse_byte_set_test SOURCES SparseByteSetTest.cpp
...
...@@ -59,9 +59,14 @@ nobase_follyinclude_HEADERS = \
container/Access.h \
container/Array.h \
container/detail/BitIteratorDetail.h \
container/detail/F14Memory.h \
container/detail/F14Policy.h \
container/detail/F14Table.h \
container/Iterator.h \
container/Enumerate.h \
container/EvictingCacheMap.h \
container/F14Map.h \
container/F14Set.h \
container/Foreach.h \
container/Foreach-inl.h \
container/SparseByteSet.h \
...@@ -508,6 +513,7 @@ libfolly_la_SOURCES = \
compression/Counters.cpp \
compression/Zlib.cpp \
concurrency/CacheLocality.cpp \
container/detail/F14Table.cpp \
detail/AtFork.cpp \
detail/Futex.cpp \
detail/IPAddress.cpp \
...
# F14 Hash Table

F14 is a 14-way probing hash table that resolves collisions by double
hashing. Up to 14 keys are stored in a chunk at a single hash table
position. SSE2 vector instructions are used to filter within a chunk;
intra-chunk search takes only a handful of instructions. **F14** refers
to the fact that the algorithm **F**ilters up to **14** keys at a time.
This strategy allows the hash table to be operated at a high maximum
load factor (12/14) while still keeping probe chains very short.

F14 provides compelling replacements for most of the hash tables we use in
production at Facebook. Switching to it can improve memory efficiency
and performance at the same time. The hash table implementations
widely deployed in C++ at Facebook exist along a spectrum of space/time
tradeoffs. The fastest is the least memory efficient, and the most
memory efficient (google::sparse_hash_map) is much slower than the rest.
F14 moves the curve, simultaneously improving memory efficiency and
performance when compared to most of the existing algorithms.
## F14 VARIANTS

The core hash table implementation has a pluggable storage strategy,
with three policies provided:

F14NodeMap stores values indirectly, calling malloc on each insert like
std::unordered_map. This implementation is the most memory efficient
for medium and large keys. It provides the same iterator and reference
stability guarantees as the standard map while being faster and more
memory efficient, so you can substitute F14NodeMap for std::unordered_map
safely in production code. F14's filtering substantially reduces
indirection (and cache misses) when compared to std::unordered_map.

F14ValueMap stores values inline, like google::dense_hash_map.
Inline storage is the most memory efficient for small values, but for
medium and large values it wastes space. Because it can tolerate a much
higher load factor, F14ValueMap is almost twice as memory efficient as
dense_hash_map while also faster for most workloads.

F14VectorMap keeps values packed in a contiguous array. The main hash
array stores 32-bit indexes into the value vector. Compared to the
existing internal implementations that use a similar strategy, F14 is
slower for simple keys and small or medium-sized tables (because of the
cost of bit mixing), faster for complex keys and large tables, and saves
about 16 bytes per entry on average.

We also provide:

F14FastMap is an alias to F14ValueMap or F14VectorMap depending on
entry size. When the key and mapped_type are less than 24 bytes it
typedefs to F14ValueMap. For medium and large entries it typedefs to
F14VectorMap. This strategy provides the best performance, while also
providing better memory efficiency than dense_hash_map or the other hash
tables in use at Facebook that don't individually allocate nodes.
## WHICH F14 VARIANT IS RIGHT FOR ME?

F14FastMap is a good default choice. If you care more about memory
efficiency than performance, F14NodeMap is better for medium and
large entries. F14NodeMap is the only F14 variant that doesn't move
its elements, so in the rare case that you need reference stability you
should use it.
## TRANSPARENT (HETEROGENEOUS) HASH AND EQUALITY

In some cases it makes sense to define hash and key equality across
types. For example, StringPiece's hash and equality are capable of
accepting std::string (because std::string is implicitly convertible
to StringPiece). If you mark the hash functor and key equality functor
as _transparent_, then F14 will allow you to search the table directly
using any of the accepted key types without converting the key.

For example, using `H = folly::transparent<folly::hasher<folly::StringPiece>>`
and `E = folly::transparent<std::equal_to<folly::StringPiece>>`, an
`F14FastSet<std::string, H, E>` will allow you to find or count using
a StringPiece key (as well as a std::string key). Note that this is
possible even though there is no implicit conversion from StringPiece
to std::string.
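As a runnable sketch of the idea, the example below uses the standard library's transparent comparator `std::less<>` with an ordered container, since the mechanism is analogous and needs no folly headers; the helper name is purely illustrative:

```cpp
#include <cassert>
#include <map>
#include <string>

// Heterogeneous-lookup sketch: std::less<> is the standard library's
// "transparent" comparator, playing the same role for std::map that
// folly::transparent<...> hash/equality functors play for F14 tables.
// containsWithoutConversion is a hypothetical helper, not a folly API.
bool containsWithoutConversion(
    const std::map<std::string, int, std::less<>>& m,
    const char* key) {
  // find() accepts the char pointer directly; no temporary std::string
  // is constructed just to perform the lookup.
  return m.find(key) != m.end();
}
```

With a non-transparent comparator (plain `std::less<std::string>`), the same `find` call would first materialize a `std::string` from the pointer.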
## WHY CHUNKS?

Assuming that you have a magic wand that lets you search all of the keys
in a chunk in a single step (our wand is called _mm_cmpeq_epi8), then
using chunks fundamentally improves the load factor/collision tradeoff.
The cost is proportional only to the number of chunks visited to find
the key.

It's kind of like the birthday paradox in reverse. In a room with 23
people there is a 50/50 chance that two of them have the same birthday
(overflowing a chunk with capacity 1), but the chance that 8 of them
were born in the same week (overflowing a chunk with capacity 7) is
very small. Even though the chance of any two people being born in
the same week is higher (1/52 instead of 1/365), the larger number of
coincidences required means that the final probability is much lower
(less than 1 in a million). It would require 160 people to reach a 50/50
chance that 8 of them were born in the same week.
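The day-vs-week arithmetic above can be sanity-checked with a quick Monte Carlo sketch (the function name, trial count, and seed are arbitrary choices, not part of F14):

```cpp
#include <cassert>
#include <random>
#include <vector>

// Estimate the probability that some "bucket" (a day or a week) ends up
// holding more than `capacity` of `people` uniformly random arrivals.
double probOverflow(int people, int buckets, int capacity, int trials) {
  std::mt19937 rng(12345);
  std::uniform_int_distribution<int> pick(0, buckets - 1);
  int overflowed = 0;
  for (int t = 0; t < trials; ++t) {
    std::vector<int> count(buckets, 0);
    for (int p = 0; p < people; ++p) {
      if (++count[pick(rng)] > capacity) {  // this bucket just overflowed
        ++overflowed;
        break;
      }
    }
  }
  return static_cast<double>(overflowed) / trials;
}
```

`probOverflow(23, 365, 1, ...)` lands near 0.5 (the classic birthday paradox), `probOverflow(23, 52, 7, ...)` is vanishingly small, and pushing the capacity-7 case back to a 50/50 chance takes roughly 160 people.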
## WHY PROBING?

Chaining to a new chunk on collision is not very memory efficient,
because the new chunk is almost certain to be under-filled. We tried
chaining to individual entries, but that bloated the lookup code and
couldn't match the performance of a probing strategy.

At our max load factor of 12/14, the expected probe length when searching
for an existing key (find hit) is 1.04, and fewer than 1% of keys are
not found in one of the first 3 chunks. When searching for a key that is
not in the map (find miss) the expected probe length at max load factor
is 1.275 and the P99 probe length is 4.
## CHUNK OVERFLOW COUNTS: REFERENCE-COUNTED TOMBSTONES

Hash tables with a complex probing strategy (quadratic or double-hashing)
typically use a tombstone on erase, because it is very difficult to
find the keys that might have been displaced by a full bucket (i.e.,
chunk in F14). If the probing strategy allows only a small number of
potential destinations for a displaced key (linear probing, Robin Hood
hashing, or Cuckoo hashing), it is also an option to find a displaced key,
relocate it, and then recursively repair the new hole.

Tombstones must be eventually reclaimed to deal with workloads that
continuously insert and erase. google::dense_hash_map eventually triggers
a rehash in this case, for example. Unfortunately, to avoid quadratic
behavior this rehash may have to halve the max load factor of the table,
resulting in a huge decrease in memory efficiency.

Although most probing algorithms just keep probing until they find an
empty slot, probe lengths can be substantially reduced if you track
whether a bucket has actually rejected a key. This "overflow bit"
is set when an attempt is made to place a key into the bucket but the
bucket was full. (An especially unlucky key might have to try several
buckets, setting the overflow bit in each.) Amble and Knuth describe an
overflow bit in the "Further development" section of "Ordered hash tables"
(https://academic.oup.com/comjnl/article/17/2/135/525363).

The overflow bit subsumes the role of a tombstone, since a tombstone's
only effect is to cause a probe search to continue. Unlike a tombstone,
however, the overflow bit is a property of the keys that were displaced
rather than the key that was erased. It's only a small step to turn
this into a counter that records the number of displaced keys, and that
can be decremented on erase. Overflow counts give us both an earlier
exit from probing and the effect of a reference-counted tombstone.
They automatically clean themselves up in a steady-state insert and
erase workload, giving us the upsides of double-hashing without the
normal downsides of tombstones.
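A toy sketch of the scheme, using capacity-1 buckets instead of 14-slot chunks (the names and structure are illustrative only, not folly's implementation):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Double-hashed set of non-negative ints where each bucket carries an
// overflow count: the number of keys that probed past it while it was
// full. Erase decrements the counts along the erased key's probe path,
// so the counts behave like reference-counted tombstones.
struct OverflowCountSet {
  struct Bucket {
    bool used = false;
    int key = 0;
    std::uint32_t overflow = 0;
  };
  std::vector<Bucket> b;
  explicit OverflowCountSet(std::size_t n) : b(n) {}  // n should be prime
  std::size_t h1(int k) const { return k % b.size(); }
  std::size_t h2(int k) const { return 1 + (k / b.size()) % (b.size() - 1); }

  bool contains(int k) const {
    for (std::size_t i = h1(k), step = h2(k);; i = (i + step) % b.size()) {
      if (b[i].used && b[i].key == k) return true;
      if (b[i].overflow == 0) return false;  // nothing ever probed past here
    }
  }
  void insert(int k) {
    if (contains(k)) return;
    std::size_t i = h1(k), step = h2(k);
    while (b[i].used) {
      ++b[i].overflow;  // k is being displaced past this full bucket
      i = (i + step) % b.size();
    }
    b[i].used = true;
    b[i].key = k;
  }
  void erase(int k) {
    std::size_t i = h1(k), step = h2(k);
    while (!(b[i].used && b[i].key == k)) {
      if (b[i].overflow == 0) return;  // not present
      i = (i + step) % b.size();
    }
    b[i].used = false;
    // decrement the "reference-counted tombstones" along k's probe path
    for (std::size_t j = h1(k); j != i; j = (j + step) % b.size()) {
      --b[j].overflow;
    }
  }
};
```

In a steady-state insert/erase workload the counts return to zero on their own, so a later `contains` miss exits at the first-choice bucket, where a tombstone scheme would keep probing.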
## HOW DOES VECTOR FILTERING WORK?

F14 computes a secondary hash value for each key, which we call the key's
tag. Tags are 1 byte: 7 bits of entropy with the top bit set. The 14
tags are joined with 2 additional bytes of metadata to form a 16-byte
aligned __m128i at the beginning of the chunk. When we're looking for a
key we can compare the needle's tag to all 14 tags in a chunk in parallel.
The result of the comparison is a bitmask that identifies only slots in
a chunk that might have a non-empty matching key. Failing searches are
unlikely to perform any key comparisons, successful searches are likely
to perform exactly 1 comparison, and all of the resulting branches are
pretty predictable.

The vector search uses SSE2 intrinsics. SSE2 is a non-optional part
of the x86_64 platform, so every 64-bit x86 platform supports them.
AARCH64's vector instructions will allow a similar strategy, although
the lack of a movemask operation complicates things a bit.
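A minimal sketch of the filtering step, assuming the layout described above (14 tag bytes plus 2 metadata bytes in one 16-byte block; `matchMask` is an illustrative name, not folly's API):

```cpp
#include <emmintrin.h>  // SSE2 intrinsics

#include <cassert>
#include <cstdint>

// Compare a needle tag against all 16 bytes of a chunk's metadata in
// parallel, then keep only the bits for the 14 tag lanes. Each set bit
// in the result marks a slot whose full key must actually be compared.
int matchMask(const std::uint8_t tags[16], std::uint8_t needleTag) {
  __m128i chunk =
      _mm_loadu_si128(reinterpret_cast<const __m128i*>(tags));
  __m128i needle = _mm_set1_epi8(static_cast<char>(needleTag));
  __m128i eq = _mm_cmpeq_epi8(chunk, needle);  // 0xFF where bytes match
  return _mm_movemask_epi8(eq) & 0x3FFF;       // one bit per tag lane
}
```

Because real tags always have the top bit set, an empty slot (tag byte 0) can never produce a false match against a real needle.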
## WHAT ABOUT MEMORY OVERHEAD FOR SMALL TABLES?

The F14 algorithm works well for large tables, because the tags can
fit in cache even when the keys and values can't. Tiny hash tables are
by far the most numerous, however, so it's important that we minimize
the footprint when the table is empty or has only 1 or 2 elements.

Conveniently, tags cause keys to be densely packed into the bottom of
a chunk and filter all memory accesses to the portions of a chunk that
are not used. That means that we can also support capacities that are
a fraction of 1 chunk with no change to any of the search and insertion
algorithms. The only change required is in the check to see if a rehash
is required. F14's first three capacities all use one chunk and one
16-byte metadata vector, but allocate space for 2, 6, and then 12 keys.
## IS F14NODEMAP FULLY STANDARDS-COMPLIANT?

No. F14 does provide full support for stateful allocators, fancy
pointers, and as many parts of the C++ standard for unordered associative
containers as it can, but it is not fully standards-compliant.

We don't know of a way to efficiently implement the full bucket API
in a table that uses double-hashed probing, in particular size_type
bucket(key_type const&). This function must compute the bucket index
for any key, even before it is inserted into the table. That means
that a local_iterator range can't partition the key space by the chunk
that terminated probing during insert; the only partition choice with
reasonable locality would be the first-choice chunk. The probe sequence
for a key in double-hashing depends on the key, not the first-choice
chunk, however, so it is infeasible to search for all of the displaced
keys given only their first-choice location. We're unwilling to use an
inferior probing strategy or dedicate space to the required metadata just
to support the full bucket API. Implementing the rest of the bucket API,
such as local_iterator begin(size_type), would not be difficult.

F14 does not allow max_load_factor to be adjusted. Probing tables
can't support load factors greater than 1, so the standards-required
ability to temporarily disable rehashing by temporarily setting a very
high max load factor just isn't possible. We have also measured that
there is no performance advantage to forcing a low load factor, so it's
better just to omit the field and save space in every F14 instance.
This is part of the way we get empty maps down to 32 bytes. The void
max_load_factor(float) method is still present, but does nothing. We use
the default max_load_factor of 1.0f all of the time, adjusting the value
returned from size_type bucket_count() so that the externally-visible
load factor reaches 1 just as the actual internal load factor reaches
our threshold of 12/14.
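One way to see that arithmetic (the scaling function below is a hypothetical sketch of the reported value, not folly's actual code):

```cpp
#include <cassert>
#include <cstddef>

// With max_load_factor pinned at 1.0, report a bucket_count scaled by
// 12/14 so that the externally visible load factor size / bucket_count
// reaches 1.0 exactly when the internal occupancy size / slots reaches
// the rehash threshold of 12/14.
std::size_t visibleBucketCount(std::size_t slots) {
  return slots * 12 / 14;
}
```

For a table with 14 internal slots this reports 12 buckets, so at 12 elements the visible load factor is 12/12 = 1.0 while the internal occupancy is 12/14.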
The standard requires that a hash table be iterable in O(size()) time
regardless of its load factor (rather than O(bucket_count())). That means
if you insert 1 million keys then erase all but 10, iteration should
be O(10). For std::unordered_map the cost of supporting this scenario
is an extra level of indirection in every read and every write, which is
part of why we can improve substantially on its performance. Low load
factor iteration occurs in practice when erasing keys during iteration
(for example by repeatedly calling map.erase(map.begin())), so we provide
the weaker guarantee that iteration is O(size()) after erasing any prefix
of the iteration order. F14VectorMap doesn't have this problem.
The standard requires that clear() be O(size()), which has the practical
effect of prohibiting a change to bucket_count. F14 deallocates
all memory during clear() if it has space for more than 100 keys, to
avoid leaving a large table that will be expensive to iterate (see the
previous paragraph). google::dense_hash_map works around this tradeoff
by providing both clear() and clear_no_resize(); we could do something
similar.

F14NodeMap does not currently support the C++17 node API, but it could
be trivially added.
* Nathan Bronson -- <ngbronson@fb.com>
* Xiao Shi -- <xshi@fb.com>
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
/**
* F14NodeMap, F14ValueMap, and F14VectorMap
*
* F14FastMap is a conditional typedef to F14ValueMap or F14VectorMap
*
* See F14.md
*
* @author Nathan Bronson <ngbronson@fb.com>
* @author Xiao Shi <xshi@fb.com>
*/
#include <stdexcept>
#include <folly/Traits.h>
#include <folly/functional/ApplyTuple.h>
#include <folly/lang/Exception.h>
#include <folly/container/detail/F14Policy.h>
#include <folly/container/detail/F14Table.h>
#if !FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
#include <unordered_map>
namespace folly {
template <typename... Args>
using F14NodeMap = std::unordered_map<Args...>;
template <typename... Args>
using F14ValueMap = std::unordered_map<Args...>;
template <typename... Args>
using F14VectorMap = std::unordered_map<Args...>;
template <typename... Args>
using F14FastMap = std::unordered_map<Args...>;
} // namespace folly
#else // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
namespace folly {
namespace f14 {
namespace detail {
template <typename Policy>
class F14BasicMap {
template <
typename K,
typename T,
typename H = typename Policy::Hasher,
typename E = typename Policy::KeyEqual>
using IfIsTransparent = folly::_t<EnableIfIsTransparent<void, H, E, K, T>>;
public:
//// PUBLIC - Member types
using key_type = typename Policy::Key;
using mapped_type = typename Policy::Mapped;
using value_type = typename Policy::Value;
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using hasher = typename Policy::Hasher;
using key_equal = typename Policy::KeyEqual;
using allocator_type = typename Policy::Alloc;
using reference = value_type&;
using const_reference = value_type const&;
using pointer = typename std::allocator_traits<allocator_type>::pointer;
using const_pointer =
typename std::allocator_traits<allocator_type>::const_pointer;
using iterator = typename Policy::Iter;
using const_iterator = typename Policy::ConstIter;
private:
using ItemIter = typename Policy::ItemIter;
public:
//// PUBLIC - Member functions
F14BasicMap() noexcept(F14Table<Policy>::kDefaultConstructIsNoexcept)
: F14BasicMap(0) {}
explicit F14BasicMap(
std::size_t initialCapacity,
hasher const& hash = hasher{},
key_equal const& eq = key_equal{},
allocator_type const& alloc = allocator_type{})
: table_{initialCapacity, hash, eq, alloc} {}
explicit F14BasicMap(std::size_t initialCapacity, allocator_type const& alloc)
: F14BasicMap(initialCapacity, hasher{}, key_equal{}, alloc) {}
explicit F14BasicMap(
std::size_t initialCapacity,
hasher const& hash,
allocator_type const& alloc)
: F14BasicMap(initialCapacity, hash, key_equal{}, alloc) {}
explicit F14BasicMap(allocator_type const& alloc) : F14BasicMap(0, alloc) {}
template <typename InputIt>
F14BasicMap(
InputIt first,
InputIt last,
std::size_t initialCapacity = 0,
hasher const& hash = hasher{},
key_equal const& eq = key_equal{},
allocator_type const& alloc = allocator_type{})
: table_{initialCapacity, hash, eq, alloc} {
initialInsert(first, last, initialCapacity);
}
template <typename InputIt>
F14BasicMap(
InputIt first,
InputIt last,
std::size_t initialCapacity,
allocator_type const& alloc)
: table_{initialCapacity, hasher{}, key_equal{}, alloc} {
initialInsert(first, last, initialCapacity);
}
template <typename InputIt>
F14BasicMap(
InputIt first,
InputIt last,
std::size_t initialCapacity,
hasher const& hash,
allocator_type const& alloc)
: table_{initialCapacity, hash, key_equal{}, alloc} {
initialInsert(first, last, initialCapacity);
}
F14BasicMap(F14BasicMap const& rhs) = default;
F14BasicMap(F14BasicMap const& rhs, allocator_type const& alloc)
: table_{rhs.table_, alloc} {}
F14BasicMap(F14BasicMap&& rhs) = default;
F14BasicMap(F14BasicMap&& rhs, allocator_type const& alloc) noexcept(
F14Table<Policy>::kAllocIsAlwaysEqual)
: table_{std::move(rhs.table_), alloc} {}
F14BasicMap(
std::initializer_list<value_type> init,
std::size_t initialCapacity = 0,
hasher const& hash = hasher{},
key_equal const& eq = key_equal{},
allocator_type const& alloc = allocator_type{})
: table_{initialCapacity, hash, eq, alloc} {
initialInsert(init.begin(), init.end(), initialCapacity);
}
F14BasicMap(
std::initializer_list<value_type> init,
std::size_t initialCapacity,
allocator_type const& alloc)
: table_{initialCapacity, hasher{}, key_equal{}, alloc} {
initialInsert(init.begin(), init.end(), initialCapacity);
}
F14BasicMap(
std::initializer_list<value_type> init,
std::size_t initialCapacity,
hasher const& hash,
allocator_type const& alloc)
: table_{initialCapacity, hash, key_equal{}, alloc} {
initialInsert(init.begin(), init.end(), initialCapacity);
}
F14BasicMap& operator=(F14BasicMap const&) = default;
F14BasicMap& operator=(F14BasicMap&&) = default;
allocator_type get_allocator() const noexcept {
return table_.alloc();
}
//// PUBLIC - Iterators
iterator begin() noexcept {
return table_.makeIter(table_.begin());
}
const_iterator begin() const noexcept {
return cbegin();
}
const_iterator cbegin() const noexcept {
return table_.makeConstIter(table_.begin());
}
iterator end() noexcept {
return table_.makeIter(table_.end());
}
const_iterator end() const noexcept {
return cend();
}
const_iterator cend() const noexcept {
return table_.makeConstIter(table_.end());
}
//// PUBLIC - Capacity
bool empty() const noexcept {
return table_.empty();
}
std::size_t size() const noexcept {
return table_.size();
}
std::size_t max_size() const noexcept {
return table_.max_size();
}
F14TableStats computeStats() const noexcept {
return table_.computeStats();
}
//// PUBLIC - Modifiers
void clear() noexcept {
table_.clear();
}
std::pair<iterator, bool> insert(value_type const& value) {
return emplace(value);
}
template <typename P>
std::enable_if_t<
std::is_constructible<value_type, P&&>::value,
std::pair<iterator, bool>>
insert(P&& value) {
return emplace(std::forward<P>(value));
}
std::pair<iterator, bool> insert(value_type&& value) {
return emplace(std::move(value));
}
// std::unordered_map's hinted insertion API is misleading. No
// implementation I've seen actually uses the hint. Code restructuring
// by the caller to use the hinted API is at best unnecessary, and at
// worst a pessimization. It is used, however, so we provide it.
iterator insert(const_iterator /*hint*/, value_type const& value) {
return insert(value).first;
}
template <typename P>
std::enable_if_t<std::is_constructible<value_type, P&&>::value, iterator>
insert(const_iterator /*hint*/, P&& value) {
return insert(std::forward<P>(value)).first;
}
iterator insert(const_iterator /*hint*/, value_type&& value) {
return insert(std::move(value)).first;
}
template <class... Args>
iterator emplace_hint(const_iterator /*hint*/, Args&&... args) {
return emplace(std::forward<Args>(args)...).first;
}
private:
template <class InputIt>
FOLLY_ALWAYS_INLINE void
bulkInsert(InputIt first, InputIt last, bool autoReserve) {
if (autoReserve) {
table_.reserveForInsert(std::distance(first, last));
}
while (first != last) {
insert(*first);
++first;
}
}
template <class InputIt>
void initialInsert(InputIt first, InputIt last, std::size_t initialCapacity) {
assert(empty() && bucket_count() >= initialCapacity);
// It's possible that there are a lot of duplicates in first..last and
// so we will oversize ourself. The common case, however, is that
// we can avoid a lot of rehashing if we pre-expand. The behavior
// is easy to disable at a particular call site by asking for an
// initialCapacity of 1.
bool autoReserve =
std::is_same<
typename std::iterator_traits<InputIt>::iterator_category,
std::random_access_iterator_tag>::value &&
initialCapacity == 0;
bulkInsert(first, last, autoReserve);
}
public:
template <class InputIt>
void insert(InputIt first, InputIt last) {
// Bulk reserve is a heuristic choice, so it can backfire. We restrict
// ourself to situations that mimic bulk construction without an
// explicit initialCapacity.
bool autoReserve =
std::is_same<
typename std::iterator_traits<InputIt>::iterator_category,
std::random_access_iterator_tag>::value &&
bucket_count() == 0;
bulkInsert(first, last, autoReserve);
}
void insert(std::initializer_list<value_type> ilist) {
insert(ilist.begin(), ilist.end());
}
template <typename M>
std::pair<iterator, bool> insert_or_assign(key_type const& key, M&& obj) {
auto rv = try_emplace(key, std::forward<M>(obj));
if (!rv.second) {
rv.first->second = std::forward<M>(obj);
}
return rv;
}
template <typename M>
std::pair<iterator, bool> insert_or_assign(key_type&& key, M&& obj) {
auto rv = try_emplace(std::move(key), std::forward<M>(obj));
if (!rv.second) {
rv.first->second = std::forward<M>(obj);
}
return rv;
}
template <typename M>
iterator
insert_or_assign(const_iterator /*hint*/, key_type const& key, M&& obj) {
return insert_or_assign(key, std::forward<M>(obj)).first;
}
template <typename M>
iterator insert_or_assign(const_iterator /*hint*/, key_type&& key, M&& obj) {
return insert_or_assign(std::move(key), std::forward<M>(obj)).first;
}
private:
std::pair<ItemIter, bool> emplaceItem() {
// rare but valid
return table_.tryEmplaceValue(key_type{});
}
template <typename U2>
std::pair<ItemIter, bool> emplaceItem(key_type&& x, U2&& y) {
// best case
return table_.tryEmplaceValue(x, std::move(x), std::forward<U2>(y));
}
template <typename U2>
std::pair<ItemIter, bool> emplaceItem(key_type const& x, U2&& y) {
// okay case, no construction unless we will actually insert
return table_.tryEmplaceValue(x, x, std::forward<U2>(y));
}
template <typename U1, typename U2>
std::enable_if_t<
!std::is_same<key_type, folly::remove_cvref_t<U1>>::value,
std::pair<ItemIter, bool>>
emplaceItem(U1&& x, U2&& y) {
static_assert(
!std::is_same<key_type, folly::remove_cvref_t<U1>>::value,
"method signature bug");
// We can either construct key_type on the stack and move it if we end
// up inserting, or use a policy-specific mechanism to construct the
// item (possibly indirect) and then destroy it if we don't end up
// using it. The cost of being wrong is much higher for the latter
// so we choose the former (unlike std::unordered_map::emplace).
key_type k(std::forward<U1>(x));
return table_.tryEmplaceValue(k, std::move(k), std::forward<U2>(y));
}
template <typename U1, typename U2>
std::pair<ItemIter, bool> emplaceItem(std::pair<U1, U2> const& p) {
return emplaceItem(p.first, p.second);
}
template <typename U1, typename U2>
std::pair<ItemIter, bool> emplaceItem(std::pair<U1, U2>&& p) {
return emplaceItem(std::move(p.first), std::move(p.second));
}
template <typename U1, class... Args2>
std::enable_if_t<
std::is_same<folly::remove_cvref_t<U1>, key_type>::value,
std::pair<ItemIter, bool>>
emplaceItem(
std::piecewise_construct_t,
std::tuple<U1>&& first_args,
std::tuple<Args2...>&& second_args) {
// We take care to forward by reference even if the caller didn't
// use forward_as_tuple properly
return table_.tryEmplaceValue(
std::get<0>(first_args),
std::piecewise_construct,
std::tuple<std::add_rvalue_reference_t<U1>>{std::move(first_args)},
std::tuple<std::add_rvalue_reference_t<Args2>...>{
std::move(second_args)});
}
template <class... Args1, class... Args2>
std::enable_if_t<
std::tuple_size<std::tuple<Args1...>>::value != 1 ||
!std::is_same<
folly::remove_cvref_t<
std::tuple_element_t<0, std::tuple<Args1..., value_type>>>,
key_type>::value,
std::pair<ItemIter, bool>>
emplaceItem(
std::piecewise_construct_t,
std::tuple<Args1...>&& first_args,
std::tuple<Args2...>&& second_args) {
auto k = folly::make_from_tuple<key_type>(
std::tuple<std::add_rvalue_reference_t<Args1>...>{
std::move(first_args)});
return table_.tryEmplaceValue(
k,
std::piecewise_construct,
std::forward_as_tuple(std::move(k)),
std::tuple<std::add_rvalue_reference_t<Args2>...>{
std::move(second_args)});
}
public:
template <typename... Args>
std::pair<iterator, bool> emplace(Args&&... args) {
auto rv = emplaceItem(std::forward<Args>(args)...);
return std::make_pair(table_.makeIter(rv.first), rv.second);
}
template <typename... Args>
std::pair<iterator, bool> try_emplace(key_type const& key, Args&&... args) {
auto rv = table_.tryEmplaceValue(
key,
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(std::forward<Args>(args)...));
return std::make_pair(table_.makeIter(rv.first), rv.second);
}
template <typename... Args>
std::pair<iterator, bool> try_emplace(key_type&& key, Args&&... args) {
auto rv = table_.tryEmplaceValue(
key,
std::piecewise_construct,
std::forward_as_tuple(std::move(key)),
std::forward_as_tuple(std::forward<Args>(args)...));
return std::make_pair(table_.makeIter(rv.first), rv.second);
}
template <typename... Args>
iterator
try_emplace(const_iterator /*hint*/, key_type const& key, Args&&... args) {
auto rv = table_.tryEmplaceValue(
key,
std::piecewise_construct,
std::forward_as_tuple(key),
std::forward_as_tuple(std::forward<Args>(args)...));
return table_.makeIter(rv.first);
}
template <typename... Args>
iterator
try_emplace(const_iterator /*hint*/, key_type&& key, Args&&... args) {
auto rv = table_.tryEmplaceValue(
key,
std::piecewise_construct,
std::forward_as_tuple(std::move(key)),
std::forward_as_tuple(std::forward<Args>(args)...));
return table_.makeIter(rv.first);
}
FOLLY_ALWAYS_INLINE iterator erase(const_iterator pos) {
// If we are inlined then gcc and clang can optimize away all of the
// work of itemPos.advance() if our return value is discarded.
auto itemPos = table_.unwrapIter(pos);
table_.erase(itemPos);
itemPos.advance();
return table_.makeIter(itemPos);
}
// This form avoids ambiguity when key_type has a templated constructor
// that accepts const_iterator
iterator erase(iterator pos) {
table_.erase(table_.unwrapIter(pos));
return ++pos;
}
iterator erase(const_iterator first, const_iterator last) {
auto itemFirst = table_.unwrapIter(first);
auto itemLast = table_.unwrapIter(last);
while (itemFirst != itemLast) {
table_.erase(itemFirst);
itemFirst.advance();
}
return table_.makeIter(itemFirst);
}
size_type erase(key_type const& key) {
return table_.erase(key);
}
//// PUBLIC - Lookup
FOLLY_ALWAYS_INLINE mapped_type& at(key_type const& key) {
return at(*this, key);
}
FOLLY_ALWAYS_INLINE mapped_type const& at(key_type const& key) const {
return at(*this, key);
}
mapped_type& operator[](key_type const& key) {
return try_emplace(key).first->second;
}
mapped_type& operator[](key_type&& key) {
return try_emplace(std::move(key)).first->second;
}
FOLLY_ALWAYS_INLINE std::size_t count(key_type const& key) const {
return table_.find(key).atEnd() ? 0 : 1;
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, std::size_t> count(
K const& key) const {
return table_.find(key).atEnd() ? 0 : 1;
}
F14HashToken prehash(key_type const& key) const {
return table_.prehash(key);
}
template <typename K>
IfIsTransparent<K, F14HashToken> prehash(K const& key) const {
return table_.prehash(key);
}
FOLLY_ALWAYS_INLINE iterator find(key_type const& key) {
return table_.makeIter(table_.find(key));
}
FOLLY_ALWAYS_INLINE const_iterator find(key_type const& key) const {
return table_.makeConstIter(table_.find(key));
}
FOLLY_ALWAYS_INLINE iterator
find(F14HashToken const& token, key_type const& key) {
return table_.makeIter(table_.find(token, key));
}
FOLLY_ALWAYS_INLINE const_iterator
find(F14HashToken const& token, key_type const& key) const {
return table_.makeConstIter(table_.find(token, key));
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, iterator> find(K const& key) {
return table_.makeIter(table_.find(key));
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, const_iterator> find(
K const& key) const {
return table_.makeConstIter(table_.find(key));
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, iterator> find(
F14HashToken const& token,
K const& key) {
return table_.makeIter(table_.find(token, key));
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, const_iterator> find(
F14HashToken const& token,
K const& key) const {
return table_.makeConstIter(table_.find(token, key));
}
std::pair<iterator, iterator> equal_range(key_type const& key) {
return equal_range(*this, key);
}
std::pair<const_iterator, const_iterator> equal_range(
key_type const& key) const {
return equal_range(*this, key);
}
template <typename K>
IfIsTransparent<K, std::pair<iterator, iterator>> equal_range(K const& key) {
return equal_range(*this, key);
}
template <typename K>
IfIsTransparent<K, std::pair<const_iterator, const_iterator>> equal_range(
K const& key) const {
return equal_range(*this, key);
}
//// PUBLIC - Bucket interface
std::size_t bucket_count() const noexcept {
return table_.bucket_count();
}
std::size_t max_bucket_count() const noexcept {
return table_.max_bucket_count();
}
//// PUBLIC - Hash policy
float load_factor() const noexcept {
return table_.load_factor();
}
float max_load_factor() const noexcept {
return table_.max_load_factor();
}
void max_load_factor(float v) {
table_.max_load_factor(v);
}
void rehash(std::size_t bucketCapacity) {
// The standard's rehash() requires understanding the max load factor,
// which is easy to get wrong. Since we don't actually allow adjustment
// of max_load_factor, there is no difference between rehash and reserve.
reserve(bucketCapacity);
}
void reserve(std::size_t capacity) {
table_.reserve(capacity);
}
//// PUBLIC - Observers
hasher hash_function() const {
return table_.hasher();
}
key_equal key_eq() const {
return table_.keyEqual();
}
private:
template <typename Self, typename K>
FOLLY_ALWAYS_INLINE static auto& at(Self& self, K const& key) {
auto iter = self.find(key);
if (iter == self.end()) {
throw_exception<std::out_of_range>("at() did not find key");
}
return iter->second;
}
template <typename Self, typename K>
static auto equal_range(Self& self, K const& key) {
auto first = self.find(key);
auto last = first;
if (last != self.end()) {
++last;
}
return std::make_pair(first, last);
}
protected:
F14Table<Policy> table_;
};
template <typename M>
bool mapsEqual(M const& lhs, M const& rhs) {
if (lhs.size() != rhs.size()) {
return false;
}
for (auto& kv : lhs) {
auto iter = rhs.find(kv.first);
if (iter == rhs.end()) {
return false;
}
if (std::is_same<
typename M::key_equal,
std::equal_to<typename M::key_type>>::value) {
// find() already checked the key, so just check the value
if (!(kv.second == iter->second)) {
return false;
}
} else {
// spec says we compare key with == as well as with key_eq()
if (!(kv == *iter)) {
return false;
}
}
}
return true;
}
} // namespace detail
} // namespace f14
template <
typename Key,
typename Mapped,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<std::pair<Key const, Mapped>>>
class F14ValueMap
: public f14::detail::F14BasicMap<f14::detail::MapPolicyWithDefaults<
f14::detail::ValueContainerPolicy,
Key,
Mapped,
Hasher,
KeyEqual,
Alloc>> {
using Policy = f14::detail::MapPolicyWithDefaults<
f14::detail::ValueContainerPolicy,
Key,
Mapped,
Hasher,
KeyEqual,
Alloc>;
using Super = f14::detail::F14BasicMap<Policy>;
public:
F14ValueMap() noexcept(
f14::detail::F14Table<Policy>::kDefaultConstructIsNoexcept)
: Super{} {}
using Super::Super;
void swap(F14ValueMap& rhs) noexcept(
f14::detail::F14Table<Policy>::kSwapIsNoexcept) {
this->table_.swap(rhs.table_);
}
};
template <typename K, typename M, typename H, typename E, typename A>
void swap(
F14ValueMap<K, M, H, E, A>& lhs,
F14ValueMap<K, M, H, E, A>& rhs) noexcept(noexcept(lhs.swap(rhs))) {
lhs.swap(rhs);
}
template <typename K, typename M, typename H, typename E, typename A>
bool operator==(
F14ValueMap<K, M, H, E, A> const& lhs,
F14ValueMap<K, M, H, E, A> const& rhs) {
return mapsEqual(lhs, rhs);
}
template <typename K, typename M, typename H, typename E, typename A>
bool operator!=(
F14ValueMap<K, M, H, E, A> const& lhs,
F14ValueMap<K, M, H, E, A> const& rhs) {
return !(lhs == rhs);
}
template <
typename Key,
typename Mapped,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<std::pair<Key const, Mapped>>>
class F14NodeMap
: public f14::detail::F14BasicMap<f14::detail::MapPolicyWithDefaults<
f14::detail::NodeContainerPolicy,
Key,
Mapped,
Hasher,
KeyEqual,
Alloc>> {
using Policy = f14::detail::MapPolicyWithDefaults<
f14::detail::NodeContainerPolicy,
Key,
Mapped,
Hasher,
KeyEqual,
Alloc>;
using Super = f14::detail::F14BasicMap<Policy>;
public:
F14NodeMap() noexcept(
f14::detail::F14Table<Policy>::kDefaultConstructIsNoexcept)
: Super{} {}
using Super::Super;
void swap(F14NodeMap& rhs) noexcept(
f14::detail::F14Table<Policy>::kSwapIsNoexcept) {
this->table_.swap(rhs.table_);
}
// TODO extract and node_handle insert
};
template <typename K, typename M, typename H, typename E, typename A>
void swap(
F14NodeMap<K, M, H, E, A>& lhs,
F14NodeMap<K, M, H, E, A>& rhs) noexcept(noexcept(lhs.swap(rhs))) {
lhs.swap(rhs);
}
template <typename K, typename M, typename H, typename E, typename A>
bool operator==(
F14NodeMap<K, M, H, E, A> const& lhs,
F14NodeMap<K, M, H, E, A> const& rhs) {
return mapsEqual(lhs, rhs);
}
template <typename K, typename M, typename H, typename E, typename A>
bool operator!=(
F14NodeMap<K, M, H, E, A> const& lhs,
F14NodeMap<K, M, H, E, A> const& rhs) {
return !(lhs == rhs);
}
template <
typename Key,
typename Mapped,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<std::pair<Key const, Mapped>>>
class F14VectorMap
: public f14::detail::F14BasicMap<f14::detail::MapPolicyWithDefaults<
f14::detail::VectorContainerPolicy,
Key,
Mapped,
Hasher,
KeyEqual,
Alloc>> {
using Policy = f14::detail::MapPolicyWithDefaults<
f14::detail::VectorContainerPolicy,
Key,
Mapped,
Hasher,
KeyEqual,
Alloc>;
using Super = f14::detail::F14BasicMap<Policy>;
public:
using typename Super::const_iterator;
using typename Super::iterator;
using typename Super::key_type;
F14VectorMap() noexcept(
f14::detail::F14Table<Policy>::kDefaultConstructIsNoexcept)
: Super{} {}
// inherit constructors
using Super::Super;
void swap(F14VectorMap& rhs) noexcept(
f14::detail::F14Table<Policy>::kSwapIsNoexcept) {
this->table_.swap(rhs.table_);
}
iterator begin() {
return this->table_.linearBegin(this->size());
}
const_iterator begin() const {
return cbegin();
}
const_iterator cbegin() const {
return this->table_.linearBegin(this->size());
}
iterator end() {
return this->table_.linearEnd();
}
const_iterator end() const {
return cend();
}
const_iterator cend() const {
return this->table_.linearEnd();
}
private:
void eraseUnderlying(typename Policy::ItemIter underlying) {
Alloc& a = this->table_.alloc();
auto values = this->table_.values_;
// destroy the value and remove the ptr from the base table
auto index = underlying.item();
std::allocator_traits<Alloc>::destroy(a, std::addressof(values[index]));
this->table_.erase(underlying);
// move the last element in values_ down and fix up the inbound index
auto tailIndex = this->size();
if (tailIndex != index) {
auto tail = this->table_.find(f14::detail::VectorContainerIndexSearch{
static_cast<uint32_t>(tailIndex)});
tail.item() = index;
auto p = std::addressof(values[index]);
folly::assume(p != nullptr);
this->table_.transfer(a, std::addressof(values[tailIndex]), p, 1);
}
}
public:
FOLLY_ALWAYS_INLINE iterator erase(const_iterator pos) {
auto index = this->table_.iterToIndex(pos);
auto underlying =
this->table_.find(f14::detail::VectorContainerIndexSearch{index});
eraseUnderlying(underlying);
return index == 0 ? end() : this->table_.indexToIter(index - 1);
}
// This form avoids ambiguity when key_type has a templated constructor
// that accepts const_iterator
FOLLY_ALWAYS_INLINE iterator erase(iterator pos) {
const_iterator cpos{pos};
return erase(cpos);
}
iterator erase(const_iterator first, const_iterator last) {
while (first != last) {
first = erase(first);
}
return first;
}
std::size_t erase(key_type const& key) {
auto underlying = this->table_.find(key);
if (underlying.atEnd()) {
return 0;
} else {
eraseUnderlying(underlying);
return 1;
}
}
};
template <typename K, typename M, typename H, typename E, typename A>
void swap(
F14VectorMap<K, M, H, E, A>& lhs,
F14VectorMap<K, M, H, E, A>& rhs) noexcept(noexcept(lhs.swap(rhs))) {
lhs.swap(rhs);
}
template <typename K, typename M, typename H, typename E, typename A>
bool operator==(
F14VectorMap<K, M, H, E, A> const& lhs,
F14VectorMap<K, M, H, E, A> const& rhs) {
return mapsEqual(lhs, rhs);
}
template <typename K, typename M, typename H, typename E, typename A>
bool operator!=(
F14VectorMap<K, M, H, E, A> const& lhs,
F14VectorMap<K, M, H, E, A> const& rhs) {
return !(lhs == rhs);
}
template <
typename Key,
typename Mapped,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<std::pair<Key const, Mapped>>>
using F14FastMap = std::conditional_t<
sizeof(std::pair<Key const, Mapped>) < 24,
F14ValueMap<Key, Mapped, Hasher, KeyEqual, Alloc>,
F14VectorMap<Key, Mapped, Hasher, KeyEqual, Alloc>>;
} // namespace folly
#endif // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
/**
* F14NodeSet, F14ValueSet, and F14VectorSet
*
* F14FastSet is a conditional typedef to F14ValueSet or F14VectorSet
*
* See F14.md
*
* @author Nathan Bronson <ngbronson@fb.com>
* @author Xiao Shi <xshi@fb.com>
*/
#include <folly/container/detail/F14Policy.h>
#include <folly/container/detail/F14Table.h>
#if !FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
#include <unordered_set>
namespace folly {
template <typename... Args>
using F14NodeSet = std::unordered_set<Args...>;
template <typename... Args>
using F14ValueSet = std::unordered_set<Args...>;
template <typename... Args>
using F14VectorSet = std::unordered_set<Args...>;
template <typename... Args>
using F14FastSet = std::unordered_set<Args...>;
} // namespace folly
#else // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
namespace folly {
namespace f14 {
namespace detail {
template <typename Policy>
class F14BasicSet {
template <
typename K,
typename T,
typename H = typename Policy::Hasher,
typename E = typename Policy::KeyEqual>
using IfIsTransparent = folly::_t<EnableIfIsTransparent<void, H, E, K, T>>;
public:
//// PUBLIC - Member types
using key_type = typename Policy::Value;
using value_type = key_type;
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using hasher = typename Policy::Hasher;
using key_equal = typename Policy::KeyEqual;
using allocator_type = typename Policy::Alloc;
using reference = value_type&;
using const_reference = value_type const&;
using pointer = typename std::allocator_traits<allocator_type>::pointer;
using const_pointer =
typename std::allocator_traits<allocator_type>::const_pointer;
using iterator = typename Policy::Iter;
using const_iterator = iterator;
//// PUBLIC - Member functions
F14BasicSet() noexcept(F14Table<Policy>::kDefaultConstructIsNoexcept)
: F14BasicSet(0) {}
explicit F14BasicSet(
std::size_t initialCapacity,
hasher const& hash = hasher{},
key_equal const& eq = key_equal{},
allocator_type const& alloc = allocator_type{})
: table_{initialCapacity, hash, eq, alloc} {}
explicit F14BasicSet(std::size_t initialCapacity, allocator_type const& alloc)
: F14BasicSet(initialCapacity, hasher{}, key_equal{}, alloc) {}
explicit F14BasicSet(
std::size_t initialCapacity,
hasher const& hash,
allocator_type const& alloc)
: F14BasicSet(initialCapacity, hash, key_equal{}, alloc) {}
explicit F14BasicSet(allocator_type const& alloc) : F14BasicSet(0, alloc) {}
template <typename InputIt>
F14BasicSet(
InputIt first,
InputIt last,
std::size_t initialCapacity = 0,
hasher const& hash = hasher{},
key_equal const& eq = key_equal{},
allocator_type const& alloc = allocator_type{})
: table_{initialCapacity, hash, eq, alloc} {
initialInsert(first, last, initialCapacity);
}
template <typename InputIt>
F14BasicSet(
InputIt first,
InputIt last,
std::size_t initialCapacity,
allocator_type const& alloc)
: table_{initialCapacity, hasher{}, key_equal{}, alloc} {
initialInsert(first, last, initialCapacity);
}
template <typename InputIt>
F14BasicSet(
InputIt first,
InputIt last,
std::size_t initialCapacity,
hasher const& hash,
allocator_type const& alloc)
: table_{initialCapacity, hash, key_equal{}, alloc} {
initialInsert(first, last, initialCapacity);
}
F14BasicSet(F14BasicSet const& rhs) = default;
F14BasicSet(F14BasicSet const& rhs, allocator_type const& alloc)
: table_(rhs.table_, alloc) {}
F14BasicSet(F14BasicSet&& rhs) = default;
F14BasicSet(F14BasicSet&& rhs, allocator_type const& alloc) noexcept(
F14Table<Policy>::kAllocIsAlwaysEqual)
: table_{std::move(rhs.table_), alloc} {}
F14BasicSet(
std::initializer_list<value_type> init,
std::size_t initialCapacity = 0,
hasher const& hash = hasher{},
key_equal const& eq = key_equal{},
allocator_type const& alloc = allocator_type{})
: table_{initialCapacity, hash, eq, alloc} {
initialInsert(init.begin(), init.end(), initialCapacity);
}
F14BasicSet(
std::initializer_list<value_type> init,
std::size_t initialCapacity,
allocator_type const& alloc)
: table_{initialCapacity, hasher{}, key_equal{}, alloc} {
initialInsert(init.begin(), init.end(), initialCapacity);
}
F14BasicSet(
std::initializer_list<value_type> init,
std::size_t initialCapacity,
hasher const& hash,
allocator_type const& alloc)
: table_{initialCapacity, hash, key_equal{}, alloc} {
initialInsert(init.begin(), init.end(), initialCapacity);
}
F14BasicSet& operator=(F14BasicSet const&) = default;
F14BasicSet& operator=(F14BasicSet&&) = default;
allocator_type get_allocator() const noexcept {
return table_.alloc();
}
//// PUBLIC - Iterators
iterator begin() noexcept {
return cbegin();
}
const_iterator begin() const noexcept {
return cbegin();
}
const_iterator cbegin() const noexcept {
return table_.makeIter(table_.begin());
}
iterator end() noexcept {
return cend();
}
const_iterator end() const noexcept {
return cend();
}
const_iterator cend() const noexcept {
return table_.makeIter(table_.end());
}
//// PUBLIC - Capacity
bool empty() const noexcept {
return table_.empty();
}
std::size_t size() const noexcept {
return table_.size();
}
std::size_t max_size() const noexcept {
return table_.max_size();
}
F14TableStats computeStats() const {
return table_.computeStats();
}
//// PUBLIC - Modifiers
void clear() noexcept {
table_.clear();
}
std::pair<iterator, bool> insert(value_type const& value) {
auto rv = table_.tryEmplaceValue(value, value);
return std::make_pair(table_.makeIter(rv.first), rv.second);
}
std::pair<iterator, bool> insert(value_type&& value) {
// tryEmplaceValue guarantees not to touch the first arg after touching
// any others, so although this looks fishy it is okay
value_type const& searchKey = value;
auto rv = table_.tryEmplaceValue(searchKey, std::move(value));
return std::make_pair(table_.makeIter(rv.first), rv.second);
}
// std::unordered_set's hinted insertion API is misleading. No
// implementation I've seen actually uses the hint. Code restructuring
// by the caller to use the hinted API is at best unnecessary, and at
// worst a pessimization. It is used, however, so we provide it.
iterator insert(const_iterator /*hint*/, value_type const& value) {
return insert(value).first;
}
iterator insert(const_iterator /*hint*/, value_type&& value) {
return insert(std::move(value)).first;
}
private:
template <class InputIt>
FOLLY_ALWAYS_INLINE void
bulkInsert(InputIt first, InputIt last, bool autoReserve) {
if (autoReserve) {
table_.reserveForInsert(std::distance(first, last));
}
while (first != last) {
insert(*first);
++first;
}
}
template <class InputIt>
void initialInsert(InputIt first, InputIt last, std::size_t initialCapacity) {
assert(empty() && bucket_count() >= initialCapacity);
// It's possible that there are a lot of duplicates in first..last and
// so we will oversize ourselves. The common case, however, is that
// we can avoid a lot of rehashing if we pre-expand. The behavior
// is easy to disable at a particular call site by asking for an
// initialCapacity of 1.
bool autoReserve =
std::is_same<
typename std::iterator_traits<InputIt>::iterator_category,
std::random_access_iterator_tag>::value &&
initialCapacity == 0;
bulkInsert(first, last, autoReserve);
}
public:
template <class InputIt>
void insert(InputIt first, InputIt last) {
// Bulk reserve is a heuristic choice, so it can backfire. We restrict
// ourselves to situations that mimic bulk construction without an
// explicit initialCapacity.
bool autoReserve =
std::is_same<
typename std::iterator_traits<InputIt>::iterator_category,
std::random_access_iterator_tag>::value &&
bucket_count() == 0;
bulkInsert(first, last, autoReserve);
}
void insert(std::initializer_list<value_type> ilist) {
insert(ilist.begin(), ilist.end());
}
// node API doesn't make sense for value set, which stores values inline
// emplace won't actually be more efficient than insert until we
// add heterogeneous lookup, but it is still useful now from a code
// compactness standpoint.
template <class... Args>
std::pair<iterator, bool> emplace(Args&&... args) {
key_type key(std::forward<Args>(args)...);
return insert(std::move(key));
}
template <class... Args>
iterator emplace_hint(const_iterator /*hint*/, Args&&... args) {
return emplace(std::forward<Args>(args)...).first;
}
FOLLY_ALWAYS_INLINE iterator erase(const_iterator pos) {
// If we are inlined then gcc and clang can optimize away all of the
// work of ++pos if the caller discards it.
table_.erase(table_.unwrapIter(pos));
return ++pos;
}
iterator erase(const_iterator first, const_iterator last) {
while (first != last) {
table_.erase(table_.unwrapIter(first));
++first;
}
return first;
}
size_type erase(key_type const& key) {
return table_.erase(key);
}
//// PUBLIC - Lookup
FOLLY_ALWAYS_INLINE std::size_t count(key_type const& key) const {
return table_.find(key).atEnd() ? 0 : 1;
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, size_type> count(K const& key) const {
return table_.find(key).atEnd() ? 0 : 1;
}
F14HashToken prehash(key_type const& key) const {
return table_.prehash(key);
}
template <typename K>
IfIsTransparent<K, F14HashToken> prehash(K const& key) const {
return table_.prehash(key);
}
FOLLY_ALWAYS_INLINE iterator find(key_type const& key) {
return const_cast<F14BasicSet const*>(this)->find(key);
}
FOLLY_ALWAYS_INLINE const_iterator find(key_type const& key) const {
return table_.makeIter(table_.find(key));
}
FOLLY_ALWAYS_INLINE iterator
find(F14HashToken const& token, key_type const& key) {
return const_cast<F14BasicSet const*>(this)->find(token, key);
}
FOLLY_ALWAYS_INLINE const_iterator
find(F14HashToken const& token, key_type const& key) const {
return table_.makeIter(table_.find(token, key));
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, iterator> find(K const& key) {
return const_cast<F14BasicSet const*>(this)->find(key);
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, const_iterator> find(
K const& key) const {
return table_.makeIter(table_.find(key));
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, iterator> find(
F14HashToken const& token,
K const& key) {
return const_cast<F14BasicSet const*>(this)->find(token, key);
}
template <typename K>
FOLLY_ALWAYS_INLINE IfIsTransparent<K, const_iterator> find(
F14HashToken const& token,
K const& key) const {
return table_.makeIter(table_.find(token, key));
}
std::pair<iterator, iterator> equal_range(key_type const& key) {
return equal_range(*this, key);
}
std::pair<const_iterator, const_iterator> equal_range(
key_type const& key) const {
return equal_range(*this, key);
}
template <typename K>
IfIsTransparent<K, std::pair<iterator, iterator>> equal_range(K const& key) {
return equal_range(*this, key);
}
template <typename K>
IfIsTransparent<K, std::pair<const_iterator, const_iterator>> equal_range(
K const& key) const {
return equal_range(*this, key);
}
//// PUBLIC - Bucket interface
std::size_t bucket_count() const noexcept {
return table_.bucket_count();
}
std::size_t max_bucket_count() const noexcept {
return table_.max_bucket_count();
}
//// PUBLIC - Hash policy
float load_factor() const noexcept {
return table_.load_factor();
}
float max_load_factor() const noexcept {
return table_.max_load_factor();
}
void max_load_factor(float v) {
table_.max_load_factor(v);
}
void rehash(std::size_t bucketCapacity) {
// The standard's rehash() requires understanding the max load factor,
// which is easy to get wrong. Since we don't actually allow adjustment
// of max_load_factor, there is no difference between rehash and reserve.
reserve(bucketCapacity);
}
void reserve(std::size_t capacity) {
table_.reserve(capacity);
}
//// PUBLIC - Observers
hasher hash_function() const {
return table_.hasher();
}
key_equal key_eq() const {
return table_.keyEqual();
}
private:
template <typename Self, typename K>
static auto equal_range(Self& self, K const& key) {
auto first = self.find(key);
auto last = first;
if (last != self.end()) {
++last;
}
return std::make_pair(first, last);
}
protected:
F14Table<Policy> table_;
};
template <typename S>
bool setsEqual(S const& lhs, S const& rhs) {
if (lhs.size() != rhs.size()) {
return false;
}
for (auto& k : lhs) {
auto iter = rhs.find(k);
if (iter == rhs.end()) {
return false;
}
if (!std::is_same<
typename S::key_equal,
std::equal_to<typename S::value_type>>::value) {
// spec says we compare keys with == as well as with key_eq()
if (!(k == *iter)) {
return false;
}
}
}
return true;
}
} // namespace detail
} // namespace f14
template <
typename Key,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<Key>>
class F14ValueSet
: public f14::detail::F14BasicSet<f14::detail::SetPolicyWithDefaults<
f14::detail::ValueContainerPolicy,
Key,
Hasher,
KeyEqual,
Alloc>> {
using Policy = f14::detail::SetPolicyWithDefaults<
f14::detail::ValueContainerPolicy,
Key,
Hasher,
KeyEqual,
Alloc>;
using Super = f14::detail::F14BasicSet<Policy>;
public:
F14ValueSet() noexcept(
f14::detail::F14Table<Policy>::kDefaultConstructIsNoexcept)
: Super{} {}
using Super::Super;
void swap(F14ValueSet& rhs) noexcept(
f14::detail::F14Table<Policy>::kSwapIsNoexcept) {
this->table_.swap(rhs.table_);
}
};
template <typename K, typename H, typename E, typename A>
void swap(F14ValueSet<K, H, E, A>& lhs, F14ValueSet<K, H, E, A>& rhs) noexcept(
noexcept(lhs.swap(rhs))) {
lhs.swap(rhs);
}
template <typename K, typename H, typename E, typename A>
bool operator==(
F14ValueSet<K, H, E, A> const& lhs,
F14ValueSet<K, H, E, A> const& rhs) {
return setsEqual(lhs, rhs);
}
template <typename K, typename H, typename E, typename A>
bool operator!=(
F14ValueSet<K, H, E, A> const& lhs,
F14ValueSet<K, H, E, A> const& rhs) {
return !(lhs == rhs);
}
template <
typename Key,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<Key>>
class F14NodeSet
: public f14::detail::F14BasicSet<f14::detail::SetPolicyWithDefaults<
f14::detail::NodeContainerPolicy,
Key,
Hasher,
KeyEqual,
Alloc>> {
using Policy = f14::detail::SetPolicyWithDefaults<
f14::detail::NodeContainerPolicy,
Key,
Hasher,
KeyEqual,
Alloc>;
using Super = f14::detail::F14BasicSet<Policy>;
public:
F14NodeSet() noexcept(
f14::detail::F14Table<Policy>::kDefaultConstructIsNoexcept)
: Super{} {}
using Super::Super;
void swap(F14NodeSet& rhs) noexcept(
f14::detail::F14Table<Policy>::kSwapIsNoexcept) {
this->table_.swap(rhs.table_);
}
};
template <typename K, typename H, typename E, typename A>
void swap(F14NodeSet<K, H, E, A>& lhs, F14NodeSet<K, H, E, A>& rhs) noexcept(
noexcept(lhs.swap(rhs))) {
lhs.swap(rhs);
}
template <typename K, typename H, typename E, typename A>
bool operator==(
F14NodeSet<K, H, E, A> const& lhs,
F14NodeSet<K, H, E, A> const& rhs) {
return setsEqual(lhs, rhs);
}
template <typename K, typename H, typename E, typename A>
bool operator!=(
F14NodeSet<K, H, E, A> const& lhs,
F14NodeSet<K, H, E, A> const& rhs) {
return !(lhs == rhs);
}
template <
typename Key,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<Key>>
class F14VectorSet
: public f14::detail::F14BasicSet<f14::detail::SetPolicyWithDefaults<
f14::detail::VectorContainerPolicy,
Key,
Hasher,
KeyEqual,
Alloc>> {
using Policy = f14::detail::SetPolicyWithDefaults<
f14::detail::VectorContainerPolicy,
Key,
Hasher,
KeyEqual,
Alloc>;
using Super = f14::detail::F14BasicSet<Policy>;
public:
using typename Super::const_iterator;
using typename Super::iterator;
using typename Super::key_type;
F14VectorSet() noexcept(
f14::detail::F14Table<Policy>::kDefaultConstructIsNoexcept)
: Super{} {}
// inherit constructors
using Super::Super;
void swap(F14VectorSet& rhs) noexcept(
f14::detail::F14Table<Policy>::kSwapIsNoexcept) {
this->table_.swap(rhs.table_);
}
iterator begin() {
return cbegin();
}
const_iterator begin() const {
return cbegin();
}
const_iterator cbegin() const {
return this->table_.linearBegin(this->size());
}
iterator end() {
return cend();
}
const_iterator end() const {
return cend();
}
const_iterator cend() const {
return this->table_.linearEnd();
}
private:
void eraseUnderlying(typename Policy::ItemIter underlying) {
Alloc& a = this->table_.alloc();
auto values = this->table_.values_;
// destroy the value and remove the ptr from the base table
auto index = underlying.item();
std::allocator_traits<Alloc>::destroy(a, std::addressof(values[index]));
this->table_.erase(underlying);
// move the last element in values_ down and fix up the inbound index
auto tailIndex = this->size();
if (tailIndex != index) {
auto tail = this->table_.find(f14::detail::VectorContainerIndexSearch{
static_cast<uint32_t>(tailIndex)});
tail.item() = index;
auto p = std::addressof(values[index]);
folly::assume(p != nullptr);
std::allocator_traits<Alloc>::construct(
a, p, std::move(values[tailIndex]));
std::allocator_traits<Alloc>::destroy(
a, std::addressof(values[tailIndex]));
}
}
public:
FOLLY_ALWAYS_INLINE iterator erase(const_iterator pos) {
auto underlying = this->table_.find(
f14::detail::VectorContainerIndexSearch{this->table_.iterToIndex(pos)});
eraseUnderlying(underlying);
return ++pos;
}
iterator erase(const_iterator first, const_iterator last) {
while (first != last) {
first = erase(first);
}
return first;
}
std::size_t erase(key_type const& key) {
auto underlying = this->table_.find(key);
if (underlying.atEnd()) {
return 0;
} else {
eraseUnderlying(underlying);
return 1;
}
}
};
template <typename K, typename H, typename E, typename A>
void swap(
F14VectorSet<K, H, E, A>& lhs,
F14VectorSet<K, H, E, A>& rhs) noexcept(noexcept(lhs.swap(rhs))) {
lhs.swap(rhs);
}
template <typename K, typename H, typename E, typename A>
bool operator==(
F14VectorSet<K, H, E, A> const& lhs,
F14VectorSet<K, H, E, A> const& rhs) {
return setsEqual(lhs, rhs);
}
template <typename K, typename H, typename E, typename A>
bool operator!=(
F14VectorSet<K, H, E, A> const& lhs,
F14VectorSet<K, H, E, A> const& rhs) {
return !(lhs == rhs);
}
template <
typename Key,
typename Hasher = f14::detail::DefaultHasher<Key>,
typename KeyEqual = f14::detail::DefaultKeyEqual<Key>,
typename Alloc = f14::detail::DefaultAlloc<Key>>
using F14FastSet = std::conditional_t<
sizeof(Key) < 24,
F14ValueSet<Key, Hasher, KeyEqual, Alloc>,
F14VectorSet<Key, Hasher, KeyEqual, Alloc>>;
} // namespace folly
#endif // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <type_traits>
#include <folly/Portability.h>
namespace folly {
namespace f14 {
namespace detail {
template <typename Ptr>
using NonConstPtr = typename std::pointer_traits<Ptr>::template rebind<
std::remove_const_t<typename std::pointer_traits<Ptr>::element_type>>;
//////// TaggedPtr
template <typename Ptr>
class TaggedPtr {
public:
TaggedPtr() = default;
TaggedPtr(TaggedPtr const&) = default;
TaggedPtr(TaggedPtr&&) = default;
TaggedPtr& operator=(TaggedPtr const&) = default;
TaggedPtr& operator=(TaggedPtr&&) = default;
TaggedPtr(Ptr p, uint8_t e) noexcept : ptr_{p}, extra_{e} {}
/* implicit */ TaggedPtr(std::nullptr_t) noexcept {}
TaggedPtr& operator=(std::nullptr_t) noexcept {
ptr_ = nullptr;
extra_ = 0;
return *this;
}
typename std::pointer_traits<Ptr>::element_type& operator*() const noexcept {
return *ptr_;
}
typename std::pointer_traits<Ptr>::element_type* operator->() const noexcept {
return std::addressof(*ptr_);
}
Ptr ptr() const {
return ptr_;
}
void setPtr(Ptr p) {
ptr_ = p;
}
uint8_t extra() const {
return extra_;
}
void setExtra(uint8_t e) {
extra_ = e;
}
bool operator==(TaggedPtr const& rhs) const noexcept {
return ptr_ == rhs.ptr_ && extra_ == rhs.extra_;
}
bool operator!=(TaggedPtr const& rhs) const noexcept {
return !(*this == rhs);
}
bool operator<(TaggedPtr const& rhs) const noexcept {
return ptr_ != rhs.ptr_ ? ptr_ < rhs.ptr_ : extra_ < rhs.extra_;
}
bool operator==(std::nullptr_t) const noexcept {
return ptr_ == nullptr;
}
bool operator!=(std::nullptr_t) const noexcept {
return !(*this == nullptr);
}
private:
Ptr ptr_{};
uint8_t extra_{};
};
#if FOLLY_X64 || FOLLY_AARCH64
template <typename T>
class TaggedPtr<T*> {
public:
TaggedPtr() = default;
TaggedPtr(TaggedPtr const&) = default;
TaggedPtr(TaggedPtr&&) = default;
TaggedPtr& operator=(TaggedPtr const&) = default;
TaggedPtr& operator=(TaggedPtr&&) = default;
TaggedPtr(T* p, uint8_t e) noexcept
: raw_{(reinterpret_cast<uintptr_t>(p) << 8) | e} {
assert(ptr() == p);
}
/* implicit */ TaggedPtr(std::nullptr_t) noexcept : raw_{0} {}
TaggedPtr& operator=(std::nullptr_t) noexcept {
raw_ = 0;
return *this;
}
T& operator*() const noexcept {
return *ptr();
}
T* operator->() const noexcept {
return std::addressof(*ptr());
}
T* ptr() const {
return reinterpret_cast<T*>(raw_ >> 8);
}
void setPtr(T* p) {
*this = TaggedPtr{p, extra()};
assert(ptr() == p);
}
uint8_t extra() const {
return static_cast<uint8_t>(raw_);
}
void setExtra(uint8_t e) {
*this = TaggedPtr{ptr(), e};
}
bool operator==(TaggedPtr const& rhs) const {
return raw_ == rhs.raw_;
}
bool operator!=(TaggedPtr const& rhs) const {
return !(*this == rhs);
}
bool operator<(TaggedPtr const& rhs) const noexcept {
return raw_ < rhs.raw_;
}
bool operator==(std::nullptr_t) const noexcept {
return raw_ == 0;
}
bool operator!=(std::nullptr_t) const noexcept {
return !(*this == nullptr);
}
private:
// TODO: verify no high-bit extension needed on aarch64
uintptr_t raw_;
};
#endif // FOLLY_X64 || FOLLY_AARCH64
} // namespace detail
} // namespace f14
} // namespace folly
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <folly/container/detail/F14Table.h>
#include <folly/hash/Hash.h>
#if FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
namespace folly {
namespace f14 {
namespace detail {
template <typename KeyType, typename MappedType>
using MapValueType = std::pair<KeyType const, MappedType>;
template <typename KeyType, typename MappedTypeOrVoid>
using SetOrMapValueType = std::conditional_t<
std::is_same<MappedTypeOrVoid, void>::value,
KeyType,
MapValueType<KeyType, MappedTypeOrVoid>>;
// Policy provides the functionality of hasher, key_equal, and
// allocator_type. In addition, it can add indirection to the values
// contained in the base table by defining a non-trivial value() method.
//
// To facilitate stateful implementations it is guaranteed that there
// will be a 1:1 relationship between BaseTable and Policy instance:
// policies will only be copied when their owning table is copied, and
// they will only be moved when their owning table is moved.
//
// Key equality will have the user-supplied search key as its first
// argument and the table contents as its second. Heterogeneous lookup
// should be handled on the first argument.
//
// Item is the data stored inline in the hash table's chunks. The policy
// controls how this is mapped to the corresponding Value.
//
// The policies defined in this file work for either set or map types.
// Most of the functionality is identical. A few methods detect the
// collection type by checking to see if MappedType is void, and then use
// SFINAE to select the appropriate implementation.
template <
typename KeyType,
typename MappedTypeOrVoid,
typename HasherOrVoid,
typename KeyEqualOrVoid,
typename AllocOrVoid,
typename ItemType>
struct BasePolicy
: std::tuple<
Defaulted<HasherOrVoid, DefaultHasher<KeyType>>,
Defaulted<KeyEqualOrVoid, DefaultKeyEqual<KeyType>>,
Defaulted<
AllocOrVoid,
DefaultAlloc<SetOrMapValueType<KeyType, MappedTypeOrVoid>>>> {
using Key = KeyType;
using Mapped = MappedTypeOrVoid;
using Value = SetOrMapValueType<Key, Mapped>;
using Item = ItemType;
using Hasher = Defaulted<HasherOrVoid, DefaultHasher<Key>>;
using KeyEqual = Defaulted<KeyEqualOrVoid, DefaultKeyEqual<Key>>;
using Alloc = Defaulted<AllocOrVoid, DefaultAlloc<Value>>;
using AllocTraits = std::allocator_traits<Alloc>;
using InternalSizeType = std::size_t;
using Super = std::tuple<Hasher, KeyEqual, Alloc>;
static constexpr bool isAvalanchingHasher() {
return IsAvalanchingHasher<Hasher, Key>::value;
}
using Chunk = SSE2Chunk<Item>;
using ChunkPtr = typename std::pointer_traits<
typename AllocTraits::pointer>::template rebind<Chunk>;
using ItemIter = F14ItemIter<ChunkPtr>;
static constexpr bool kIsMap = !std::is_same<Key, Value>::value;
static_assert(
kIsMap == !std::is_void<MappedTypeOrVoid>::value,
"Assumption for the kIsMap check violated.");
static_assert(
std::is_same<typename AllocTraits::value_type, Value>::value,
"wrong allocator value_type");
BasePolicy(Hasher const& hasher, KeyEqual const& keyEqual, Alloc const& alloc)
: Super{hasher, keyEqual, alloc} {}
BasePolicy(BasePolicy const& rhs)
: Super{rhs.hasher(),
rhs.keyEqual(),
AllocTraits::select_on_container_copy_construction(rhs.alloc())} {
}
BasePolicy(BasePolicy const& rhs, Alloc const& alloc)
: Super{rhs.hasher(), rhs.keyEqual(), alloc} {}
BasePolicy(BasePolicy&& rhs) noexcept
: Super{std::move(rhs.hasher()),
std::move(rhs.keyEqual()),
std::move(rhs.alloc())} {}
BasePolicy(BasePolicy&& rhs, Alloc const& alloc) noexcept
: Super{std::move(rhs.hasher()), std::move(rhs.keyEqual()), alloc} {}
BasePolicy& operator=(BasePolicy const& rhs) {
hasher() = rhs.hasher();
keyEqual() = rhs.keyEqual();
if (AllocTraits::propagate_on_container_copy_assignment::value) {
alloc() = rhs.alloc();
}
return *this;
}
BasePolicy& operator=(BasePolicy&& rhs) noexcept {
hasher() = std::move(rhs.hasher());
keyEqual() = std::move(rhs.keyEqual());
if (AllocTraits::propagate_on_container_move_assignment::value) {
alloc() = std::move(rhs.alloc());
}
return *this;
}
void swapBasePolicy(BasePolicy& rhs) {
using std::swap;
swap(hasher(), rhs.hasher());
swap(keyEqual(), rhs.keyEqual());
if (AllocTraits::propagate_on_container_swap::value) {
swap(alloc(), rhs.alloc());
}
}
Hasher& hasher() {
return std::get<0>(*this);
}
Hasher const& hasher() const {
return std::get<0>(*this);
}
KeyEqual& keyEqual() {
return std::get<1>(*this);
}
KeyEqual const& keyEqual() const {
return std::get<1>(*this);
}
Alloc& alloc() {
return std::get<2>(*this);
}
Alloc const& alloc() const {
return std::get<2>(*this);
}
template <typename K>
std::size_t computeKeyHash(K const& key) const {
static_assert(
isAvalanchingHasher() == IsAvalanchingHasher<Hasher, K>::value, "");
return hasher()(key);
}
Key const& keyForValue(Key const& v) const {
return v;
}
Key const& keyForValue(
std::pair<Key const, std::conditional_t<kIsMap, Mapped, bool>> const& p)
const {
return p.first;
}
template <typename P>
bool
beforeCopy(std::size_t /*size*/, std::size_t /*capacity*/, P const& /*rhs*/) {
return false;
}
template <typename P>
void afterCopy(
bool /*undoState*/,
bool /*success*/,
std::size_t /*size*/,
std::size_t /*capacity*/,
P const& /*rhs*/) {}
bool beforeRehash(
std::size_t /*size*/,
std::size_t /*oldCapacity*/,
std::size_t /*newCapacity*/) {
return false;
}
void afterRehash(
bool /*undoState*/,
bool /*success*/,
std::size_t /*size*/,
std::size_t /*oldCapacity*/,
std::size_t /*newCapacity*/) {}
void beforeClear(std::size_t /*size*/, std::size_t /*capacity*/) {}
void afterClear(std::size_t /*capacity*/) {}
void beforeReset(std::size_t /*size*/, std::size_t /*capacity*/) {}
void afterReset() {}
void prefetchValue(Item const&) {
// Subclasses that don't override this method should disable
// prefetching with prefetchBeforeRehash(), prefetchBeforeCopy(),
// and prefetchBeforeDestroy(), because neither gcc nor clang can
// figure out that DenseMaskIter with an empty body can be elided.
assert(false);
}
};
// BaseIter is a convenience for concrete set and map implementations
template <typename ValuePtr, typename Item>
class BaseIter : public std::iterator<
std::forward_iterator_tag,
std::remove_const_t<
typename std::pointer_traits<ValuePtr>::element_type>,
std::ptrdiff_t,
ValuePtr,
decltype(*std::declval<ValuePtr>())> {
protected:
using Chunk = SSE2Chunk<Item>;
using ChunkPtr =
typename std::pointer_traits<ValuePtr>::template rebind<Chunk>;
using ItemIter = F14ItemIter<ChunkPtr>;
using ValueConstPtr = typename std::pointer_traits<ValuePtr>::template rebind<
std::add_const_t<typename std::pointer_traits<ValuePtr>::element_type>>;
};
//////// ValueContainer
template <
typename Key,
typename Mapped,
typename HasherOrVoid,
typename KeyEqualOrVoid,
typename AllocOrVoid>
class ValueContainerPolicy;
template <typename ValuePtr>
using ValueContainerIteratorBase = BaseIter<
ValuePtr,
std::remove_const_t<typename std::pointer_traits<ValuePtr>::element_type>>;
template <typename ValuePtr>
class ValueContainerIterator : public ValueContainerIteratorBase<ValuePtr> {
using Super = ValueContainerIteratorBase<ValuePtr>;
using typename Super::ItemIter;
using typename Super::ValueConstPtr;
public:
using typename Super::pointer;
using typename Super::reference;
using typename Super::value_type;
ValueContainerIterator() = default;
ValueContainerIterator(ValueContainerIterator const&) = default;
ValueContainerIterator(ValueContainerIterator&&) = default;
ValueContainerIterator& operator=(ValueContainerIterator const&) = default;
ValueContainerIterator& operator=(ValueContainerIterator&&) = default;
~ValueContainerIterator() = default;
/*implicit*/ operator ValueContainerIterator<ValueConstPtr>() const {
return ValueContainerIterator<ValueConstPtr>{underlying_};
}
reference operator*() const {
return underlying_.item();
}
pointer operator->() const {
return std::pointer_traits<pointer>::pointer_to(**this);
}
ValueContainerIterator& operator++() {
underlying_.advance();
return *this;
}
ValueContainerIterator operator++(int) {
auto cur = *this;
++*this;
return cur;
}
bool operator==(ValueContainerIterator<ValueConstPtr> const& rhs) const {
return underlying_ == rhs.underlying_;
}
bool operator!=(ValueContainerIterator<ValueConstPtr> const& rhs) const {
return !(*this == rhs);
}
private:
ItemIter underlying_;
explicit ValueContainerIterator(ItemIter const& underlying)
: underlying_{underlying} {}
template <typename K, typename M, typename H, typename E, typename A>
friend class ValueContainerPolicy;
template <typename P>
friend class ValueContainerIterator;
};
template <
typename Key,
typename MappedTypeOrVoid,
typename HasherOrVoid,
typename KeyEqualOrVoid,
typename AllocOrVoid>
class ValueContainerPolicy : public BasePolicy<
Key,
MappedTypeOrVoid,
HasherOrVoid,
KeyEqualOrVoid,
AllocOrVoid,
SetOrMapValueType<Key, MappedTypeOrVoid>> {
public:
using Super = BasePolicy<
Key,
MappedTypeOrVoid,
HasherOrVoid,
KeyEqualOrVoid,
AllocOrVoid,
SetOrMapValueType<Key, MappedTypeOrVoid>>;
using typename Super::Alloc;
using typename Super::Item;
using typename Super::ItemIter;
using typename Super::Value;
private:
using Super::kIsMap;
using typename Super::AllocTraits;
public:
using ConstIter = ValueContainerIterator<typename AllocTraits::const_pointer>;
using Iter = std::conditional_t<
kIsMap,
ValueContainerIterator<typename AllocTraits::pointer>,
ConstIter>;
//////// F14Table policy
static constexpr bool prefetchBeforeRehash() {
return false;
}
static constexpr bool prefetchBeforeCopy() {
return false;
}
static constexpr bool prefetchBeforeDestroy() {
return false;
}
static constexpr bool destroyItemOnClear() {
return !std::is_trivially_destructible<Item>::value ||
!std::is_same<Alloc, std::allocator<Value>>::value;
}
// inherit constructors
using Super::Super;
void swapPolicy(ValueContainerPolicy& rhs) {
this->swapBasePolicy(rhs);
}
using Super::keyForValue;
static_assert(
std::is_same<Item, Value>::value,
"Item and Value should be the same type for ValueContainerPolicy.");
std::size_t computeItemHash(Item const& item) const {
return this->computeKeyHash(keyForValue(item));
}
template <typename K>
bool keyMatchesItem(K const& key, Item const& item) const {
return this->keyEqual()(key, keyForValue(item));
}
Value const& valueAtItemForCopy(Item const& item) const {
return item;
}
template <typename... Args>
void
constructValueAtItem(std::size_t /*size*/, Item* itemAddr, Args&&... args) {
Alloc& a = this->alloc();
folly::assume(itemAddr != nullptr);
AllocTraits::construct(a, itemAddr, std::forward<Args>(args)...);
}
template <typename T>
std::enable_if_t<std::is_nothrow_move_constructible<T>::value>
complainUnlessNothrowMove() {}
template <typename T>
FOLLY_DEPRECATED(
"use F14NodeMap/Set or mark key and mapped type move constructor nothrow")
std::enable_if_t<!std::is_nothrow_move_constructible<
T>::value> complainUnlessNothrowMove() {}
template <typename Dummy = int>
void moveItemDuringRehash(
Item* itemAddr,
Item& src,
typename std::enable_if_t<kIsMap, Dummy> = 0) {
complainUnlessNothrowMove<Key>();
complainUnlessNothrowMove<MappedTypeOrVoid>();
// map's choice of pair<K const,T> as value_type is unfortunate,
// because it means we either need a proxy iterator, a pointless key
// copy when moving items during rehash, or some sort of UB hack.
// See https://fb.quip.com/kKieAEtg0Pao for much more discussion of
// the possibilities.
//
// This code implements the hack.
// The standard describes laundering only as a solution for changes
// to const fields caused by the start of a new object lifetime
// (destroy and then placement new in the same location), but it
// seems highly likely that it will also cause the compiler to drop
// the assumptions that our UB const_cast violates.
constructValueAtItem(
0,
itemAddr,
std::move(const_cast<Key&>(src.first)),
std::move(src.second));
if (destroyItemOnClear()) {
destroyItem(*folly::launder(std::addressof(src)));
}
}
template <typename Dummy = int>
void moveItemDuringRehash(
Item* itemAddr,
Item& src,
typename std::enable_if_t<!kIsMap, Dummy> = 0) {
complainUnlessNothrowMove<Item>();
constructValueAtItem(0, itemAddr, std::move(src));
if (destroyItemOnClear()) {
destroyItem(src);
}
}
void destroyItem(Item& item) {
Alloc& a = this->alloc();
AllocTraits::destroy(a, std::addressof(item));
}
std::size_t indirectBytesUsed(
std::size_t /*size*/,
std::size_t /*capacity*/,
ItemIter /*underlying*/) const {
return 0;
}
//////// F14BasicMap/Set policy
Iter makeIter(ItemIter const& underlying) const {
return Iter{underlying};
}
ConstIter makeConstIter(ItemIter const& underlying) const {
return ConstIter{underlying};
}
ItemIter const& unwrapIter(ConstIter const& iter) const {
return iter.underlying_;
}
};
//////// NodeContainer
template <
typename Key,
typename Mapped,
typename HasherOrVoid,
typename KeyEqualOrVoid,
typename AllocOrVoid>
class NodeContainerPolicy;
template <typename ValuePtr>
class NodeContainerIterator : public BaseIter<ValuePtr, NonConstPtr<ValuePtr>> {
using Super = BaseIter<ValuePtr, NonConstPtr<ValuePtr>>;
using typename Super::ItemIter;
using typename Super::ValueConstPtr;
public:
using typename Super::pointer;
using typename Super::reference;
using typename Super::value_type;
NodeContainerIterator() = default;
NodeContainerIterator(NodeContainerIterator const&) = default;
NodeContainerIterator(NodeContainerIterator&&) = default;
NodeContainerIterator& operator=(NodeContainerIterator const&) = default;
NodeContainerIterator& operator=(NodeContainerIterator&&) = default;
~NodeContainerIterator() = default;
/*implicit*/ operator NodeContainerIterator<ValueConstPtr>() const {
return NodeContainerIterator<ValueConstPtr>{underlying_};
}
reference operator*() const {
return *underlying_.item();
}
pointer operator->() const {
return std::pointer_traits<pointer>::pointer_to(**this);
}
NodeContainerIterator& operator++() {
underlying_.advance();
return *this;
}
NodeContainerIterator operator++(int) {
auto cur = *this;
++*this;
return cur;
}
bool operator==(NodeContainerIterator<ValueConstPtr> const& rhs) const {
return underlying_ == rhs.underlying_;
}
bool operator!=(NodeContainerIterator<ValueConstPtr> const& rhs) const {
return !(*this == rhs);
}
private:
ItemIter underlying_;
explicit NodeContainerIterator(ItemIter const& underlying)
: underlying_{underlying} {}
template <typename K, typename M, typename H, typename E, typename A>
friend class NodeContainerPolicy;
template <typename P>
friend class NodeContainerIterator;
};
template <
typename Key,
typename MappedTypeOrVoid,
typename HasherOrVoid,
typename KeyEqualOrVoid,
typename AllocOrVoid>
class NodeContainerPolicy
: public BasePolicy<
Key,
MappedTypeOrVoid,
HasherOrVoid,
KeyEqualOrVoid,
AllocOrVoid,
typename std::allocator_traits<Defaulted<
AllocOrVoid,
DefaultAlloc<std::conditional_t<
std::is_void<MappedTypeOrVoid>::value,
Key,
MapValueType<Key, MappedTypeOrVoid>>>>>::pointer> {
public:
using Super = BasePolicy<
Key,
MappedTypeOrVoid,
HasherOrVoid,
KeyEqualOrVoid,
AllocOrVoid,
typename std::allocator_traits<Defaulted<
AllocOrVoid,
DefaultAlloc<std::conditional_t<
std::is_void<MappedTypeOrVoid>::value,
Key,
MapValueType<Key, MappedTypeOrVoid>>>>>::pointer>;
using typename Super::Alloc;
using typename Super::Item;
using typename Super::ItemIter;
using typename Super::Value;
private:
using Super::kIsMap;
using typename Super::AllocTraits;
public:
using ConstIter = NodeContainerIterator<typename AllocTraits::const_pointer>;
using Iter = std::conditional_t<
kIsMap,
NodeContainerIterator<typename AllocTraits::pointer>,
ConstIter>;
//////// F14Table policy
static constexpr bool prefetchBeforeRehash() {
return true;
}
static constexpr bool prefetchBeforeCopy() {
return true;
}
static constexpr bool prefetchBeforeDestroy() {
return !std::is_trivially_destructible<Value>::value;
}
static constexpr bool destroyItemOnClear() {
return true;
}
// inherit constructors
using Super::Super;
void swapPolicy(NodeContainerPolicy& rhs) {
this->swapBasePolicy(rhs);
}
using Super::keyForValue;
std::size_t computeItemHash(Item const& item) const {
return this->computeKeyHash(keyForValue(*item));
}
template <typename K>
bool keyMatchesItem(K const& key, Item const& item) const {
return this->keyEqual()(key, keyForValue(*item));
}
Value const& valueAtItemForCopy(Item const& item) const {
return *item;
}
template <typename... Args>
void
constructValueAtItem(std::size_t /*size*/, Item* itemAddr, Args&&... args) {
Alloc& a = this->alloc();
folly::assume(itemAddr != nullptr);
new (itemAddr) Item{AllocTraits::allocate(a, 1)};
auto p = std::addressof(**itemAddr);
folly::assume(p != nullptr);
AllocTraits::construct(a, p, std::forward<Args>(args)...);
}
void moveItemDuringRehash(Item* itemAddr, Item& src) {
// This is basically *itemAddr = src; src = nullptr, but allowing
// for fancy pointers.
folly::assume(itemAddr != nullptr);
new (itemAddr) Item{std::move(src)};
src = nullptr;
src.~Item();
}
void prefetchValue(Item const& item) {
prefetchAddr(std::addressof(*item));
}
void destroyItem(Item& item) {
if (item != nullptr) {
Alloc& a = this->alloc();
AllocTraits::destroy(a, std::addressof(*item));
AllocTraits::deallocate(a, item, 1);
}
item.~Item();
}
std::size_t indirectBytesUsed(
std::size_t size,
std::size_t /*capacity*/,
ItemIter /*underlying*/) const {
return size * sizeof(Value);
}
//////// F14BasicMap/Set policy
Iter makeIter(ItemIter const& underlying) const {
return Iter{underlying};
}
ConstIter makeConstIter(ItemIter const& underlying) const {
return ConstIter{underlying};
}
ItemIter const& unwrapIter(ConstIter const& iter) const {
return iter.underlying_;
}
};
//////// VectorContainer
template <
typename Key,
typename MappedTypeOrVoid,
typename HasherOrVoid,
typename KeyEqualOrVoid,
typename AllocOrVoid>
class VectorContainerPolicy;
template <typename ValuePtr>
class VectorContainerIterator : public BaseIter<ValuePtr, uint32_t> {
using Super = BaseIter<ValuePtr, uint32_t>;
using typename Super::ValueConstPtr;
public:
using typename Super::pointer;
using typename Super::reference;
using typename Super::value_type;
VectorContainerIterator() = default;
VectorContainerIterator(VectorContainerIterator const&) = default;
VectorContainerIterator(VectorContainerIterator&&) = default;
VectorContainerIterator& operator=(VectorContainerIterator const&) = default;
VectorContainerIterator& operator=(VectorContainerIterator&&) = default;
~VectorContainerIterator() = default;
/*implicit*/ operator VectorContainerIterator<ValueConstPtr>() const {
// can we trust that fancy pointers are implicitly convertible to
// fancy const pointers?
return VectorContainerIterator<ValueConstPtr>{current_, lowest_};
}
reference operator*() const {
return *current_;
}
pointer operator->() const {
return current_;
}
VectorContainerIterator& operator++() {
if (UNLIKELY(current_ == lowest_)) {
current_ = nullptr;
} else {
--current_;
}
return *this;
}
VectorContainerIterator operator++(int) {
auto cur = *this;
++*this;
return cur;
}
bool operator==(VectorContainerIterator<ValueConstPtr> const& rhs) const {
return current_ == rhs.current_;
}
bool operator!=(VectorContainerIterator<ValueConstPtr> const& rhs) const {
return !(*this == rhs);
}
private:
ValuePtr current_;
ValuePtr lowest_;
explicit VectorContainerIterator(ValuePtr current, ValuePtr lowest)
: current_(current), lowest_(lowest) {}
std::size_t index() const {
return current_ - lowest_;
}
template <typename K, typename M, typename H, typename E, typename A>
friend class VectorContainerPolicy;
template <typename P>
friend class VectorContainerIterator;
};
struct VectorContainerIndexSearch {
uint32_t index_;
};
template <
typename Key,
typename MappedTypeOrVoid,
typename HasherOrVoid,
typename KeyEqualOrVoid,
typename AllocOrVoid>
class VectorContainerPolicy : public BasePolicy<
Key,
MappedTypeOrVoid,
HasherOrVoid,
KeyEqualOrVoid,
AllocOrVoid,
uint32_t> {
public:
using Super = BasePolicy<
Key,
MappedTypeOrVoid,
HasherOrVoid,
KeyEqualOrVoid,
AllocOrVoid,
uint32_t>;
using typename Super::Alloc;
using typename Super::Item;
using typename Super::ItemIter;
using typename Super::Value;
private:
using Super::kIsMap;
using typename Super::AllocTraits;
public:
using InternalSizeType = Item;
using ConstIter =
VectorContainerIterator<typename AllocTraits::const_pointer>;
using Iter = std::conditional_t<
kIsMap,
VectorContainerIterator<typename AllocTraits::pointer>,
ConstIter>;
using ValuePtr = typename AllocTraits::pointer;
//////// F14Table policy
static constexpr bool prefetchBeforeRehash() {
return true;
}
static constexpr bool prefetchBeforeCopy() {
return false;
}
static constexpr bool prefetchBeforeDestroy() {
return false;
}
static constexpr bool destroyItemOnClear() {
return false;
}
// inherit constructors
using Super::Super;
VectorContainerPolicy(VectorContainerPolicy const& rhs)
: Super{rhs}, values_{nullptr} {}
VectorContainerPolicy(VectorContainerPolicy&& rhs) noexcept
: Super{std::move(rhs)}, values_{rhs.values_} {
rhs.values_ = nullptr;
}
VectorContainerPolicy& operator=(VectorContainerPolicy const& rhs) {
if (this != &rhs) {
assert(values_ == nullptr);
Super::operator=(rhs);
}
return *this;
}
VectorContainerPolicy& operator=(VectorContainerPolicy&& rhs) noexcept {
if (this != &rhs) {
Super::operator=(std::move(rhs));
values_ = rhs.values_;
rhs.values_ = nullptr;
}
return *this;
}
void swapPolicy(VectorContainerPolicy& rhs) {
using std::swap;
this->swapBasePolicy(rhs);
swap(values_, rhs.values_);
}
template <typename K>
std::size_t computeKeyHash(K const& key) const {
static_assert(
Super::isAvalanchingHasher() ==
IsAvalanchingHasher<typename Super::Hasher, K>::value,
"");
return this->hasher()(key);
}
std::size_t computeKeyHash(VectorContainerIndexSearch const& key) const {
return computeItemHash(key.index_);
}
using Super::keyForValue;
std::size_t computeItemHash(Item const& item) const {
return this->computeKeyHash(keyForValue(values_[item]));
}
bool keyMatchesItem(VectorContainerIndexSearch const& key, Item const& item)
const {
return key.index_ == item;
}
template <typename K>
bool keyMatchesItem(K const& key, Item const& item) const {
return this->keyEqual()(key, keyForValue(values_[item]));
}
Key const& keyForValue(VectorContainerIndexSearch const& arg) const {
return keyForValue(values_[arg.index_]);
}
VectorContainerIndexSearch valueAtItemForCopy(Item const& item) const {
return {item};
}
void constructValueAtItem(
std::size_t /*size*/,
Item* itemAddr,
VectorContainerIndexSearch arg) {
*itemAddr = arg.index_;
}
template <typename... Args>
void constructValueAtItem(std::size_t size, Item* itemAddr, Args&&... args) {
Alloc& a = this->alloc();
*itemAddr = size;
AllocTraits::construct(
a, std::addressof(values_[size]), std::forward<Args>(args)...);
}
void moveItemDuringRehash(Item* itemAddr, Item& src) {
*itemAddr = src;
}
void prefetchValue(Item const& item) {
prefetchAddr(std::addressof(values_[item]));
}
void destroyItem(Item&) {}
template <typename T>
std::enable_if_t<std::is_nothrow_move_constructible<T>::value>
complainUnlessNothrowMove() {}
template <typename T>
FOLLY_DEPRECATED(
"use F14NodeMap/Set or mark key and mapped type move constructor nothrow")
std::enable_if_t<!std::is_nothrow_move_constructible<
T>::value> complainUnlessNothrowMove() {}
template <typename Dummy = int>
void transfer(
Alloc& a,
Value* src,
Value* dst,
std::size_t n,
typename std::enable_if_t<kIsMap, Dummy> = 0) {
complainUnlessNothrowMove<Key>();
complainUnlessNothrowMove<MappedTypeOrVoid>();
if (std::is_same<Alloc, std::allocator<Value>>::value &&
FOLLY_IS_TRIVIALLY_COPYABLE(Value)) {
std::memcpy(dst, src, n * sizeof(Value));
} else {
for (std::size_t i = 0; i < n; ++i, ++src, ++dst) {
// See ValueContainerPolicy::moveItemDuringRehash for an explanation
// of the strange const_cast and launder below
folly::assume(dst != nullptr);
AllocTraits::construct(
a,
dst,
std::move(const_cast<Key&>(src->first)),
std::move(src->second));
AllocTraits::destroy(a, folly::launder(src));
}
}
}
template <typename Dummy = int>
void transfer(
Alloc& a,
Value* src,
Value* dst,
std::size_t n,
typename std::enable_if_t<!kIsMap, Dummy> = 0) {
complainUnlessNothrowMove<Value>();
if (std::is_same<Alloc, std::allocator<Value>>::value &&
FOLLY_IS_TRIVIALLY_COPYABLE(Value)) {
std::memcpy(dst, src, n * sizeof(Value));
} else {
for (std::size_t i = 0; i < n; ++i, ++src, ++dst) {
folly::assume(dst != nullptr);
AllocTraits::construct(a, dst, std::move(*src));
AllocTraits::destroy(a, src);
}
}
}
bool beforeCopy(
std::size_t size,
std::size_t /*capacity*/,
VectorContainerPolicy const& rhs) {
Alloc& a = this->alloc();
assert(values_ != nullptr);
Value const* src = std::addressof(rhs.values_[0]);
Value* dst = std::addressof(values_[0]);
if (std::is_same<Alloc, std::allocator<Value>>::value &&
FOLLY_IS_TRIVIALLY_COPYABLE(Value)) {
std::memcpy(dst, src, size * sizeof(Value));
} else {
for (std::size_t i = 0; i < size; ++i, ++src, ++dst) {
try {
folly::assume(dst != nullptr);
AllocTraits::construct(a, dst, *src);
} catch (...) {
for (Value* cleanup = std::addressof(values_[0]); cleanup != dst;
++cleanup) {
AllocTraits::destroy(a, cleanup);
}
throw;
}
}
}
return true;
}
void afterCopy(
bool /*undoState*/,
bool success,
std::size_t /*size*/,
std::size_t /*capacity*/,
VectorContainerPolicy const& /*rhs*/) {
// valueAtItemForCopy returns a trivially copyable value, so no failure
// should occur
assert(success);
}
ValuePtr beforeRehash(
std::size_t size,
std::size_t oldCapacity,
std::size_t newCapacity) {
assert(
size <= oldCapacity && ((values_ == nullptr) == (oldCapacity == 0)) &&
newCapacity > 0 && newCapacity <= (std::numeric_limits<Item>::max)());
Alloc& a = this->alloc();
ValuePtr before = values_;
ValuePtr after = AllocTraits::allocate(a, newCapacity);
if (size > 0) {
transfer(a, std::addressof(before[0]), std::addressof(after[0]), size);
}
values_ = after;
return before;
}
FOLLY_NOINLINE void
afterFailedRehash(ValuePtr state, std::size_t size, std::size_t newCapacity) {
// state holds the old storage
Alloc& a = this->alloc();
if (size > 0) {
transfer(a, std::addressof(values_[0]), std::addressof(state[0]), size);
}
AllocTraits::deallocate(a, values_, newCapacity);
values_ = state;
}
void afterRehash(
ValuePtr state,
bool success,
std::size_t size,
std::size_t oldCapacity,
std::size_t newCapacity) {
if (!success) {
afterFailedRehash(state, size, newCapacity);
} else if (state != nullptr) {
Alloc& a = this->alloc();
AllocTraits::deallocate(a, state, oldCapacity);
}
}
void beforeClear(std::size_t size, std::size_t capacity) {
assert(size <= capacity && ((values_ == nullptr) == (capacity == 0)));
Alloc& a = this->alloc();
for (std::size_t i = 0; i < size; ++i) {
AllocTraits::destroy(a, std::addressof(values_[i]));
}
}
void beforeReset(std::size_t size, std::size_t capacity) {
assert(size <= capacity && ((values_ == nullptr) == (capacity == 0)));
if (capacity > 0) {
beforeClear(size, capacity);
Alloc& a = this->alloc();
AllocTraits::deallocate(a, values_, capacity);
values_ = nullptr;
}
}
std::size_t indirectBytesUsed(
std::size_t /*size*/,
std::size_t capacity,
ItemIter /*underlying*/) const {
return sizeof(Value) * capacity;
}
// Iterator stuff
Iter linearBegin(std::size_t size) const {
return Iter{(size > 0 ? values_ + size - 1 : nullptr), values_};
}
Iter linearEnd() const {
return Iter{nullptr, nullptr};
}
//////// F14BasicMap/Set policy
Iter makeIter(ItemIter const& underlying) const {
if (underlying.atEnd()) {
return linearEnd();
} else {
folly::assume(values_ + underlying.item() != nullptr);
folly::assume(values_ != nullptr);
return Iter{values_ + underlying.item(), values_};
}
}
ConstIter makeConstIter(ItemIter const& underlying) const {
return makeIter(underlying);
}
Item iterToIndex(ConstIter const& iter) const {
auto n = iter.index();
folly::assume(n <= (std::numeric_limits<Item>::max)());
return static_cast<Item>(n);
}
Iter indexToIter(Item index) const {
return Iter{values_ + index, values_};
}
ValuePtr values_{nullptr};
};
template <
template <typename, typename, typename, typename, typename> class Policy,
typename Key,
typename Mapped,
typename Hasher,
typename KeyEqual,
typename Alloc>
using MapPolicyWithDefaults = Policy<
Key,
Mapped,
VoidDefault<Hasher, DefaultHasher<Key>>,
VoidDefault<KeyEqual, DefaultKeyEqual<Key>>,
VoidDefault<Alloc, DefaultAlloc<std::pair<Key const, Mapped>>>>;
template <
template <typename, typename, typename, typename, typename> class Policy,
typename Key,
typename Hasher,
typename KeyEqual,
typename Alloc>
using SetPolicyWithDefaults = Policy<
Key,
void,
VoidDefault<Hasher, DefaultHasher<Key>>,
VoidDefault<KeyEqual, DefaultKeyEqual<Key>>,
VoidDefault<Alloc, DefaultAlloc<Key>>>;
} // namespace detail
} // namespace f14
} // namespace folly
#endif // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
#include <folly/container/detail/F14Table.h>
///////////////////////////////////
#if FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
///////////////////////////////////
namespace folly {
namespace f14 {
namespace detail {
__m128i kEmptyTagVector = {};
} // namespace detail
} // namespace f14
} // namespace folly
///////////////////////////////////
#endif // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
///////////////////////////////////
#pragma once
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <array>
#include <iterator>
#include <limits>
#include <memory>
#include <new>
#include <tuple>
#include <type_traits>
#include <utility>
#include <vector>
#include <folly/Bits.h>
#include <folly/Likely.h>
#include <folly/Portability.h>
#include <folly/ScopeGuard.h>
#include <folly/Traits.h>
#include <folly/lang/Assume.h>
#include <folly/lang/Exception.h>
#include <folly/lang/Launder.h>
#include <folly/lang/SafeAssert.h>
#include <folly/portability/TypeTraits.h>
#include <folly/container/detail/F14Memory.h>
// clang-format off
// F14 is only available on x86 with SSE2 intrinsics (so far)
#ifndef FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
# if FOLLY_SSE >= 2
# define FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE 1
# else
# define FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE 0
# pragma message \
"Vector intrinsics unavailable on this platform, " \
"falling back to std::unordered_map / set"
# endif
#endif
// clang-format on
#if FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
#include <immintrin.h> // __m128i intrinsics
#include <xmmintrin.h> // _mm_prefetch
#endif
namespace folly {
struct F14TableStats {
char const* policy;
std::size_t size{0};
std::size_t valueSize{0};
std::size_t bucketCount{0};
std::size_t chunkCount{0};
std::vector<std::size_t> chunkOccupancyHisto;
std::vector<std::size_t> chunkOutboundOverflowHisto;
std::vector<std::size_t> chunkHostedOverflowHisto;
std::vector<std::size_t> keyProbeLengthHisto;
std::vector<std::size_t> missProbeLengthHisto;
std::size_t totalBytes{0};
std::size_t overheadBytes{0};
private:
template <typename T>
static auto computeHelper(T const* m) -> decltype(m->computeStats()) {
return m->computeStats();
}
static F14TableStats computeHelper(...) {
return {};
}
public:
template <typename T>
static F14TableStats compute(T const& m) {
return computeHelper(&m);
}
};
#if FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
namespace f14 {
namespace detail {
template <typename Policy>
class F14Table;
} // namespace detail
} // namespace f14
class F14HashToken final {
private:
using HashPair = std::pair<std::size_t, uint8_t>;
explicit F14HashToken(HashPair hp) : hp_(hp) {}
explicit operator HashPair() const {
return hp_;
}
HashPair hp_;
template <typename Policy>
friend class f14::detail::F14Table;
};
namespace f14 {
namespace detail {
template <
typename Void,
typename Hasher,
typename KeyEqual,
typename Key,
typename T>
struct EnableIfIsTransparent {};
template <typename Hasher, typename KeyEqual, typename Key, typename T>
struct EnableIfIsTransparent<
folly::void_t<
typename Hasher::is_transparent,
typename KeyEqual::is_transparent>,
Hasher,
KeyEqual,
Key,
T> {
using type = T;
};
//// Defaults should be selected using void
template <typename Arg, typename Default>
using VoidDefault =
std::conditional_t<std::is_same<Arg, Default>::value, void, Arg>;
template <typename Arg, typename Default>
using Defaulted =
typename std::conditional_t<std::is_same<Arg, void>::value, Default, Arg>;
template <typename T>
using DefaultHasher = std::hash<T>;
template <typename T>
using DefaultKeyEqual = std::equal_to<T>;
template <typename T>
using DefaultAlloc = std::allocator<T>;
////////////////
template <typename T>
FOLLY_ALWAYS_INLINE static void prefetchAddr(T const* ptr) {
// _mm_prefetch is x86_64-specific and comes from xmmintrin.h.
// It compiles to the same thing as __builtin_prefetch.
_mm_prefetch(
static_cast<char const*>(static_cast<void const*>(ptr)), _MM_HINT_T0);
}
extern __m128i kEmptyTagVector;
template <typename ItemType>
struct alignas(std::max_align_t) SSE2Chunk {
using Item = ItemType;
// Assuming alignof(std::max_align_t) == 16 (and assuming alignof(Item)
// >= 4), a kCapacity of 14 is always the most space efficient. Slightly
// smaller or larger capacities can help with cache alignment in a
// couple of cases without wasting too much space, but once the items
// get larger we're unlikely to get much benefit anyway. The only
// cases we optimize are a kCapacity of 12 for 4-byte items, which
// makes the chunk take exactly 1 cache line, and 16 bytes of padding
// for 16-byte items so that a chunk takes exactly 4 cache lines.
static constexpr unsigned kCapacity = sizeof(Item) == 4 ? 12 : 14;
static constexpr unsigned kDesiredCapacity = kCapacity - 2;
static constexpr unsigned kAllocatedCapacity =
kCapacity + (sizeof(Item) == 16 ? 1 : 0);
static constexpr unsigned kFullMask =
static_cast<unsigned>(~(~uint64_t{0} << kCapacity));
// Non-empty tags have their top bit set
std::array<uint8_t, kCapacity> tags_;
// Bits 0..3 record the actual capacity of the chunk if this is chunk
// zero, or hold 0000 for other chunks. Bits 4..7 are a 4-bit counter
// of the number of values in this chunk that were placed because they
// overflowed their desired chunk (hostedOverflowCount).
uint8_t control_;
// The number of values that would have been placed into this chunk if
// there had been space, including values that also overflowed previous
// full chunks. This value saturates; once it becomes 255 it no longer
// increases nor decreases.
uint8_t outboundOverflowCount_;
std::array<
std::aligned_storage_t<sizeof(Item), alignof(Item)>,
kAllocatedCapacity>
rawItems_;
static SSE2Chunk* emptyInstance() {
auto rv = static_cast<SSE2Chunk*>(static_cast<void*>(&kEmptyTagVector));
assert(
rv->occupiedMask() == 0 && rv->chunk0Capacity() == 0 &&
rv->outboundOverflowCount() == 0);
return rv;
}
void clear() {
// this doesn't violate strict aliasing rules because __m128i is
// tagged as __may_alias__
auto* v = static_cast<__m128i*>(static_cast<void*>(&tags_[0]));
_mm_store_si128(v, _mm_setzero_si128());
// tags_ = {}; control_ = 0; outboundOverflowCount_ = 0;
}
void copyOverflowInfoFrom(SSE2Chunk const& rhs) {
assert(hostedOverflowCount() == 0);
control_ += rhs.control_ & 0xf0;
outboundOverflowCount_ = rhs.outboundOverflowCount_;
}
unsigned hostedOverflowCount() const {
return control_ >> 4;
}
static constexpr uint8_t kIncrHostedOverflowCount = 0x10;
static constexpr uint8_t kDecrHostedOverflowCount =
static_cast<uint8_t>(-0x10);
void adjustHostedOverflowCount(uint8_t op) {
control_ += op;
}
bool eof() const {
return (control_ & 0xf) != 0;
}
std::size_t chunk0Capacity() const {
return control_ & 0xf;
}
void markEof(std::size_t c0c) {
assert(this != emptyInstance());
assert(control_ == 0);
assert(c0c > 0 && c0c <= 0xf && c0c <= kCapacity);
control_ = static_cast<uint8_t>(c0c);
}
unsigned outboundOverflowCount() const {
return outboundOverflowCount_;
}
void incrOutboundOverflowCount() {
if (outboundOverflowCount_ != 255) {
++outboundOverflowCount_;
}
}
void decrOutboundOverflowCount() {
if (outboundOverflowCount_ != 255) {
--outboundOverflowCount_;
}
}
uint8_t tag(std::size_t index) const {
return tags_[index];
}
void setTag(std::size_t index, uint8_t tag) {
assert(this != emptyInstance());
assert((tag & 0x80) != 0);
tags_[index] = tag;
}
void clearTag(std::size_t index) {
tags_[index] = 0;
}
__m128i const* tagVector() const {
return static_cast<__m128i const*>(static_cast<void const*>(&tags_[0]));
}
unsigned tagMatchMask(uint8_t needle) const {
assert((needle & 0x80) != 0);
auto tagV = _mm_load_si128(tagVector());
auto needleV = _mm_set1_epi8(needle);
auto eqV = _mm_cmpeq_epi8(tagV, needleV);
return _mm_movemask_epi8(eqV) & kFullMask;
}
unsigned occupiedMask() const {
auto tagV = _mm_load_si128(tagVector());
return _mm_movemask_epi8(tagV) & kFullMask;
}
bool occupied(std::size_t index) const {
assert(tags_[index] == 0 || (tags_[index] & 0x80) != 0);
return tags_[index] != 0;
}
unsigned emptyMask() const {
return occupiedMask() ^ kFullMask;
}
unsigned lastOccupiedIndex() const {
auto m = occupiedMask();
// assume + findLastSet results in optimal __builtin_clz on gcc
folly::assume(m != 0);
unsigned i = folly::findLastSet(m) - 1;
assert(occupied(i));
return i;
}
Item* itemAddr(std::size_t i) const {
return static_cast<Item*>(
const_cast<void*>(static_cast<void const*>(&rawItems_[i])));
}
Item& item(std::size_t i) {
assert(this->occupied(i));
return *folly::launder(itemAddr(i));
}
Item const& citem(std::size_t i) const {
assert(this->occupied(i));
return *folly::launder(itemAddr(i));
}
static SSE2Chunk& owner(Item& item, std::size_t index) {
auto rawAddr =
static_cast<uint8_t*>(static_cast<void*>(std::addressof(item))) -
offsetof(SSE2Chunk, rawItems_) - index * sizeof(Item);
auto chunkAddr = static_cast<SSE2Chunk*>(static_cast<void*>(rawAddr));
assert(std::addressof(item) == chunkAddr->itemAddr(index));
return *chunkAddr;
}
};
class SparseMaskIter {
unsigned mask_;
public:
explicit SparseMaskIter(unsigned mask) : mask_{mask} {}
bool hasNext() {
return mask_ != 0;
}
unsigned next() {
assert(hasNext());
unsigned i = __builtin_ctz(mask_);
mask_ &= (mask_ - 1);
return i;
}
};
class DenseMaskIter {
unsigned mask_;
unsigned index_{0};
public:
explicit DenseMaskIter(unsigned mask) : mask_{mask} {}
bool hasNext() {
return mask_ != 0;
}
unsigned next() {
assert(hasNext());
if (LIKELY((mask_ & 1) != 0)) {
mask_ >>= 1;
return index_++;
} else {
unsigned s = __builtin_ctz(mask_);
unsigned rv = index_ + s;
mask_ >>= (s + 1);
index_ = rv + 1;
return rv;
}
}
};
////////////////
template <typename ChunkPtr>
class F14ItemIter {
private:
using Chunk = typename std::pointer_traits<ChunkPtr>::element_type;
public:
using Item = typename Chunk::Item;
using ItemPtr = typename std::pointer_traits<ChunkPtr>::template rebind<Item>;
using ItemConstPtr =
typename std::pointer_traits<ChunkPtr>::template rebind<Item const>;
using Packed = TaggedPtr<ItemPtr>;
//// PUBLIC
F14ItemIter() noexcept : itemPtr_{nullptr}, index_{0} {}
// default copy and move constructors and assignment operators are correct
explicit F14ItemIter(Packed const& packed)
: itemPtr_{packed.ptr()}, index_{packed.extra()} {}
F14ItemIter(ChunkPtr chunk, std::size_t index)
: itemPtr_{std::pointer_traits<ItemPtr>::pointer_to(chunk->item(index))},
index_{index} {
assert(index < Chunk::kCapacity);
folly::assume(
std::pointer_traits<ItemPtr>::pointer_to(chunk->item(index)) !=
nullptr);
folly::assume(itemPtr_ != nullptr);
}
FOLLY_ALWAYS_INLINE void advance() {
auto c = chunk();
// common case is packed entries
while (index_ > 0) {
--index_;
--itemPtr_;
if (LIKELY(c->occupied(index_))) {
return;
}
}
// It's fairly common for an iterator to be advanced and then become
// dead, for example in the return value from erase(iter) or in
// the last step of a loop. We'd like to make sure that the entire
// advance() method can be eliminated by the compiler's dead code
// elimination pass. To do that it must eliminate the loops, which
// requires it to prove that they have no side effects. It's easy
// to show that there are no escaping stores, but at the moment
// compilers also consider an infinite loop to be a side effect.
// (There are parts of the standard that would allow them to treat
// this as undefined behavior, but at the moment they don't exploit
// those clauses.)
//
// The following loop should really be a while loop, which would
// save a register, some instructions, and a conditional branch,
// but by writing it as a for loop the compiler can prove to itself
// that it will eventually terminate. (No matter that even if the
// loop executed in a single cycle it would take about 200 years to
// run all 2^64 iterations.)
//
// https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82776 has the bug we
// filed about the issue.
// while (true) {
for (std::size_t i = 1; i != 0; ++i) {
// exhausted the current chunk
if (UNLIKELY(c->eof())) {
assert(index_ == 0);
itemPtr_ = nullptr;
return;
}
--c;
auto m = c->occupiedMask();
if (LIKELY(m != 0)) {
index_ = folly::findLastSet(m) - 1;
itemPtr_ = std::pointer_traits<ItemPtr>::pointer_to(c->item(index_));
return;
}
}
}
// precheckedAdvance requires knowledge that the current iterator
// position isn't the last item
void precheckedAdvance() {
auto c = chunk();
// common case is packed entries
while (index_ > 0) {
--index_;
--itemPtr_;
if (LIKELY(c->occupied(index_))) {
return;
}
}
while (true) {
// exhausted the current chunk
assert(!c->eof());
--c;
auto m = c->occupiedMask();
if (LIKELY(m != 0)) {
index_ = folly::findLastSet(m) - 1;
itemPtr_ = std::pointer_traits<ItemPtr>::pointer_to(c->item(index_));
return;
}
}
}
ChunkPtr chunk() const {
return std::pointer_traits<ChunkPtr>::pointer_to(
Chunk::owner(*itemPtr_, index_));
}
std::size_t index() const {
return index_;
}
Item* itemAddr() const {
return std::addressof(*itemPtr_);
}
Item& item() const {
return *itemPtr_;
}
Item const& citem() const {
return *itemPtr_;
}
bool atEnd() const {
return itemPtr_ == nullptr;
}
Packed pack() const {
return Packed{itemPtr_, static_cast<uint8_t>(index_)};
}
bool operator==(F14ItemIter const& rhs) const {
// this form makes iter == end() into a single null check after inlining
// and constant propagation
return itemPtr_ == rhs.itemPtr_;
}
bool operator!=(F14ItemIter const& rhs) const {
return !(*this == rhs);
}
private:
ItemPtr itemPtr_;
std::size_t index_;
};
////////////////
template <typename SizeType, typename ItemIter, bool EnablePackedItemIter>
struct SizeAndPackedBegin {
SizeType size_{0};
private:
typename ItemIter::Packed packedBegin_{ItemIter{}.pack()};
public:
typename ItemIter::Packed& packedBegin() {
return packedBegin_;
}
typename ItemIter::Packed const& packedBegin() const {
return packedBegin_;
}
};
template <typename SizeType, typename ItemIter>
struct SizeAndPackedBegin<SizeType, ItemIter, false> {
SizeType size_{0};
[[noreturn]] typename ItemIter::Packed& packedBegin() {
folly::assume_unreachable();
}
[[noreturn]] typename ItemIter::Packed const& packedBegin() const {
folly::assume_unreachable();
}
};
template <typename Policy>
class F14Table : public Policy {
public:
using typename Policy::Item;
using value_type = typename Policy::Value;
using allocator_type = typename Policy::Alloc;
private:
using HashPair = typename F14HashToken::HashPair;
using Chunk = SSE2Chunk<Item>;
using ChunkAlloc = typename std::allocator_traits<
allocator_type>::template rebind_alloc<Chunk>;
using ChunkPtr = typename std::allocator_traits<ChunkAlloc>::pointer;
static constexpr bool kChunkAllocIsDefault =
std::is_same<ChunkAlloc, std::allocator<Chunk>>::value;
using ByteAlloc = typename std::allocator_traits<
allocator_type>::template rebind_alloc<uint8_t>;
using BytePtr = typename std::allocator_traits<ByteAlloc>::pointer;
public:
using ItemIter = F14ItemIter<ChunkPtr>;
private:
// emulate c++17's std::allocator_traits<A>::is_always_equal
template <typename A, typename = void>
struct AllocIsAlwaysEqual : std::is_empty<A> {};
template <typename A>
struct AllocIsAlwaysEqual<A, typename A::is_always_equal>
: A::is_always_equal {};
// emulate c++17's std::is_nothrow_swappable
template <typename T>
static constexpr bool isNothrowSwap() {
using std::swap;
return noexcept(swap(std::declval<T&>(), std::declval<T&>()));
}
public:
static constexpr bool kAllocIsAlwaysEqual =
AllocIsAlwaysEqual<allocator_type>::value;
static constexpr bool kDefaultConstructIsNoexcept =
std::is_nothrow_default_constructible<typename Policy::Hasher>::value &&
std::is_nothrow_default_constructible<typename Policy::KeyEqual>::value &&
std::is_nothrow_default_constructible<typename Policy::Alloc>::value;
static constexpr bool kSwapIsNoexcept = kAllocIsAlwaysEqual &&
isNothrowSwap<typename Policy::Hasher>() &&
isNothrowSwap<typename Policy::KeyEqual>();
private:
//////// begin fields
ChunkPtr chunks_{Chunk::emptyInstance()};
typename Policy::InternalSizeType chunkMask_{0};
typename Policy::InternalSizeType size_{0};
typename ItemIter::Packed packedBegin_{ItemIter{}.pack()};
//////// end fields
void swapContents(F14Table& rhs) noexcept {
using std::swap;
swap(chunks_, rhs.chunks_);
swap(chunkMask_, rhs.chunkMask_);
swap(size_, rhs.size_);
swap(packedBegin_, rhs.packedBegin_);
}
public:
F14Table(
std::size_t initialCapacity,
typename Policy::Hasher const& hasher,
typename Policy::KeyEqual const& keyEqual,
typename Policy::Alloc const& alloc)
: Policy{hasher, keyEqual, alloc} {
if (initialCapacity > 0) {
reserve(initialCapacity);
}
}
F14Table(F14Table const& rhs) : Policy{rhs} {
copyFromF14Table(rhs);
}
F14Table(F14Table const& rhs, typename Policy::Alloc const& alloc)
: Policy{rhs, alloc} {
copyFromF14Table(rhs);
}
F14Table(F14Table&& rhs) noexcept(
std::is_nothrow_move_constructible<typename Policy::Hasher>::value&&
std::is_nothrow_move_constructible<typename Policy::KeyEqual>::value&&
std::is_nothrow_move_constructible<typename Policy::Alloc>::value)
: Policy{std::move(rhs)} {
swapContents(rhs);
}
F14Table(F14Table&& rhs, typename Policy::Alloc const& alloc) noexcept(
kAllocIsAlwaysEqual)
: Policy{std::move(rhs), alloc} {
FOLLY_SAFE_CHECK(
kAllocIsAlwaysEqual || this->alloc() == rhs.alloc(),
"F14 move with unequal allocators not yet supported");
swapContents(rhs);
}
F14Table& operator=(F14Table const& rhs) {
if (this != &rhs) {
reset();
static_cast<Policy&>(*this) = rhs;
copyFromF14Table(rhs);
}
return *this;
}
F14Table& operator=(F14Table&& rhs) noexcept(
std::is_nothrow_move_assignable<typename Policy::Hasher>::value&&
std::is_nothrow_move_assignable<typename Policy::KeyEqual>::value &&
(kAllocIsAlwaysEqual ||
(std::allocator_traits<typename Policy::Alloc>::
propagate_on_container_move_assignment::value &&
std::is_nothrow_move_assignable<typename Policy::Alloc>::value))) {
if (this != &rhs) {
reset();
static_cast<Policy&>(*this) = std::move(rhs);
FOLLY_SAFE_CHECK(
std::allocator_traits<typename Policy::Alloc>::
propagate_on_container_move_assignment::value ||
kAllocIsAlwaysEqual || this->alloc() == rhs.alloc(),
"F14 move with unequal allocators not yet supported");
swapContents(rhs);
}
return *this;
}
~F14Table() {
reset();
}
void swap(F14Table& rhs) noexcept(kSwapIsNoexcept) {
this->swapPolicy(rhs);
swapContents(rhs);
}
private:
//////// hash helpers
// Hash values are used to compute the desired position, which is the
// chunk index at which we would like to place a value (if there is no
// overflow), and the tag, which is an additional 8 bits of entropy.
//
// The standard's definition of hash function quality only refers to
// the probability of collisions of the entire hash value, not to the
// probability of collisions of the results of shifting or masking the
// hash value. Some hash functions, however, provide this stronger
// guarantee (not quite the same as the definition of avalanching,
// but similar).
//
// If the user-supplied hasher is an avalanching one (each bit of the
// hash value has a 50% chance of being the same for differing hash
// inputs), then we can just take 1 byte of the hash value for the tag
// and the rest for the desired position. Avalanching hashers also
// let us map hash value to array index position with just a bitmask
// without risking clumping. (Many hash tables just accept the risk
// and do it regardless.)
//
// std::hash<std::string> avalanches in all implementations we've
// examined: libstdc++-v3 uses MurmurHash2, and libc++ uses CityHash
// or MurmurHash2. The other std::hash specializations, however, do not
// have this property. std::hash for integral and pointer values is the
// identity function on libstdc++-v3 and libc++, in particular. In our
// experience it is also fairly common for user-defined specializations
// of std::hash to combine fields in an ad-hoc way that does not evenly
// distribute entropy among the bits of the result (a + 37 * b, for
// example, where a and b are integer fields).
//
// For hash functions we don't trust to avalanche, we repair things by
// applying a bit mixer to the user-supplied hash. The mixer below is
// not fully avalanching for all 64 bits of output, but looks quite
// good for bits 18..63 and puts plenty of entropy even lower when
// considering multiple bits together (like the tag). Importantly,
// when under register pressure it uses fewer registers, instructions,
// and immediate constants than the alternatives, resulting in compact
// code that is more easily inlinable. In one instantiation a modified
// Murmur mixer was 48 bytes of assembly (even after using the same
// multiplicand for both steps) and this one was 27 bytes, for example.
static HashPair splitHash(std::size_t hash) {
uint8_t tag;
if (!Policy::isAvalanchingHasher()) {
auto hi = static_cast<uint64_t>(
(static_cast<unsigned __int128>(hash) * 0xc4ceb9fe1a85ec53ULL) >> 64);
auto lo = hash * 0xc4ceb9fe1a85ec53ULL;
hash = hi ^ lo;
hash *= 0xc4ceb9fe1a85ec53ULL;
tag = static_cast<uint8_t>(hash >> 15);
hash >>= 22;
} else {
tag = hash >> 56;
}
tag |= 0x80;
return std::make_pair(hash, tag);
}
//////// memory management helpers
static std::size_t allocSize(
std::size_t chunkCount,
std::size_t maxSizeWithoutRehash) {
if (chunkCount == 1) {
auto n = offsetof(Chunk, rawItems_) + maxSizeWithoutRehash * sizeof(Item);
assert((maxSizeWithoutRehash % 2) == 0);
if ((sizeof(Item) % 8) != 0) {
n = ((n - 1) | 15) + 1;
}
assert((n % 16) == 0);
return n;
} else {
return sizeof(Chunk) * chunkCount;
}
}
ChunkPtr newChunks(std::size_t chunkCount, std::size_t maxSizeWithoutRehash) {
ByteAlloc a{this->alloc()};
uint8_t* raw = &*std::allocator_traits<ByteAlloc>::allocate(
a, allocSize(chunkCount, maxSizeWithoutRehash));
static_assert(std::is_trivial<Chunk>::value, "SSE2Chunk should be POD");
auto chunks = static_cast<Chunk*>(static_cast<void*>(raw));
for (std::size_t i = 0; i < chunkCount; ++i) {
chunks[i].clear();
}
chunks[0].markEof(chunkCount == 1 ? maxSizeWithoutRehash : 1);
return std::pointer_traits<ChunkPtr>::pointer_to(*chunks);
}
void deleteChunks(
ChunkPtr chunks,
std::size_t chunkCount,
std::size_t maxSizeWithoutRehash) {
ByteAlloc a{this->alloc()};
BytePtr bp = std::pointer_traits<BytePtr>::pointer_to(
*static_cast<uint8_t*>(static_cast<void*>(&*chunks)));
std::allocator_traits<ByteAlloc>::deallocate(
a, bp, allocSize(chunkCount, maxSizeWithoutRehash));
}
public:
ItemIter begin() const noexcept {
return ItemIter{packedBegin_};
}
ItemIter end() const noexcept {
return ItemIter{};
}
bool empty() const noexcept {
return size() == 0;
}
std::size_t size() const noexcept {
return size_;
}
std::size_t max_size() const noexcept {
allocator_type a = this->alloc();
return std::min<std::size_t>(
(std::numeric_limits<typename Policy::InternalSizeType>::max)(),
std::allocator_traits<allocator_type>::max_size(a));
}
std::size_t bucket_count() const noexcept {
// bucket_count is just a synthetic construct for the outside world
// so that size, bucket_count, load_factor, and max_load_factor are
// all self-consistent. The only one of those that is real is size().
if (chunkMask_ != 0) {
return (chunkMask_ + 1) * Chunk::kDesiredCapacity;
} else {
return chunks_->chunk0Capacity();
}
}
std::size_t max_bucket_count() const noexcept {
return max_size();
}
float load_factor() const noexcept {
return empty()
? 0.0f
: static_cast<float>(size()) / static_cast<float>(bucket_count());
}
float max_load_factor() const noexcept {
return 1.0f;
}
void max_load_factor(float) noexcept {
// Probing hash tables can't run load factors >= 1 (unlike chaining
// tables). In addition, we have measured that there is little or
// no performance advantage to running a smaller load factor (cache
// locality losses outweigh the small reduction in probe lengths,
// often making it slower). Therefore, we've decided to just fix
// max_load_factor at 1.0f regardless of what the user requests.
// This has an additional advantage that we don't have to store it.
// Taking alignment into consideration this makes every F14 table
// 8 bytes smaller, and is part of the reason an empty F14NodeMap
// is almost half the size of an empty std::unordered_map (32 vs
// 56 bytes).
//
// I don't have a strong opinion on whether we should remove this
// method or leave a stub, let ngbronson or xshi know if you have a
// compelling argument either way.
}
private:
// Our probe strategy is to advance through additional chunks with
// a stride that is key-specific. This is called double hashing,
// and is a well known and high quality probing strategy. So long as
// the stride and the chunk count are relatively prime, we will visit
// every chunk once and then return to the original chunk, letting us
// detect and end the cycle. The chunk count is a power of two, so
// we can satisfy the relatively prime part by choosing an odd stride.
// We've already computed a high quality secondary hash value for the
// tag, so we just use it for the second probe hash as well.
//
// At the maximum load factor of 12/14, expected probe length for a
// find hit is 1.041, with 99% of keys found in the first three chunks.
// Expected probe length for a find miss (or insert) is 1.275, with a
// p99 probe length of 4 (fewer than 1% of failing finds look at 5 or
// more chunks).
//
// This code is structured so you can try various ways of encoding
// the current probe state. For example, at the moment the probe's
// state is the position in the cycle and the resulting chunk index is
// computed from that inside probeCurrentIndex. We could also make the
// probe state the chunk index, and then increment it by hp.second *
// 2 + 1 in probeAdvance. Wrapping can be applied early or late as
// well. This particular code seems to be easier for the optimizer
// to understand.
//
// We could also implement probing strategies that resulted in the same
// tour for every key initially assigned to a chunk (linear probing or
// quadratic), but that results in longer probe lengths. In particular,
// the cache locality wins of linear probing are not worth the increase
// in probe lengths (extra work and less branch predictability) in
// our experiments.
std::size_t probeDelta(HashPair hp) const {
return 2 * hp.second + 1;
}
template <typename K>
FOLLY_ALWAYS_INLINE ItemIter findImpl(HashPair hp, K const& key) const {
std::size_t index = hp.first;
std::size_t step = probeDelta(hp);
for (std::size_t tries = 0; tries <= chunkMask_; ++tries) {
ChunkPtr chunk = chunks_ + (index & chunkMask_);
if (sizeof(Chunk) > 64) {
prefetchAddr(chunk->itemAddr(8));
}
auto mask = chunk->tagMatchMask(hp.second);
SparseMaskIter hits{mask};
while (hits.hasNext()) {
auto i = hits.next();
if (LIKELY(this->keyMatchesItem(key, chunk->item(i)))) {
// Tag match and key match were both successful. The chance
// of a false tag match is 1/128 for each key in the chunk
// (with a proper hash function).
return ItemIter{chunk, i};
}
}
if (LIKELY(chunk->outboundOverflowCount() == 0)) {
// No keys that wanted to be placed in this chunk were denied
// entry, so our search is over. This is the common case.
break;
}
index += step;
}
// Exiting the loop because tries is exhausted is rare, but possible.
// That means that for every chunk there is currently a key present
// in the map that visited that chunk on its probe search but ended
// up somewhere else, and we have searched every chunk.
return ItemIter{};
}
public:
// Prehashing splits the work of find(key) into two calls, enabling you
// to manually implement loop pipelining for hot bulk lookups. prehash
// computes the hash and prefetches the first computed memory location,
// and the two-arg find(F14HashToken,K) performs the rest of the search.
template <typename K>
F14HashToken prehash(K const& key) const {
assert(chunks_ != nullptr);
auto hp = splitHash(this->computeKeyHash(key));
ChunkPtr firstChunk = chunks_ + (hp.first & chunkMask_);
prefetchAddr(firstChunk);
return F14HashToken(std::move(hp));
}
template <typename K>
FOLLY_ALWAYS_INLINE ItemIter find(K const& key) const {
auto hp = splitHash(this->computeKeyHash(key));
return findImpl(hp, key);
}
template <typename K>
FOLLY_ALWAYS_INLINE ItemIter
find(F14HashToken const& token, K const& key) const {
assert(
splitHash(this->computeKeyHash(key)) == static_cast<HashPair>(token));
return findImpl(static_cast<HashPair>(token), key);
}
private:
void adjustSizeAndBeginAfterInsert(ItemIter iter) {
// packedBegin_ is the max of all valid ItemIter::pack()
auto packed = iter.pack();
if (packedBegin_ < packed) {
packedBegin_ = packed;
}
++size_;
}
// Ignores hp if pos.chunk()->hostedOverflowCount() == 0
void eraseBlank(ItemIter iter, HashPair hp) {
iter.chunk()->clearTag(iter.index());
if (iter.chunk()->hostedOverflowCount() != 0) {
// clean up
std::size_t index = hp.first;
std::size_t delta = probeDelta(hp);
uint8_t hostedOp = 0;
while (true) {
ChunkPtr chunk = chunks_ + (index & chunkMask_);
if (chunk == iter.chunk()) {
chunk->adjustHostedOverflowCount(hostedOp);
break;
}
chunk->decrOutboundOverflowCount();
hostedOp = Chunk::kDecrHostedOverflowCount;
index += delta;
}
}
}
void adjustSizeAndBeginBeforeErase(ItemIter iter) {
--size_;
if (iter.pack() == packedBegin_) {
if (size_ == 0) {
iter = ItemIter{};
} else {
iter.precheckedAdvance();
}
packedBegin_ = iter.pack();
}
}
template <typename... Args>
void insertAtBlank(ItemIter pos, HashPair hp, Args&&... args) {
try {
auto dst = pos.itemAddr();
folly::assume(dst != nullptr);
this->constructValueAtItem(size_, dst, std::forward<Args>(args)...);
} catch (...) {
eraseBlank(pos, hp);
throw;
}
adjustSizeAndBeginAfterInsert(pos);
}
ItemIter allocateTag(uint8_t* fullness, HashPair hp) {
ChunkPtr chunk;
std::size_t index = hp.first;
std::size_t delta = probeDelta(hp);
uint8_t hostedOp = 0;
while (true) {
index &= chunkMask_;
chunk = chunks_ + index;
if (LIKELY(fullness[index] < Chunk::kCapacity)) {
break;
}
chunk->incrOutboundOverflowCount();
hostedOp = Chunk::kIncrHostedOverflowCount;
index += delta;
}
unsigned itemIndex = fullness[index]++;
assert(!chunk->occupied(itemIndex));
chunk->setTag(itemIndex, hp.second);
chunk->adjustHostedOverflowCount(hostedOp);
return ItemIter{chunk, itemIndex};
}
void directCopyFrom(F14Table const& src) {
assert(src.size() > 0 && chunkMask_ == src.chunkMask_);
Policy const& srcPolicy = src;
auto undoState = this->beforeCopy(src.size(), bucket_count(), srcPolicy);
bool success = false;
SCOPE_EXIT {
this->afterCopy(
undoState, success, src.size(), bucket_count(), srcPolicy);
};
// Copy can fail part-way through if a Value copy constructor throws.
// Failing afterCopy is limited in its cleanup power in this case,
// because it can't enumerate the items that were actually copied.
// Fortunately we can divide the situation into cases where all of
// the state is owned by the table itself (F14Node and F14Value),
// for which clearImpl() can do partial cleanup, and cases where all
// of the values are owned by the policy (F14Vector), in which case
// partial failure should not occur. Sorry for the subtle invariants
// in the Policy API.
auto srcBegin = src.begin();
std::size_t maxChunkIndex = srcBegin.chunk() - src.chunks_;
if (FOLLY_IS_TRIVIALLY_COPYABLE(Item) && !this->destroyItemOnClear() &&
bucket_count() == src.bucket_count()) {
// most happy path
auto n = allocSize(chunkMask_ + 1, bucket_count());
std::memcpy(&chunks_[0], &src.chunks_[0], n);
size_ = src.size_;
packedBegin_ = ItemIter{chunks_ + maxChunkIndex, srcBegin.index()}.pack();
} else {
// happy path, no rehash but pack items toward bottom of chunk and
// use copy constructor
Chunk const* srcChunk = &src.chunks_[maxChunkIndex];
Chunk* dstChunk = &chunks_[maxChunkIndex];
do {
dstChunk->copyOverflowInfoFrom(*srcChunk);
auto mask = srcChunk->occupiedMask();
if (Policy::prefetchBeforeCopy()) {
for (DenseMaskIter iter{mask}; iter.hasNext();) {
this->prefetchValue(srcChunk->citem(iter.next()));
}
}
std::size_t dstI = 0;
for (DenseMaskIter iter{mask}; iter.hasNext(); ++dstI) {
auto srcI = iter.next();
auto&& srcValue = src.valueAtItemForCopy(srcChunk->citem(srcI));
auto dst = dstChunk->itemAddr(dstI);
folly::assume(dst != nullptr);
this->constructValueAtItem(
0, dst, std::forward<decltype(srcValue)>(srcValue));
dstChunk->setTag(dstI, srcChunk->tag(srcI));
++size_;
}
--srcChunk;
--dstChunk;
} while (size_ != src.size_);
// reset doesn't care about packedBegin, so we don't fix it until the end
packedBegin_ =
ItemIter{chunks_ + maxChunkIndex,
folly::popcount(chunks_[maxChunkIndex].occupiedMask()) - 1}
.pack();
}
success = true;
}
void rehashCopyFrom(F14Table const& src) {
assert(src.chunkMask_ > chunkMask_);
// 1 byte per chunk means < 1 bit per value temporary overhead
std::array<uint8_t, 256> stackBuf;
uint8_t* fullness;
auto cc = chunkMask_ + 1;
if (cc <= stackBuf.size()) {
fullness = stackBuf.data();
} else {
ByteAlloc a{this->alloc()};
fullness = &*std::allocator_traits<ByteAlloc>::allocate(a, cc);
}
SCOPE_EXIT {
if (cc > stackBuf.size()) {
ByteAlloc a{this->alloc()};
std::allocator_traits<ByteAlloc>::deallocate(
a,
std::pointer_traits<typename std::allocator_traits<
ByteAlloc>::pointer>::pointer_to(*fullness),
cc);
}
};
std::memset(fullness, '\0', cc);
// Exception safety requires beforeCopy to happen after all of the
// allocate() calls.
Policy const& srcPolicy = src;
auto undoState = this->beforeCopy(src.size(), bucket_count(), srcPolicy);
bool success = false;
SCOPE_EXIT {
this->afterCopy(
undoState, success, src.size(), bucket_count(), srcPolicy);
};
// The current table is at a valid state at all points for policies
// in which non-trivial values are owned by the main table (F14Node
// and F14Value), so reset() will clean things up properly if we
// fail partway through. For the case that the policy manages value
// lifecycle (F14Vector) then nothing after beforeCopy can throw and
// we don't have to worry about partial failure.
std::size_t srcChunkIndex = src.begin().chunk() - src.chunks_;
while (true) {
Chunk const* srcChunk = &src.chunks_[srcChunkIndex];
auto mask = srcChunk->occupiedMask();
if (Policy::prefetchBeforeRehash()) {
for (DenseMaskIter iter{mask}; iter.hasNext();) {
this->prefetchValue(srcChunk->citem(iter.next()));
}
}
if (srcChunk->hostedOverflowCount() == 0) {
// all items are in their preferred chunk (no probing), so we
// don't need to compute any hash values
for (DenseMaskIter iter{mask}; iter.hasNext();) {
auto i = iter.next();
auto& srcItem = srcChunk->citem(i);
auto&& srcValue = src.valueAtItemForCopy(srcItem);
HashPair hp{srcChunkIndex, srcChunk->tag(i)};
insertAtBlank(
allocateTag(fullness, hp),
hp,
std::forward<decltype(srcValue)>(srcValue));
}
} else {
// any chunk's items might be in here
for (DenseMaskIter iter{mask}; iter.hasNext();) {
auto i = iter.next();
auto& srcItem = srcChunk->citem(i);
auto&& srcValue = src.valueAtItemForCopy(srcItem);
auto const& srcKey = src.keyForValue(srcValue);
auto hp = splitHash(this->computeKeyHash(srcKey));
assert(hp.second == srcChunk->tag(i));
insertAtBlank(
allocateTag(fullness, hp),
hp,
std::forward<decltype(srcValue)>(srcValue));
}
}
if (srcChunkIndex == 0) {
break;
}
--srcChunkIndex;
}
success = true;
}
FOLLY_NOINLINE void copyFromF14Table(F14Table const& src) {
assert(size() == 0);
if (src.size() == 0) {
return;
}
reserveForInsert(src.size());
try {
if (chunkMask_ == src.chunkMask_) {
directCopyFrom(src);
} else {
rehashCopyFrom(src);
}
} catch (...) {
reset();
throw;
}
}
FOLLY_NOINLINE void rehashImpl(
std::size_t newChunkCount,
std::size_t newMaxSizeWithoutRehash) {
assert(newMaxSizeWithoutRehash > 0);
auto origChunks = chunks_;
const auto origChunkCount = chunkMask_ + 1;
const auto origMaxSizeWithoutRehash = bucket_count();
auto undoState = this->beforeRehash(
size_, origMaxSizeWithoutRehash, newMaxSizeWithoutRehash);
bool success = false;
SCOPE_EXIT {
this->afterRehash(
std::move(undoState),
success,
size_,
origMaxSizeWithoutRehash,
newMaxSizeWithoutRehash);
};
chunks_ = newChunks(newChunkCount, newMaxSizeWithoutRehash);
chunkMask_ = newChunkCount - 1;
if (size_ == 0) {
// nothing to do
} else if (origChunkCount == 1 && newChunkCount == 1) {
// no mask, no chunk scan, no hash computation, no probing
auto srcChunk = origChunks;
auto dstChunk = chunks_;
std::size_t srcI = 0;
std::size_t dstI = 0;
while (dstI < size_) {
if (LIKELY(srcChunk->occupied(srcI))) {
dstChunk->setTag(dstI, srcChunk->tag(srcI));
this->moveItemDuringRehash(
dstChunk->itemAddr(dstI), srcChunk->item(srcI));
++dstI;
}
++srcI;
}
packedBegin_ = ItemIter{dstChunk, dstI - 1}.pack();
} else {
// 1 byte per chunk means < 1 bit per value temporary overhead
std::array<uint8_t, 256> stackBuf;
uint8_t* fullness;
if (newChunkCount <= stackBuf.size()) {
fullness = stackBuf.data();
} else {
try {
ByteAlloc a{this->alloc()};
fullness =
&*std::allocator_traits<ByteAlloc>::allocate(a, newChunkCount);
} catch (...) {
deleteChunks(chunks_, newChunkCount, newMaxSizeWithoutRehash);
chunks_ = origChunks;
chunkMask_ = origChunkCount - 1;
throw;
}
}
std::memset(fullness, '\0', newChunkCount);
SCOPE_EXIT {
if (newChunkCount > stackBuf.size()) {
ByteAlloc a{this->alloc()};
std::allocator_traits<ByteAlloc>::deallocate(
a,
std::pointer_traits<typename std::allocator_traits<
ByteAlloc>::pointer>::pointer_to(*fullness),
newChunkCount);
}
};
auto srcChunk = origChunks + origChunkCount - 1;
std::size_t remaining = size_;
while (remaining > 0) {
auto mask = srcChunk->occupiedMask();
if (Policy::prefetchBeforeRehash()) {
for (DenseMaskIter iter{mask}; iter.hasNext();) {
this->prefetchValue(srcChunk->item(iter.next()));
}
}
for (DenseMaskIter iter{mask}; iter.hasNext();) {
--remaining;
auto srcI = iter.next();
Item& srcItem = srcChunk->item(srcI);
auto hp = splitHash(
this->computeItemHash(const_cast<Item const&>(srcItem)));
assert(hp.second == srcChunk->tag(srcI));
auto dstIter = allocateTag(fullness, hp);
this->moveItemDuringRehash(dstIter.itemAddr(), srcItem);
}
--srcChunk;
}
// this code replaces size_ invocations of adjustSizeAndBeginAfterInsert
std::size_t i = chunkMask_;
while (fullness[i] == 0) {
--i;
}
packedBegin_ = ItemIter{chunks_ + i, std::size_t{fullness[i]} - 1}.pack();
}
if (origMaxSizeWithoutRehash != 0) {
deleteChunks(origChunks, origChunkCount, origMaxSizeWithoutRehash);
}
success = true;
}
public:
// user has no control over max_load_factor
void rehash(std::size_t capacity) {
if (capacity < size()) {
capacity = size();
}
auto unroundedLimit = max_size();
std::size_t exactLimit = Chunk::kDesiredCapacity;
while (exactLimit <= unroundedLimit / 2) {
exactLimit *= 2;
}
if (UNLIKELY(capacity > exactLimit)) {
throw_exception<std::bad_alloc>();
}
std::size_t const kInitialCapacity = 2;
std::size_t const kHalfChunkCapacity =
(Chunk::kDesiredCapacity / 2) & ~std::size_t{1};
std::size_t maxSizeWithoutRehash;
std::size_t chunkCount;
if (capacity <= kInitialCapacity) {
chunkCount = 1;
maxSizeWithoutRehash = kInitialCapacity;
} else if (capacity <= kHalfChunkCapacity) {
chunkCount = 1;
maxSizeWithoutRehash = kHalfChunkCapacity;
} else {
chunkCount = 1;
while (chunkCount * Chunk::kDesiredCapacity < capacity) {
chunkCount *= 2;
}
maxSizeWithoutRehash = chunkCount * Chunk::kDesiredCapacity;
}
if (bucket_count() != maxSizeWithoutRehash) {
rehashImpl(chunkCount, maxSizeWithoutRehash);
}
}
void reserve(std::size_t capacity) {
rehash(capacity);
}
// Rehashes if necessary so that there is room to insert incoming
// more elements
void reserveForInsert(size_t incoming = 1) {
if (size() + incoming - 1 >= bucket_count()) {
reserveForInsertImpl(incoming);
}
}
FOLLY_NOINLINE void reserveForInsertImpl(size_t incoming) {
rehash(size() + incoming);
}
// Returns pos,true if construct, pos,false if found. key is only used
// during the search; all constructor args for an inserted value come
// from args... key won't be accessed after args are touched.
template <typename K, typename... Args>
std::pair<ItemIter, bool> tryEmplaceValue(K const& key, Args&&... args) {
const auto hp = splitHash(this->computeKeyHash(key));
auto existing = findImpl(hp, key);
if (!existing.atEnd()) {
return std::make_pair(existing, false);
}
reserveForInsert();
std::size_t index = hp.first;
ChunkPtr chunk = chunks_ + (index & chunkMask_);
auto emptyMask = chunk->emptyMask();
if (emptyMask == 0) {
std::size_t delta = probeDelta(hp);
do {
chunk->incrOutboundOverflowCount();
index += delta;
chunk = chunks_ + (index & chunkMask_);
emptyMask = chunk->emptyMask();
} while (emptyMask == 0);
chunk->adjustHostedOverflowCount(Chunk::kIncrHostedOverflowCount);
}
std::size_t itemIndex = __builtin_ctz(emptyMask);
chunk->setTag(itemIndex, hp.second);
ItemIter iter{chunk, itemIndex};
// insertAtBlank will clear the tag if the constructor throws
insertAtBlank(iter, hp, std::forward<Args>(args)...);
return std::make_pair(iter, true);
}
private:
template <bool Reset>
void clearImpl() noexcept {
if (chunks_ == Chunk::emptyInstance()) {
assert(empty());
assert(bucket_count() == 0);
return;
}
// turn clear into reset if the table is >= 16 chunks so that
// we don't get too low a load factor
bool willReset = Reset || chunkMask_ + 1 >= 16;
if (willReset) {
this->beforeReset(size(), bucket_count());
} else {
this->beforeClear(size(), bucket_count());
}
if (!empty()) {
if (Policy::destroyItemOnClear()) {
for (std::size_t ci = 0; ci <= chunkMask_; ++ci) {
ChunkPtr chunk = chunks_ + ci;
auto mask = chunk->occupiedMask();
if (Policy::prefetchBeforeDestroy()) {
for (DenseMaskIter iter{mask}; iter.hasNext();) {
this->prefetchValue(chunk->item(iter.next()));
}
}
for (DenseMaskIter iter{mask}; iter.hasNext();) {
this->destroyItem(chunk->item(iter.next()));
}
}
}
if (!willReset) {
// It's okay to do this in a separate loop because we only do it
// when the chunk count is small. That avoids a branch when we
// are promoting a clear to a reset for a large table.
auto c0c = chunks_[0].chunk0Capacity();
for (std::size_t ci = 0; ci <= chunkMask_; ++ci) {
chunks_[ci].clear();
}
chunks_[0].markEof(c0c);
}
packedBegin_ = ItemIter{}.pack();
size_ = 0;
}
if (willReset) {
deleteChunks(chunks_, chunkMask_ + 1, bucket_count());
chunks_ = Chunk::emptyInstance();
chunkMask_ = 0;
this->afterReset();
} else {
this->afterClear(bucket_count());
}
}
void eraseImpl(ItemIter pos, HashPair hp) {
this->destroyItem(pos.item());
adjustSizeAndBeginBeforeErase(pos);
eraseBlank(pos, hp);
}
public:
// The item needs to still be hashable during this call.  This
// overload lets the caller intercept the item just before it is
// destroyed (to extract it, for example).
template <typename BeforeDestroy>
void erase(ItemIter pos, BeforeDestroy const& beforeDestroy) {
HashPair hp{};
if (pos.chunk()->hostedOverflowCount() != 0) {
hp = splitHash(this->computeItemHash(pos.citem()));
}
beforeDestroy(pos.item());
eraseImpl(pos, hp);
}
// The item needs to still be hashable during this call. If you want
// to intercept the item before it is destroyed (to extract it, for
// example), use erase(pos, beforeDestroy).
void erase(ItemIter pos) {
return erase(pos, [](Item const&) {});
}
template <typename K>
std::size_t erase(K const& key) {
if (UNLIKELY(size_ == 0)) {
return 0;
}
auto hp = splitHash(this->computeKeyHash(key));
auto iter = findImpl(hp, key);
if (!iter.atEnd()) {
eraseImpl(iter, hp);
return 1;
} else {
return 0;
}
}
void clear() noexcept {
clearImpl<false>();
}
// Like clear(), but always frees all dynamic storage allocated
// by the table.
void reset() noexcept {
clearImpl<true>();
}
private:
static std::size_t& histoAt(
std::vector<std::size_t>& histo,
std::size_t index) {
if (histo.size() <= index) {
histo.resize(index + 1);
}
return histo.at(index);
}
public:
// Expensive
F14TableStats computeStats() const {
F14TableStats stats;
if (folly::kIsDebug) {
// validate iteration
std::size_t n = 0;
ItemIter prev;
for (auto iter = begin(); iter != end(); iter.advance()) {
assert(n == 0 || iter.pack() < prev.pack());
++n;
prev = iter;
}
assert(n == size());
}
assert((chunks_ == Chunk::emptyInstance()) == (bucket_count() == 0));
std::size_t n1 = 0;
std::size_t n2 = 0;
auto cc = bucket_count() == 0 ? 0 : chunkMask_ + 1;
for (std::size_t ci = 0; ci < cc; ++ci) {
ChunkPtr chunk = chunks_ + ci;
assert(chunk->eof() == (ci == 0));
auto mask = chunk->occupiedMask();
n1 += folly::popcount(mask);
histoAt(stats.chunkOccupancyHisto, folly::popcount(mask))++;
histoAt(
stats.chunkOutboundOverflowHisto, chunk->outboundOverflowCount())++;
histoAt(stats.chunkHostedOverflowHisto, chunk->hostedOverflowCount())++;
for (DenseMaskIter iter{mask}; iter.hasNext();) {
auto ii = iter.next();
++n2;
{
auto& item = chunk->citem(ii);
auto hp = splitHash(this->computeItemHash(item));
assert(chunk->tag(ii) == hp.second);
std::size_t dist = 1;
std::size_t index = hp.first;
std::size_t delta = probeDelta(hp);
while ((index & chunkMask_) != ci) {
index += delta;
++dist;
}
histoAt(stats.keyProbeLengthHisto, dist)++;
}
// misses could have any tag, so we do the dumb but accurate
// thing and just try them all
for (std::size_t ti = 0; ti < 256; ++ti) {
uint8_t tag = static_cast<uint8_t>(ti == 0 ? 1 : ti);
HashPair hp{ci, tag};
std::size_t dist = 1;
std::size_t index = hp.first;
std::size_t delta = probeDelta(hp);
for (std::size_t tries = 0; tries <= chunkMask_ &&
chunks_[index & chunkMask_].outboundOverflowCount() != 0;
++tries) {
index += delta;
++dist;
}
histoAt(stats.missProbeLengthHisto, dist)++;
}
}
}
assert(n1 == size());
assert(n2 == size());
stats.policy = typeid(Policy).name();
stats.size = size();
stats.valueSize = sizeof(value_type);
stats.bucketCount = bucket_count();
stats.chunkCount = cc;
stats.totalBytes = sizeof(*this) +
(cc == 0 ? 0 : allocSize(cc, bucket_count())) +
this->indirectBytesUsed(size(), bucket_count(), begin());
stats.overheadBytes = stats.totalBytes - size() * sizeof(value_type);
return stats;
}
};
} // namespace detail
} // namespace f14
#endif // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
} // namespace folly
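The probing scheme that `rehashImpl` and `tryEmplaceValue` above rely on — a one-byte tag per slot, up to 14 slots per chunk, and a double-hashed stride between chunks — can be sketched in scalar form. This is an illustrative model only, not folly's implementation: the names (`MiniF14`, `kCapacity`) are invented here, the real table filters all 14 tags at once with SSE2 rather than a scalar loop, and erase, overflow counters, and rehash are omitted, so in this sketch an empty slot safely terminates a probe chain.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>
#include <utility>
#include <vector>

// Simplified scalar model of F14-style chunked tag filtering.
class MiniF14 {
 public:
  static constexpr std::size_t kCapacity = 14;  // keys filtered per chunk

  explicit MiniF14(std::size_t chunkCount) : chunks_(chunkCount) {
    // power-of-two chunk count so (index & mask) wraps cheaply
    assert((chunkCount & (chunkCount - 1)) == 0);
  }

  // Returns true if a new key was inserted, false if an existing
  // key's value was overwritten.
  bool insert(uint64_t key, uint64_t value) {
    auto [index, tag] = splitHash(key);
    std::size_t const mask = chunks_.size() - 1;
    std::size_t const delta = 2 * tag + 1;  // odd stride visits every chunk
    for (std::size_t tries = 0; tries <= mask; ++tries) {
      Chunk& c = chunks_[index & mask];
      for (std::size_t i = 0; i < kCapacity; ++i) {
        if (c.tags[i] == tag && c.items[i].first == key) {
          c.items[i].second = value;  // key already present: overwrite
          return false;
        }
      }
      for (std::size_t i = 0; i < kCapacity; ++i) {
        if (c.tags[i] == 0) {  // claim the first empty slot
          c.tags[i] = tag;
          c.items[i] = {key, value};
          return true;
        }
      }
      index += delta;  // chunk full: double-hash to the next chunk
    }
    assert(false && "table full; the real F14 would rehash here");
    return false;
  }

  std::optional<uint64_t> find(uint64_t key) const {
    auto [index, tag] = splitHash(key);
    std::size_t const mask = chunks_.size() - 1;
    std::size_t const delta = 2 * tag + 1;
    for (std::size_t tries = 0; tries <= mask; ++tries) {
      Chunk const& c = chunks_[index & mask];
      bool sawEmpty = false;
      for (std::size_t i = 0; i < kCapacity; ++i) {
        // tag filter: full key comparison happens only on a tag match
        if (c.tags[i] == tag && c.items[i].first == key) {
          return c.items[i].second;
        }
        sawEmpty |= (c.tags[i] == 0);
      }
      if (sawEmpty) {
        return std::nullopt;  // a never-full chunk ends the probe chain
      }
      index += delta;
    }
    return std::nullopt;
  }

 private:
  struct Chunk {
    std::array<uint8_t, kCapacity> tags{};  // 0 = empty, else 0x80..0xff
    std::array<std::pair<uint64_t, uint64_t>, kCapacity> items{};
  };

  // split one hash value into a start chunk index and a 1-byte tag
  static std::pair<std::size_t, uint8_t> splitHash(uint64_t key) {
    uint64_t const h = std::hash<uint64_t>{}(key);
    auto tag = static_cast<uint8_t>(0x80 | (h >> 56));  // never 0
    return {static_cast<std::size_t>(h), tag};
  }

  std::vector<Chunk> chunks_;
};
```

With no erase, a chunk that still has an empty slot can never have overflowed, which is why the scalar `find` can stop there; the real table instead maintains per-chunk outbound/hosted overflow counts so that probe chains stay correct across erasures.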
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <folly/container/F14Map.h>
///////////////////////////////////
#if FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
///////////////////////////////////
#include <chrono>
#include <random>
#include <string>
#include <typeinfo>
#include <unordered_map>
#include <folly/Range.h>
#include <folly/hash/Hash.h>
#include <folly/portability/GTest.h>
#include <folly/container/test/F14TestUtil.h>
using namespace folly;
using namespace folly::string_piece_literals;
namespace {
std::string s(char const* p) {
return p;
}
} // namespace
template <typename T>
void runSimple() {
T h;
EXPECT_EQ(h.size(), 0);
h.insert(std::make_pair(s("abc"), s("ABC")));
EXPECT_TRUE(h.find(s("def")) == h.end());
EXPECT_FALSE(h.find(s("abc")) == h.end());
EXPECT_EQ(h[s("abc")], s("ABC"));
h[s("ghi")] = s("GHI");
EXPECT_EQ(h.size(), 2);
h.erase(h.find(s("abc")));
EXPECT_EQ(h.size(), 1);
T h2(std::move(h));
EXPECT_EQ(h.size(), 0);
EXPECT_TRUE(h.begin() == h.end());
EXPECT_EQ(h2.size(), 1);
EXPECT_TRUE(h2.find(s("abc")) == h2.end());
EXPECT_EQ(h2.begin()->first, s("ghi"));
{
auto i = h2.begin();
EXPECT_FALSE(i == h2.end());
++i;
EXPECT_TRUE(i == h2.end());
}
T h3;
h3.try_emplace(s("xxx"));
h3.insert_or_assign(s("yyy"), s("YYY"));
h3 = std::move(h2);
EXPECT_EQ(h2.size(), 0);
EXPECT_EQ(h3.size(), 1);
EXPECT_TRUE(h3.find(s("xxx")) == h3.end());
for (uint64_t i = 0; i < 1000; ++i) {
h[std::to_string(i * i * i)] = s("x");
EXPECT_EQ(h.size(), i + 1);
}
{
using std::swap;
swap(h, h2);
}
for (uint64_t i = 0; i < 1000; ++i) {
EXPECT_TRUE(h2.find(std::to_string(i * i * i)) != h2.end());
EXPECT_EQ(
h2.find(std::to_string(i * i * i))->first, std::to_string(i * i * i));
EXPECT_TRUE(h2.find(std::to_string(i * i * i + 2)) == h2.end());
}
T h4{h2};
EXPECT_EQ(h2.size(), 1000);
EXPECT_EQ(h4.size(), 1000);
T h5{std::move(h2)};
T h6;
h6 = h4;
T h7 = h4;
T h8({{s("abc"), s("ABC")}, {s("def"), s("DEF")}});
T h9({{s("abc"), s("ABD")}, {s("def"), s("DEF")}});
EXPECT_EQ(h8.size(), 2);
EXPECT_EQ(h8.count(s("abc")), 1);
EXPECT_EQ(h8.count(s("xyz")), 0);
EXPECT_TRUE(h7 != h8);
EXPECT_TRUE(h8 != h9);
h8 = std::move(h7);
// h2 and h7 are moved-from; h4, h5, h6, and h8 should be identical
EXPECT_TRUE(h4 == h8);
EXPECT_TRUE(h2.empty());
EXPECT_TRUE(h7.empty());
for (uint64_t i = 0; i < 1000; ++i) {
auto k = std::to_string(i * i * i);
EXPECT_EQ(h4.count(k), 1);
EXPECT_EQ(h5.count(k), 1);
EXPECT_EQ(h6.count(k), 1);
EXPECT_EQ(h8.count(k), 1);
}
EXPECT_TRUE(h2 == h7);
EXPECT_TRUE(h4 != h7);
EXPECT_EQ(h3.at(s("ghi")), s("GHI"));
EXPECT_THROW(h3.at(s("abc")), std::out_of_range);
F14TableStats::compute(h);
F14TableStats::compute(h2);
F14TableStats::compute(h3);
F14TableStats::compute(h4);
F14TableStats::compute(h5);
F14TableStats::compute(h6);
F14TableStats::compute(h7);
F14TableStats::compute(h8);
F14TableStats::compute(h9);
LOG(INFO) << "sizeof(" << typeid(T).name() << ") = " << sizeof(T);
}
template <typename T>
void runRehash() {
unsigned n = 10000;
T h;
auto b = h.bucket_count();
for (unsigned i = 0; i < n; ++i) {
h.insert(std::make_pair(std::to_string(i), s("")));
if (b != h.bucket_count()) {
F14TableStats::compute(h);
b = h.bucket_count();
}
}
EXPECT_EQ(h.size(), n);
F14TableStats::compute(h);
}
// T should be a map from uint64_t to uint64_t
template <typename T>
void runRandom() {
using R = std::unordered_map<uint64_t, uint64_t>;
std::mt19937_64 gen(0);
std::uniform_int_distribution<> pctDist(0, 100);
std::uniform_int_distribution<uint64_t> bitsBitsDist(1, 6);
T t0;
T t1;
R r0;
R r1;
for (std::size_t reps = 0; reps < 10000; ++reps) {
// discardBits will be from 0 to 62
auto discardBits = (uint64_t{1} << bitsBitsDist(gen)) - 2;
auto k = gen() >> discardBits;
auto v = gen();
auto pct = pctDist(gen);
EXPECT_EQ(t0.empty(), r0.empty());
EXPECT_EQ(t0.size(), r0.size());
if (pct < 15) {
// insert
auto t = t0.insert(std::make_pair(k, v));
auto r = r0.insert(std::make_pair(k, v));
EXPECT_EQ(*t.first, *r.first);
EXPECT_EQ(t.second, r.second);
} else if (pct < 25) {
// emplace
auto t = t0.emplace(k, v);
auto r = r0.emplace(k, v);
EXPECT_EQ(*t.first, *r.first);
EXPECT_EQ(t.second, r.second);
} else if (pct < 30) {
// bulk insert
t0.insert(r1.begin(), r1.end());
r0.insert(r1.begin(), r1.end());
} else if (pct < 40) {
// erase by key
auto t = t0.erase(k);
auto r = r0.erase(k);
EXPECT_EQ(t, r);
} else if (pct < 50) {
// erase by iterator
if (t0.size() > 0) {
auto r = r0.find(k);
if (r == r0.end()) {
r = r0.begin();
}
k = r->first;
auto t = t0.find(k);
t = t0.erase(t);
if (t != t0.end()) {
EXPECT_NE(t->first, k);
}
r = r0.erase(r);
if (r != r0.end()) {
EXPECT_NE(r->first, k);
}
}
} else if (pct < 58) {
// find
auto t = t0.find(k);
auto r = r0.find(k);
EXPECT_EQ((t == t0.end()), (r == r0.end()));
if (t != t0.end() && r != r0.end()) {
EXPECT_EQ(*t, *r);
}
EXPECT_EQ(t0.count(k), r0.count(k));
} else if (pct < 60) {
// equal_range
auto t = t0.equal_range(k);
auto r = r0.equal_range(k);
EXPECT_EQ((t.first == t.second), (r.first == r.second));
if (t.first != t.second && r.first != r.second) {
EXPECT_EQ(*t.first, *r.first);
t.first++;
r.first++;
EXPECT_TRUE(t.first == t.second);
EXPECT_TRUE(r.first == r.second);
}
} else if (pct < 65) {
// iterate
uint64_t t = 0;
for (auto& e : t0) {
t += e.first * 37 + e.second + 1000;
}
uint64_t r = 0;
for (auto& e : r0) {
r += e.first * 37 + e.second + 1000;
}
EXPECT_EQ(t, r);
} else if (pct < 69) {
// swap
using std::swap;
swap(t0, t1);
swap(r0, r1);
} else if (pct < 70) {
// member swap
t0.swap(t1);
r0.swap(r1);
} else if (pct < 72) {
// default construct
t0.~T();
new (&t0) T();
r0.~R();
new (&r0) R();
} else if (pct < 74) {
// default construct with capacity
std::size_t capacity = k & 0xffff;
t0.~T();
new (&t0) T(capacity);
r0.~R();
new (&r0) R(capacity);
} else if (pct < 80) {
// bulk iterator construct
t0.~T();
new (&t0) T(r1.begin(), r1.end());
r0.~R();
new (&r0) R(r1.begin(), r1.end());
} else if (pct < 82) {
// initializer list construct
auto k2 = gen() >> discardBits;
auto v2 = gen();
t0.~T();
new (&t0) T({{k, v}, {k2, v}, {k2, v2}});
r0.~R();
new (&r0) R({{k, v}, {k2, v}, {k2, v2}});
} else if (pct < 88) {
// copy construct
t0.~T();
new (&t0) T(t1);
r0.~R();
new (&r0) R(r1);
} else if (pct < 90) {
// move construct
t0.~T();
new (&t0) T(std::move(t1));
r0.~R();
new (&r0) R(std::move(r1));
} else if (pct < 94) {
// copy assign
t0 = t1;
r0 = r1;
} else if (pct < 96) {
// move assign
t0 = std::move(t1);
r0 = std::move(r1);
} else if (pct < 98) {
// operator==
EXPECT_EQ((t0 == t1), (r0 == r1));
} else if (pct < 99) {
// clear
F14TableStats::compute(t0);
t0.clear();
r0.clear();
} else if (pct < 100) {
// reserve
auto scale = std::uniform_int_distribution<>(0, 8)(gen);
auto delta = std::uniform_int_distribution<>(-2, 2)(gen);
std::ptrdiff_t target = (t0.size() * scale) / 4 + delta;
if (target >= 0) {
t0.reserve(static_cast<std::size_t>(target));
r0.reserve(static_cast<std::size_t>(target));
}
}
}
}
template <typename T>
void runPrehash() {
T h;
EXPECT_EQ(h.size(), 0);
h.insert(std::make_pair(s("abc"), s("ABC")));
EXPECT_TRUE(h.find(s("def")) == h.end());
EXPECT_FALSE(h.find(s("abc")) == h.end());
auto t1 = h.prehash(s("def"));
auto t2 = h.prehash(s("abc"));
EXPECT_TRUE(h.find(t1, s("def")) == h.end());
EXPECT_FALSE(h.find(t2, s("abc")) == h.end());
}
TEST(F14ValueMap, simple) {
runSimple<F14ValueMap<std::string, std::string>>();
}
TEST(F14NodeMap, simple) {
runSimple<F14NodeMap<std::string, std::string>>();
}
TEST(F14VectorMap, simple) {
runSimple<F14VectorMap<std::string, std::string>>();
}
TEST(F14FastMap, simple) {
// F14FastMap is just a conditional typedef. Verify it compiles.
runRandom<F14FastMap<uint64_t, uint64_t>>();
runSimple<F14FastMap<std::string, std::string>>();
}
TEST(F14ValueMap, rehash) {
runRehash<F14ValueMap<std::string, std::string>>();
}
TEST(F14NodeMap, rehash) {
runRehash<F14NodeMap<std::string, std::string>>();
}
TEST(F14VectorMap, rehash) {
runRehash<F14VectorMap<std::string, std::string>>();
}
TEST(F14ValueMap, prehash) {
runPrehash<F14ValueMap<std::string, std::string>>();
}
TEST(F14NodeMap, prehash) {
runPrehash<F14NodeMap<std::string, std::string>>();
}
TEST(F14ValueMap, random) {
runRandom<F14ValueMap<uint64_t, uint64_t>>();
}
TEST(F14NodeMap, random) {
runRandom<F14NodeMap<uint64_t, uint64_t>>();
}
TEST(F14VectorMap, random) {
runRandom<F14VectorMap<uint64_t, uint64_t>>();
}
TEST(F14ValueMap, grow_stats) {
F14ValueMap<uint64_t, uint64_t> h;
for (unsigned i = 1; i <= 3072; ++i) {
h[i]++;
}
LOG(INFO) << "F14ValueMap just before rehash -> "
<< F14TableStats::compute(h);
h[0]++;
LOG(INFO) << "F14ValueMap just after rehash -> " << F14TableStats::compute(h);
}
TEST(F14ValueMap, steady_state_stats) {
// 10k keys, 14% probability of insert, 86% chance of erase, so the
// table should converge to 1400 size without triggering the rehash
// that would occur at 1536.
F14ValueMap<uint64_t, uint64_t> h;
std::mt19937_64 gen(0);
std::uniform_int_distribution<> dist(0, 10000);
for (std::size_t i = 0; i < 100000; ++i) {
auto key = dist(gen);
if (dist(gen) < 1400) {
h.insert_or_assign(key, i);
} else {
h.erase(key);
}
if (((i + 1) % 10000) == 0) {
auto stats = F14TableStats::compute(h);
// Verify that average miss probe length is bounded despite continued
// erase + reuse. p99 of the average across 10M random steps is 4.69,
// average is 2.96.
EXPECT_LT(f14::expectedProbe(stats.missProbeLengthHisto), 10.0);
}
}
LOG(INFO) << "F14ValueMap at steady state -> " << F14TableStats::compute(h);
}
// Tracked is implicitly constructible across tags
namespace {
struct Counts {
uint64_t copyConstruct{0};
uint64_t moveConstruct{0};
uint64_t copyConvert{0};
uint64_t moveConvert{0};
uint64_t copyAssign{0};
uint64_t moveAssign{0};
uint64_t defaultConstruct{0};
explicit Counts(
uint64_t copConstr = 0,
uint64_t movConstr = 0,
uint64_t copConv = 0,
uint64_t movConv = 0,
uint64_t copAssign = 0,
uint64_t movAssign = 0,
uint64_t def = 0)
: copyConstruct{copConstr},
moveConstruct{movConstr},
copyConvert{copConv},
moveConvert{movConv},
copyAssign{copAssign},
moveAssign{movAssign},
defaultConstruct{def} {}
uint64_t dist(Counts const& rhs) const {
auto d = [](uint64_t x, uint64_t y) { return (x - y) * (x - y); };
return d(copyConstruct, rhs.copyConstruct) +
d(moveConstruct, rhs.moveConstruct) + d(copyConvert, rhs.copyConvert) +
d(moveConvert, rhs.moveConvert) + d(copyAssign, rhs.copyAssign) +
d(moveAssign, rhs.moveAssign) +
d(defaultConstruct, rhs.defaultConstruct);
}
bool operator==(Counts const& rhs) const {
return copyConstruct == rhs.copyConstruct &&
moveConstruct == rhs.moveConstruct && copyConvert == rhs.copyConvert &&
moveConvert == rhs.moveConvert && copyAssign == rhs.copyAssign &&
moveAssign == rhs.moveAssign &&
defaultConstruct == rhs.defaultConstruct;
}
bool operator!=(Counts const& rhs) const {
return !(*this == rhs);
}
};
thread_local Counts sumCounts{};
template <int Tag>
struct Tracked {
static thread_local Counts counts;
uint64_t val_;
Tracked() : val_{0} {
sumCounts.defaultConstruct++;
counts.defaultConstruct++;
}
/* implicit */ Tracked(uint64_t val) : val_{val} {
sumCounts.copyConvert++;
counts.copyConvert++;
}
Tracked(Tracked const& rhs) : val_{rhs.val_} {
sumCounts.copyConstruct++;
counts.copyConstruct++;
}
Tracked(Tracked&& rhs) noexcept : val_{rhs.val_} {
sumCounts.moveConstruct++;
counts.moveConstruct++;
}
Tracked& operator=(Tracked const& rhs) {
val_ = rhs.val_;
sumCounts.copyAssign++;
counts.copyAssign++;
return *this;
}
Tracked& operator=(Tracked&& rhs) noexcept {
val_ = rhs.val_;
sumCounts.moveAssign++;
counts.moveAssign++;
return *this;
}
template <int T>
/* implicit */ Tracked(Tracked<T> const& rhs) : val_{rhs.val_} {
sumCounts.copyConvert++;
counts.copyConvert++;
}
template <int T>
/* implicit */ Tracked(Tracked<T>&& rhs) : val_{rhs.val_} {
sumCounts.moveConvert++;
counts.moveConvert++;
}
bool operator==(Tracked const& rhs) const {
return val_ == rhs.val_;
}
bool operator!=(Tracked const& rhs) const {
return !(*this == rhs);
}
};
template <>
thread_local Counts Tracked<0>::counts{};
template <>
thread_local Counts Tracked<1>::counts{};
template <>
thread_local Counts Tracked<2>::counts{};
template <>
thread_local Counts Tracked<3>::counts{};
template <>
thread_local Counts Tracked<4>::counts{};
template <>
thread_local Counts Tracked<5>::counts{};
void resetTracking() {
sumCounts = Counts{};
Tracked<0>::counts = Counts{};
Tracked<1>::counts = Counts{};
Tracked<2>::counts = Counts{};
Tracked<3>::counts = Counts{};
Tracked<4>::counts = Counts{};
Tracked<5>::counts = Counts{};
}
} // namespace
std::ostream& operator<<(std::ostream& xo, Counts const& counts) {
xo << "[";
std::string glue = "";
if (counts.copyConstruct > 0) {
xo << glue << counts.copyConstruct << " copy";
glue = ", ";
}
if (counts.moveConstruct > 0) {
xo << glue << counts.moveConstruct << " move";
glue = ", ";
}
if (counts.copyConvert > 0) {
xo << glue << counts.copyConvert << " copy convert";
glue = ", ";
}
if (counts.moveConvert > 0) {
xo << glue << counts.moveConvert << " move convert";
glue = ", ";
}
if (counts.copyAssign > 0) {
xo << glue << counts.copyAssign << " copy assign";
glue = ", ";
}
if (counts.moveAssign > 0) {
xo << glue << counts.moveAssign << " move assign";
glue = ", ";
}
if (counts.defaultConstruct > 0) {
xo << glue << counts.defaultConstruct << " default construct";
glue = ", ";
}
xo << "]";
return xo;
}
namespace std {
template <int Tag>
struct hash<Tracked<Tag>> {
size_t operator()(Tracked<Tag> const& tracked) const {
return tracked.val_ ^ Tag;
}
};
} // namespace std
TEST(Tracked, baseline) {
Tracked<0> a0;
{
resetTracking();
Tracked<0> b0{a0};
EXPECT_EQ(a0.val_, b0.val_);
EXPECT_EQ(sumCounts, (Counts{1, 0, 0, 0}));
EXPECT_EQ(Tracked<0>::counts, (Counts{1, 0, 0, 0}));
}
{
resetTracking();
Tracked<0> b0{std::move(a0)};
EXPECT_EQ(a0.val_, b0.val_);
EXPECT_EQ(sumCounts, (Counts{0, 1, 0, 0}));
EXPECT_EQ(Tracked<0>::counts, (Counts{0, 1, 0, 0}));
}
{
resetTracking();
Tracked<1> b1{a0};
EXPECT_EQ(a0.val_, b1.val_);
EXPECT_EQ(sumCounts, (Counts{0, 0, 1, 0}));
EXPECT_EQ(Tracked<1>::counts, (Counts{0, 0, 1, 0}));
}
{
resetTracking();
Tracked<1> b1{std::move(a0)};
EXPECT_EQ(a0.val_, b1.val_);
EXPECT_EQ(sumCounts, (Counts{0, 0, 0, 1}));
EXPECT_EQ(Tracked<1>::counts, (Counts{0, 0, 0, 1}));
}
{
Tracked<0> b0;
resetTracking();
b0 = a0;
EXPECT_EQ(a0.val_, b0.val_);
EXPECT_EQ(sumCounts, (Counts{0, 0, 0, 0, 1, 0}));
EXPECT_EQ(Tracked<0>::counts, (Counts{0, 0, 0, 0, 1, 0}));
}
{
Tracked<0> b0;
resetTracking();
b0 = std::move(a0);
EXPECT_EQ(a0.val_, b0.val_);
EXPECT_EQ(sumCounts, (Counts{0, 0, 0, 0, 0, 1}));
EXPECT_EQ(Tracked<0>::counts, (Counts{0, 0, 0, 0, 0, 1}));
}
{
Tracked<1> b1;
resetTracking();
b1 = a0;
EXPECT_EQ(a0.val_, b1.val_);
EXPECT_EQ(sumCounts, (Counts{0, 0, 1, 0, 0, 1}));
EXPECT_EQ(Tracked<1>::counts, (Counts{0, 0, 1, 0, 0, 1}));
}
{
Tracked<1> b1;
resetTracking();
b1 = std::move(a0);
EXPECT_EQ(a0.val_, b1.val_);
EXPECT_EQ(sumCounts, (Counts{0, 0, 0, 1, 0, 1}));
EXPECT_EQ(Tracked<1>::counts, (Counts{0, 0, 0, 1, 0, 1}));
}
}
// M should be a map from Tracked<0> to Tracked<1>. F should take a map
// and a pair const& or pair&& and cause it to be inserted
template <typename M, typename F>
void runInsertCases(
std::string const& name,
F const& insertFunc,
uint64_t expectedDist = 0) {
static_assert(std::is_same<typename M::key_type, Tracked<0>>::value, "");
static_assert(std::is_same<typename M::mapped_type, Tracked<1>>::value, "");
{
typename M::value_type p{0, 0};
M m;
resetTracking();
insertFunc(m, p);
LOG(INFO) << name << ", fresh key, value_type const& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
// copy is expected
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{1, 0, 0, 0}) +
Tracked<1>::counts.dist(Counts{1, 0, 0, 0}),
expectedDist);
}
{
typename M::value_type p{0, 0};
M m;
resetTracking();
insertFunc(m, std::move(p));
LOG(INFO) << name << ", fresh key, value_type&& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
// key copy is unfortunate but required
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{1, 0, 0, 0}) +
Tracked<1>::counts.dist(Counts{0, 1, 0, 0}),
expectedDist);
}
{
std::pair<Tracked<0>, Tracked<1>> p{0, 0};
M m;
resetTracking();
insertFunc(m, p);
LOG(INFO) << name << ", fresh key, pair<key_type,mapped_type> const& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
// 1 copy is required
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{1, 0, 0, 0}) +
Tracked<1>::counts.dist(Counts{1, 0, 0, 0}),
expectedDist);
}
{
std::pair<Tracked<0>, Tracked<1>> p{0, 0};
M m;
resetTracking();
insertFunc(m, std::move(p));
LOG(INFO) << name << ", fresh key, pair<key_type,mapped_type>&& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
// this is the happy path for insert(make_pair(.., ..))
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 1, 0, 0}) +
Tracked<1>::counts.dist(Counts{0, 1, 0, 0}),
expectedDist);
}
{
std::pair<Tracked<2>, Tracked<3>> p{0, 0};
M m;
resetTracking();
insertFunc(m, p);
LOG(INFO) << name << ", fresh key, convertible const& -> "
<< "key_type ops " << Tracked<0>::counts << ", key_src ops "
<< Tracked<2>::counts << ", mapped_type ops "
<< Tracked<1>::counts << ", mapped_src ops "
<< Tracked<3>::counts;
// There are three strategies that could be optimal for particular
// ratios of cost:
//
// - convert key and value in place to final position, destroy if
// insert fails. This is the strategy used by std::unordered_map
// and FBHashMap
//
// - convert key and default value in place to final position,
// convert value only if insert succeeds. Nobody uses this strategy
//
// - convert key to a temporary, move key and convert value if
// insert succeeds. This is the strategy used by F14 and what is
// EXPECT_EQ here.
// The expectedDist * 3 is just a hack for the emplace-pieces-by-value
// test, whose test harness copies the original pair and then uses
// move conversion instead of copy conversion.
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 1, 1, 0}) +
Tracked<1>::counts.dist(Counts{0, 0, 1, 0}) +
Tracked<2>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<3>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist * 3);
}
{
std::pair<Tracked<2>, Tracked<3>> p{0, 0};
M m;
resetTracking();
insertFunc(m, std::move(p));
LOG(INFO) << name << ", fresh key, convertible&& -> "
<< "key_type ops " << Tracked<0>::counts << ", key_src ops "
<< Tracked<2>::counts << ", mapped_type ops "
<< Tracked<1>::counts << ", mapped_src ops "
<< Tracked<3>::counts;
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 1, 0, 1}) +
Tracked<1>::counts.dist(Counts{0, 0, 0, 1}) +
Tracked<2>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<3>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist);
}
{
typename M::value_type p{0, 0};
M m;
m[0] = 0;
resetTracking();
insertFunc(m, p);
LOG(INFO) << name << ", duplicate key, value_type const& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<1>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist);
}
{
typename M::value_type p{0, 0};
M m;
m[0] = 0;
resetTracking();
insertFunc(m, std::move(p));
LOG(INFO) << name << ", duplicate key, value_type&& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<1>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist);
}
{
std::pair<Tracked<0>, Tracked<1>> p{0, 0};
M m;
m[0] = 0;
resetTracking();
insertFunc(m, p);
LOG(INFO) << name
<< ", duplicate key, pair<key_type,mapped_type> const& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<1>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist);
}
{
std::pair<Tracked<0>, Tracked<1>> p{0, 0};
M m;
m[0] = 0;
resetTracking();
insertFunc(m, std::move(p));
LOG(INFO) << name << ", duplicate key, pair<key_type,mapped_type>&& -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<1>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist);
}
{
std::pair<Tracked<2>, Tracked<3>> p{0, 0};
M m;
m[0] = 0;
resetTracking();
insertFunc(m, p);
LOG(INFO) << name << ", duplicate key, convertible const& -> "
<< "key_type ops " << Tracked<0>::counts << ", key_src ops "
<< Tracked<2>::counts << ", mapped_type ops "
<< Tracked<1>::counts << ", mapped_src ops "
<< Tracked<3>::counts;
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 0, 1, 0}) +
Tracked<1>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<2>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<3>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist * 2);
}
{
std::pair<Tracked<2>, Tracked<3>> p{0, 0};
M m;
m[0] = 0;
resetTracking();
insertFunc(m, std::move(p));
LOG(INFO) << name << ", duplicate key, convertible&& -> "
<< "key_type ops " << Tracked<0>::counts << ", key_src ops "
<< Tracked<2>::counts << ", mapped_type ops "
<< Tracked<1>::counts << ", mapped_src ops "
<< Tracked<3>::counts;
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 0, 0, 1}) +
Tracked<1>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<2>::counts.dist(Counts{0, 0, 0, 0}) +
Tracked<3>::counts.dist(Counts{0, 0, 0, 0}),
expectedDist);
}
}
struct DoInsert {
template <typename M, typename P>
void operator()(M& m, P&& p) const {
m.insert(std::forward<P>(p));
}
};
struct DoEmplace1 {
template <typename M, typename P>
void operator()(M& m, P&& p) const {
m.emplace(std::forward<P>(p));
}
};
struct DoEmplace2 {
template <typename M, typename U1, typename U2>
void operator()(M& m, std::pair<U1, U2> const& p) const {
m.emplace(p.first, p.second);
}
template <typename M, typename U1, typename U2>
void operator()(M& m, std::pair<U1, U2>&& p) const {
m.emplace(std::move(p.first), std::move(p.second));
}
};
struct DoEmplace3 {
template <typename M, typename U1, typename U2>
void operator()(M& m, std::pair<U1, U2> const& p) const {
m.emplace(
std::piecewise_construct,
std::forward_as_tuple(p.first),
std::forward_as_tuple(p.second));
}
template <typename M, typename U1, typename U2>
void operator()(M& m, std::pair<U1, U2>&& p) const {
m.emplace(
std::piecewise_construct,
std::forward_as_tuple(std::move(p.first)),
std::forward_as_tuple(std::move(p.second)));
}
};
// Simulates use of piecewise_construct without proper use of
// forward_as_tuple. This code doesn't yield the normal pattern, but
// it should have exactly 1 additional move or copy of the key and 1
// additional move or copy of the mapped value.
struct DoEmplace3Value {
template <typename M, typename U1, typename U2>
void operator()(M& m, std::pair<U1, U2> const& p) const {
m.emplace(
std::piecewise_construct,
std::tuple<U1>{p.first},
std::tuple<U2>{p.second});
}
template <typename M, typename U1, typename U2>
void operator()(M& m, std::pair<U1, U2>&& p) const {
m.emplace(
std::piecewise_construct,
std::tuple<U1>{std::move(p.first)},
std::tuple<U2>{std::move(p.second)});
}
};
template <typename M>
void runInsertAndEmplace(std::string const& name) {
runInsertCases<M>(name + " insert", DoInsert{});
runInsertCases<M>(name + " emplace pair", DoEmplace1{});
runInsertCases<M>(name + " emplace k,v", DoEmplace2{});
runInsertCases<M>(name + " emplace pieces", DoEmplace3{});
runInsertCases<M>(name + " emplace pieces by value", DoEmplace3Value{}, 2);
// Calling the default pair constructor via emplace is valid, but not
// very useful in real life. Verify that it works.
M m;
typename M::key_type k;
EXPECT_EQ(m.count(k), 0);
m.emplace();
EXPECT_EQ(m.count(k), 1);
}
TEST(F14ValueMap, destructuring) {
runInsertAndEmplace<F14ValueMap<Tracked<0>, Tracked<1>>>("f14value");
}
TEST(F14NodeMap, destructuring) {
runInsertAndEmplace<F14NodeMap<Tracked<0>, Tracked<1>>>("f14node");
}
TEST(F14VectorMap, destructuring) {
runInsertAndEmplace<F14VectorMap<Tracked<0>, Tracked<1>>>("f14vector");
}
TEST(F14VectorMap, destructuringErase) {
using M = F14VectorMap<Tracked<0>, Tracked<1>>;
typename M::value_type p1{0, 0};
typename M::value_type p2{2, 2};
M m;
m.insert(p1);
m.insert(p2);
resetTracking();
m.erase(p1.first);
LOG(INFO) << "erase -> "
<< "key_type ops " << Tracked<0>::counts << ", mapped_type ops "
<< Tracked<1>::counts;
// deleting p1 will cause p2 to be moved to the front of the values array
EXPECT_EQ(
Tracked<0>::counts.dist(Counts{0, 1, 0, 0}) +
Tracked<1>::counts.dist(Counts{0, 1, 0, 0}),
0);
}
TEST(F14ValueMap, vectorMaxSize) {
F14ValueMap<int, int> m;
EXPECT_EQ(
m.max_size(),
std::numeric_limits<uint64_t>::max() / sizeof(std::pair<int, int>));
}
TEST(F14NodeMap, vectorMaxSize) {
F14NodeMap<int, int> m;
EXPECT_EQ(
m.max_size(),
std::numeric_limits<uint64_t>::max() / sizeof(std::pair<int, int>));
}
TEST(F14VectorMap, vectorMaxSize) {
F14VectorMap<int, int> m;
EXPECT_EQ(m.max_size(), std::numeric_limits<uint32_t>::max());
}
template <typename M>
void runMoveOnlyTest() {
M t0;
t0[10] = 20;
t0.emplace(30, 40);
t0.insert(std::make_pair(50, 60));
M t1{std::move(t0)};
EXPECT_TRUE(t0.empty());
M t2;
EXPECT_TRUE(t2.empty());
t2 = std::move(t1);
EXPECT_EQ(t2.size(), 3);
}
TEST(F14ValueMap, moveOnly) {
runMoveOnlyTest<F14ValueMap<f14::MoveOnlyTestInt, int>>();
runMoveOnlyTest<F14ValueMap<int, f14::MoveOnlyTestInt>>();
runMoveOnlyTest<F14ValueMap<f14::MoveOnlyTestInt, f14::MoveOnlyTestInt>>();
}
TEST(F14NodeMap, moveOnly) {
runMoveOnlyTest<F14NodeMap<f14::MoveOnlyTestInt, int>>();
runMoveOnlyTest<F14NodeMap<int, f14::MoveOnlyTestInt>>();
runMoveOnlyTest<F14NodeMap<f14::MoveOnlyTestInt, f14::MoveOnlyTestInt>>();
}
TEST(F14VectorMap, moveOnly) {
runMoveOnlyTest<F14VectorMap<f14::MoveOnlyTestInt, int>>();
runMoveOnlyTest<F14VectorMap<int, f14::MoveOnlyTestInt>>();
runMoveOnlyTest<F14VectorMap<f14::MoveOnlyTestInt, f14::MoveOnlyTestInt>>();
}
TEST(F14FastMap, moveOnly) {
runMoveOnlyTest<F14FastMap<f14::MoveOnlyTestInt, int>>();
runMoveOnlyTest<F14FastMap<int, f14::MoveOnlyTestInt>>();
runMoveOnlyTest<F14FastMap<f14::MoveOnlyTestInt, f14::MoveOnlyTestInt>>();
}
TEST(F14ValueMap, heterogeneous) {
// note: std::string is implicitly convertible to but not from StringPiece
using Hasher = folly::transparent<folly::hasher<folly::StringPiece>>;
using KeyEqual = folly::transparent<std::equal_to<folly::StringPiece>>;
constexpr auto hello = "hello"_sp;
constexpr auto buddy = "buddy"_sp;
constexpr auto world = "world"_sp;
F14ValueMap<std::string, bool, Hasher, KeyEqual> map;
map.emplace(hello.str(), true);
map.emplace(world.str(), false);
auto checks = [hello, buddy](auto& ref) {
// count
EXPECT_EQ(0, ref.count(buddy));
EXPECT_EQ(1, ref.count(hello));
// find
EXPECT_TRUE(ref.end() == ref.find(buddy));
EXPECT_EQ(hello, ref.find(hello)->first);
// prehash + find
EXPECT_TRUE(ref.end() == ref.find(ref.prehash(buddy), buddy));
EXPECT_EQ(hello, ref.find(ref.prehash(hello), hello)->first);
// equal_range
EXPECT_TRUE(std::make_pair(ref.end(), ref.end()) == ref.equal_range(buddy));
EXPECT_TRUE(
std::make_pair(ref.find(hello), ++ref.find(hello)) ==
ref.equal_range(hello));
};
checks(map);
checks(folly::as_const(map));
}
///////////////////////////////////
#endif // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
///////////////////////////////////
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <folly/container/F14Set.h>
///////////////////////////////////
#if FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
///////////////////////////////////
#include <chrono>
#include <random>
#include <string>
#include <unordered_set>
#include <folly/Range.h>
#include <folly/portability/GTest.h>
#include <folly/container/test/F14TestUtil.h>
using namespace folly;
using namespace folly::string_piece_literals;
namespace {
std::string s(char const* p) {
return p;
}
} // namespace
template <typename T>
void runSimple() {
T h;
EXPECT_EQ(h.size(), 0);
h.insert(s("abc"));
EXPECT_TRUE(h.find(s("def")) == h.end());
EXPECT_FALSE(h.find(s("abc")) == h.end());
h.insert(s("ghi"));
EXPECT_EQ(h.size(), 2);
h.erase(h.find(s("abc")));
EXPECT_EQ(h.size(), 1);
T h2(std::move(h));
EXPECT_EQ(h.size(), 0);
EXPECT_TRUE(h.begin() == h.end());
EXPECT_EQ(h2.size(), 1);
EXPECT_TRUE(h2.find(s("abc")) == h2.end());
EXPECT_EQ(*h2.begin(), s("ghi"));
{
auto i = h2.begin();
EXPECT_FALSE(i == h2.end());
++i;
EXPECT_TRUE(i == h2.end());
}
T h3;
h3.insert(s("xxx"));
h3.insert(s("yyy"));
h3 = std::move(h2);
EXPECT_EQ(h2.size(), 0);
EXPECT_EQ(h3.size(), 1);
EXPECT_TRUE(h3.find(s("xxx")) == h3.end());
for (uint64_t i = 0; i < 1000; ++i) {
h.insert(std::to_string(i * i * i));
EXPECT_EQ(h.size(), i + 1);
}
{
using std::swap;
swap(h, h2);
}
for (uint64_t i = 0; i < 1000; ++i) {
EXPECT_TRUE(h2.find(std::to_string(i * i * i)) != h2.end());
EXPECT_EQ(*h2.find(std::to_string(i * i * i)), std::to_string(i * i * i));
EXPECT_TRUE(h2.find(std::to_string(i * i * i + 2)) == h2.end());
}
T h4{h2};
EXPECT_EQ(h2.size(), 1000);
EXPECT_EQ(h4.size(), 1000);
T h5{std::move(h2)};
T h6;
h6 = h4;
T h7 = h4;
T h8({s("abc"), s("def")});
T h9({s("abd"), s("def")});
EXPECT_EQ(h8.size(), 2);
EXPECT_EQ(h8.count(s("abc")), 1);
EXPECT_EQ(h8.count(s("xyz")), 0);
EXPECT_TRUE(h7 != h8);
EXPECT_TRUE(h8 != h9);
h8 = std::move(h7);
// h2 and h7 are moved-from; h4, h5, h6, and h8 should be identical
EXPECT_TRUE(h4 == h8);
EXPECT_TRUE(h2.empty());
EXPECT_TRUE(h7.empty());
for (uint64_t i = 0; i < 1000; ++i) {
auto k = std::to_string(i * i * i);
EXPECT_EQ(h4.count(k), 1);
EXPECT_EQ(h5.count(k), 1);
EXPECT_EQ(h6.count(k), 1);
EXPECT_EQ(h8.count(k), 1);
}
F14TableStats::compute(h);
F14TableStats::compute(h2);
F14TableStats::compute(h3);
F14TableStats::compute(h4);
F14TableStats::compute(h5);
F14TableStats::compute(h6);
F14TableStats::compute(h7);
F14TableStats::compute(h8);
}
template <typename T>
void runRehash() {
unsigned n = 10000;
T h;
for (unsigned i = 0; i < n; ++i) {
h.insert(std::to_string(i));
}
EXPECT_EQ(h.size(), n);
F14TableStats::compute(h);
}
// T should be a set of uint64_t
template <typename T>
void runRandom() {
using R = std::unordered_set<uint64_t>;
std::mt19937_64 gen(0);
std::uniform_int_distribution<> pctDist(0, 100);
std::uniform_int_distribution<uint64_t> bitsBitsDist(1, 6);
T t0;
T t1;
R r0;
R r1;
for (std::size_t reps = 0; reps < 100000; ++reps) {
// discardBits will be from 0 to 62
auto discardBits = (uint64_t{1} << bitsBitsDist(gen)) - 2;
auto k = gen() >> discardBits;
auto pct = pctDist(gen);
EXPECT_EQ(t0.size(), r0.size());
if (pct < 15) {
// insert
auto t = t0.insert(k);
auto r = r0.insert(k);
EXPECT_EQ(t.second, r.second);
EXPECT_EQ(*t.first, *r.first);
} else if (pct < 25) {
// emplace
auto t = t0.emplace(k);
auto r = r0.emplace(k);
EXPECT_EQ(t.second, r.second);
EXPECT_EQ(*t.first, *r.first);
} else if (pct < 30) {
// bulk insert
t0.insert(t1.begin(), t1.end());
r0.insert(r1.begin(), r1.end());
} else if (pct < 40) {
// erase by key
auto t = t0.erase(k);
auto r = r0.erase(k);
EXPECT_EQ(t, r);
} else if (pct < 50) {
// erase by iterator
if (t0.size() > 0) {
auto r = r0.find(k);
if (r == r0.end()) {
r = r0.begin();
}
k = *r;
auto t = t0.find(k);
t = t0.erase(t);
if (t != t0.end()) {
EXPECT_NE(*t, k);
}
r = r0.erase(r);
if (r != r0.end()) {
EXPECT_NE(*r, k);
}
}
} else if (pct < 58) {
// find
auto t = t0.find(k);
auto r = r0.find(k);
EXPECT_EQ((t == t0.end()), (r == r0.end()));
if (t != t0.end() && r != r0.end()) {
EXPECT_EQ(*t, *r);
}
EXPECT_EQ(t0.count(k), r0.count(k));
} else if (pct < 60) {
// equal_range
auto t = t0.equal_range(k);
auto r = r0.equal_range(k);
EXPECT_EQ((t.first == t.second), (r.first == r.second));
if (t.first != t.second && r.first != r.second) {
EXPECT_EQ(*t.first, *r.first);
t.first++;
r.first++;
EXPECT_TRUE(t.first == t.second);
EXPECT_TRUE(r.first == r.second);
}
} else if (pct < 65) {
// iterate
uint64_t t = 0;
for (auto& e : t0) {
t += e + 1000;
}
uint64_t r = 0;
for (auto& e : r0) {
r += e + 1000;
}
EXPECT_EQ(t, r);
} else if (pct < 69) {
// swap
using std::swap;
swap(t0, t1);
swap(r0, r1);
} else if (pct < 70) {
// member swap
t0.swap(t1);
r0.swap(r1);
} else if (pct < 72) {
// default construct
t0.~T();
new (&t0) T();
r0.~R();
new (&r0) R();
} else if (pct < 74) {
// default construct with capacity
std::size_t capacity = k & 0xffff;
t0.~T();
new (&t0) T(capacity);
r0.~R();
new (&r0) R(capacity);
} else if (pct < 80) {
// bulk iterator construct
t0.~T();
new (&t0) T(r1.begin(), r1.end());
r0.~R();
new (&r0) R(r1.begin(), r1.end());
} else if (pct < 82) {
// initializer list construct
auto k2 = gen() >> discardBits;
t0.~T();
new (&t0) T({k, k, k2});
r0.~R();
new (&r0) R({k, k, k2});
} else if (pct < 88) {
// copy construct
t0.~T();
new (&t0) T(t1);
r0.~R();
new (&r0) R(r1);
} else if (pct < 90) {
// move construct
t0.~T();
new (&t0) T(std::move(t1));
r0.~R();
new (&r0) R(std::move(r1));
} else if (pct < 94) {
// copy assign
t0 = t1;
r0 = r1;
} else if (pct < 96) {
// move assign
t0 = std::move(t1);
r0 = std::move(r1);
} else if (pct < 98) {
// operator==
EXPECT_EQ((t0 == t1), (r0 == r1));
} else if (pct < 99) {
// clear
t0.computeStats();
t0.clear();
r0.clear();
} else if (pct < 100) {
// reserve
auto scale = std::uniform_int_distribution<>(0, 8)(gen);
auto delta = std::uniform_int_distribution<>(-2, 2)(gen);
std::ptrdiff_t target = (t0.size() * scale) / 4 + delta;
if (target >= 0) {
t0.reserve(static_cast<std::size_t>(target));
r0.reserve(static_cast<std::size_t>(target));
}
}
}
}
TEST(F14ValueSet, simple) {
runSimple<F14ValueSet<std::string>>();
}
TEST(F14NodeSet, simple) {
runSimple<F14NodeSet<std::string>>();
}
TEST(F14VectorSet, simple) {
runSimple<F14VectorSet<std::string>>();
}
TEST(F14FastSet, simple) {
// F14FastSet is just a conditional typedef. Verify it compiles.
runRandom<F14FastSet<uint64_t>>();
runSimple<F14FastSet<std::string>>();
}
TEST(F14ValueSet, rehash) {
runRehash<F14ValueSet<std::string>>();
}
TEST(F14NodeSet, rehash) {
runRehash<F14NodeSet<std::string>>();
}
TEST(F14VectorSet, rehash) {
runRehash<F14VectorSet<std::string>>();
}
TEST(F14ValueSet, random) {
runRandom<F14ValueSet<uint64_t>>();
}
TEST(F14NodeSet, random) {
runRandom<F14NodeSet<uint64_t>>();
}
TEST(F14VectorSet, random) {
runRandom<F14VectorSet<uint64_t>>();
}
TEST(F14ValueSet, grow_stats) {
F14ValueSet<uint64_t> h;
for (unsigned i = 1; i <= 3072; ++i) {
h.insert(i);
}
LOG(INFO) << "F14ValueSet just before rehash -> "
<< F14TableStats::compute(h);
h.insert(0);
LOG(INFO) << "F14ValueSet just after rehash -> " << F14TableStats::compute(h);
}
TEST(F14ValueSet, steady_state_stats) {
// 10k possible keys, 14% chance of insert and 86% chance of erase per
// step, so the table should converge to about 1400 entries without
// triggering the rehash that would occur at 1536.
F14ValueSet<uint64_t> h;
std::mt19937 gen(0);
std::uniform_int_distribution<> dist(0, 10000);
for (std::size_t i = 0; i < 100000; ++i) {
auto key = dist(gen);
if (dist(gen) < 1400) {
h.insert(key);
} else {
h.erase(key);
}
if (((i + 1) % 10000) == 0) {
auto stats = F14TableStats::compute(h);
// Verify that average miss probe length is bounded despite continued
// erase + reuse. p99 of the average across 10M random steps is 4.69,
// average is 2.96.
EXPECT_LT(f14::expectedProbe(stats.missProbeLengthHisto), 10.0);
}
}
LOG(INFO) << "F14ValueSet at steady state -> " << F14TableStats::compute(h);
}
TEST(F14ValueSet, vectorMaxSize) {
F14ValueSet<int> s;
EXPECT_EQ(s.max_size(), std::numeric_limits<uint64_t>::max() / sizeof(int));
}
TEST(F14NodeSet, vectorMaxSize) {
F14NodeSet<int> s;
EXPECT_EQ(s.max_size(), std::numeric_limits<uint64_t>::max() / sizeof(int));
}
TEST(F14VectorSet, vectorMaxSize) {
F14VectorSet<int> s;
EXPECT_EQ(s.max_size(), std::numeric_limits<uint32_t>::max());
}
template <typename S>
void runMoveOnlyTest() {
S t0;
t0.emplace(10);
t0.insert(20);
S t1{std::move(t0)};
EXPECT_TRUE(t0.empty());
S t2;
EXPECT_TRUE(t2.empty());
t2 = std::move(t1);
EXPECT_EQ(t2.size(), 2);
}
TEST(F14ValueSet, moveOnly) {
runMoveOnlyTest<F14ValueSet<f14::MoveOnlyTestInt>>();
}
TEST(F14NodeSet, moveOnly) {
runMoveOnlyTest<F14NodeSet<f14::MoveOnlyTestInt>>();
}
TEST(F14VectorSet, moveOnly) {
runMoveOnlyTest<F14VectorSet<f14::MoveOnlyTestInt>>();
}
TEST(F14FastSet, moveOnly) {
runMoveOnlyTest<F14FastSet<f14::MoveOnlyTestInt>>();
}
TEST(F14ValueSet, heterogeneous) {
// note: std::string is implicitly convertible to but not from StringPiece
using Hasher = folly::transparent<folly::hasher<folly::StringPiece>>;
using KeyEqual = folly::transparent<std::equal_to<folly::StringPiece>>;
constexpr auto hello = "hello"_sp;
constexpr auto buddy = "buddy"_sp;
constexpr auto world = "world"_sp;
F14ValueSet<std::string, Hasher, KeyEqual> set;
set.emplace(hello.str());
set.emplace(world.str());
auto checks = [hello, buddy](auto& ref) {
// count
EXPECT_EQ(0, ref.count(buddy));
EXPECT_EQ(1, ref.count(hello));
// find
EXPECT_TRUE(ref.end() == ref.find(buddy));
EXPECT_EQ(hello, *ref.find(hello));
// prehash + find
EXPECT_TRUE(ref.end() == ref.find(ref.prehash(buddy), buddy));
EXPECT_EQ(hello, *ref.find(ref.prehash(hello), hello));
// equal_range
EXPECT_TRUE(std::make_pair(ref.end(), ref.end()) == ref.equal_range(buddy));
EXPECT_TRUE(
std::make_pair(ref.find(hello), ++ref.find(hello)) ==
ref.equal_range(hello));
};
checks(set);
checks(folly::as_const(set));
}
///////////////////////////////////
#endif // FOLLY_F14_VECTOR_INTRINSICS_AVAILABLE
///////////////////////////////////
/*
* Copyright 2017-present Facebook, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#pragma once
#include <cstddef>
#include <ostream>
#include <vector>
#include <folly/Demangle.h>
#include <folly/container/detail/F14Policy.h>
#include <folly/container/detail/F14Table.h>
namespace folly {
namespace f14 {
struct Histo {
std::vector<std::size_t> const& data;
};
std::ostream& operator<<(std::ostream& xo, Histo const& histo) {
xo << "[";
size_t sum = 0;
for (auto v : histo.data) {
sum += v;
}
size_t partial = 0;
for (size_t i = 0; i < histo.data.size(); ++i) {
if (i > 0) {
xo << ", ";
}
partial += histo.data[i];
if (histo.data[i] > 0) {
xo << i << ": " << histo.data[i] << " (" << (partial * 100.0 / sum)
<< "%)";
}
}
xo << "]";
return xo;
}
void accumulate(
std::vector<std::size_t>& a,
std::vector<std::size_t> const& d) {
if (a.size() < d.size()) {
a.resize(d.size());
}
for (std::size_t i = 0; i < d.size(); ++i) {
a[i] += d[i];
}
}
double expectedProbe(std::vector<std::size_t> const& probeLengths) {
std::size_t sum = 0;
std::size_t count = 0;
for (std::size_t i = 1; i < probeLengths.size(); ++i) {
sum += i * probeLengths[i];
count += probeLengths[i];
}
return static_cast<double>(sum) / count;
}
// Returns the length of the shortest prefix of probeLengths that
// accounts for at least 99% of the samples.
std::size_t p99Probe(std::vector<std::size_t> const& probeLengths) {
std::size_t count = 0;
for (std::size_t i = 1; i < probeLengths.size(); ++i) {
count += probeLengths[i];
}
std::size_t rv = probeLengths.size();
std::size_t suffix = 0;
while (rv > 0 && (suffix + probeLengths[rv - 1]) * 100 <= count) {
suffix += probeLengths[rv - 1];
--rv;
}
return rv;
}
struct MoveOnlyTestInt {
int x;
MoveOnlyTestInt() noexcept : x(0) {}
/* implicit */ MoveOnlyTestInt(int x0) : x(x0) {}
MoveOnlyTestInt(MoveOnlyTestInt&& rhs) noexcept : x(rhs.x) {}
MoveOnlyTestInt(MoveOnlyTestInt const&) = delete;
MoveOnlyTestInt& operator=(MoveOnlyTestInt&& rhs) noexcept {
x = rhs.x;
return *this;
}
MoveOnlyTestInt& operator=(MoveOnlyTestInt const&) = delete;
bool operator==(MoveOnlyTestInt const& rhs) const {
return x == rhs.x;
}
bool operator!=(MoveOnlyTestInt const& rhs) const {
return !(*this == rhs);
}
};
} // namespace f14
std::ostream& operator<<(std::ostream& xo, F14TableStats const& stats) {
using f14::Histo;
xo << "{ " << std::endl;
xo << " policy: " << folly::demangle(stats.policy) << std::endl;
xo << " size: " << stats.size << std::endl;
xo << " valueSize: " << stats.valueSize << std::endl;
xo << " bucketCount: " << stats.bucketCount << std::endl;
xo << " chunkCount: " << stats.chunkCount << std::endl;
xo << " chunkOccupancyHisto" << Histo{stats.chunkOccupancyHisto}
<< std::endl;
xo << " chunkOutboundOverflowHisto"
<< Histo{stats.chunkOutboundOverflowHisto} << std::endl;
xo << " chunkHostedOverflowHisto" << Histo{stats.chunkHostedOverflowHisto}
<< std::endl;
xo << " keyProbeLengthHisto" << Histo{stats.keyProbeLengthHisto}
<< std::endl;
xo << " missProbeLengthHisto" << Histo{stats.missProbeLengthHisto}
<< std::endl;
xo << " totalBytes: " << stats.totalBytes << std::endl;
xo << " valueBytes: " << (stats.size * stats.valueSize) << std::endl;
xo << " overheadBytes: " << stats.overheadBytes << std::endl;
if (stats.size > 0) {
xo << " overheadBytesPerKey: " << (stats.overheadBytes * 1.0 / stats.size)
<< std::endl;
}
xo << "}";
return xo;
}
} // namespace folly
namespace std {
template <>
struct hash<folly::f14::MoveOnlyTestInt> {
std::size_t operator()(folly::f14::MoveOnlyTestInt const& val) const {
return val.x;
}
};
} // namespace std