//! A lock-free concurrent slab.
//!
//! Slabs provide pre-allocated storage for many instances of a single data
//! type. When a large number of values of a single type are required,
//! this can be more efficient than allocating each item individually. Since the
//! allocated items are the same size, memory fragmentation is reduced, and
//! creating and removing new items can be very cheap.
//!
//! This crate implements a lock-free concurrent slab, indexed by `usize`s.
//!
//! ## Usage
//!
//! First, add this to your `Cargo.toml`:
//!
//! ```toml
//! sharded-slab = "0.1.4"
//! ```
//!
//! This crate provides two types, [`Slab`] and [`Pool`], which provide
//! slightly different APIs for using a sharded slab.
//!
//! [`Slab`] implements a slab for _storing_ small types, sharing them between
//! threads, and accessing them by index. New entries are allocated by
//! [inserting] data, moving it in by value. Similarly, entries may be
//! deallocated by [taking] from the slab, moving the value out. This API is
//! similar to a `Vec<Option<T>>`, but it allows lock-free concurrent insertion
//! and removal.
//!
//! In contrast, the [`Pool`] type provides an [object pool] style API for
//! _reusing storage_. Rather than constructing values and moving them into the
//! pool, as with [`Slab`], [allocating an entry][create] from the pool takes a
//! closure that's provided with a mutable reference to initialize the entry in
//! place. When entries are deallocated, they are [cleared] in place. Types
//! which own a heap allocation can be cleared by dropping any _data_ they
//! store while retaining any previously-allocated capacity. This means that a
//! [`Pool`] may be used to reuse a set of existing heap allocations, reducing
//! allocator load.
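//!
//! For example, a pool of `String`s can reuse each string's allocated
//! capacity across uses of the same entry. The sketch below follows the
//! closure-based [`create`][create] API described above; see the [`Pool`]
//! documentation for the exact signatures:
//!
//! ```rust
//! # use sharded_slab::Pool;
//! let pool: Pool<String> = Pool::new();
//!
//! // Initialize the entry in place via the closure.
//! let key = pool.create(|s| s.push_str("hello")).unwrap();
//!
//! let s = pool.get(key).unwrap();
//! assert_eq!(s.as_str(), "hello");
//! drop(s);
//!
//! // Clearing the entry drops the string's *contents*, but its allocated
//! // capacity may be reused by a later `create`.
//! assert!(pool.clear(key));
//! ```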
//!
//! [inserting]: Slab::insert
//! [taking]: Slab::take
//! [create]: Pool::create
//! [cleared]: Clear
//! [object pool]: https://en.wikipedia.org/wiki/Object_pool_pattern
//!
//! # Examples
//!
//! Inserting an item into the slab, returning an index:
//! ```rust
//! # use sharded_slab::Slab;
//! let slab = Slab::new();
//!
//! let key = slab.insert("hello world").unwrap();
//! assert_eq!(slab.get(key).unwrap(), "hello world");
//! ```
//!
//! To share a slab across threads, it may be wrapped in an `Arc`:
//! ```rust
//! # use sharded_slab::Slab;
//! use std::sync::Arc;
//! let slab = Arc::new(Slab::new());
//!
//! let slab2 = slab.clone();
//! let thread2 = std::thread::spawn(move || {
//!     let key = slab2.insert("hello from thread two").unwrap();
//!     assert_eq!(slab2.get(key).unwrap(), "hello from thread two");
//!     key
//! });
//!
//! let key1 = slab.insert("hello from thread one").unwrap();
//! assert_eq!(slab.get(key1).unwrap(), "hello from thread one");
//!
//! // Wait for thread 2 to complete.
//! let key2 = thread2.join().unwrap();
//!
//! // The item inserted by thread 2 remains in the slab.
//! assert_eq!(slab.get(key2).unwrap(), "hello from thread two");
//! ```
//!
//! If items in the slab must be mutated, a `Mutex` or `RwLock` may be used for
//! each item, providing granular locking of items rather than of the slab:
//!
//! ```rust
//! # use sharded_slab::Slab;
//! use std::sync::{Arc, Mutex};
//! let slab = Arc::new(Slab::new());
//!
//! let key = slab.insert(Mutex::new(String::from("hello world"))).unwrap();
//!
//! let slab2 = slab.clone();
//! let thread2 = std::thread::spawn(move || {
//!     let hello = slab2.get(key).expect("item missing");
//!     let mut hello = hello.lock().expect("mutex poisoned");
//!     *hello = String::from("hello everyone!");
//! });
//!
//! thread2.join().unwrap();
//!
//! let hello = slab.get(key).expect("item missing");
//! let hello = hello.lock().expect("mutex poisoned");
//! assert_eq!(hello.as_str(), "hello everyone!");
//! ```
//!
//! # Configuration
//!
//! For performance reasons, several values used by the slab are calculated as
//! constants. In order to allow users to tune the slab's parameters, we provide
//! a [`Config`] trait which defines these parameters as associated `consts`.
//! The `Slab` type is generic over a `C: Config` parameter.
//!
//! [`Config`]: trait.Config.html
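//!
//! For example, a custom configuration can override a subset of the
//! parameters. This sketch assumes that `Config`'s associated consts have
//! defaults, and that `RESERVED_BITS` and `INITIAL_PAGE_SIZE` are among them
//! (see the [`Config`] docs for the authoritative list):
//!
//! ```rust
//! use sharded_slab::{Config, Slab};
//!
//! struct CustomConfig;
//!
//! impl Config for CustomConfig {
//!     // Reserve two high-order index bits for the application's use.
//!     const RESERVED_BITS: usize = 2;
//!     // Start each shard with smaller pages than the default.
//!     const INITIAL_PAGE_SIZE: usize = 16;
//! }
//!
//! let slab: Slab<&str, CustomConfig> = Slab::new_with_config();
//! let key = slab.insert("hello").unwrap();
//! assert_eq!(slab.get(key).unwrap(), "hello");
//! ```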
//!
//! # Comparison with Similar Crates
//!
//! - [`slab`]: Carl Lerche's `slab` crate provides a slab implementation with a
//!   similar API, implemented by storing all data in a single vector.
//!
//!   Unlike `sharded_slab`, inserting and removing elements from the slab
//!   requires mutable access. This means that if the slab is accessed
//!   concurrently by multiple threads, it is necessary for it to be protected
//!   by a `Mutex` or `RwLock`. Items may not be inserted or removed (or
//!   accessed, if a `Mutex` is used) concurrently, even when they are
//!   unrelated. In many cases, the lock can become a significant bottleneck. On
//!   the other hand, this crate allows separate indices in the slab to be
//!   accessed, inserted, and removed concurrently without requiring a global
//!   lock. Therefore, when the slab is shared across multiple threads, this
//!   crate offers significantly better performance than `slab`.
//!
//!   However, the lock-free slab introduces some additional constant-factor
//!   overhead. This means that in use-cases where a slab is _not_ shared by
//!   multiple threads and locking is not required, this crate will likely offer
//!   slightly worse performance.
//!
//!   In summary: `sharded-slab` offers significantly improved performance in
//!   concurrent use-cases, while `slab` should be preferred in single-threaded
//!   use-cases.
//!
//! [`slab`]: https://crates.io/crates/slab
//!
//! # Safety and Correctness
//!
//! Most implementations of lock-free data structures in Rust require some
//! amount of unsafe code, and this crate is not an exception. In order to catch
//! potential bugs in this unsafe code, we make use of [`loom`], a
//! permutation-testing tool for concurrent Rust programs. All `unsafe` blocks in
//! this crate occur in accesses to `loom` `UnsafeCell`s. This means that when
//! those accesses occur in this crate's tests, `loom` will assert that they are
//! valid under the C11 memory model across multiple permutations of concurrent
//! executions of those tests.
//!
//! In order to guard against the [ABA problem][aba], this crate makes use of
//! _generational indices_. Each slot in the slab tracks a generation counter
//! which is incremented every time a value is inserted into that slot, and the
//! indices returned by [`Slab::insert`] include the generation of the slot when
//! the value was inserted, packed into the high-order bits of the index. This
//! ensures that if a value is inserted, removed, and a new value is inserted
//! into the same slot in the slab, the key returned by the first call to
//! `insert` will not map to the new value.
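//!
//! For example, a stale key returned by an earlier `insert` will not alias a
//! value later stored in the same slot:
//!
//! ```rust
//! let slab = sharded_slab::Slab::new();
//!
//! let key1 = slab.insert("original").unwrap();
//! assert!(slab.remove(key1));
//!
//! // A subsequent insert may reuse the same slot with a new generation...
//! let key2 = slab.insert("replacement").unwrap();
//!
//! // ...so the stale key does not map to the new value.
//! assert!(slab.get(key1).is_none());
//! assert_eq!(slab.get(key2).unwrap(), "replacement");
//! ```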
//!
//! Since a fixed number of bits are set aside to use for storing the generation
//! counter, the counter will wrap around after being incremented a number of
//! times. To avoid situations where a returned index lives long enough to see the
//! generation counter wrap around to the same value, it is wise to be fairly
//! generous when configuring the number of index bits allocated to the
//! generation counter.
//!
//! [`loom`]: https://crates.io/crates/loom
//! [aba]: https://en.wikipedia.org/wiki/ABA_problem
//! [`Slab::insert`]: struct.Slab.html#method.insert
//!
//! # Performance
//!
//! These graphs were produced by [benchmarks] of the sharded slab implementation,
//! using the [`criterion`] crate.
//!
//! The first shows the results of a benchmark where an increasing number of
//! items are inserted into and then removed from a slab concurrently by five
//! threads. It compares the performance of the sharded slab implementation
//! with a `RwLock<slab::Slab>`:
//!
//! <img width="1124" alt="Screen Shot 2019-10-01 at 5 09 49 PM" src="https://user-images.githubusercontent.com/2796466/66078398-cd6c9f80-e516-11e9-9923-0ed6292e8498.png">
//!
//! The second graph shows the results of a benchmark where an increasing
//! number of items are inserted and then removed by a _single_ thread. It
//! compares the performance of the sharded slab implementation with an
//! `RwLock<slab::Slab>` and a `mut slab::Slab`.
//!
//! <img width="925" alt="Screen Shot 2019-10-01 at 5 13 45 PM" src="https://user-images.githubusercontent.com/2796466/66078469-f0974f00-e516-11e9-95b5-f65f0aa7e494.png">
//!
//! These benchmarks demonstrate that, while the sharded approach introduces
//! a small constant-factor overhead, it offers significantly better
//! performance across concurrent accesses.
//!
//! [benchmarks]: https://github.com/hawkw/sharded-slab/blob/master/benches/bench.rs
//! [`criterion`]: https://crates.io/crates/criterion
//!
//! # Implementation Notes
//!
//! See [this page](crate::implementation) for details on this crate's design
//! and implementation.
//!
#![doc(html_root_url = "https://docs.rs/sharded-slab/0.1.4")]
#![warn(missing_debug_implementations, missing_docs)]
#![cfg_attr(docsrs, warn(rustdoc::broken_intra_doc_links))]
#[macro_use]
mod macros;

pub mod implementation;
pub mod pool;

pub(crate) mod cfg;
pub(crate) mod sync;

mod clear;
mod iter;
mod page;
mod shard;
mod tid;

pub use cfg::{Config, DefaultConfig};
pub use clear::Clear;
#[doc(inline)]
pub use pool::Pool;

pub(crate) use tid::Tid;

use cfg::CfgPrivate;
use shard::Shard;
use std::{fmt, marker::PhantomData, ptr, sync::Arc};

/// A sharded slab.
///
/// See the [crate-level documentation](crate) for details on using this type.
pub struct Slab<T, C: cfg::Config = DefaultConfig> {
    shards: shard::Array<Option<T>, C>,
    _cfg: PhantomData<C>,
}

/// A handle that allows access to an occupied entry in a [`Slab`].
///
/// While the guard exists, it indicates to the slab that the item the guard
/// references is currently being accessed. If the item is removed from the slab
/// while a guard exists, the removal will be deferred until all guards are
/// dropped.
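///
/// For example (a sketch of the deferral behavior described above):
///
/// ```
/// # use sharded_slab::Slab;
/// let slab = Slab::new();
/// let key = slab.insert("hello world").unwrap();
///
/// let guard = slab.get(key).unwrap();
///
/// // The item is marked for removal, but the removal is deferred while
/// // the guard exists...
/// assert!(slab.remove(key));
///
/// // ...so the guard can still read the value,
/// assert_eq!(*guard, "hello world");
///
/// // ...although new lookups will no longer find it.
/// assert!(slab.get(key).is_none());
/// ```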
pub struct Entry<'a, T, C: cfg::Config = DefaultConfig> {
    inner: page::slot::Guard<Option<T>, C>,
    value: ptr::NonNull<T>,
    shard: &'a Shard<Option<T>, C>,
    key: usize,
}

/// A handle to a vacant entry in a [`Slab`].
///
/// `VacantEntry` allows constructing values with the key that they will be
/// assigned to.
///
/// # Examples
///
/// ```
/// # use sharded_slab::Slab;
/// let mut slab = Slab::new();
///
/// let hello = {
///     let entry = slab.vacant_entry().unwrap();
///     let key = entry.key();
///
///     entry.insert((key, "hello"));
///     key
/// };
///
/// assert_eq!(hello, slab.get(hello).unwrap().0);
/// assert_eq!("hello", slab.get(hello).unwrap().1);
/// ```
#[derive(Debug)]
pub struct VacantEntry<'a, T, C: cfg::Config = DefaultConfig> {
    inner: page::slot::InitGuard<Option<T>, C>,
    key: usize,
    _lt: PhantomData<&'a ()>,
}

/// An owned reference to an occupied entry in a [`Slab`].
///
/// While the guard exists, it indicates to the slab that the item the guard
/// references is currently being accessed. If the item is removed from the slab
/// while the guard exists, the removal will be deferred until all guards are
/// dropped.
///
/// Unlike [`Entry`], which borrows the slab, an `OwnedEntry` clones the [`Arc`]
/// around the slab. Therefore, it keeps the slab from being dropped until all
/// such guards have been dropped. This means that an `OwnedEntry` may be held for
/// an arbitrary lifetime.
///
/// # Examples
///
/// ```
/// # use sharded_slab::Slab;
/// use std::sync::Arc;
///
/// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
/// let key = slab.insert("hello world").unwrap();
///
/// // Look up the created key, returning an `OwnedEntry`.
/// let value = slab.clone().get_owned(key).unwrap();
///
/// // Now, the original `Arc` clone of the slab may be dropped, but the
/// // returned `OwnedEntry` can still access the value.
/// assert_eq!(value, "hello world");
/// ```
///
/// Unlike [`Entry`], an `OwnedEntry` may be stored in a struct which must live
/// for the `'static` lifetime:
///
/// ```
/// # use sharded_slab::Slab;
/// use sharded_slab::OwnedEntry;
/// use std::sync::Arc;
///
/// pub struct MyStruct {
///     entry: OwnedEntry<&'static str>,
///     // ... other fields ...
/// }
///
/// // Suppose this is some arbitrary function which requires a value that
/// // lives for the 'static lifetime...
/// fn function_requiring_static<T: 'static>(t: &T) {
///     // ... do something extremely important and interesting ...
/// }
///
/// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
/// let key = slab.insert("hello world").unwrap();
///
/// // Look up the created key, returning an `OwnedEntry`.
/// let entry = slab.clone().get_owned(key).unwrap();
/// let my_struct = MyStruct {
///     entry,
///     // ...
/// };
///
/// // We can use `my_struct` anywhere where it is required to have the
/// // `'static` lifetime:
/// function_requiring_static(&my_struct);
/// ```
///
/// `OwnedEntry`s may be sent between threads:
///
/// ```
/// # use sharded_slab::Slab;
/// use std::{thread, sync::Arc};
///
/// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
/// let key = slab.insert("hello world").unwrap();
///
/// // Look up the created key, returning an `OwnedEntry`.
/// let value = slab.clone().get_owned(key).unwrap();
///
/// thread::spawn(move || {
///     assert_eq!(value, "hello world");
///     // ...
/// }).join().unwrap();
/// ```
///
/// [`get`]: Slab::get
/// [`Arc`]: std::sync::Arc
pub struct OwnedEntry<T, C = DefaultConfig>
where
    C: cfg::Config,
{
    inner: page::slot::Guard<Option<T>, C>,
    value: ptr::NonNull<T>,
    slab: Arc<Slab<T, C>>,
    key: usize,
}

impl<T> Slab<T> {
    /// Returns a new slab with the default configuration parameters.
    pub fn new() -> Self {
        Self::new_with_config()
    }

    /// Returns a new slab with the provided configuration parameters.
    pub fn new_with_config<C: cfg::Config>() -> Slab<T, C> {
        C::validate();
        Slab {
            shards: shard::Array::new(),
            _cfg: PhantomData,
        }
    }
}

impl<T, C: cfg::Config> Slab<T, C> {
    /// The number of bits in each index which are used by the slab.
    ///
    /// If other data is packed into the `usize` indices returned by
    /// [`Slab::insert`], user code is free to use any bits higher than the
    /// `USED_BITS`-th bit.
    ///
    /// This is determined by the [`Config`] type that configures the slab's
    /// parameters. By default, all bits are used; this can be changed by
    /// overriding the [`Config::RESERVED_BITS`][res] constant.
    ///
    /// [res]: crate::Config#RESERVED_BITS
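    ///
    /// # Examples
    ///
    /// A sketch of packing a user flag into the reserved high-order bits of a
    /// key, assuming a hypothetical config that sets `RESERVED_BITS` to 1:
    ///
    /// ```
    /// use sharded_slab::{Config, Slab};
    ///
    /// struct FlagConfig;
    /// impl Config for FlagConfig {
    ///     const RESERVED_BITS: usize = 1;
    /// }
    ///
    /// let slab: Slab<&str, FlagConfig> = Slab::new_with_config();
    /// let key = slab.insert("hello").unwrap();
    ///
    /// // Pack a flag into the first bit *above* the bits used by the slab...
    /// let flag = 1 << Slab::<&str, FlagConfig>::USED_BITS;
    /// let tagged = key | flag;
    ///
    /// // ...and mask it off again before using the key.
    /// assert_eq!(slab.get(tagged & !flag).unwrap(), "hello");
    /// ```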
    pub const USED_BITS: usize = C::USED_BITS;

    /// Inserts a value into the slab, returning the integer index at which that
    /// value was inserted. This index can then be used to access the entry.
    ///
    /// If this function returns `None`, then the shard for the current thread
    /// is full and no items can be added until some are removed, or the maximum
    /// number of shards has been reached.
    ///
    /// # Examples
    /// ```rust
    /// # use sharded_slab::Slab;
    /// let slab = Slab::new();
    ///
    /// let key = slab.insert("hello world").unwrap();
    /// assert_eq!(slab.get(key).unwrap(), "hello world");
    /// ```
    pub fn insert(&self, value: T) -> Option<usize> {
        let (tid, shard) = self.shards.current();
        test_println!("insert {:?}", tid);
        let mut value = Some(value);
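        // Initialize a free slot in this thread's shard, then build the key:
        // pack the slot's generation with the page-local index, then pack the
        // shard (thread) ID into the higher-order bits.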
        shard
            .init_with(|idx, slot| {
                let gen = slot.insert(&mut value)?;
                Some(gen.pack(idx))
            })
            .map(|idx| tid.pack(idx))
    }

    /// Return a handle to a vacant entry allowing for further manipulation.
    ///
    /// This function is useful when creating values that must contain their
    /// slab index. The returned [`VacantEntry`] reserves a slot in the slab and
    /// is able to return the index of the entry.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// let mut slab = Slab::new();
    ///
    /// let hello = {
    ///     let entry = slab.vacant_entry().unwrap();
    ///     let key = entry.key();
    ///
    ///     entry.insert((key, "hello"));
    ///     key
    /// };
    ///
    /// assert_eq!(hello, slab.get(hello).unwrap().0);
    /// assert_eq!("hello", slab.get(hello).unwrap().1);
    /// ```
    pub fn vacant_entry(&self) -> Option<VacantEntry<'_, T, C>> {
        let (tid, shard) = self.shards.current();
        test_println!("vacant_entry {:?}", tid);
        shard.init_with(|idx, slot| {
            let inner = slot.init()?;
            let key = inner.generation().pack(tid.pack(idx));
            Some(VacantEntry {
                inner,
                key,
                _lt: PhantomData,
            })
        })
    }

    /// Remove the value at the given index in the slab, returning `true` if a
    /// value was removed.
    ///
    /// Unlike [`take`], this method does _not_ block the current thread until
    /// the value can be removed. Instead, if another thread is currently
    /// accessing that value, this marks it to be removed by that thread when it
    /// finishes accessing the value.
    ///
    /// # Examples
    ///
    /// ```rust
    /// let slab = sharded_slab::Slab::new();
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Remove the item from the slab.
    /// assert!(slab.remove(key));
    ///
    /// // Now, the slot is empty.
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// ```rust
    /// use std::sync::Arc;
    ///
    /// let slab = Arc::new(sharded_slab::Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// let slab2 = slab.clone();
    /// let thread2 = std::thread::spawn(move || {
    ///     // Depending on when this thread begins executing, the item may
    ///     // or may not have already been removed...
    ///     if let Some(item) = slab2.get(key) {
    ///         assert_eq!(item, "hello world");
    ///     }
    /// });
    ///
    /// // The item will be removed by thread2 when it finishes accessing it.
    /// assert!(slab.remove(key));
    ///
    /// thread2.join().unwrap();
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// [`take`]: Slab::take
    pub fn remove(&self, idx: usize) -> bool {
        // The `Drop` impl for `Entry` calls `remove_local` or `remove_remote` based
        // on where the guard was dropped from. If the dropped guard was the last one, this will
        // call `Slot::remove_value` which actually clears storage.
        let tid = C::unpack_tid(idx);

        test_println!("rm_deferred {:?}", tid);
        let shard = self.shards.get(tid.as_usize());
        if tid.is_current() {
            shard.map(|shard| shard.remove_local(idx)).unwrap_or(false)
        } else {
            shard.map(|shard| shard.remove_remote(idx)).unwrap_or(false)
        }
    }

    /// Removes the value associated with the given key from the slab, returning
    /// it.
    ///
    /// If the slab does not contain a value for that key, `None` is returned
    /// instead.
    ///
    /// If the value associated with the given key is currently being
    /// accessed by another thread, this method will block the current thread
    /// until the item is no longer accessed. If this is not desired, use
    /// [`remove`] instead.
    ///
    /// **Note**: This method blocks the calling thread by spinning until the
    /// currently outstanding references are released. Spinning for long periods
    /// of time can result in high CPU time and power consumption. Therefore,
    /// `take` should only be called when other references to the slot are
    /// expected to be dropped soon (e.g., when all accesses are relatively
    /// short).
    ///
    /// # Examples
    ///
    /// ```rust
    /// let slab = sharded_slab::Slab::new();
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Remove the item from the slab, returning it.
    /// assert_eq!(slab.take(key), Some("hello world"));
    ///
    /// // Now, the slot is empty.
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// ```rust
    /// use std::sync::Arc;
    ///
    /// let slab = Arc::new(sharded_slab::Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// let slab2 = slab.clone();
    /// let thread2 = std::thread::spawn(move || {
    ///     // Depending on when this thread begins executing, the item may
    ///     // or may not have already been removed...
    ///     if let Some(item) = slab2.get(key) {
    ///         assert_eq!(item, "hello world");
    ///     }
    /// });
    ///
    /// // The item will only be removed when the other thread finishes
    /// // accessing it.
    /// assert_eq!(slab.take(key), Some("hello world"));
    ///
    /// thread2.join().unwrap();
    /// assert!(!slab.contains(key));
    /// ```
    ///
    /// [`remove`]: Slab::remove
    pub fn take(&self, idx: usize) -> Option<T> {
        let tid = C::unpack_tid(idx);

        test_println!("rm {:?}", tid);
        let shard = self.shards.get(tid.as_usize())?;
        if tid.is_current() {
            shard.take_local(idx)
        } else {
            shard.take_remote(idx)
        }
    }

    /// Return a reference to the value associated with the given key.
    ///
    /// If the slab does not contain a value for the given key, or if the
    /// maximum number of concurrent references to the slot has been reached,
    /// `None` is returned instead.
    ///
    /// # Examples
    ///
    /// ```rust
    /// let slab = sharded_slab::Slab::new();
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// assert_eq!(slab.get(key).unwrap(), "hello world");
    /// assert!(slab.get(12345).is_none());
    /// ```
    pub fn get(&self, key: usize) -> Option<Entry<'_, T, C>> {
        let tid = C::unpack_tid(key);

        test_println!("get {:?}; current={:?}", tid, Tid::<C>::current());
        let shard = self.shards.get(tid.as_usize())?;
        shard.with_slot(key, |slot| {
            let inner = slot.get(C::unpack_gen(key))?;
            let value = ptr::NonNull::from(slot.value().as_ref().unwrap());
            Some(Entry {
                inner,
                value,
                shard,
                key,
            })
        })
    }

    /// Return an owned reference to the value at the given index.
    ///
    /// If the slab does not contain a value for the given key, `None` is
    /// returned instead.
    ///
    /// Unlike [`get`], which borrows the slab, this method _clones_ the [`Arc`]
    /// around the slab. This means that the returned [`OwnedEntry`] can be held
    /// for an arbitrary lifetime. However, this method requires that the slab
    /// itself be wrapped in an `Arc`.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// use std::sync::Arc;
    ///
    /// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Look up the created key, returning an `OwnedEntry`.
    /// let value = slab.clone().get_owned(key).unwrap();
    ///
    /// // Now, the original `Arc` clone of the slab may be dropped, but the
    /// // returned `OwnedEntry` can still access the value.
    /// assert_eq!(value, "hello world");
    /// ```
    ///
    /// Unlike [`Entry`], an `OwnedEntry` may be stored in a struct which must live
    /// for the `'static` lifetime:
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// use sharded_slab::OwnedEntry;
    /// use std::sync::Arc;
    ///
    /// pub struct MyStruct {
    ///     entry: OwnedEntry<&'static str>,
    ///     // ... other fields ...
    /// }
    ///
    /// // Suppose this is some arbitrary function which requires a value that
    /// // lives for the 'static lifetime...
    /// fn function_requiring_static<T: 'static>(t: &T) {
    ///     // ... do something extremely important and interesting ...
    /// }
    ///
    /// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Look up the created key, returning an `OwnedEntry`.
    /// let entry = slab.clone().get_owned(key).unwrap();
    /// let my_struct = MyStruct {
    ///     entry,
    ///     // ...
    /// };
    ///
    /// // We can use `my_struct` anywhere where it is required to have the
    /// // `'static` lifetime:
    /// function_requiring_static(&my_struct);
    /// ```
    ///
    /// [`OwnedEntry`]s may be sent between threads:
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// use std::{thread, sync::Arc};
    ///
    /// let slab: Arc<Slab<&'static str>> = Arc::new(Slab::new());
    /// let key = slab.insert("hello world").unwrap();
    ///
    /// // Look up the created key, returning an `OwnedEntry`.
    /// let value = slab.clone().get_owned(key).unwrap();
    ///
    /// thread::spawn(move || {
    ///     assert_eq!(value, "hello world");
    ///     // ...
    /// }).join().unwrap();
    /// ```
    ///
    /// [`get`]: Slab::get
    /// [`Arc`]: std::sync::Arc
    pub fn get_owned(self: Arc<Self>, key: usize) -> Option<OwnedEntry<T, C>> {
        let tid = C::unpack_tid(key);

        test_println!("get_owned {:?}; current={:?}", tid, Tid::<C>::current());
        let shard = self.shards.get(tid.as_usize())?;
        shard.with_slot(key, |slot| {
            let inner = slot.get(C::unpack_gen(key))?;
            let value = ptr::NonNull::from(slot.value().as_ref().unwrap());
            Some(OwnedEntry {
                inner,
                value,
                slab: self.clone(),
                key,
            })
        })
    }

    /// Returns `true` if the slab contains a value for the given key.
    ///
    /// # Examples
    ///
    /// ```
    /// let slab = sharded_slab::Slab::new();
    ///
    /// let key = slab.insert("hello world").unwrap();
    /// assert!(slab.contains(key));
    ///
    /// slab.take(key).unwrap();
    /// assert!(!slab.contains(key));
    /// ```
    pub fn contains(&self, key: usize) -> bool {
        self.get(key).is_some()
    }

    /// Returns an iterator over all the items in the slab.
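    ///
    /// Because this method borrows the slab mutably, no other thread may
    /// access the slab while iterating.
    ///
    /// # Examples
    ///
    /// A small sketch (the iterator is assumed to yield references to the
    /// items, in an unspecified order):
    ///
    /// ```
    /// let mut slab = sharded_slab::Slab::new();
    ///
    /// slab.insert(1).unwrap();
    /// slab.insert(2).unwrap();
    /// slab.insert(3).unwrap();
    ///
    /// let mut items: Vec<_> = slab.unique_iter().copied().collect();
    /// items.sort_unstable();
    /// assert_eq!(items, vec![1, 2, 3]);
    /// ```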
    pub fn unique_iter(&mut self) -> iter::UniqueIter<'_, T, C> {
        let mut shards = self.shards.iter_mut();
        let shard = shards.next().expect("must be at least 1 shard");
        let mut pages = shard.iter();
        let slots = pages.next().and_then(page::Shared::iter);
        iter::UniqueIter {
            shards,
            slots,
            pages,
        }
    }
}

impl<T> Default for Slab<T> {
    fn default() -> Self {
        Self::new()
    }
}

impl<T: fmt::Debug, C: cfg::Config> fmt::Debug for Slab<T, C> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Slab")
            .field("shards", &self.shards)
            .field("config", &C::debug())
            .finish()
    }
}

unsafe impl<T: Send, C: cfg::Config> Send for Slab<T, C> {}
unsafe impl<T: Sync, C: cfg::Config> Sync for Slab<T, C> {}

// === impl Entry ===

impl<'a, T, C: cfg::Config> Entry<'a, T, C> {
    /// Returns the key used to access the guard.
    pub fn key(&self) -> usize {
        self.key
    }

    #[inline(always)]
    fn value(&self) -> &T {
        unsafe {
            // Safety: this is always going to be valid, as it's projected from
            // the safe reference to `self.value` --- this is just to avoid
            // having to `expect` an option in the hot path when dereferencing.
            self.value.as_ref()
        }
    }
}

impl<'a, T, C: cfg::Config> std::ops::Deref for Entry<'a, T, C> {
    type Target = T;

    fn deref(&self) -> &Self::Target {
        self.value()
    }
}

impl<'a, T, C: cfg::Config> Drop for Entry<'a, T, C> {
    fn drop(&mut self) {
        let should_remove: bool = unsafe {
            // Safety: calling `slot::Guard::release` is unsafe, since the
            // `Guard` value contains a pointer to the slot that may outlive the
            // slab containing that slot. Here, the `Entry` guard owns a
            // borrowed reference to the shard containing that slot, which
            // ensures that the slot will not be dropped while this `Guard`
            // exists.
            self.inner.release()
        };
        if should_remove {
            self.shard.clear_after_release(self.key)
        }
    }
}

impl<'a, T, C> fmt::Debug for Entry<'a, T, C>
where
    T: fmt::Debug,
    C: cfg::Config,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(self.value(), f)
    }
}

impl<'a, T, C> PartialEq<T> for Entry<'a, T, C>
where
    T: PartialEq<T>,
    C: cfg::Config,
{
    fn eq(&self, other: &T) -> bool {
        self.value().eq(other)
    }
}

// === impl VacantEntry ===

impl<'a, T, C: cfg::Config> VacantEntry<'a, T, C> {
    /// Insert a value in the entry.
    ///
    /// To get the integer index at which this value will be inserted, use
    /// [`key`] prior to calling `insert`.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::Slab;
    /// let mut slab = Slab::new();
    ///
    /// let hello = {
    ///     let entry = slab.vacant_entry().unwrap();
    ///     let key = entry.key();
    ///
    ///     entry.insert((key, "hello"));
    ///     key
    /// };
    ///
    /// assert_eq!(hello, slab.get(hello).unwrap().0);
    /// assert_eq!("hello", slab.get(hello).unwrap().1);
    /// ```
    ///
    /// [`key`]: VacantEntry::key
    pub fn insert(mut self, val: T) {
        let value = unsafe {
            // Safety: this `VacantEntry` only lives as long as the `Slab` it was
            // borrowed from, so it cannot outlive the entry's slot.
            self.inner.value_mut()
        };
        debug_assert!(
            value.is_none(),
            "tried to insert to a slot that already had a value!"
        );
        *value = Some(val);
        let _released = unsafe {
            // Safety: again, this `VacantEntry` only lives as long as the
            // `Slab` it was borrowed from, so it cannot outlive the entry's
            // slot.
            self.inner.release()
        };
        debug_assert!(
            !_released,
            "removing a value before it was inserted should be a no-op"
        )
    }

    /// Return the integer index at which this entry will be inserted.
    ///
    /// A value stored in this entry will be associated with this key.
    ///
    /// # Examples
    ///
    /// ```
    /// # use sharded_slab::*;
    /// let mut slab = Slab::new();
    ///
    /// let hello = {
    ///     let entry = slab.vacant_entry().unwrap();
    ///     let key = entry.key();
    ///
    ///     entry.insert((key, "hello"));
    ///     key
    /// };
    ///
    /// assert_eq!(hello, slab.get(hello).unwrap().0);
    /// assert_eq!("hello", slab.get(hello).unwrap().1);
    /// ```
    pub fn key(&self) -> usize {
        self.key
    }
}

// === impl OwnedEntry ===

impl<T, C> OwnedEntry<T, C>
where
    C: cfg::Config,
{
    /// Returns the key used to access this guard.
    pub fn key(&self) -> usize {
        self.key
    }

    #[inline(always)]
    fn value(&self) -> &T {
        unsafe {
            // Safety: this is always going to be valid, as it's projected from
            // the safe reference to `self.value` --- this is just to avoid
            // having to `expect` an option in the hot path when dereferencing.
            self.value.as_ref()
        }
    }
}

impl<T, C> std::ops::Deref for OwnedEntry<T, C>
where
    C: cfg::Config,
{
    type Target = T;

    fn deref(&self) -> &Self::Target {
        self.value()
    }
}

impl<T, C> Drop for OwnedEntry<T, C>
where
    C: cfg::Config,
{
    fn drop(&mut self) {
        test_println!("drop OwnedEntry: try clearing data");
        let should_clear: bool = unsafe {
            // Safety: calling `slot::Guard::release` is unsafe, since the
            // `Guard` value contains a pointer to the slot that may outlive the
            // slab containing that slot. Here, the `OwnedEntry` owns an `Arc`
            // clone of the slab, which keeps it alive as long as the `OwnedEntry`
            // exists.
            self.inner.release()
        };
        if should_clear {
            let shard_idx = Tid::<C>::from_packed(self.key);
            test_println!("-> shard={:?}", shard_idx);
            if let Some(shard) = self.slab.shards.get(shard_idx.as_usize()) {
                shard.clear_after_release(self.key)
            } else {
                test_println!("-> shard={:?} does not exist! THIS IS A BUG", shard_idx);
                debug_assert!(std::thread::panicking(), "[internal error] tried to drop an `OwnedEntry` to a slot on a shard that never existed!");
            }
        }
    }
}

impl<T, C> fmt::Debug for OwnedEntry<T, C>
where
    T: fmt::Debug,
    C: cfg::Config,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(self.value(), f)
    }
}

impl<T, C> PartialEq<T> for OwnedEntry<T, C>
where
    T: PartialEq<T>,
    C: cfg::Config,
{
    fn eq(&self, other: &T) -> bool {
        *self.value() == *other
    }
}

unsafe impl<T, C> Sync for OwnedEntry<T, C>
where
    T: Sync,
    C: cfg::Config,
{
}

unsafe impl<T, C> Send for OwnedEntry<T, C>
where
    T: Sync,
    C: cfg::Config,
{
}

// === pack ===

pub(crate) trait Pack<C: cfg::Config>: Sized {
    // ====== provided by each implementation =================================

    /// The number of bits occupied by this type when packed into a usize.
    ///
    /// This must be provided to determine the number of bits into which to pack
    /// the type.
    const LEN: usize;
    /// The type packed on the less significant side of this type.
    ///
    /// If this type is packed into the least significant bits of a usize, this
    /// should be `()`, which occupies no bytes.
    ///
    /// This is used to calculate the shift amount for packing this value.
    type Prev: Pack<C>;

    // ====== calculated automatically ========================================

    /// A number consisting of `Self::LEN` 1 bits, starting at the least
    /// significant bit.
    ///
    /// This is the highest value this type can represent. This number is shifted
    /// left by `Self::SHIFT` bits to calculate this type's `MASK`.
    ///
    /// This is computed automatically based on `Self::LEN`.
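    ///
    /// For example, assuming a hypothetical `LEN` of 4: `shift` below is
    /// `1 << 3 = 0b1000`, so `BITS` is `0b1000 | 0b0111 = 0b1111`, the
    /// highest value representable in four bits.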
    const BITS: usize = {
        let shift = 1 << (Self::LEN - 1);
        shift | (shift - 1)
    };
    /// The number of bits to shift a number to pack it into a usize with other
    /// values.
    ///
    /// This is calculated automatically based on the `LEN` and `SHIFT` constants
    /// of the previous value.
    const SHIFT: usize = Self::Prev::SHIFT + Self::Prev::LEN;

    /// The mask to extract only this type from a packed `usize`.
    ///
    /// This is calculated by shifting `Self::BITS` left by `Self::SHIFT`.
    const MASK: usize = Self::BITS << Self::SHIFT;
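    ///
    /// Continuing the hypothetical example above (`LEN = 4`): if the previous
    /// value occupies the three least significant bits, so that `SHIFT = 3`,
    /// then `MASK = 0b1111 << 3 = 0b111_1000`.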

    fn as_usize(&self) -> usize;
    fn from_usize(val: usize) -> Self;

    #[inline(always)]
    fn pack(&self, to: usize) -> usize {
        let value = self.as_usize();
        debug_assert!(value <= Self::BITS);

        (to & !Self::MASK) | (value << Self::SHIFT)
    }

    #[inline(always)]
    fn from_packed(from: usize) -> Self {
        let value = (from & Self::MASK) >> Self::SHIFT;
        debug_assert!(value <= Self::BITS);
        Self::from_usize(value)
    }
}

impl<C: cfg::Config> Pack<C> for () {
    const BITS: usize = 0;
    const LEN: usize = 0;
    const SHIFT: usize = 0;
    const MASK: usize = 0;

    type Prev = ();

    fn as_usize(&self) -> usize {
        unreachable!()
    }
    fn from_usize(_val: usize) -> Self {
        unreachable!()
    }

    fn pack(&self, _to: usize) -> usize {
        unreachable!()
    }

    fn from_packed(_from: usize) -> Self {
        unreachable!()
    }
}

#[cfg(test)]
pub(crate) use self::tests::util as test_util;

#[cfg(test)]
mod tests;