1//! Atomic types
2//!
3//! Atomic types provide primitive shared-memory communication between
4//! threads, and are the building blocks of other concurrent
5//! types.
6//!
7//! This module defines atomic versions of a select number of primitive
8//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
9//! [`AtomicI8`], [`AtomicU16`], etc.
10//! Atomic types present operations that, when used correctly, synchronize
11//! updates between threads.
12//!
13//! Atomic variables are safe to share between threads (they implement [`Sync`])
14//! but they do not themselves provide the mechanism for sharing and follow the
15//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
16//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
17//! atomically-reference-counted shared pointer).
18//!
19//! [arc]: ../../../std/sync/struct.Arc.html
20//!
21//! Atomic types may be stored in static variables, initialized using
22//! the constant initializers like [`AtomicBool::new`]. Atomic statics
23//! are often used for lazy global initialization.
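//!
//! For example, a one-shot initialization flag kept in a static (a minimal sketch; note that
//! `Relaxed` only synchronizes the flag itself, so publishing any data initialized this way
//! would need stronger orderings):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INITIALIZED: AtomicBool = AtomicBool::new(false);
//!
//! // Exactly one caller observes `false` and flips it to `true`; that caller
//! // would perform the one-time setup.
//! if INITIALIZED
//!     .compare_exchange(false, true, Ordering::Relaxed, Ordering::Relaxed)
//!     .is_ok()
//! {
//!     // run one-time setup here
//! }
//! assert!(INITIALIZED.load(Ordering::Relaxed));
//! ```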
24//!
25//! ## Memory model for atomic accesses
26//!
27//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically `atomic_ref`.
28//! Basically, creating a *shared reference* to one of the Rust atomic types corresponds to creating
29//! an `atomic_ref` in C++; the `atomic_ref` is destroyed when the lifetime of the shared reference
30//! ends. A Rust atomic type that is exclusively owned or behind a mutable reference does *not*
31//! correspond to an “atomic object” in C++, since the underlying primitive can be mutably accessed,
32//! for example with `get_mut`, to perform non-atomic operations.
33//!
34//! [cpp]: https://en.cppreference.com/w/cpp/atomic
35//!
36//! Each method takes an [`Ordering`] which represents the strength of
37//! the memory barrier for that operation. These orderings are the
38//! same as the [C++20 atomic orderings][1]. For more information see the [nomicon][2].
39//!
40//! [1]: https://en.cppreference.com/w/cpp/atomic/memory_order
41//! [2]: ../../../nomicon/atomics.html
42//!
43//! Since C++ does not support mixing atomic and non-atomic accesses, or non-synchronized
44//! different-sized accesses to the same data, Rust does not support those operations either.
45//! Note that both of those restrictions only apply if the accesses are non-synchronized.
46//!
47//! ```rust,no_run undefined_behavior
48//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
49//! use std::mem::transmute;
50//! use std::thread;
51//!
52//! let atomic = AtomicU16::new(0);
53//!
54//! thread::scope(|s| {
55//! // This is UB: mixing atomic and non-atomic accesses
56//! s.spawn(|| atomic.store(1, Ordering::Relaxed));
57//! s.spawn(|| unsafe { atomic.as_ptr().write(2) });
58//! });
59//!
60//! thread::scope(|s| {
61//! // This is UB: even reads are not allowed to be mixed
62//! s.spawn(|| atomic.load(Ordering::Relaxed));
63//! s.spawn(|| unsafe { atomic.as_ptr().read() });
64//! });
65//!
66//! thread::scope(|s| {
67//! // This is fine, `join` synchronizes the code in a way such that atomic
68//! // and non-atomic accesses can't happen "at the same time"
69//! let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
70//! handle.join().unwrap();
71//! s.spawn(|| unsafe { atomic.as_ptr().write(2) });
72//! });
73//!
74//! thread::scope(|s| {
75//! // This is UB: using different-sized atomic accesses to the same data
76//! s.spawn(|| atomic.store(1, Ordering::Relaxed));
77//! s.spawn(|| unsafe {
78//! let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
79//! differently_sized.store(2, Ordering::Relaxed);
80//! });
81//! });
82//!
83//! thread::scope(|s| {
84//! // This is fine, `join` synchronizes the code in a way such that
85//! // differently-sized accesses can't happen "at the same time"
86//! let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
87//! handle.join().unwrap();
88//! s.spawn(|| unsafe {
89//! let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
90//! differently_sized.store(2, Ordering::Relaxed);
91//! });
92//! });
93//! ```
94//!
95//! # Portability
96//!
97//! All atomic types in this module are guaranteed to be [lock-free] if they're
98//! available. This means they don't internally acquire a global mutex. Atomic
99//! types and operations are not guaranteed to be wait-free. This means that
100//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
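//!
//! As an illustration (a sketch, not necessarily how any particular platform implements it),
//! a read-modify-write operation such as `fetch_or` can be emulated with a compare-exchange
//! loop:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! // Retry until the compare-exchange succeeds; each retry re-applies the OR to the
//! // freshly observed value. Real code should simply call `fetch_or`.
//! fn fetch_or_via_cas(a: &AtomicUsize, val: usize, order: Ordering) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         match a.compare_exchange_weak(old, old | val, order, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let x = AtomicUsize::new(0b01);
//! assert_eq!(fetch_or_via_cas(&x, 0b10, Ordering::Relaxed), 0b01);
//! assert_eq!(x.load(Ordering::Relaxed), 0b11);
//! ```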
101//!
102//! Atomic operations may be implemented at the instruction layer with
103//! larger-size atomics. For example some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! have an impact on the correctness of code; it's just something to be aware of.
106//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
110//!
111//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
112//! `AtomicI64` types.
//! * ARM platforms like `armv5te` that don't target Linux only provide `load`
//! and `store` operations, and do not support Compare and Swap (CAS)
//! operations, such as `swap`, `fetch_add`, etc. Additionally, on Linux,
//! these CAS operations are implemented via [operating system support], which
//! may come with a performance penalty.
118//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
119//! and do not support Compare and Swap (CAS) operations, such as `swap`,
120//! `fetch_add`, etc.
121//!
122//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
123//!
124//! Note that future platforms may be added that also do not have support for
125//! some atomic operations. Maximally portable code will want to be careful
126//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
127//! generally the most portable, but even then they're not available everywhere.
128//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
129//! `core` does not.
130//!
131//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
132//! compile based on the target's supported bit widths. It is a key-value
133//! option set for each supported size, with values "8", "16", "32", "64",
134//! "128", and "ptr" for pointer-sized atomics.
135//!
136//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
137//!
138//! # Atomic accesses to read-only memory
139//!
140//! In general, *all* atomic accesses on read-only memory are Undefined Behavior. For instance, attempting
141//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
142//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
143//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
144//! on read-only memory.
145//!
146//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
147//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
148//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
149//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
150//! is read-write; the only exceptions are memory created by `const` items or `static` items without
151//! interior mutability, and memory that was specifically marked as read-only by the operating
152//! system via platform-specific APIs.
153//!
154//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
155//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
156//! Undefined Behavior. The exact size limit for what makes a load "sufficiently small" varies
157//! depending on the target:
158//!
159//! | `target_arch` | Size limit |
160//! |---------------|---------|
161//! | `x86`, `arm`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
162//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
163//!
//! Atomic loads that are larger than this limit, as well as atomic loads with orderings other
//! than `Relaxed`, as well as *all* atomic loads on targets not listed in the table, might still
//! work on read-only memory under certain conditions, but that is not a stable guarantee and
//! should not be relied upon.
168//!
169//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
170//! acquire fence instead.
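//!
//! For example (a sketch of that pattern):
//!
//! ```
//! use std::sync::atomic::{fence, AtomicUsize, Ordering};
//!
//! // Perform the load itself with `Relaxed` (allowed on read-only memory for
//! // sufficiently small types) and establish acquire ordering with a fence.
//! fn load_acquire_via_fence(a: &AtomicUsize) -> usize {
//!     let value = a.load(Ordering::Relaxed);
//!     fence(Ordering::Acquire);
//!     value
//! }
//!
//! let x = AtomicUsize::new(5);
//! assert_eq!(load_acquire_via_fence(&x), 5);
//! ```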
171//!
172//! # Examples
173//!
174//! A simple spinlock:
175//!
176//! ```
177//! use std::sync::Arc;
178//! use std::sync::atomic::{AtomicUsize, Ordering};
179//! use std::{hint, thread};
180//!
181//! fn main() {
182//! let spinlock = Arc::new(AtomicUsize::new(1));
183//!
184//! let spinlock_clone = Arc::clone(&spinlock);
185//!
186//! let thread = thread::spawn(move|| {
187//! spinlock_clone.store(0, Ordering::Release);
188//! });
189//!
190//! // Wait for the other thread to release the lock
191//! while spinlock.load(Ordering::Acquire) != 0 {
192//! hint::spin_loop();
193//! }
194//!
195//! if let Err(panic) = thread.join() {
196//! println!("Thread had an error: {panic:?}");
197//! }
198//! }
199//! ```
200//!
201//! Keep a global count of live threads:
202//!
203//! ```
204//! use std::sync::atomic::{AtomicUsize, Ordering};
205//!
206//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
207//!
208//! // Note that Relaxed ordering doesn't synchronize anything
209//! // except the global thread counter itself.
210//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may no longer be accurate at the moment of printing
//! // because some other thread may have changed the static value already.
213//! println!("live threads: {}", old_thread_count + 1);
214//! ```
215
216#![stable(feature = "rust1", since = "1.0.0")]
217#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
218#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
219#![rustc_diagnostic_item = "atomic_mod"]
220// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
221// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
222// are just normal values that get loaded/stored, but not dereferenced.
223#![allow(clippy::not_unsafe_ptr_arg_deref)]
224
225use self::Ordering::*;
226
227use crate::cell::UnsafeCell;
228use crate::fmt;
229use crate::intrinsics;
230
231use crate::hint::spin_loop;
232
233// Some architectures don't have byte-sized atomics, which results in LLVM
234// emulating them using a LL/SC loop. However for AtomicBool we can take
235// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
236// instead, which LLVM can emulate using a larger atomic OR/AND operation.
237//
238// This list should only contain architectures which have word-sized atomic-or/
239// atomic-and instructions but don't natively support byte-sized atomics.
240#[cfg(target_has_atomic = "8")]
241const EMULATE_ATOMIC_BOOL: bool =
242 cfg!(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"));
243
244/// A boolean type which can be safely shared between threads.
245///
246/// This type has the same size, alignment, and bit validity as a [`bool`].
247///
248/// **Note**: This type is only available on platforms that support atomic
249/// loads and stores of `u8`.
250#[cfg(target_has_atomic_load_store = "8")]
251#[stable(feature = "rust1", since = "1.0.0")]
252#[rustc_diagnostic_item = "AtomicBool"]
253#[repr(C, align(1))]
254pub struct AtomicBool {
255 v: UnsafeCell<u8>,
256}
257
258#[cfg(target_has_atomic_load_store = "8")]
259#[stable(feature = "rust1", since = "1.0.0")]
260impl Default for AtomicBool {
261 /// Creates an `AtomicBool` initialized to `false`.
262 #[inline]
263 fn default() -> Self {
264 Self::new(false)
265 }
266}
267
268// Send is implicitly implemented for AtomicBool.
269#[cfg(target_has_atomic_load_store = "8")]
270#[stable(feature = "rust1", since = "1.0.0")]
271unsafe impl Sync for AtomicBool {}
272
273/// A raw pointer type which can be safely shared between threads.
274///
275/// This type has the same size and bit validity as a `*mut T`.
276///
277/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of pointers. Its size depends on the target's pointer size.
279#[cfg(target_has_atomic_load_store = "ptr")]
280#[stable(feature = "rust1", since = "1.0.0")]
281#[cfg_attr(not(test), rustc_diagnostic_item = "AtomicPtr")]
282#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
283#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
284#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
285pub struct AtomicPtr<T> {
286 p: UnsafeCell<*mut T>,
287}
288
289#[cfg(target_has_atomic_load_store = "ptr")]
290#[stable(feature = "rust1", since = "1.0.0")]
291impl<T> Default for AtomicPtr<T> {
292 /// Creates a null `AtomicPtr<T>`.
293 fn default() -> AtomicPtr<T> {
294 AtomicPtr::new(crate::ptr::null_mut())
295 }
296}
297
298#[cfg(target_has_atomic_load_store = "ptr")]
299#[stable(feature = "rust1", since = "1.0.0")]
300unsafe impl<T> Send for AtomicPtr<T> {}
301#[cfg(target_has_atomic_load_store = "ptr")]
302#[stable(feature = "rust1", since = "1.0.0")]
303unsafe impl<T> Sync for AtomicPtr<T> {}
304
305/// Atomic memory orderings
306///
307/// Memory orderings specify the way atomic operations synchronize memory.
308/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
309/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
310/// operations synchronize other memory while additionally preserving a total order of such
311/// operations across all threads.
312///
313/// Rust's memory orderings are [the same as those of
314/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
315///
316/// For more information see the [nomicon].
317///
318/// [nomicon]: ../../../nomicon/atomics.html
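///
/// # Examples
///
/// A [`Release`] store paired with an [`Acquire`] load of the same atomic makes the
/// writer's earlier memory operations visible to the reader (a minimal sketch):
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// std::thread::scope(|s| {
///     s.spawn(|| {
///         DATA.store(42, Ordering::Relaxed);
///         READY.store(true, Ordering::Release); // everything above is published
///     });
///     s.spawn(|| {
///         if READY.load(Ordering::Acquire) {
///             // Having observed `true`, this thread also sees the write to `DATA`.
///             assert_eq!(DATA.load(Ordering::Relaxed), 42);
///         }
///     });
/// });
/// ```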
319#[stable(feature = "rust1", since = "1.0.0")]
320#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
321#[non_exhaustive]
322#[rustc_diagnostic_item = "Ordering"]
323pub enum Ordering {
324 /// No ordering constraints, only atomic operations.
325 ///
326 /// Corresponds to [`memory_order_relaxed`] in C++20.
327 ///
328 /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
329 #[stable(feature = "rust1", since = "1.0.0")]
330 Relaxed,
331 /// When coupled with a store, all previous operations become ordered
332 /// before any load of this value with [`Acquire`] (or stronger) ordering.
333 /// In particular, all previous writes become visible to all threads
334 /// that perform an [`Acquire`] (or stronger) load of this value.
335 ///
336 /// Notice that using this ordering for an operation that combines loads
337 /// and stores leads to a [`Relaxed`] load operation!
338 ///
339 /// This ordering is only applicable for operations that can perform a store.
340 ///
341 /// Corresponds to [`memory_order_release`] in C++20.
342 ///
343 /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
344 #[stable(feature = "rust1", since = "1.0.0")]
345 Release,
346 /// When coupled with a load, if the loaded value was written by a store operation with
347 /// [`Release`] (or stronger) ordering, then all subsequent operations
348 /// become ordered after that store. In particular, all subsequent loads will see data
349 /// written before the store.
350 ///
351 /// Notice that using this ordering for an operation that combines loads
352 /// and stores leads to a [`Relaxed`] store operation!
353 ///
354 /// This ordering is only applicable for operations that can perform a load.
355 ///
356 /// Corresponds to [`memory_order_acquire`] in C++20.
357 ///
358 /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
359 #[stable(feature = "rust1", since = "1.0.0")]
360 Acquire,
361 /// Has the effects of both [`Acquire`] and [`Release`] together:
362 /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
363 ///
364 /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
365 /// not performing any store and hence it has just [`Acquire`] ordering. However,
366 /// `AcqRel` will never perform [`Relaxed`] accesses.
367 ///
368 /// This ordering is only applicable for operations that combine both loads and stores.
369 ///
370 /// Corresponds to [`memory_order_acq_rel`] in C++20.
371 ///
372 /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
373 #[stable(feature = "rust1", since = "1.0.0")]
374 AcqRel,
375 /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
376 /// operations, respectively) with the additional guarantee that all threads see all
377 /// sequentially consistent operations in the same order.
378 ///
379 /// Corresponds to [`memory_order_seq_cst`] in C++20.
380 ///
381 /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
382 #[stable(feature = "rust1", since = "1.0.0")]
383 SeqCst,
384}
385
386/// An [`AtomicBool`] initialized to `false`.
387#[cfg(target_has_atomic_load_store = "8")]
388#[stable(feature = "rust1", since = "1.0.0")]
389#[deprecated(
390 since = "1.34.0",
391 note = "the `new` function is now preferred",
392 suggestion = "AtomicBool::new(false)"
393)]
394pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);
395
396#[cfg(target_has_atomic_load_store = "8")]
397impl AtomicBool {
398 /// Creates a new `AtomicBool`.
399 ///
400 /// # Examples
401 ///
402 /// ```
403 /// use std::sync::atomic::AtomicBool;
404 ///
405 /// let atomic_true = AtomicBool::new(true);
406 /// let atomic_false = AtomicBool::new(false);
407 /// ```
408 #[inline]
409 #[stable(feature = "rust1", since = "1.0.0")]
410 #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
411 #[must_use]
412 pub const fn new(v: bool) -> AtomicBool {
413 AtomicBool { v: UnsafeCell::new(v as u8) }
414 }
415
416 /// Creates a new `AtomicBool` from a pointer.
417 ///
418 /// # Examples
419 ///
420 /// ```
421 /// use std::sync::atomic::{self, AtomicBool};
422 ///
423 /// // Get a pointer to an allocated value
424 /// let ptr: *mut bool = Box::into_raw(Box::new(false));
425 ///
426 /// assert!(ptr.cast::<AtomicBool>().is_aligned());
427 ///
428 /// {
429 /// // Create an atomic view of the allocated value
430 /// let atomic = unsafe { AtomicBool::from_ptr(ptr) };
431 ///
432 /// // Use `atomic` for atomic operations, possibly share it with other threads
433 /// atomic.store(true, atomic::Ordering::Relaxed);
434 /// }
435 ///
436 /// // It's ok to non-atomically access the value behind `ptr`,
437 /// // since the reference to the atomic ended its lifetime in the block above
438 /// assert_eq!(unsafe { *ptr }, true);
439 ///
440 /// // Deallocate the value
441 /// unsafe { drop(Box::from_raw(ptr)) }
442 /// ```
443 ///
444 /// # Safety
445 ///
446 /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
447 /// be bigger than `align_of::<bool>()`).
448 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
449 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
450 /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
451 /// without synchronization.
452 ///
453 /// [valid]: crate::ptr#safety
454 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
455 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
456 #[rustc_const_unstable(feature = "const_atomic_from_ptr", issue = "108652")]
457 pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
458 // SAFETY: guaranteed by the caller
459 unsafe { &*ptr.cast() }
460 }
461
462 /// Returns a mutable reference to the underlying [`bool`].
463 ///
464 /// This is safe because the mutable reference guarantees that no other threads are
465 /// concurrently accessing the atomic data.
466 ///
467 /// # Examples
468 ///
469 /// ```
470 /// use std::sync::atomic::{AtomicBool, Ordering};
471 ///
472 /// let mut some_bool = AtomicBool::new(true);
473 /// assert_eq!(*some_bool.get_mut(), true);
474 /// *some_bool.get_mut() = false;
475 /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
476 /// ```
477 #[inline]
478 #[stable(feature = "atomic_access", since = "1.15.0")]
479 pub fn get_mut(&mut self) -> &mut bool {
480 // SAFETY: the mutable reference guarantees unique ownership.
481 unsafe { &mut *(self.v.get() as *mut bool) }
482 }
483
484 /// Get atomic access to a `&mut bool`.
485 ///
486 /// # Examples
487 ///
488 /// ```
489 /// #![feature(atomic_from_mut)]
490 /// use std::sync::atomic::{AtomicBool, Ordering};
491 ///
492 /// let mut some_bool = true;
493 /// let a = AtomicBool::from_mut(&mut some_bool);
494 /// a.store(false, Ordering::Relaxed);
495 /// assert_eq!(some_bool, false);
496 /// ```
497 #[inline]
498 #[cfg(target_has_atomic_equal_alignment = "8")]
499 #[unstable(feature = "atomic_from_mut", issue = "76314")]
500 pub fn from_mut(v: &mut bool) -> &mut Self {
501 // SAFETY: the mutable reference guarantees unique ownership, and
502 // alignment of both `bool` and `Self` is 1.
503 unsafe { &mut *(v as *mut bool as *mut Self) }
504 }
505
506 /// Get non-atomic access to a `&mut [AtomicBool]` slice.
507 ///
508 /// This is safe because the mutable reference guarantees that no other threads are
509 /// concurrently accessing the atomic data.
510 ///
511 /// # Examples
512 ///
513 /// ```
514 /// #![feature(atomic_from_mut)]
515 /// # #![cfg_attr(bootstrap, feature(inline_const))]
516 /// use std::sync::atomic::{AtomicBool, Ordering};
517 ///
518 /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
519 ///
520 /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
521 /// assert_eq!(view, [false; 10]);
522 /// view[..5].copy_from_slice(&[true; 5]);
523 ///
524 /// std::thread::scope(|s| {
525 /// for t in &some_bools[..5] {
526 /// s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
527 /// }
528 ///
529 /// for f in &some_bools[5..] {
530 /// s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
531 /// }
532 /// });
533 /// ```
534 #[inline]
535 #[unstable(feature = "atomic_from_mut", issue = "76314")]
536 pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
537 // SAFETY: the mutable reference guarantees unique ownership.
538 unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
539 }
540
541 /// Get atomic access to a `&mut [bool]` slice.
542 ///
543 /// # Examples
544 ///
545 /// ```
546 /// #![feature(atomic_from_mut)]
547 /// use std::sync::atomic::{AtomicBool, Ordering};
548 ///
549 /// let mut some_bools = [false; 10];
550 /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
551 /// std::thread::scope(|s| {
552 /// for i in 0..a.len() {
553 /// s.spawn(move || a[i].store(true, Ordering::Relaxed));
554 /// }
555 /// });
556 /// assert_eq!(some_bools, [true; 10]);
557 /// ```
558 #[inline]
559 #[cfg(target_has_atomic_equal_alignment = "8")]
560 #[unstable(feature = "atomic_from_mut", issue = "76314")]
561 pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
562 // SAFETY: the mutable reference guarantees unique ownership, and
563 // alignment of both `bool` and `Self` is 1.
564 unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
565 }
566
567 /// Consumes the atomic and returns the contained value.
568 ///
569 /// This is safe because passing `self` by value guarantees that no other threads are
570 /// concurrently accessing the atomic data.
571 ///
572 /// # Examples
573 ///
574 /// ```
575 /// use std::sync::atomic::AtomicBool;
576 ///
577 /// let some_bool = AtomicBool::new(true);
578 /// assert_eq!(some_bool.into_inner(), true);
579 /// ```
580 #[inline]
581 #[stable(feature = "atomic_access", since = "1.15.0")]
582 #[rustc_const_stable(feature = "const_atomic_into_inner", since = "CURRENT_RUSTC_VERSION")]
583 pub const fn into_inner(self) -> bool {
584 self.v.primitive_into_inner() != 0
585 }
586
587 /// Loads a value from the bool.
588 ///
589 /// `load` takes an [`Ordering`] argument which describes the memory ordering
590 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
591 ///
592 /// # Panics
593 ///
594 /// Panics if `order` is [`Release`] or [`AcqRel`].
595 ///
596 /// # Examples
597 ///
598 /// ```
599 /// use std::sync::atomic::{AtomicBool, Ordering};
600 ///
601 /// let some_bool = AtomicBool::new(true);
602 ///
603 /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
604 /// ```
605 #[inline]
606 #[stable(feature = "rust1", since = "1.0.0")]
607 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
608 pub fn load(&self, order: Ordering) -> bool {
609 // SAFETY: any data races are prevented by atomic intrinsics and the raw
610 // pointer passed in is valid because we got it from a reference.
611 unsafe { atomic_load(self.v.get(), order) != 0 }
612 }
613
614 /// Stores a value into the bool.
615 ///
616 /// `store` takes an [`Ordering`] argument which describes the memory ordering
617 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
618 ///
619 /// # Panics
620 ///
621 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
622 ///
623 /// # Examples
624 ///
625 /// ```
626 /// use std::sync::atomic::{AtomicBool, Ordering};
627 ///
628 /// let some_bool = AtomicBool::new(true);
629 ///
630 /// some_bool.store(false, Ordering::Relaxed);
631 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
632 /// ```
633 #[inline]
634 #[stable(feature = "rust1", since = "1.0.0")]
635 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
636 pub fn store(&self, val: bool, order: Ordering) {
637 // SAFETY: any data races are prevented by atomic intrinsics and the raw
638 // pointer passed in is valid because we got it from a reference.
639 unsafe {
640 atomic_store(self.v.get(), val as u8, order);
641 }
642 }
643
644 /// Stores a value into the bool, returning the previous value.
645 ///
646 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
647 /// of this operation. All ordering modes are possible. Note that using
648 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
649 /// using [`Release`] makes the load part [`Relaxed`].
650 ///
651 /// **Note:** This method is only available on platforms that support atomic
652 /// operations on `u8`.
653 ///
654 /// # Examples
655 ///
656 /// ```
657 /// use std::sync::atomic::{AtomicBool, Ordering};
658 ///
659 /// let some_bool = AtomicBool::new(true);
660 ///
661 /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
662 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
663 /// ```
664 #[inline]
665 #[stable(feature = "rust1", since = "1.0.0")]
666 #[cfg(target_has_atomic = "8")]
667 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
668 pub fn swap(&self, val: bool, order: Ordering) -> bool {
669 if EMULATE_ATOMIC_BOOL {
670 if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
671 } else {
672 // SAFETY: data races are prevented by atomic intrinsics.
673 unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
674 }
675 }
676
677 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
678 ///
679 /// The return value is always the previous value. If it is equal to `current`, then the value
680 /// was updated.
681 ///
682 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
683 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
684 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
685 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
686 /// happens, and using [`Release`] makes the load part [`Relaxed`].
687 ///
688 /// **Note:** This method is only available on platforms that support atomic
689 /// operations on `u8`.
690 ///
691 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
692 ///
693 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
694 /// memory orderings:
695 ///
696 /// Original | Success | Failure
697 /// -------- | ------- | -------
698 /// Relaxed | Relaxed | Relaxed
699 /// Acquire | Acquire | Acquire
700 /// Release | Release | Relaxed
701 /// AcqRel | AcqRel | Acquire
702 /// SeqCst | SeqCst | SeqCst
703 ///
704 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
705 /// which allows the compiler to generate better assembly code when the compare and swap
706 /// is used in a loop.
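    ///
    /// For example, a call using [`AcqRel`] can be migrated as follows (per the table above,
    /// the failure ordering becomes [`Acquire`]):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let flag = AtomicBool::new(true);
    /// // Before: let old = flag.compare_and_swap(true, false, Ordering::AcqRel);
    /// // After: keep `AcqRel` for success, use `Acquire` for failure, and collapse
    /// // the `Result` back into the previous value.
    /// let old = flag
    ///     .compare_exchange(true, false, Ordering::AcqRel, Ordering::Acquire)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(old, true);
    /// assert_eq!(flag.load(Ordering::Relaxed), false);
    /// ```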
707 ///
708 /// # Examples
709 ///
710 /// ```
711 /// use std::sync::atomic::{AtomicBool, Ordering};
712 ///
713 /// let some_bool = AtomicBool::new(true);
714 ///
715 /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
716 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
717 ///
718 /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
719 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
720 /// ```
721 #[inline]
722 #[stable(feature = "rust1", since = "1.0.0")]
723 #[deprecated(
724 since = "1.50.0",
725 note = "Use `compare_exchange` or `compare_exchange_weak` instead"
726 )]
727 #[cfg(target_has_atomic = "8")]
728 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
729 pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
730 match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
731 Ok(x) => x,
732 Err(x) => x,
733 }
734 }
735
736 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
737 ///
738 /// The return value is a result indicating whether the new value was written and containing
739 /// the previous value. On success this value is guaranteed to be equal to `current`.
740 ///
741 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
742 /// ordering of this operation. `success` describes the required ordering for the
743 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
744 /// `failure` describes the required ordering for the load operation that takes place when
745 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
746 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
747 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
748 ///
749 /// **Note:** This method is only available on platforms that support atomic
750 /// operations on `u8`.
751 ///
752 /// # Examples
753 ///
754 /// ```
755 /// use std::sync::atomic::{AtomicBool, Ordering};
756 ///
757 /// let some_bool = AtomicBool::new(true);
758 ///
759 /// assert_eq!(some_bool.compare_exchange(true,
760 /// false,
761 /// Ordering::Acquire,
762 /// Ordering::Relaxed),
763 /// Ok(true));
764 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
765 ///
766 /// assert_eq!(some_bool.compare_exchange(true, true,
767 /// Ordering::SeqCst,
768 /// Ordering::Acquire),
769 /// Err(false));
770 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
771 /// ```
772 #[inline]
773 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
774 #[doc(alias = "compare_and_swap")]
775 #[cfg(target_has_atomic = "8")]
776 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
777 pub fn compare_exchange(
778 &self,
779 current: bool,
780 new: bool,
781 success: Ordering,
782 failure: Ordering,
783 ) -> Result<bool, bool> {
784 if EMULATE_ATOMIC_BOOL {
785 // Pick the strongest ordering from success and failure.
786 let order = match (success, failure) {
787 (SeqCst, _) => SeqCst,
788 (_, SeqCst) => SeqCst,
789 (AcqRel, _) => AcqRel,
790 (_, AcqRel) => {
791 panic!("there is no such thing as an acquire-release failure ordering")
792 }
793 (Release, Acquire) => AcqRel,
794 (Acquire, _) => Acquire,
795 (_, Acquire) => Acquire,
796 (Release, Relaxed) => Release,
797 (_, Release) => panic!("there is no such thing as a release failure ordering"),
798 (Relaxed, Relaxed) => Relaxed,
799 };
800 let old = if current == new {
801 // This is a no-op, but we still need to perform the operation
802 // for memory ordering reasons.
803 self.fetch_or(false, order)
804 } else {
805 // This sets the value to the new one and returns the old one.
806 self.swap(new, order)
807 };
808 if old == current { Ok(old) } else { Err(old) }
809 } else {
810 // SAFETY: data races are prevented by atomic intrinsics.
811 match unsafe {
812 atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
813 } {
814 Ok(x) => Ok(x != 0),
815 Err(x) => Err(x != 0),
816 }
817 }
818 }
819
820 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
821 ///
822 /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
823 /// comparison succeeds, which can result in more efficient code on some platforms. The
824 /// return value is a result indicating whether the new value was written and containing the
825 /// previous value.
826 ///
827 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
828 /// ordering of this operation. `success` describes the required ordering for the
829 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
830 /// `failure` describes the required ordering for the load operation that takes place when
831 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
832 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
833 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
834 ///
835 /// **Note:** This method is only available on platforms that support atomic
836 /// operations on `u8`.
837 ///
838 /// # Examples
839 ///
840 /// ```
841 /// use std::sync::atomic::{AtomicBool, Ordering};
842 ///
843 /// let val = AtomicBool::new(false);
844 ///
845 /// let new = true;
846 /// let mut old = val.load(Ordering::Relaxed);
847 /// loop {
848 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
849 /// Ok(_) => break,
850 /// Err(x) => old = x,
851 /// }
852 /// }
853 /// ```
854 #[inline]
855 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
856 #[doc(alias = "compare_and_swap")]
857 #[cfg(target_has_atomic = "8")]
858 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
859 pub fn compare_exchange_weak(
860 &self,
861 current: bool,
862 new: bool,
863 success: Ordering,
864 failure: Ordering,
865 ) -> Result<bool, bool> {
866 if EMULATE_ATOMIC_BOOL {
867 return self.compare_exchange(current, new, success, failure);
868 }
869
870 // SAFETY: data races are prevented by atomic intrinsics.
871 match unsafe {
872 atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
873 } {
874 Ok(x) => Ok(x != 0),
875 Err(x) => Err(x != 0),
876 }
877 }
878
879 /// Logical "and" with a boolean value.
880 ///
881 /// Performs a logical "and" operation on the current value and the argument `val`, and sets
882 /// the new value to the result.
883 ///
884 /// Returns the previous value.
885 ///
886 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
887 /// of this operation. All ordering modes are possible. Note that using
888 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
889 /// using [`Release`] makes the load part [`Relaxed`].
890 ///
891 /// **Note:** This method is only available on platforms that support atomic
892 /// operations on `u8`.
893 ///
894 /// # Examples
895 ///
896 /// ```
897 /// use std::sync::atomic::{AtomicBool, Ordering};
898 ///
899 /// let foo = AtomicBool::new(true);
900 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
901 /// assert_eq!(foo.load(Ordering::SeqCst), false);
902 ///
903 /// let foo = AtomicBool::new(true);
904 /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
905 /// assert_eq!(foo.load(Ordering::SeqCst), true);
906 ///
907 /// let foo = AtomicBool::new(false);
908 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
909 /// assert_eq!(foo.load(Ordering::SeqCst), false);
910 /// ```
911 #[inline]
912 #[stable(feature = "rust1", since = "1.0.0")]
913 #[cfg(target_has_atomic = "8")]
914 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
915 pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
916 // SAFETY: data races are prevented by atomic intrinsics.
917 unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
918 }
919
920 /// Logical "nand" with a boolean value.
921 ///
922 /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
923 /// the new value to the result.
924 ///
925 /// Returns the previous value.
926 ///
927 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
928 /// of this operation. All ordering modes are possible. Note that using
929 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
930 /// using [`Release`] makes the load part [`Relaxed`].
931 ///
932 /// **Note:** This method is only available on platforms that support atomic
933 /// operations on `u8`.
934 ///
935 /// # Examples
936 ///
937 /// ```
938 /// use std::sync::atomic::{AtomicBool, Ordering};
939 ///
940 /// let foo = AtomicBool::new(true);
941 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
942 /// assert_eq!(foo.load(Ordering::SeqCst), true);
943 ///
944 /// let foo = AtomicBool::new(true);
945 /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
946 /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
947 /// assert_eq!(foo.load(Ordering::SeqCst), false);
948 ///
949 /// let foo = AtomicBool::new(false);
950 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
951 /// assert_eq!(foo.load(Ordering::SeqCst), true);
952 /// ```
953 #[inline]
954 #[stable(feature = "rust1", since = "1.0.0")]
955 #[cfg(target_has_atomic = "8")]
956 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
957 pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
958 // We can't use atomic_nand here because it can result in a bool with
959 // an invalid value. This happens because the atomic operation is done
960 // with an 8-bit integer internally, which would set the upper 7 bits.
961 // So we just use fetch_xor or swap instead.
962 if val {
963 // !(x & true) == !x
964 // We must invert the bool.
965 self.fetch_xor(true, order)
966 } else {
967 // !(x & false) == true
968 // We must set the bool to true.
969 self.swap(true, order)
970 }
971 }
972
973 /// Logical "or" with a boolean value.
974 ///
975 /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
976 /// new value to the result.
977 ///
978 /// Returns the previous value.
979 ///
980 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
981 /// of this operation. All ordering modes are possible. Note that using
982 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
983 /// using [`Release`] makes the load part [`Relaxed`].
984 ///
985 /// **Note:** This method is only available on platforms that support atomic
986 /// operations on `u8`.
987 ///
988 /// # Examples
989 ///
990 /// ```
991 /// use std::sync::atomic::{AtomicBool, Ordering};
992 ///
993 /// let foo = AtomicBool::new(true);
994 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
995 /// assert_eq!(foo.load(Ordering::SeqCst), true);
996 ///
997 /// let foo = AtomicBool::new(true);
998 /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
999 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1000 ///
1001 /// let foo = AtomicBool::new(false);
1002 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1003 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1004 /// ```
1005 #[inline]
1006 #[stable(feature = "rust1", since = "1.0.0")]
1007 #[cfg(target_has_atomic = "8")]
1008 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1009 pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1010 // SAFETY: data races are prevented by atomic intrinsics.
1011 unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
1012 }
1013
1014 /// Logical "xor" with a boolean value.
1015 ///
1016 /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1017 /// the new value to the result.
1018 ///
1019 /// Returns the previous value.
1020 ///
1021 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1022 /// of this operation. All ordering modes are possible. Note that using
1023 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1024 /// using [`Release`] makes the load part [`Relaxed`].
1025 ///
1026 /// **Note:** This method is only available on platforms that support atomic
1027 /// operations on `u8`.
1028 ///
1029 /// # Examples
1030 ///
1031 /// ```
1032 /// use std::sync::atomic::{AtomicBool, Ordering};
1033 ///
1034 /// let foo = AtomicBool::new(true);
1035 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1036 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1037 ///
1038 /// let foo = AtomicBool::new(true);
1039 /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1040 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1041 ///
1042 /// let foo = AtomicBool::new(false);
1043 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1044 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1045 /// ```
1046 #[inline]
1047 #[stable(feature = "rust1", since = "1.0.0")]
1048 #[cfg(target_has_atomic = "8")]
1049 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1050 pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1051 // SAFETY: data races are prevented by atomic intrinsics.
1052 unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
1053 }
1054
1055 /// Logical "not" with a boolean value.
1056 ///
1057 /// Performs a logical "not" operation on the current value, and sets
1058 /// the new value to the result.
1059 ///
1060 /// Returns the previous value.
1061 ///
1062 /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1063 /// of this operation. All ordering modes are possible. Note that using
1064 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1065 /// using [`Release`] makes the load part [`Relaxed`].
1066 ///
1067 /// **Note:** This method is only available on platforms that support atomic
1068 /// operations on `u8`.
1069 ///
1070 /// # Examples
1071 ///
1072 /// ```
1073 /// #![feature(atomic_bool_fetch_not)]
1074 /// use std::sync::atomic::{AtomicBool, Ordering};
1075 ///
1076 /// let foo = AtomicBool::new(true);
1077 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1078 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1079 ///
1080 /// let foo = AtomicBool::new(false);
1081 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1082 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1083 /// ```
1084 #[inline]
1085 #[unstable(feature = "atomic_bool_fetch_not", issue = "98485")]
1086 #[cfg(target_has_atomic = "8")]
1087 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1088 pub fn fetch_not(&self, order: Ordering) -> bool {
1089 self.fetch_xor(true, order)
1090 }
1091
1092 /// Returns a mutable pointer to the underlying [`bool`].
1093 ///
1094 /// Doing non-atomic reads and writes on the resulting boolean can be a data race.
1095 /// This method is mostly useful for FFI, where the function signature may use
1096 /// `*mut bool` instead of `&AtomicBool`.
1097 ///
1098 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
1099 /// atomic types work with interior mutability. All modifications of an atomic change the value
1100 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
1101 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
1102 /// restriction: operations on it must be atomic.
1103 ///
1104 /// # Examples
1105 ///
1106 /// ```ignore (extern-declaration)
1107 /// # fn main() {
1108 /// use std::sync::atomic::AtomicBool;
1109 ///
1110 /// extern "C" {
1111 /// fn my_atomic_op(arg: *mut bool);
1112 /// }
1113 ///
1114 /// let mut atomic = AtomicBool::new(true);
1115 /// unsafe {
1116 /// my_atomic_op(atomic.as_ptr());
1117 /// }
1118 /// # }
1119 /// ```
1120 #[inline]
1121 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
1122 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
1123 #[rustc_never_returns_null_ptr]
1124 pub const fn as_ptr(&self) -> *mut bool {
1125 self.v.get().cast()
1126 }
1127
1128 /// Fetches the value, and applies a function to it that returns an optional
1129 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1130 /// returned `Some(_)`, else `Err(previous_value)`.
1131 ///
1132 /// Note: This may call the function multiple times if the value has been
1133 /// changed from other threads in the meantime, as long as the function
1134 /// returns `Some(_)`, but the function will have been applied only once to
1135 /// the stored value.
1136 ///
1137 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1138 /// ordering of this operation. The first describes the required ordering for
1139 /// when the operation finally succeeds while the second describes the
1140 /// required ordering for loads. These correspond to the success and failure
1141 /// orderings of [`AtomicBool::compare_exchange`] respectively.
1142 ///
1143 /// Using [`Acquire`] as success ordering makes the store part of this
1144 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1145 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1146 /// [`Acquire`] or [`Relaxed`].
1147 ///
1148 /// **Note:** This method is only available on platforms that support atomic
1149 /// operations on `u8`.
1150 ///
1151 /// # Considerations
1152 ///
1153 /// This method is not magic; it is not provided by the hardware.
1154 /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
1155 /// In particular, this method will not circumvent the [ABA Problem].
1156 ///
1157 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1158 ///
1159 /// # Examples
1160 ///
1161 /// ```rust
1162 /// use std::sync::atomic::{AtomicBool, Ordering};
1163 ///
1164 /// let x = AtomicBool::new(false);
1165 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1166 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1167 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1168 /// assert_eq!(x.load(Ordering::SeqCst), false);
1169 /// ```
1170 #[inline]
1171 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1172 #[cfg(target_has_atomic = "8")]
1173 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1174 pub fn fetch_update<F>(
1175 &self,
1176 set_order: Ordering,
1177 fetch_order: Ordering,
1178 mut f: F,
1179 ) -> Result<bool, bool>
1180 where
1181 F: FnMut(bool) -> Option<bool>,
1182 {
1183 let mut prev = self.load(fetch_order);
1184 while let Some(next) = f(prev) {
1185 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1186 x @ Ok(_) => return x,
1187 Err(next_prev) => prev = next_prev,
1188 }
1189 }
1190 Err(prev)
1191 }
1192}
1193
1194#[cfg(target_has_atomic_load_store = "ptr")]
1195impl<T> AtomicPtr<T> {
1196 /// Creates a new `AtomicPtr`.
1197 ///
1198 /// # Examples
1199 ///
1200 /// ```
1201 /// use std::sync::atomic::AtomicPtr;
1202 ///
1203 /// let ptr = &mut 5;
1204 /// let atomic_ptr = AtomicPtr::new(ptr);
1205 /// ```
1206 #[inline]
1207 #[stable(feature = "rust1", since = "1.0.0")]
1208 #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
1209 pub const fn new(p: *mut T) -> AtomicPtr<T> {
1210 AtomicPtr { p: UnsafeCell::new(p) }
1211 }
1212
1213 /// Creates a new `AtomicPtr` from a pointer.
1214 ///
1215 /// # Examples
1216 ///
1217 /// ```
1218 /// use std::sync::atomic::{self, AtomicPtr};
1219 ///
1220 /// // Get a pointer to an allocated value
1221 /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
1222 ///
1223 /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned());
1224 ///
1225 /// {
1226 /// // Create an atomic view of the allocated value
1227 /// let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
1228 ///
1229 /// // Use `atomic` for atomic operations, possibly share it with other threads
1230 /// atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
1231 /// }
1232 ///
1233 /// // It's ok to non-atomically access the value behind `ptr`,
1234 /// // since the reference to the atomic ended its lifetime in the block above
1235 /// assert!(!unsafe { *ptr }.is_null());
1236 ///
1237 /// // Deallocate the value
1238 /// unsafe { drop(Box::from_raw(ptr)) }
1239 /// ```
1240 ///
1241 /// # Safety
1242 ///
1243 /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1244 /// can be bigger than `align_of::<*mut T>()`).
1245 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1246 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
1247 /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
1248 /// without synchronization.
1249 ///
1250 /// [valid]: crate::ptr#safety
1251 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
1252 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
1253 #[rustc_const_unstable(feature = "const_atomic_from_ptr", issue = "108652")]
1254 pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
1255 // SAFETY: guaranteed by the caller
1256 unsafe { &*ptr.cast() }
1257 }
1258
1259 /// Returns a mutable reference to the underlying pointer.
1260 ///
1261 /// This is safe because the mutable reference guarantees that no other threads are
1262 /// concurrently accessing the atomic data.
1263 ///
1264 /// # Examples
1265 ///
1266 /// ```
1267 /// use std::sync::atomic::{AtomicPtr, Ordering};
1268 ///
1269 /// let mut data = 10;
1270 /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1271 /// let mut other_data = 5;
1272 /// *atomic_ptr.get_mut() = &mut other_data;
1273 /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1274 /// ```
1275 #[inline]
1276 #[stable(feature = "atomic_access", since = "1.15.0")]
1277 pub fn get_mut(&mut self) -> &mut *mut T {
1278 self.p.get_mut()
1279 }
1280
1281 /// Get atomic access to a pointer.
1282 ///
1283 /// # Examples
1284 ///
1285 /// ```
1286 /// #![feature(atomic_from_mut)]
1287 /// use std::sync::atomic::{AtomicPtr, Ordering};
1288 ///
1289 /// let mut data = 123;
1290 /// let mut some_ptr = &mut data as *mut i32;
1291 /// let a = AtomicPtr::from_mut(&mut some_ptr);
1292 /// let mut other_data = 456;
1293 /// a.store(&mut other_data, Ordering::Relaxed);
1294 /// assert_eq!(unsafe { *some_ptr }, 456);
1295 /// ```
1296 #[inline]
1297 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1298 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1299 pub fn from_mut(v: &mut *mut T) -> &mut Self {
1300 use crate::mem::align_of;
1301 let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
1302 // SAFETY:
1303 // - the mutable reference guarantees unique ownership.
1304 // - the alignment of `*mut T` and `Self` is the same on all platforms
1305 // supported by rust, as verified above.
1306 unsafe { &mut *(v as *mut *mut T as *mut Self) }
1307 }
1308
1309 /// Get non-atomic access to a `&mut [AtomicPtr]` slice.
1310 ///
1311 /// This is safe because the mutable reference guarantees that no other threads are
1312 /// concurrently accessing the atomic data.
1313 ///
1314 /// # Examples
1315 ///
1316 /// ```
1317 /// #![feature(atomic_from_mut)]
1318 /// # #![cfg_attr(bootstrap, feature(inline_const))]
1319 /// use std::ptr::null_mut;
1320 /// use std::sync::atomic::{AtomicPtr, Ordering};
1321 ///
1322 /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
1323 ///
1324 /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
1325 /// assert_eq!(view, [null_mut::<String>(); 10]);
1326 /// view
1327 /// .iter_mut()
1328 /// .enumerate()
1329 /// .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
1330 ///
1331 /// std::thread::scope(|s| {
1332 /// for ptr in &some_ptrs {
1333 /// s.spawn(move || {
1334 /// let ptr = ptr.load(Ordering::Relaxed);
1335 /// assert!(!ptr.is_null());
1336 ///
1337 /// let name = unsafe { Box::from_raw(ptr) };
1338 /// println!("Hello, {name}!");
1339 /// });
1340 /// }
1341 /// });
1342 /// ```
1343 #[inline]
1344 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1345 pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
1346 // SAFETY: the mutable reference guarantees unique ownership.
1347 unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
1348 }
1349
1350 /// Get atomic access to a slice of pointers.
1351 ///
1352 /// # Examples
1353 ///
1354 /// ```
1355 /// #![feature(atomic_from_mut)]
1356 /// use std::ptr::null_mut;
1357 /// use std::sync::atomic::{AtomicPtr, Ordering};
1358 ///
1359 /// let mut some_ptrs = [null_mut::<String>(); 10];
1360 /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
1361 /// std::thread::scope(|s| {
1362 /// for i in 0..a.len() {
1363 /// s.spawn(move || {
1364 /// let name = Box::new(format!("thread{i}"));
1365 /// a[i].store(Box::into_raw(name), Ordering::Relaxed);
1366 /// });
1367 /// }
1368 /// });
1369 /// for p in some_ptrs {
1370 /// assert!(!p.is_null());
1371 /// let name = unsafe { Box::from_raw(p) };
1372 /// println!("Hello, {name}!");
1373 /// }
1374 /// ```
1375 #[inline]
1376 #[cfg(target_has_atomic_equal_alignment = "ptr")]
1377 #[unstable(feature = "atomic_from_mut", issue = "76314")]
1378 pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
1379 // SAFETY:
1380 // - the mutable reference guarantees unique ownership.
1381 // - the alignment of `*mut T` and `Self` is the same on all platforms
1382 // supported by rust, as verified above.
1383 unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
1384 }
1385
1386 /// Consumes the atomic and returns the contained value.
1387 ///
1388 /// This is safe because passing `self` by value guarantees that no other threads are
1389 /// concurrently accessing the atomic data.
1390 ///
1391 /// # Examples
1392 ///
1393 /// ```
1394 /// use std::sync::atomic::AtomicPtr;
1395 ///
1396 /// let mut data = 5;
1397 /// let atomic_ptr = AtomicPtr::new(&mut data);
1398 /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1399 /// ```
1400 #[inline]
1401 #[stable(feature = "atomic_access", since = "1.15.0")]
1402 #[rustc_const_stable(feature = "const_atomic_into_inner", since = "CURRENT_RUSTC_VERSION")]
1403 pub const fn into_inner(self) -> *mut T {
1404 self.p.primitive_into_inner()
1405 }
1406
1407 /// Loads a value from the pointer.
1408 ///
1409 /// `load` takes an [`Ordering`] argument which describes the memory ordering
1410 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1411 ///
1412 /// # Panics
1413 ///
1414 /// Panics if `order` is [`Release`] or [`AcqRel`].
1415 ///
1416 /// # Examples
1417 ///
1418 /// ```
1419 /// use std::sync::atomic::{AtomicPtr, Ordering};
1420 ///
1421 /// let ptr = &mut 5;
1422 /// let some_ptr = AtomicPtr::new(ptr);
1423 ///
1424 /// let value = some_ptr.load(Ordering::Relaxed);
1425 /// ```
1426 #[inline]
1427 #[stable(feature = "rust1", since = "1.0.0")]
1428 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1429 pub fn load(&self, order: Ordering) -> *mut T {
1430 // SAFETY: data races are prevented by atomic intrinsics.
1431 unsafe { atomic_load(self.p.get(), order) }
1432 }
1433
1434 /// Stores a value into the pointer.
1435 ///
1436 /// `store` takes an [`Ordering`] argument which describes the memory ordering
1437 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1438 ///
1439 /// # Panics
1440 ///
1441 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1442 ///
1443 /// # Examples
1444 ///
1445 /// ```
1446 /// use std::sync::atomic::{AtomicPtr, Ordering};
1447 ///
1448 /// let ptr = &mut 5;
1449 /// let some_ptr = AtomicPtr::new(ptr);
1450 ///
1451 /// let other_ptr = &mut 10;
1452 ///
1453 /// some_ptr.store(other_ptr, Ordering::Relaxed);
1454 /// ```
1455 #[inline]
1456 #[stable(feature = "rust1", since = "1.0.0")]
1457 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1458 pub fn store(&self, ptr: *mut T, order: Ordering) {
1459 // SAFETY: data races are prevented by atomic intrinsics.
1460 unsafe {
1461 atomic_store(self.p.get(), ptr, order);
1462 }
1463 }
1464
1465 /// Stores a value into the pointer, returning the previous value.
1466 ///
1467 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1468 /// of this operation. All ordering modes are possible. Note that using
1469 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1470 /// using [`Release`] makes the load part [`Relaxed`].
1471 ///
1472 /// **Note:** This method is only available on platforms that support atomic
1473 /// operations on pointers.
1474 ///
1475 /// # Examples
1476 ///
1477 /// ```
1478 /// use std::sync::atomic::{AtomicPtr, Ordering};
1479 ///
1480 /// let ptr = &mut 5;
1481 /// let some_ptr = AtomicPtr::new(ptr);
1482 ///
1483 /// let other_ptr = &mut 10;
1484 ///
1485 /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1486 /// ```
1487 #[inline]
1488 #[stable(feature = "rust1", since = "1.0.0")]
1489 #[cfg(target_has_atomic = "ptr")]
1490 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1491 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1492 // SAFETY: data races are prevented by atomic intrinsics.
1493 unsafe { atomic_swap(self.p.get(), ptr, order) }
1494 }
1495
1496 /// Stores a value into the pointer if the current value is the same as the `current` value.
1497 ///
1498 /// The return value is always the previous value. If it is equal to `current`, then the value
1499 /// was updated.
1500 ///
1501 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1502 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1503 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1504 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1505 /// happens, and using [`Release`] makes the load part [`Relaxed`].
1506 ///
1507 /// **Note:** This method is only available on platforms that support atomic
1508 /// operations on pointers.
1509 ///
1510 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1511 ///
1512 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1513 /// memory orderings:
1514 ///
1515 /// Original | Success | Failure
1516 /// -------- | ------- | -------
1517 /// Relaxed | Relaxed | Relaxed
1518 /// Acquire | Acquire | Acquire
1519 /// Release | Release | Relaxed
1520 /// AcqRel | AcqRel | Acquire
1521 /// SeqCst | SeqCst | SeqCst
1522 ///
1523 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1524 /// which allows the compiler to generate better assembly code when the compare and swap
1525 /// is used in a loop.
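///
/// For example, a call that used `AcqRel` can be migrated as in the sketch below
/// (the variable names here are purely illustrative):
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let some_ptr = AtomicPtr::new(ptr);
/// let new = &mut 10;
///
/// // Before: `some_ptr.compare_and_swap(ptr, new, Ordering::AcqRel)`
/// // After, using the success/failure mapping from the table above:
/// let previous = some_ptr
///     .compare_exchange(ptr, new, Ordering::AcqRel, Ordering::Acquire)
///     .unwrap_or_else(|x| x);
/// assert_eq!(previous, ptr as *mut _);
/// ```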
1526 ///
1527 /// # Examples
1528 ///
1529 /// ```
1530 /// use std::sync::atomic::{AtomicPtr, Ordering};
1531 ///
1532 /// let ptr = &mut 5;
1533 /// let some_ptr = AtomicPtr::new(ptr);
1534 ///
1535 /// let other_ptr = &mut 10;
1536 ///
1537 /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1538 /// ```
1539 #[inline]
1540 #[stable(feature = "rust1", since = "1.0.0")]
1541 #[deprecated(
1542 since = "1.50.0",
1543 note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1544 )]
1545 #[cfg(target_has_atomic = "ptr")]
1546 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1547 pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
1548 match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1549 Ok(x) => x,
1550 Err(x) => x,
1551 }
1552 }
1553
1554 /// Stores a value into the pointer if the current value is the same as the `current` value.
1555 ///
1556 /// The return value is a result indicating whether the new value was written and containing
1557 /// the previous value. On success this value is guaranteed to be equal to `current`.
1558 ///
1559 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1560 /// ordering of this operation. `success` describes the required ordering for the
1561 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1562 /// `failure` describes the required ordering for the load operation that takes place when
1563 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1564 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1565 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1566 ///
1567 /// **Note:** This method is only available on platforms that support atomic
1568 /// operations on pointers.
1569 ///
1570 /// # Examples
1571 ///
1572 /// ```
1573 /// use std::sync::atomic::{AtomicPtr, Ordering};
1574 ///
1575 /// let ptr = &mut 5;
1576 /// let some_ptr = AtomicPtr::new(ptr);
1577 ///
1578 /// let other_ptr = &mut 10;
1579 ///
1580 /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1581 /// Ordering::SeqCst, Ordering::Relaxed);
1582 /// ```
1583 #[inline]
1584 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1585 #[cfg(target_has_atomic = "ptr")]
1586 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1587 pub fn compare_exchange(
1588 &self,
1589 current: *mut T,
1590 new: *mut T,
1591 success: Ordering,
1592 failure: Ordering,
1593 ) -> Result<*mut T, *mut T> {
1594 // SAFETY: data races are prevented by atomic intrinsics.
1595 unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1596 }
1597
1598 /// Stores a value into the pointer if the current value is the same as the `current` value.
1599 ///
1600 /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1601 /// comparison succeeds, which can result in more efficient code on some platforms. The
1602 /// return value is a result indicating whether the new value was written and containing the
1603 /// previous value.
1604 ///
1605 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1606 /// ordering of this operation. `success` describes the required ordering for the
1607 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1608 /// `failure` describes the required ordering for the load operation that takes place when
1609 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1610 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1611 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1612 ///
1613 /// **Note:** This method is only available on platforms that support atomic
1614 /// operations on pointers.
1615 ///
1616 /// # Examples
1617 ///
1618 /// ```
1619 /// use std::sync::atomic::{AtomicPtr, Ordering};
1620 ///
1621 /// let some_ptr = AtomicPtr::new(&mut 5);
1622 ///
1623 /// let new = &mut 10;
1624 /// let mut old = some_ptr.load(Ordering::Relaxed);
1625 /// loop {
1626 /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1627 /// Ok(_) => break,
1628 /// Err(x) => old = x,
1629 /// }
1630 /// }
1631 /// ```
1632 #[inline]
1633 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1634 #[cfg(target_has_atomic = "ptr")]
1635 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1636 pub fn compare_exchange_weak(
1637 &self,
1638 current: *mut T,
1639 new: *mut T,
1640 success: Ordering,
1641 failure: Ordering,
1642 ) -> Result<*mut T, *mut T> {
1643 // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
1644 // but we know for sure that the pointer is valid (we just got it from
1645 // an `UnsafeCell` that we have by reference) and the atomic operation
1646 // itself allows us to safely mutate the `UnsafeCell` contents.
1647 unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
1648 }
1649
1650 /// Fetches the value, and applies a function to it that returns an optional
1651 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1652 /// returned `Some(_)`, else `Err(previous_value)`.
1653 ///
1654 /// Note: This may call the function multiple times if the value has been
1655 /// changed from other threads in the meantime, as long as the function
1656 /// returns `Some(_)`, but the function will have been applied only once to
1657 /// the stored value.
1658 ///
1659 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1660 /// ordering of this operation. The first describes the required ordering for
1661 /// when the operation finally succeeds while the second describes the
1662 /// required ordering for loads. These correspond to the success and failure
1663 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1664 ///
1665 /// Using [`Acquire`] as success ordering makes the store part of this
1666 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1667 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1668 /// [`Acquire`] or [`Relaxed`].
1669 ///
1670 /// **Note:** This method is only available on platforms that support atomic
1671 /// operations on pointers.
1672 ///
1673 /// # Considerations
1674 ///
1675 /// This method is not magic; it is not provided by the hardware.
1676 /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
1677 /// In particular, this method will not circumvent the [ABA Problem].
1678 ///
1679 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1680 ///
1681 /// # Examples
1682 ///
1683 /// ```rust
1684 /// use std::sync::atomic::{AtomicPtr, Ordering};
1685 ///
1686 /// let ptr: *mut _ = &mut 5;
1687 /// let some_ptr = AtomicPtr::new(ptr);
1688 ///
1689 /// let new: *mut _ = &mut 10;
1690 /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1691 /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1692 /// if x == ptr {
1693 /// Some(new)
1694 /// } else {
1695 /// None
1696 /// }
1697 /// });
1698 /// assert_eq!(result, Ok(ptr));
1699 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1700 /// ```
1701 #[inline]
1702 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1703 #[cfg(target_has_atomic = "ptr")]
1704 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1705 pub fn fetch_update<F>(
1706 &self,
1707 set_order: Ordering,
1708 fetch_order: Ordering,
1709 mut f: F,
1710 ) -> Result<*mut T, *mut T>
1711 where
1712 F: FnMut(*mut T) -> Option<*mut T>,
1713 {
1714 let mut prev = self.load(fetch_order);
1715 while let Some(next) = f(prev) {
1716 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1717 x @ Ok(_) => return x,
1718 Err(next_prev) => prev = next_prev,
1719 }
1720 }
1721 Err(prev)
1722 }
1723
1724 /// Offsets the pointer's address by adding `val` (in units of `T`),
1725 /// returning the previous pointer.
1726 ///
1727 /// This is equivalent to using [`wrapping_add`] to atomically perform
1728 /// `ptr = ptr.wrapping_add(val);`.
1729 ///
1730 /// This method operates in units of `T`, which means that it cannot be used
1731 /// to offset the pointer by an amount which is not a multiple of
1732 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1733 /// work with a deliberately misaligned pointer. In such cases, you may use
1734 /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
1735 ///
1736 /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
1737 /// memory ordering of this operation. All ordering modes are possible. Note
1738 /// that using [`Acquire`] makes the store part of this operation
1739 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1740 ///
1741 /// **Note**: This method is only available on platforms that support atomic
1742 /// operations on [`AtomicPtr`].
1743 ///
1744 /// [`wrapping_add`]: pointer::wrapping_add
1745 ///
1746 /// # Examples
1747 ///
1748 /// ```
1749 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1750 /// use core::sync::atomic::{AtomicPtr, Ordering};
1751 ///
1752 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1753 /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
1754 /// // Note: units of `size_of::<i64>()`.
1755 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
1756 /// ```
1757 #[inline]
1758 #[cfg(target_has_atomic = "ptr")]
1759 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1760 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1761 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
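// Convert the offset in units of `T` into an offset in bytes.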
1762 self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
1763 }
1764
1765 /// Offsets the pointer's address by subtracting `val` (in units of `T`),
1766 /// returning the previous pointer.
1767 ///
1768 /// This is equivalent to using [`wrapping_sub`] to atomically perform
1769 /// `ptr = ptr.wrapping_sub(val);`.
1770 ///
1771 /// This method operates in units of `T`, which means that it cannot be used
1772 /// to offset the pointer by an amount which is not a multiple of
1773 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1774 /// work with a deliberately misaligned pointer. In such cases, you may use
1775 /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
1776 ///
1777 /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
1778 /// ordering of this operation. All ordering modes are possible. Note that
1779 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1780 /// and using [`Release`] makes the load part [`Relaxed`].
1781 ///
1782 /// **Note**: This method is only available on platforms that support atomic
1783 /// operations on [`AtomicPtr`].
1784 ///
1785 /// [`wrapping_sub`]: pointer::wrapping_sub
1786 ///
1787 /// # Examples
1788 ///
1789 /// ```
1790 /// #![feature(strict_provenance_atomic_ptr)]
1791 /// use core::sync::atomic::{AtomicPtr, Ordering};
1792 ///
1793 /// let array = [1i32, 2i32];
1794 /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
1795 ///
1796 /// assert!(core::ptr::eq(
1797 /// atom.fetch_ptr_sub(1, Ordering::Relaxed),
1798 /// &array[1],
1799 /// ));
1800 /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
1801 /// ```
1802 #[inline]
1803 #[cfg(target_has_atomic = "ptr")]
1804 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1805 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1806 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
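// Convert the offset in units of `T` into an offset in bytes.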
1807 self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
1808 }
1809
1810 /// Offsets the pointer's address by adding `val` *bytes*, returning the
1811 /// previous pointer.
1812 ///
1813 /// This is equivalent to using [`wrapping_byte_add`] to atomically
1814 /// perform `ptr = ptr.wrapping_byte_add(val)`.
1815 ///
1816 /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
1817 /// memory ordering of this operation. All ordering modes are possible. Note
1818 /// that using [`Acquire`] makes the store part of this operation
1819 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1820 ///
1821 /// **Note**: This method is only available on platforms that support atomic
1822 /// operations on [`AtomicPtr`].
1823 ///
1824 /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
1825 ///
1826 /// # Examples
1827 ///
1828 /// ```
1829 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1830 /// use core::sync::atomic::{AtomicPtr, Ordering};
1831 ///
1832 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1833 /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
1834 /// // Note: in units of bytes, not `size_of::<i64>()`.
1835 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
1836 /// ```
1837 #[inline]
1838 #[cfg(target_has_atomic = "ptr")]
1839 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1840 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1841 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
1842 // SAFETY: data races are prevented by atomic intrinsics.
1843 unsafe { atomic_add(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
1844 }
1845
1846 /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
1847 /// previous pointer.
1848 ///
1849 /// This is equivalent to using [`wrapping_byte_sub`] to atomically
1850 /// perform `ptr = ptr.wrapping_byte_sub(val)`.
1851 ///
1852 /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
1853 /// memory ordering of this operation. All ordering modes are possible. Note
1854 /// that using [`Acquire`] makes the store part of this operation
1855 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1856 ///
1857 /// **Note**: This method is only available on platforms that support atomic
1858 /// operations on [`AtomicPtr`].
1859 ///
1860 /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
1861 ///
1862 /// # Examples
1863 ///
1864 /// ```
1865 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1866 /// use core::sync::atomic::{AtomicPtr, Ordering};
1867 ///
1868 /// let atom = AtomicPtr::<i64>::new(core::ptr::without_provenance_mut(1));
1869 /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
1870 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
1871 /// ```
1872 #[inline]
1873 #[cfg(target_has_atomic = "ptr")]
1874 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1875 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1876 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
1877 // SAFETY: data races are prevented by atomic intrinsics.
1878 unsafe { atomic_sub(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
1879 }
1880
1881 /// Performs a bitwise "or" operation on the address of the current pointer,
1882 /// and the argument `val`, and stores a pointer with provenance of the
1883 /// current pointer and the resulting address.
1884 ///
1885 /// This is equivalent to using [`map_addr`] to atomically perform
1886 /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
1887 /// pointer schemes to atomically set tag bits.
1888 ///
1889 /// **Caveat**: This operation returns the previous value. To compute the
1890 /// stored value without losing provenance, you may use [`map_addr`]. For
1891 /// example: `a.fetch_or(val, order).map_addr(|a| a | val)`.
1892 ///
1893 /// `fetch_or` takes an [`Ordering`] argument which describes the memory
1894 /// ordering of this operation. All ordering modes are possible. Note that
1895 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1896 /// and using [`Release`] makes the load part [`Relaxed`].
1897 ///
1898 /// **Note**: This method is only available on platforms that support atomic
1899 /// operations on [`AtomicPtr`].
1900 ///
1901 /// This API and its claimed semantics are part of the Strict Provenance
1902 /// experiment; see the [module documentation for `ptr`][crate::ptr] for
1903 /// details.
1904 ///
1905 /// [`map_addr`]: pointer::map_addr
1906 ///
1907 /// # Examples
1908 ///
1909 /// ```
1910 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1911 /// use core::sync::atomic::{AtomicPtr, Ordering};
1912 ///
1913 /// let pointer = &mut 3i64 as *mut i64;
1914 ///
1915 /// let atom = AtomicPtr::<i64>::new(pointer);
1916 /// // Tag the bottom bit of the pointer.
1917 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
1918 /// // Extract and untag.
1919 /// let tagged = atom.load(Ordering::Relaxed);
1920 /// assert_eq!(tagged.addr() & 1, 1);
1921 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
1922 /// ```
1923 #[inline]
1924 #[cfg(target_has_atomic = "ptr")]
1925 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1926 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1927 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
1928 // SAFETY: data races are prevented by atomic intrinsics.
1929 unsafe { atomic_or(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
1930 }
1931
1932 /// Performs a bitwise "and" operation on the address of the current
1933 /// pointer, and the argument `val`, and stores a pointer with provenance of
1934 /// the current pointer and the resulting address.
1935 ///
1936 /// This is equivalent to using [`map_addr`] to atomically perform
1937 /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
1938 /// pointer schemes to atomically unset tag bits.
1939 ///
1940 /// **Caveat**: This operation returns the previous value. To compute the
1941 /// stored value without losing provenance, you may use [`map_addr`]. For
1942 /// example: `a.fetch_and(val, order).map_addr(|a| a & val)`.
1943 ///
1944 /// `fetch_and` takes an [`Ordering`] argument which describes the memory
1945 /// ordering of this operation. All ordering modes are possible. Note that
1946 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1947 /// and using [`Release`] makes the load part [`Relaxed`].
1948 ///
1949 /// **Note**: This method is only available on platforms that support atomic
1950 /// operations on [`AtomicPtr`].
1951 ///
1952 /// This API and its claimed semantics are part of the Strict Provenance
1953 /// experiment; see the [module documentation for `ptr`][crate::ptr] for
1954 /// details.
1955 ///
1956 /// [`map_addr`]: pointer::map_addr
1957 ///
1958 /// # Examples
1959 ///
1960 /// ```
1961 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1962 /// use core::sync::atomic::{AtomicPtr, Ordering};
1963 ///
1964 /// let pointer = &mut 3i64 as *mut i64;
1965 /// // A tagged pointer
1966 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
1967 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
1968 /// // Untag, and extract the previously tagged pointer.
1969 /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
1970 /// .map_addr(|a| a & !1);
1971 /// assert_eq!(untagged, pointer);
1972 /// ```
1973 #[inline]
1974 #[cfg(target_has_atomic = "ptr")]
1975 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1976 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1977 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
1978 // SAFETY: data races are prevented by atomic intrinsics.
1979 unsafe { atomic_and(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
1980 }
1981
1982 /// Performs a bitwise "xor" operation on the address of the current
1983 /// pointer, and the argument `val`, and stores a pointer with provenance of
1984 /// the current pointer and the resulting address.
1985 ///
1986 /// This is equivalent to using [`map_addr`] to atomically perform
1987 /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
1988 /// pointer schemes to atomically toggle tag bits.
1989 ///
1990 /// **Caveat**: This operation returns the previous value. To compute the
1991 /// stored value without losing provenance, you may use [`map_addr`]. For
1992 /// example: `a.fetch_xor(val, order).map_addr(|a| a ^ val)`.
1993 ///
1994 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
1995 /// ordering of this operation. All ordering modes are possible. Note that
1996 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1997 /// and using [`Release`] makes the load part [`Relaxed`].
1998 ///
1999 /// **Note**: This method is only available on platforms that support atomic
2000 /// operations on [`AtomicPtr`].
2001 ///
2002 /// This API and its claimed semantics are part of the Strict Provenance
2003 /// experiment; see the [module documentation for `ptr`][crate::ptr] for
2004 /// details.
2005 ///
2006 /// [`map_addr`]: pointer::map_addr
2007 ///
2008 /// # Examples
2009 ///
2010 /// ```
2011 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
2012 /// use core::sync::atomic::{AtomicPtr, Ordering};
2013 ///
2014 /// let pointer = &mut 3i64 as *mut i64;
2015 /// let atom = AtomicPtr::<i64>::new(pointer);
2016 ///
2017 /// // Toggle a tag bit on the pointer.
2018 /// atom.fetch_xor(1, Ordering::Relaxed);
2019 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2020 /// ```
2021 #[inline]
2022 #[cfg(target_has_atomic = "ptr")]
2023 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2024 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2025 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2026 // SAFETY: data races are prevented by atomic intrinsics.
2027 unsafe { atomic_xor(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() }
2028 }
2029
2030 /// Returns a mutable pointer to the underlying pointer.
2031 ///
2032 /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2033 /// This method is mostly useful for FFI, where the function signature may use
2034 /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2035 ///
2036 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2037 /// atomic types work with interior mutability. All modifications of an atomic change the value
2038 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2039 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
2040 /// restriction: operations on it must be atomic.
2041 ///
2042 /// # Examples
2043 ///
2044 /// ```ignore (extern-declaration)
2045 /// use std::sync::atomic::AtomicPtr;
2046 ///
2047 /// extern "C" {
2048 /// fn my_atomic_op(arg: *mut *mut u32);
2049 /// }
2050 ///
2051 /// let mut value = 17;
2052 /// let atomic = AtomicPtr::new(&mut value);
2053 ///
2054 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2055 /// unsafe {
2056 /// my_atomic_op(atomic.as_ptr());
2057 /// }
2058 /// ```
2059 #[inline]
2060 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2061 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2062 #[rustc_never_returns_null_ptr]
2063 pub const fn as_ptr(&self) -> *mut *mut T {
2064 self.p.get()
2065 }
2066}
2067
2068#[cfg(target_has_atomic_load_store = "8")]
2069#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2070impl From<bool> for AtomicBool {
2071 /// Converts a `bool` into an `AtomicBool`.
2072 ///
2073 /// # Examples
2074 ///
2075 /// ```
2076 /// use std::sync::atomic::AtomicBool;
2077 /// let atomic_bool = AtomicBool::from(true);
2078 /// assert_eq!(format!("{atomic_bool:?}"), "true")
2079 /// ```
2080 #[inline]
2081 fn from(b: bool) -> Self {
2082 Self::new(b)
2083 }
2084}
2085
2086#[cfg(target_has_atomic_load_store = "ptr")]
2087#[stable(feature = "atomic_from", since = "1.23.0")]
2088impl<T> From<*mut T> for AtomicPtr<T> {
2089 /// Converts a `*mut T` into an `AtomicPtr<T>`.
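    ///
    /// # Examples
    ///
    /// A minimal usage sketch:
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut value = 5;
    /// let atomic_ptr = AtomicPtr::from(&mut value as *mut i32);
    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::Relaxed) }, 5);
    /// ```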
2090 #[inline]
2091 fn from(p: *mut T) -> Self {
2092 Self::new(p)
2093 }
2094}
2095
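// Expands to an empty doc string for `u8`/`i8` and passes the given tokens through for
// every other integer type; used below to omit doc text (such as alignment notes) that
// does not apply to the 8-bit atomics.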
2096#[allow(unused_macros)] // This macro ends up being unused on some architectures.
2097macro_rules! if_not_8_bit {
2098 (u8, $($tt:tt)*) => { "" };
2099 (i8, $($tt:tt)*) => { "" };
2100 ($_:ident, $($tt:tt)*) => { $($tt)* };
2101}
2102
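// Generates an atomic integer type `$atomic_type` wrapping `$int_type`, together with its
// trait impls and methods. The `$cfg_*` arguments gate items on target support, the
// `$stable*`/`$const_stable` arguments carry the stability attributes of the generated
// items, `$s_int_type` names the underlying integer in doc links, `$align` is the required
// alignment of the generated type, and `$extra_feature` is spliced into the start of each
// doc example.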
2103#[cfg(target_has_atomic_load_store)]
2104macro_rules! atomic_int {
2105 ($cfg_cas:meta,
2106 $cfg_align:meta,
2107 $stable:meta,
2108 $stable_cxchg:meta,
2109 $stable_debug:meta,
2110 $stable_access:meta,
2111 $stable_from:meta,
2112 $stable_nand:meta,
2113 $const_stable:meta,
2114 $diagnostic_item:meta,
2115 $s_int_type:literal,
2116 $extra_feature:expr,
2117 $min_fn:ident, $max_fn:ident,
2118 $align:expr,
2119 $int_type:ident $atomic_type:ident) => {
2120 /// An integer type which can be safely shared between threads.
2121 ///
2122 /// This type has the same size and bit validity as the underlying
2123 /// integer type, [`
2124 #[doc = $s_int_type]
2125 /// `].
2126 #[doc = if_not_8_bit! {
2127 $int_type,
2128 concat!(
2129 "However, the alignment of this type is always equal to its ",
2130 "size, even on targets where [`", $s_int_type, "`] has a ",
2131 "lesser alignment."
2132 )
2133 }]
2134 /// For more about the differences between atomic types and
2135 /// non-atomic types as well as information about the portability of
2136 /// this type, please see the [module-level documentation].
2137 ///
2138 /// **Note:** This type is only available on platforms that support
2139 /// atomic loads and stores of [`
2140 #[doc = $s_int_type]
2141 /// `].
2142 ///
2143 /// [module-level documentation]: crate::sync::atomic
2144 #[$stable]
2145 #[$diagnostic_item]
2146 #[repr(C, align($align))]
2147 pub struct $atomic_type {
2148 v: UnsafeCell<$int_type>,
2149 }
2150
2151 #[$stable]
2152 impl Default for $atomic_type {
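/// Creates a new atomic integer initialized to the default value of the
/// underlying integer type, `0`.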
2153 #[inline]
2154 fn default() -> Self {
2155 Self::new(Default::default())
2156 }
2157 }
2158
2159 #[$stable_from]
2160 impl From<$int_type> for $atomic_type {
2161 #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2162 #[inline]
2163 fn from(v: $int_type) -> Self { Self::new(v) }
2164 }
2165
2166 #[$stable_debug]
2167 impl fmt::Debug for $atomic_type {
2168 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
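// Reads the current value with `Relaxed` ordering; the formatted output is only a
// best-effort snapshot.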
2169 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2170 }
2171 }
2172
2173 // Send is implicitly implemented.
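// SAFETY: shared access to the inner `UnsafeCell` only ever goes through atomic
// operations, so sharing the type between threads cannot introduce data races.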
2174 #[$stable]
2175 unsafe impl Sync for $atomic_type {}
2176
2177 impl $atomic_type {
2178 /// Creates a new atomic integer.
2179 ///
2180 /// # Examples
2181 ///
2182 /// ```
2183 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2184 ///
2185 #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2186 /// ```
2187 #[inline]
2188 #[$stable]
2189 #[$const_stable]
2190 #[must_use]
2191 pub const fn new(v: $int_type) -> Self {
2192 Self {v: UnsafeCell::new(v)}
2193 }
2194
2195 /// Creates a new reference to an atomic integer from a pointer.
2196 ///
2197 /// # Examples
2198 ///
2199 /// ```
2200 #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2201 ///
2202 /// // Get a pointer to an allocated value
2203 #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2204 ///
2205 #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")]
2206 ///
2207 /// {
2208 /// // Create an atomic view of the allocated value
2209 // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2210 #[doc = concat!(" let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2211 ///
2212 /// // Use `atomic` for atomic operations, possibly share it with other threads
2213 /// atomic.store(1, atomic::Ordering::Relaxed);
2214 /// }
2215 ///
2216 /// // It's ok to non-atomically access the value behind `ptr`,
2217 /// // since the reference to the atomic ended its lifetime in the block above
2218 /// assert_eq!(unsafe { *ptr }, 1);
2219 ///
2220 /// // Deallocate the value
2221 /// unsafe { drop(Box::from_raw(ptr)) }
2222 /// ```
2223 ///
2224 /// # Safety
2225 ///
2226 #[doc = concat!(" * `ptr` must be aligned to \
2227 `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this \
2228 can be bigger than `align_of::<", stringify!($int_type), ">()`).")]
2229 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2230 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2231 /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
2232 /// without synchronization.
2233 ///
2234 /// [valid]: crate::ptr#safety
2235 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2236 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2237 #[rustc_const_unstable(feature = "const_atomic_from_ptr", issue = "108652")]
2238 pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2239 // SAFETY: guaranteed by the caller
2240 unsafe { &*ptr.cast() }
2241 }
2242
2243
2244 /// Returns a mutable reference to the underlying integer.
2245 ///
2246 /// This is safe because the mutable reference guarantees that no other threads are
2247 /// concurrently accessing the atomic data.
2248 ///
2249 /// # Examples
2250 ///
2251 /// ```
2252 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2253 ///
2254 #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2255 /// assert_eq!(*some_var.get_mut(), 10);
2256 /// *some_var.get_mut() = 5;
2257 /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2258 /// ```
2259 #[inline]
2260 #[$stable_access]
2261 pub fn get_mut(&mut self) -> &mut $int_type {
2262 self.v.get_mut()
2263 }
2264
2265 #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2266 ///
2267 #[doc = if_not_8_bit! {
2268 $int_type,
2269 concat!(
2270 "**Note:** This function is only available on targets where `",
2271 stringify!($int_type), "` has an alignment of ", $align, " bytes."
2272 )
2273 }]
2274 ///
2275 /// # Examples
2276 ///
2277 /// ```
2278 /// #![feature(atomic_from_mut)]
2279 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2280 ///
2281 /// let mut some_int = 123;
2282 #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2283 /// a.store(100, Ordering::Relaxed);
2284 /// assert_eq!(some_int, 100);
2285 /// ```
2286 ///
2287 #[inline]
2288 #[$cfg_align]
2289 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2290 pub fn from_mut(v: &mut $int_type) -> &mut Self {
2291 use crate::mem::align_of;
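// Compile-time assertion that `Self` and `$int_type` have the same alignment: if they
// differed, the array below would have a non-zero length and the `[]` pattern would
// fail to compile.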
2292 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2293 // SAFETY:
2294 // - the mutable reference guarantees unique ownership.
2295 // - the alignment of `$int_type` and `Self` is the
2296 // same, as promised by $cfg_align and verified above.
2297 unsafe { &mut *(v as *mut $int_type as *mut Self) }
2298 }
2299
2300 #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")]
2301 ///
2302 /// This is safe because the mutable reference guarantees that no other threads are
2303 /// concurrently accessing the atomic data.
2304 ///
2305 /// # Examples
2306 ///
2307 /// ```
2308 /// #![feature(atomic_from_mut)]
2309 /// # #![cfg_attr(bootstrap, feature(inline_const))]
2310 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2311 ///
2312 #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2313 ///
2314 #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2315 /// assert_eq!(view, [0; 10]);
2316 /// view
2317 /// .iter_mut()
2318 /// .enumerate()
2319 /// .for_each(|(idx, int)| *int = idx as _);
2320 ///
2321 /// std::thread::scope(|s| {
2322 /// some_ints
2323 /// .iter()
2324 /// .enumerate()
2325 /// .for_each(|(idx, int)| {
2326 /// s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2327 /// })
2328 /// });
2329 /// ```
2330 #[inline]
2331 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2332 pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2333 // SAFETY: the mutable reference guarantees unique ownership.
2334 unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2335 }
2336
2337 #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2338 ///
2339 /// # Examples
2340 ///
2341 /// ```
2342 /// #![feature(atomic_from_mut)]
2343 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2344 ///
2345 /// let mut some_ints = [0; 10];
2346 #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2347 /// std::thread::scope(|s| {
2348 /// for i in 0..a.len() {
2349 /// s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2350 /// }
2351 /// });
2352 /// for (i, n) in some_ints.into_iter().enumerate() {
2353 /// assert_eq!(i, n as usize);
2354 /// }
2355 /// ```
2356 #[inline]
2357 #[$cfg_align]
2358 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2359 pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
2360 use crate::mem::align_of;
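// Same compile-time alignment assertion as in `from_mut` above.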
2361 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2362 // SAFETY:
2363 // - the mutable reference guarantees unique ownership.
2364 // - the alignment of `$int_type` and `Self` is the
2365 // same, as promised by $cfg_align and verified above.
2366 unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2367 }
2368
2369 /// Consumes the atomic and returns the contained value.
2370 ///
2371 /// This is safe because passing `self` by value guarantees that no other threads are
2372 /// concurrently accessing the atomic data.
2373 ///
2374 /// # Examples
2375 ///
2376 /// ```
2377 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2378 ///
2379 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2380 /// assert_eq!(some_var.into_inner(), 5);
2381 /// ```
2382 #[inline]
2383 #[$stable_access]
2384 #[rustc_const_stable(feature = "const_atomic_into_inner", since = "CURRENT_RUSTC_VERSION")]
2385 pub const fn into_inner(self) -> $int_type {
2386 self.v.primitive_into_inner()
2387 }
2388
2389 /// Loads a value from the atomic integer.
2390 ///
2391 /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2392 /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2393 ///
2394 /// # Panics
2395 ///
2396 /// Panics if `order` is [`Release`] or [`AcqRel`].
2397 ///
2398 /// # Examples
2399 ///
2400 /// ```
2401 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2402 ///
2403 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2404 ///
2405 /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2406 /// ```
2407 #[inline]
2408 #[$stable]
2409 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2410 pub fn load(&self, order: Ordering) -> $int_type {
2411 // SAFETY: data races are prevented by atomic intrinsics.
2412 unsafe { atomic_load(self.v.get(), order) }
2413 }
2414
2415 /// Stores a value into the atomic integer.
2416 ///
2417 /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2418 /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2419 ///
2420 /// # Panics
2421 ///
2422 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2423 ///
2424 /// # Examples
2425 ///
2426 /// ```
2427 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2428 ///
2429 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2430 ///
2431 /// some_var.store(10, Ordering::Relaxed);
2432 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2433 /// ```
2434 #[inline]
2435 #[$stable]
2436 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2437 pub fn store(&self, val: $int_type, order: Ordering) {
2438 // SAFETY: data races are prevented by atomic intrinsics.
2439 unsafe { atomic_store(self.v.get(), val, order); }
2440 }
2441
2442 /// Stores a value into the atomic integer, returning the previous value.
2443 ///
2444 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2445 /// of this operation. All ordering modes are possible. Note that using
2446 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2447 /// using [`Release`] makes the load part [`Relaxed`].
2448 ///
2449 /// **Note**: This method is only available on platforms that support atomic operations on
2450 #[doc = concat!("[`", $s_int_type, "`].")]
2451 ///
2452 /// # Examples
2453 ///
2454 /// ```
2455 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2456 ///
2457 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2458 ///
2459 /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2460 /// ```
2461 #[inline]
2462 #[$stable]
2463 #[$cfg_cas]
2464 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2465 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2466 // SAFETY: data races are prevented by atomic intrinsics.
2467 unsafe { atomic_swap(self.v.get(), val, order) }
2468 }
2469
2470 /// Stores a value into the atomic integer if the current value is the same as
2471 /// the `current` value.
2472 ///
2473 /// The return value is always the previous value. If it is equal to `current`, then the
2474 /// value was updated.
2475 ///
2476 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2477 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
2478 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
2479 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
2480 /// happens, and using [`Release`] makes the load part [`Relaxed`].
2481 ///
2482 /// **Note**: This method is only available on platforms that support atomic operations on
2483 #[doc = concat!("[`", $s_int_type, "`].")]
2484 ///
2485 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
2486 ///
2487 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
2488 /// memory orderings:
2489 ///
2490 /// Original | Success | Failure
2491 /// -------- | ------- | -------
2492 /// Relaxed | Relaxed | Relaxed
2493 /// Acquire | Acquire | Acquire
2494 /// Release | Release | Relaxed
2495 /// AcqRel | AcqRel | Acquire
2496 /// SeqCst | SeqCst | SeqCst
2497 ///
2498 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
2499 /// which allows the compiler to generate better assembly code when the compare and swap
2500 /// is used in a loop.
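///
/// For example, a call that used `AcqRel` can be migrated as in the sketch below
/// (the values are purely illustrative):
///
/// ```
#[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
///
#[doc = concat!("let val = ", stringify!($atomic_type), "::new(5);")]
///
/// // Before: `val.compare_and_swap(5, 10, Ordering::AcqRel)`
/// // After, using the success/failure mapping from the table above:
/// let previous = val.compare_exchange(5, 10, Ordering::AcqRel, Ordering::Acquire)
///     .unwrap_or_else(|x| x);
/// assert_eq!(previous, 5);
/// ```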
2501 ///
2502 /// # Examples
2503 ///
2504 /// ```
2505 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2506 ///
2507 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2508 ///
2509 /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
2510 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2511 ///
2512 /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
2513 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2514 /// ```
2515 #[inline]
2516 #[$stable]
2517 #[deprecated(
2518 since = "1.50.0",
2519 note = "Use `compare_exchange` or `compare_exchange_weak` instead")
2520 ]
2521 #[$cfg_cas]
2522 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2523 pub fn compare_and_swap(&self,
2524 current: $int_type,
2525 new: $int_type,
2526 order: Ordering) -> $int_type {
2527 match self.compare_exchange(current,
2528 new,
2529 order,
2530 strongest_failure_ordering(order)) {
2531 Ok(x) => x,
2532 Err(x) => x,
2533 }
2534 }
2535
2536 /// Stores a value into the atomic integer if the current value is the same as
2537 /// the `current` value.
2538 ///
2539 /// The return value is a result indicating whether the new value was written and
2540 /// containing the previous value. On success this value is guaranteed to be equal to
2541 /// `current`.
2542 ///
2543 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
2544 /// ordering of this operation. `success` describes the required ordering for the
2545 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2546 /// `failure` describes the required ordering for the load operation that takes place when
2547 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2548 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2549 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2550 ///
2551 /// **Note**: This method is only available on platforms that support atomic operations on
2552 #[doc = concat!("[`", $s_int_type, "`].")]
2553 ///
2554 /// # Examples
2555 ///
2556 /// ```
2557 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2558 ///
2559 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2560 ///
2561 /// assert_eq!(some_var.compare_exchange(5, 10,
2562 /// Ordering::Acquire,
2563 /// Ordering::Relaxed),
2564 /// Ok(5));
2565 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2566 ///
2567 /// assert_eq!(some_var.compare_exchange(6, 12,
2568 /// Ordering::SeqCst,
2569 /// Ordering::Acquire),
2570 /// Err(10));
2571 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2572 /// ```
2573 #[inline]
2574 #[$stable_cxchg]
2575 #[$cfg_cas]
2576 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2577 pub fn compare_exchange(&self,
2578 current: $int_type,
2579 new: $int_type,
2580 success: Ordering,
2581 failure: Ordering) -> Result<$int_type, $int_type> {
2582 // SAFETY: data races are prevented by atomic intrinsics.
2583 unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
2584 }
2585
2586 /// Stores a value into the atomic integer if the current value is the same as
2587 /// the `current` value.
2588 ///
2589 #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
2590 /// this function is allowed to spuriously fail even
2591 /// when the comparison succeeds, which can result in more efficient code on some
2592 /// platforms. The return value is a result indicating whether the new value was
2593 /// written and containing the previous value.
2594 ///
2595 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2596 /// ordering of this operation. `success` describes the required ordering for the
2597 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2598 /// `failure` describes the required ordering for the load operation that takes place when
2599 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2600 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2601 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2602 ///
2603 /// **Note**: This method is only available on platforms that support atomic operations on
2604 #[doc = concat!("[`", $s_int_type, "`].")]
2605 ///
2606 /// # Examples
2607 ///
2608 /// ```
2609 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2610 ///
2611 #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
2612 ///
2613 /// let mut old = val.load(Ordering::Relaxed);
2614 /// loop {
2615 /// let new = old * 2;
2616 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
2617 /// Ok(_) => break,
2618 /// Err(x) => old = x,
2619 /// }
2620 /// }
2621 /// ```
2622 #[inline]
2623 #[$stable_cxchg]
2624 #[$cfg_cas]
2625 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2626 pub fn compare_exchange_weak(&self,
2627 current: $int_type,
2628 new: $int_type,
2629 success: Ordering,
2630 failure: Ordering) -> Result<$int_type, $int_type> {
2631 // SAFETY: data races are prevented by atomic intrinsics.
2632 unsafe {
2633 atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
2634 }
2635 }
2636
2637 /// Adds to the current value, returning the previous value.
2638 ///
2639 /// This operation wraps around on overflow.
2640 ///
2641 /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
2642 /// of this operation. All ordering modes are possible. Note that using
2643 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2644 /// using [`Release`] makes the load part [`Relaxed`].
2645 ///
2646 /// **Note**: This method is only available on platforms that support atomic operations on
2647 #[doc = concat!("[`", $s_int_type, "`].")]
2648 ///
2649 /// # Examples
2650 ///
2651 /// ```
2652 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2653 ///
2654 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
2655 /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
2656 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
2657 /// ```
2658 #[inline]
2659 #[$stable]
2660 #[$cfg_cas]
2661 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2662 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
2663 // SAFETY: data races are prevented by atomic intrinsics.
2664 unsafe { atomic_add(self.v.get(), val, order) }
2665 }
2666
2667 /// Subtracts from the current value, returning the previous value.
2668 ///
2669 /// This operation wraps around on overflow.
2670 ///
2671 /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
2672 /// of this operation. All ordering modes are possible. Note that using
2673 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2674 /// using [`Release`] makes the load part [`Relaxed`].
2675 ///
2676 /// **Note**: This method is only available on platforms that support atomic operations on
2677 #[doc = concat!("[`", $s_int_type, "`].")]
2678 ///
2679 /// # Examples
2680 ///
2681 /// ```
2682 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2683 ///
2684 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
2685 /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
2686 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
2687 /// ```
2688 #[inline]
2689 #[$stable]
2690 #[$cfg_cas]
2691 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2692 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
2693 // SAFETY: data races are prevented by atomic intrinsics.
2694 unsafe { atomic_sub(self.v.get(), val, order) }
2695 }
2696
2697 /// Bitwise "and" with the current value.
2698 ///
2699 /// Performs a bitwise "and" operation on the current value and the argument `val`, and
2700 /// sets the new value to the result.
2701 ///
2702 /// Returns the previous value.
2703 ///
2704 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
2705 /// of this operation. All ordering modes are possible. Note that using
2706 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2707 /// using [`Release`] makes the load part [`Relaxed`].
2708 ///
2709 /// **Note**: This method is only available on platforms that support atomic operations on
2710 #[doc = concat!("[`", $s_int_type, "`].")]
2711 ///
2712 /// # Examples
2713 ///
2714 /// ```
2715 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2716 ///
2717 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
2718 /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
2719 /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
2720 /// ```
2721 #[inline]
2722 #[$stable]
2723 #[$cfg_cas]
2724 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2725 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
2726 // SAFETY: data races are prevented by atomic intrinsics.
2727 unsafe { atomic_and(self.v.get(), val, order) }
2728 }
2729
2730 /// Bitwise "nand" with the current value.
2731 ///
2732 /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
2733 /// sets the new value to the result.
2734 ///
2735 /// Returns the previous value.
2736 ///
2737 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
2738 /// of this operation. All ordering modes are possible. Note that using
2739 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2740 /// using [`Release`] makes the load part [`Relaxed`].
2741 ///
2742 /// **Note**: This method is only available on platforms that support atomic operations on
2743 #[doc = concat!("[`", $s_int_type, "`].")]
2744 ///
2745 /// # Examples
2746 ///
2747 /// ```
2748 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2749 ///
2750 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
2751 /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
2752 /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
2753 /// ```
2754 #[inline]
2755 #[$stable_nand]
2756 #[$cfg_cas]
2757 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2758 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
2759 // SAFETY: data races are prevented by atomic intrinsics.
2760 unsafe { atomic_nand(self.v.get(), val, order) }
2761 }
2762
2763 /// Bitwise "or" with the current value.
2764 ///
2765 /// Performs a bitwise "or" operation on the current value and the argument `val`, and
2766 /// sets the new value to the result.
2767 ///
2768 /// Returns the previous value.
2769 ///
2770 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
2771 /// of this operation. All ordering modes are possible. Note that using
2772 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2773 /// using [`Release`] makes the load part [`Relaxed`].
2774 ///
2775 /// **Note**: This method is only available on platforms that support atomic operations on
2776 #[doc = concat!("[`", $s_int_type, "`].")]
2777 ///
2778 /// # Examples
2779 ///
2780 /// ```
2781 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2782 ///
2783 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
2784 /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
2785 /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
2786 /// ```
2787 #[inline]
2788 #[$stable]
2789 #[$cfg_cas]
2790 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2791 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
2792 // SAFETY: data races are prevented by atomic intrinsics.
2793 unsafe { atomic_or(self.v.get(), val, order) }
2794 }
2795
2796 /// Bitwise "xor" with the current value.
2797 ///
2798 /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
2799 /// sets the new value to the result.
2800 ///
2801 /// Returns the previous value.
2802 ///
2803 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
2804 /// of this operation. All ordering modes are possible. Note that using
2805 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2806 /// using [`Release`] makes the load part [`Relaxed`].
2807 ///
2808 /// **Note**: This method is only available on platforms that support atomic operations on
2809 #[doc = concat!("[`", $s_int_type, "`].")]
2810 ///
2811 /// # Examples
2812 ///
2813 /// ```
2814 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2815 ///
2816 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
2817 /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
2818 /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
2819 /// ```
2820 #[inline]
2821 #[$stable]
2822 #[$cfg_cas]
2823 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2824 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
2825 // SAFETY: data races are prevented by atomic intrinsics.
2826 unsafe { atomic_xor(self.v.get(), val, order) }
2827 }
2828
2829 /// Fetches the value, and applies a function to it that returns an optional
2830 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
2831 /// `Err(previous_value)`.
2832 ///
    /// Note: This may call the function multiple times if the value has been changed by other threads in
    /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
    /// only once to the stored value.
2836 ///
2837 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
2838 /// The first describes the required ordering for when the operation finally succeeds while the second
2839 /// describes the required ordering for loads. These correspond to the success and failure orderings of
2840 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
2841 /// respectively.
2842 ///
2843 /// Using [`Acquire`] as success ordering makes the store part
2844 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
2845 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2846 ///
2847 /// **Note**: This method is only available on platforms that support atomic operations on
2848 #[doc = concat!("[`", $s_int_type, "`].")]
2849 ///
2850 /// # Considerations
2851 ///
2852 /// This method is not magic; it is not provided by the hardware.
2853 /// It is implemented in terms of
2854 #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
2855 /// and suffers from the same drawbacks.
2856 /// In particular, this method will not circumvent the [ABA Problem].
2857 ///
2858 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2859 ///
2860 /// # Examples
2861 ///
2862 /// ```rust
2863 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2864 ///
2865 #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
2866 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
2867 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
2868 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
2869 /// assert_eq!(x.load(Ordering::SeqCst), 9);
2870 /// ```
2871 #[inline]
2872 #[stable(feature = "no_more_cas", since = "1.45.0")]
2873 #[$cfg_cas]
2874 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2875 pub fn fetch_update<F>(&self,
2876 set_order: Ordering,
2877 fetch_order: Ordering,
2878 mut f: F) -> Result<$int_type, $int_type>
2879 where F: FnMut($int_type) -> Option<$int_type> {
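            // Load the current value with `fetch_order`, then loop: apply `f` and try to
            // publish its result with a weak compare-exchange, retrying with the freshly
            // observed value whenever the exchange loses a race or fails spuriously. If
            // `f` returns `None`, give up and report the last observed value as an error.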
2880 let mut prev = self.load(fetch_order);
2881 while let Some(next) = f(prev) {
2882 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2883 x @ Ok(_) => return x,
2884 Err(next_prev) => prev = next_prev
2885 }
2886 }
2887 Err(prev)
2888 }
2889
2890 /// Maximum with the current value.
2891 ///
2892 /// Finds the maximum of the current value and the argument `val`, and
2893 /// sets the new value to the result.
2894 ///
2895 /// Returns the previous value.
2896 ///
2897 /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
2898 /// of this operation. All ordering modes are possible. Note that using
2899 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2900 /// using [`Release`] makes the load part [`Relaxed`].
2901 ///
2902 /// **Note**: This method is only available on platforms that support atomic operations on
2903 #[doc = concat!("[`", $s_int_type, "`].")]
2904 ///
2905 /// # Examples
2906 ///
2907 /// ```
2908 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2909 ///
2910 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2911 /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
2912 /// assert_eq!(foo.load(Ordering::SeqCst), 42);
2913 /// ```
2914 ///
2915 /// If you want to obtain the maximum value in one step, you can use the following:
2916 ///
2917 /// ```
2918 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2919 ///
2920 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2921 /// let bar = 42;
2922 /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
2923 /// assert!(max_foo == 42);
2924 /// ```
2925 #[inline]
2926 #[stable(feature = "atomic_min_max", since = "1.45.0")]
2927 #[$cfg_cas]
2928 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2929 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
2930 // SAFETY: data races are prevented by atomic intrinsics.
2931 unsafe { $max_fn(self.v.get(), val, order) }
2932 }
2933
2934 /// Minimum with the current value.
2935 ///
2936 /// Finds the minimum of the current value and the argument `val`, and
2937 /// sets the new value to the result.
2938 ///
2939 /// Returns the previous value.
2940 ///
2941 /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
2942 /// of this operation. All ordering modes are possible. Note that using
2943 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2944 /// using [`Release`] makes the load part [`Relaxed`].
2945 ///
2946 /// **Note**: This method is only available on platforms that support atomic operations on
2947 #[doc = concat!("[`", $s_int_type, "`].")]
2948 ///
2949 /// # Examples
2950 ///
2951 /// ```
2952 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2953 ///
2954 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2955 /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
2956 /// assert_eq!(foo.load(Ordering::Relaxed), 23);
2957 /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
2958 /// assert_eq!(foo.load(Ordering::Relaxed), 22);
2959 /// ```
2960 ///
2961 /// If you want to obtain the minimum value in one step, you can use the following:
2962 ///
2963 /// ```
2964 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2965 ///
2966 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
2967 /// let bar = 12;
2968 /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
2969 /// assert_eq!(min_foo, 12);
2970 /// ```
2971 #[inline]
2972 #[stable(feature = "atomic_min_max", since = "1.45.0")]
2973 #[$cfg_cas]
2974 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2975 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
2976 // SAFETY: data races are prevented by atomic intrinsics.
2977 unsafe { $min_fn(self.v.get(), val, order) }
2978 }
2979
2980 /// Returns a mutable pointer to the underlying integer.
2981 ///
2982 /// Doing non-atomic reads and writes on the resulting integer can be a data race.
2983 /// This method is mostly useful for FFI, where the function signature may use
2984 #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
2985 ///
2986 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2987 /// atomic types work with interior mutability. All modifications of an atomic change the value
2988 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2989 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
2990 /// restriction: operations on it must be atomic.
2991 ///
2992 /// # Examples
2993 ///
2994 /// ```ignore (extern-declaration)
2995 /// # fn main() {
2996 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2997 ///
2998 /// extern "C" {
2999 #[doc = concat!(" fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
3000 /// }
3001 ///
3002 #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
3003 ///
3004 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
3005 /// unsafe {
3006 /// my_atomic_op(atomic.as_ptr());
3007 /// }
3008 /// # }
3009 /// ```
3010 #[inline]
3011 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
3012 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
3013 #[rustc_never_returns_null_ptr]
3014 pub const fn as_ptr(&self) -> *mut $int_type {
3015 self.v.get()
3016 }
3017 }
3018 }
3019}
3020
3021#[cfg(target_has_atomic_load_store = "8")]
3022atomic_int! {
3023 cfg(target_has_atomic = "8"),
3024 cfg(target_has_atomic_equal_alignment = "8"),
3025 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3026 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3027 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3028 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3029 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3030 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3031 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3032 cfg_attr(not(test), rustc_diagnostic_item = "AtomicI8"),
3033 "i8",
3034 "",
3035 atomic_min, atomic_max,
3036 1,
3037 i8 AtomicI8
3038}
3039#[cfg(target_has_atomic_load_store = "8")]
3040atomic_int! {
3041 cfg(target_has_atomic = "8"),
3042 cfg(target_has_atomic_equal_alignment = "8"),
3043 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3044 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3045 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3046 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3047 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3048 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3049 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3050 cfg_attr(not(test), rustc_diagnostic_item = "AtomicU8"),
3051 "u8",
3052 "",
3053 atomic_umin, atomic_umax,
3054 1,
3055 u8 AtomicU8
3056}
3057#[cfg(target_has_atomic_load_store = "16")]
3058atomic_int! {
3059 cfg(target_has_atomic = "16"),
3060 cfg(target_has_atomic_equal_alignment = "16"),
3061 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3062 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3063 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3064 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3065 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3066 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3067 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3068 cfg_attr(not(test), rustc_diagnostic_item = "AtomicI16"),
3069 "i16",
3070 "",
3071 atomic_min, atomic_max,
3072 2,
3073 i16 AtomicI16
3074}
3075#[cfg(target_has_atomic_load_store = "16")]
3076atomic_int! {
3077 cfg(target_has_atomic = "16"),
3078 cfg(target_has_atomic_equal_alignment = "16"),
3079 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3080 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3081 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3082 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3083 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3084 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3085 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3086 cfg_attr(not(test), rustc_diagnostic_item = "AtomicU16"),
3087 "u16",
3088 "",
3089 atomic_umin, atomic_umax,
3090 2,
3091 u16 AtomicU16
3092}
3093#[cfg(target_has_atomic_load_store = "32")]
3094atomic_int! {
3095 cfg(target_has_atomic = "32"),
3096 cfg(target_has_atomic_equal_alignment = "32"),
3097 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3098 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3099 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3100 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3101 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3102 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3103 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3104 cfg_attr(not(test), rustc_diagnostic_item = "AtomicI32"),
3105 "i32",
3106 "",
3107 atomic_min, atomic_max,
3108 4,
3109 i32 AtomicI32
3110}
3111#[cfg(target_has_atomic_load_store = "32")]
3112atomic_int! {
3113 cfg(target_has_atomic = "32"),
3114 cfg(target_has_atomic_equal_alignment = "32"),
3115 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3116 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3117 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3118 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3119 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3120 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3121 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3122 cfg_attr(not(test), rustc_diagnostic_item = "AtomicU32"),
3123 "u32",
3124 "",
3125 atomic_umin, atomic_umax,
3126 4,
3127 u32 AtomicU32
3128}
3129#[cfg(target_has_atomic_load_store = "64")]
3130atomic_int! {
3131 cfg(target_has_atomic = "64"),
3132 cfg(target_has_atomic_equal_alignment = "64"),
3133 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3134 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3135 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3136 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3137 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3138 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3139 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3140 cfg_attr(not(test), rustc_diagnostic_item = "AtomicI64"),
3141 "i64",
3142 "",
3143 atomic_min, atomic_max,
3144 8,
3145 i64 AtomicI64
3146}
3147#[cfg(target_has_atomic_load_store = "64")]
3148atomic_int! {
3149 cfg(target_has_atomic = "64"),
3150 cfg(target_has_atomic_equal_alignment = "64"),
3151 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3152 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3153 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3154 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3155 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3156 stable(feature = "integer_atomics_stable", since = "1.34.0"),
3157 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3158 cfg_attr(not(test), rustc_diagnostic_item = "AtomicU64"),
3159 "u64",
3160 "",
3161 atomic_umin, atomic_umax,
3162 8,
3163 u64 AtomicU64
3164}
3165#[cfg(target_has_atomic_load_store = "128")]
3166atomic_int! {
3167 cfg(target_has_atomic = "128"),
3168 cfg(target_has_atomic_equal_alignment = "128"),
3169 unstable(feature = "integer_atomics", issue = "99069"),
3170 unstable(feature = "integer_atomics", issue = "99069"),
3171 unstable(feature = "integer_atomics", issue = "99069"),
3172 unstable(feature = "integer_atomics", issue = "99069"),
3173 unstable(feature = "integer_atomics", issue = "99069"),
3174 unstable(feature = "integer_atomics", issue = "99069"),
3175 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3176 cfg_attr(not(test), rustc_diagnostic_item = "AtomicI128"),
3177 "i128",
3178 "#![feature(integer_atomics)]\n\n",
3179 atomic_min, atomic_max,
3180 16,
3181 i128 AtomicI128
3182}
3183#[cfg(target_has_atomic_load_store = "128")]
3184atomic_int! {
3185 cfg(target_has_atomic = "128"),
3186 cfg(target_has_atomic_equal_alignment = "128"),
3187 unstable(feature = "integer_atomics", issue = "99069"),
3188 unstable(feature = "integer_atomics", issue = "99069"),
3189 unstable(feature = "integer_atomics", issue = "99069"),
3190 unstable(feature = "integer_atomics", issue = "99069"),
3191 unstable(feature = "integer_atomics", issue = "99069"),
3192 unstable(feature = "integer_atomics", issue = "99069"),
3193 rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
3194 cfg_attr(not(test), rustc_diagnostic_item = "AtomicU128"),
3195 "u128",
3196 "#![feature(integer_atomics)]\n\n",
3197 atomic_umin, atomic_umax,
3198 16,
3199 u128 AtomicU128
3200}
3201
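// Instantiates `AtomicIsize` and `AtomicUsize` (plus the deprecated
// `ATOMIC_ISIZE_INIT`/`ATOMIC_USIZE_INIT` constants) with the alignment that
// matches the target's pointer width.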
3202#[cfg(target_has_atomic_load_store = "ptr")]
3203macro_rules! atomic_int_ptr_sized {
3204 ( $($target_pointer_width:literal $align:literal)* ) => { $(
3205 #[cfg(target_pointer_width = $target_pointer_width)]
3206 atomic_int! {
3207 cfg(target_has_atomic = "ptr"),
3208 cfg(target_has_atomic_equal_alignment = "ptr"),
3209 stable(feature = "rust1", since = "1.0.0"),
3210 stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3211 stable(feature = "atomic_debug", since = "1.3.0"),
3212 stable(feature = "atomic_access", since = "1.15.0"),
3213 stable(feature = "atomic_from", since = "1.23.0"),
3214 stable(feature = "atomic_nand", since = "1.27.0"),
3215 rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3216 cfg_attr(not(test), rustc_diagnostic_item = "AtomicIsize"),
3217 "isize",
3218 "",
3219 atomic_min, atomic_max,
3220 $align,
3221 isize AtomicIsize
3222 }
3223 #[cfg(target_pointer_width = $target_pointer_width)]
3224 atomic_int! {
3225 cfg(target_has_atomic = "ptr"),
3226 cfg(target_has_atomic_equal_alignment = "ptr"),
3227 stable(feature = "rust1", since = "1.0.0"),
3228 stable(feature = "extended_compare_and_swap", since = "1.10.0"),
3229 stable(feature = "atomic_debug", since = "1.3.0"),
3230 stable(feature = "atomic_access", since = "1.15.0"),
3231 stable(feature = "atomic_from", since = "1.23.0"),
3232 stable(feature = "atomic_nand", since = "1.27.0"),
3233 rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
3234 cfg_attr(not(test), rustc_diagnostic_item = "AtomicUsize"),
3235 "usize",
3236 "",
3237 atomic_umin, atomic_umax,
3238 $align,
3239 usize AtomicUsize
3240 }
3241
3242 /// An [`AtomicIsize`] initialized to `0`.
3243 #[cfg(target_pointer_width = $target_pointer_width)]
3244 #[stable(feature = "rust1", since = "1.0.0")]
3245 #[deprecated(
3246 since = "1.34.0",
3247 note = "the `new` function is now preferred",
3248 suggestion = "AtomicIsize::new(0)",
3249 )]
3250 pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);
3251
3252 /// An [`AtomicUsize`] initialized to `0`.
3253 #[cfg(target_pointer_width = $target_pointer_width)]
3254 #[stable(feature = "rust1", since = "1.0.0")]
3255 #[deprecated(
3256 since = "1.34.0",
3257 note = "the `new` function is now preferred",
3258 suggestion = "AtomicUsize::new(0)",
3259 )]
3260 pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
3261 )* };
3262}
3263
3264#[cfg(target_has_atomic_load_store = "ptr")]
3265atomic_int_ptr_sized! {
3266 "16" 2
3267 "32" 4
3268 "64" 8
3269}
3270
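/// Returns the strongest failure (load) ordering that is compatible with `order`
/// used as a success ordering: `Release` and `Relaxed` map to `Relaxed`,
/// `AcqRel` and `Acquire` map to `Acquire`, and `SeqCst` stays `SeqCst`.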
3271#[inline]
3272#[cfg(target_has_atomic)]
3273fn strongest_failure_ordering(order: Ordering) -> Ordering {
3274 match order {
3275 Release => Relaxed,
3276 Relaxed => Relaxed,
3277 SeqCst => SeqCst,
3278 Acquire => Acquire,
3279 AcqRel => Acquire,
3280 }
3281}
3282
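/// Stores `val` into `dst` with the given ordering.
///
/// `Acquire` and `AcqRel` are rejected at runtime, since a plain store has no
/// load half for them to apply to.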
3283#[inline]
3284#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3285unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
3286 // SAFETY: the caller must uphold the safety contract for `atomic_store`.
3287 unsafe {
3288 match order {
3289 Relaxed => intrinsics::atomic_store_relaxed(dst, val),
3290 Release => intrinsics::atomic_store_release(dst, val),
3291 SeqCst => intrinsics::atomic_store_seqcst(dst, val),
3292 Acquire => panic!("there is no such thing as an acquire store"),
3293 AcqRel => panic!("there is no such thing as an acquire-release store"),
3294 }
3295 }
3296}
3297
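/// Loads a value from `dst` with the given ordering.
///
/// `Release` and `AcqRel` are rejected at runtime, since a plain load has no
/// store half for them to apply to.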
3298#[inline]
3299#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3300unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
3301 // SAFETY: the caller must uphold the safety contract for `atomic_load`.
3302 unsafe {
3303 match order {
            Relaxed => intrinsics::atomic_load_relaxed(dst),
            Acquire => intrinsics::atomic_load_acquire(dst),
            SeqCst => intrinsics::atomic_load_seqcst(dst),
3307 Release => panic!("there is no such thing as a release load"),
3308 AcqRel => panic!("there is no such thing as an acquire-release load"),
3309 }
3310 }
3311}
3312
3313#[inline]
3314#[cfg(target_has_atomic)]
3315#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3316unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3317 // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
3318 unsafe {
3319 match order {
            Relaxed => intrinsics::atomic_xchg_relaxed(dst, val),
            Acquire => intrinsics::atomic_xchg_acquire(dst, val),
            Release => intrinsics::atomic_xchg_release(dst, val),
            AcqRel => intrinsics::atomic_xchg_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xchg_seqcst(dst, val),
3325 }
3326 }
3327}
3328
3329/// Returns the previous value (like __sync_fetch_and_add).
3330#[inline]
3331#[cfg(target_has_atomic)]
3332#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3333unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3334 // SAFETY: the caller must uphold the safety contract for `atomic_add`.
3335 unsafe {
3336 match order {
            Relaxed => intrinsics::atomic_xadd_relaxed(dst, val),
            Acquire => intrinsics::atomic_xadd_acquire(dst, val),
            Release => intrinsics::atomic_xadd_release(dst, val),
            AcqRel => intrinsics::atomic_xadd_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xadd_seqcst(dst, val),
3342 }
3343 }
3344}
3345
3346/// Returns the previous value (like __sync_fetch_and_sub).
3347#[inline]
3348#[cfg(target_has_atomic)]
3349#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3350unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3351 // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
3352 unsafe {
3353 match order {
            Relaxed => intrinsics::atomic_xsub_relaxed(dst, val),
            Acquire => intrinsics::atomic_xsub_acquire(dst, val),
            Release => intrinsics::atomic_xsub_release(dst, val),
            AcqRel => intrinsics::atomic_xsub_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xsub_seqcst(dst, val),
3359 }
3360 }
3361}
3362
3363#[inline]
3364#[cfg(target_has_atomic)]
3365#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3366unsafe fn atomic_compare_exchange<T: Copy>(
3367 dst: *mut T,
3368 old: T,
3369 new: T,
3370 success: Ordering,
3371 failure: Ordering,
3372) -> Result<T, T> {
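    // Each intrinsic returns the value observed in memory together with a flag that is
    // `true` iff the exchange actually took place; that pair is mapped onto `Ok`/`Err` below.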
3373 // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
3374 let (val, ok) = unsafe {
3375 match (success, failure) {
3376 (Relaxed, Relaxed) => intrinsics::atomic_cxchg_relaxed_relaxed(dst, old, new),
3377 (Relaxed, Acquire) => intrinsics::atomic_cxchg_relaxed_acquire(dst, old, new),
3378 (Relaxed, SeqCst) => intrinsics::atomic_cxchg_relaxed_seqcst(dst, old, new),
3379 (Acquire, Relaxed) => intrinsics::atomic_cxchg_acquire_relaxed(dst, old, new),
3380 (Acquire, Acquire) => intrinsics::atomic_cxchg_acquire_acquire(dst, old, new),
3381 (Acquire, SeqCst) => intrinsics::atomic_cxchg_acquire_seqcst(dst, old, new),
3382 (Release, Relaxed) => intrinsics::atomic_cxchg_release_relaxed(dst, old, new),
3383 (Release, Acquire) => intrinsics::atomic_cxchg_release_acquire(dst, old, new),
3384 (Release, SeqCst) => intrinsics::atomic_cxchg_release_seqcst(dst, old, new),
3385 (AcqRel, Relaxed) => intrinsics::atomic_cxchg_acqrel_relaxed(dst, old, new),
3386 (AcqRel, Acquire) => intrinsics::atomic_cxchg_acqrel_acquire(dst, old, new),
3387 (AcqRel, SeqCst) => intrinsics::atomic_cxchg_acqrel_seqcst(dst, old, new),
3388 (SeqCst, Relaxed) => intrinsics::atomic_cxchg_seqcst_relaxed(dst, old, new),
3389 (SeqCst, Acquire) => intrinsics::atomic_cxchg_seqcst_acquire(dst, old, new),
3390 (SeqCst, SeqCst) => intrinsics::atomic_cxchg_seqcst_seqcst(dst, old, new),
3391 (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
3392 (_, Release) => panic!("there is no such thing as a release failure ordering"),
3393 }
3394 };
3395 if ok { Ok(val) } else { Err(val) }
3396}
3397
3398#[inline]
3399#[cfg(target_has_atomic)]
3400#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3401unsafe fn atomic_compare_exchange_weak<T: Copy>(
3402 dst: *mut T,
3403 old: T,
3404 new: T,
3405 success: Ordering,
3406 failure: Ordering,
3407) -> Result<T, T> {
3408 // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
3409 let (val, ok) = unsafe {
3410 match (success, failure) {
3411 (Relaxed, Relaxed) => intrinsics::atomic_cxchgweak_relaxed_relaxed(dst, old, new),
3412 (Relaxed, Acquire) => intrinsics::atomic_cxchgweak_relaxed_acquire(dst, old, new),
3413 (Relaxed, SeqCst) => intrinsics::atomic_cxchgweak_relaxed_seqcst(dst, old, new),
3414 (Acquire, Relaxed) => intrinsics::atomic_cxchgweak_acquire_relaxed(dst, old, new),
3415 (Acquire, Acquire) => intrinsics::atomic_cxchgweak_acquire_acquire(dst, old, new),
3416 (Acquire, SeqCst) => intrinsics::atomic_cxchgweak_acquire_seqcst(dst, old, new),
3417 (Release, Relaxed) => intrinsics::atomic_cxchgweak_release_relaxed(dst, old, new),
3418 (Release, Acquire) => intrinsics::atomic_cxchgweak_release_acquire(dst, old, new),
3419 (Release, SeqCst) => intrinsics::atomic_cxchgweak_release_seqcst(dst, old, new),
3420 (AcqRel, Relaxed) => intrinsics::atomic_cxchgweak_acqrel_relaxed(dst, old, new),
3421 (AcqRel, Acquire) => intrinsics::atomic_cxchgweak_acqrel_acquire(dst, old, new),
3422 (AcqRel, SeqCst) => intrinsics::atomic_cxchgweak_acqrel_seqcst(dst, old, new),
3423 (SeqCst, Relaxed) => intrinsics::atomic_cxchgweak_seqcst_relaxed(dst, old, new),
3424 (SeqCst, Acquire) => intrinsics::atomic_cxchgweak_seqcst_acquire(dst, old, new),
3425 (SeqCst, SeqCst) => intrinsics::atomic_cxchgweak_seqcst_seqcst(dst, old, new),
3426 (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
3427 (_, Release) => panic!("there is no such thing as a release failure ordering"),
3428 }
3429 };
3430 if ok { Ok(val) } else { Err(val) }
3431}
3432
3433#[inline]
3434#[cfg(target_has_atomic)]
3435#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3436unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3437 // SAFETY: the caller must uphold the safety contract for `atomic_and`
3438 unsafe {
3439 match order {
            Relaxed => intrinsics::atomic_and_relaxed(dst, val),
            Acquire => intrinsics::atomic_and_acquire(dst, val),
            Release => intrinsics::atomic_and_release(dst, val),
            AcqRel => intrinsics::atomic_and_acqrel(dst, val),
            SeqCst => intrinsics::atomic_and_seqcst(dst, val),
3445 }
3446 }
3447}
3448
3449#[inline]
3450#[cfg(target_has_atomic)]
3451#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3452unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3453 // SAFETY: the caller must uphold the safety contract for `atomic_nand`
3454 unsafe {
3455 match order {
            Relaxed => intrinsics::atomic_nand_relaxed(dst, val),
            Acquire => intrinsics::atomic_nand_acquire(dst, val),
            Release => intrinsics::atomic_nand_release(dst, val),
            AcqRel => intrinsics::atomic_nand_acqrel(dst, val),
            SeqCst => intrinsics::atomic_nand_seqcst(dst, val),
3461 }
3462 }
3463}
3464
3465#[inline]
3466#[cfg(target_has_atomic)]
3467#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3468unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3469 // SAFETY: the caller must uphold the safety contract for `atomic_or`
3470 unsafe {
3471 match order {
            SeqCst => intrinsics::atomic_or_seqcst(dst, val),
            Acquire => intrinsics::atomic_or_acquire(dst, val),
            Release => intrinsics::atomic_or_release(dst, val),
            AcqRel => intrinsics::atomic_or_acqrel(dst, val),
            Relaxed => intrinsics::atomic_or_relaxed(dst, val),
3477 }
3478 }
3479}
3480
3481#[inline]
3482#[cfg(target_has_atomic)]
3483#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3484unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3485 // SAFETY: the caller must uphold the safety contract for `atomic_xor`
3486 unsafe {
3487 match order {
            SeqCst => intrinsics::atomic_xor_seqcst(dst, val),
            Acquire => intrinsics::atomic_xor_acquire(dst, val),
            Release => intrinsics::atomic_xor_release(dst, val),
            AcqRel => intrinsics::atomic_xor_acqrel(dst, val),
            Relaxed => intrinsics::atomic_xor_relaxed(dst, val),
3493 }
3494 }
3495}
3496
/// Returns the maximum value (signed comparison).
3498#[inline]
3499#[cfg(target_has_atomic)]
3500#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3501unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3502 // SAFETY: the caller must uphold the safety contract for `atomic_max`
3503 unsafe {
3504 match order {
            Relaxed => intrinsics::atomic_max_relaxed(dst, val),
            Acquire => intrinsics::atomic_max_acquire(dst, val),
            Release => intrinsics::atomic_max_release(dst, val),
            AcqRel => intrinsics::atomic_max_acqrel(dst, val),
            SeqCst => intrinsics::atomic_max_seqcst(dst, val),
3510 }
3511 }
3512}
3513
/// Returns the minimum value (signed comparison).
3515#[inline]
3516#[cfg(target_has_atomic)]
3517#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3518unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3519 // SAFETY: the caller must uphold the safety contract for `atomic_min`
3520 unsafe {
3521 match order {
            Relaxed => intrinsics::atomic_min_relaxed(dst, val),
            Acquire => intrinsics::atomic_min_acquire(dst, val),
            Release => intrinsics::atomic_min_release(dst, val),
            AcqRel => intrinsics::atomic_min_acqrel(dst, val),
            SeqCst => intrinsics::atomic_min_seqcst(dst, val),
3527 }
3528 }
3529}
3530
/// Returns the maximum value (unsigned comparison).
3532#[inline]
3533#[cfg(target_has_atomic)]
3534#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3535unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3536 // SAFETY: the caller must uphold the safety contract for `atomic_umax`
3537 unsafe {
3538 match order {
            Relaxed => intrinsics::atomic_umax_relaxed(dst, val),
            Acquire => intrinsics::atomic_umax_acquire(dst, val),
            Release => intrinsics::atomic_umax_release(dst, val),
            AcqRel => intrinsics::atomic_umax_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umax_seqcst(dst, val),
3544 }
3545 }
3546}
3547
/// Returns the minimum value (unsigned comparison).
3549#[inline]
3550#[cfg(target_has_atomic)]
3551#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3552unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
3553 // SAFETY: the caller must uphold the safety contract for `atomic_umin`
3554 unsafe {
3555 match order {
            Relaxed => intrinsics::atomic_umin_relaxed(dst, val),
            Acquire => intrinsics::atomic_umin_acquire(dst, val),
            Release => intrinsics::atomic_umin_release(dst, val),
            AcqRel => intrinsics::atomic_umin_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umin_seqcst(dst, val),
3561 }
3562 }
3563}
3564
3565/// An atomic fence.
3566///
3567/// Depending on the specified order, a fence prevents the compiler and CPU from
3568/// reordering certain types of memory operations around it.
3569/// That creates synchronizes-with relationships between it and atomic operations
3570/// or fences in other threads.
3571///
/// A fence 'A' which has (at least) [`Release`] ordering semantics synchronizes
3573/// with a fence 'B' with (at least) [`Acquire`] semantics, if and only if there
3574/// exist operations X and Y, both operating on some atomic object 'M' such
3575/// that A is sequenced before X, Y is sequenced before B and Y observes
3576/// the change to M. This provides a happens-before dependence between A and B.
3577///
3578/// ```text
3579/// Thread 1 Thread 2
3580///
3581/// fence(Release); A --------------
3582/// x.store(3, Relaxed); X --------- |
3583/// | |
3584/// | |
3585/// -------------> Y if x.load(Relaxed) == 3 {
3586/// |-------> B fence(Acquire);
3587/// ...
3588/// }
3589/// ```
3590///
3591/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
3592/// with a fence.
3593///
3594/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
3595/// and [`Release`] semantics, participates in the global program order of the
3596/// other [`SeqCst`] operations and/or fences.
3597///
3598/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
3599///
3600/// # Panics
3601///
3602/// Panics if `order` is [`Relaxed`].
3603///
3604/// # Examples
3605///
3606/// ```
3607/// use std::sync::atomic::AtomicBool;
3608/// use std::sync::atomic::fence;
3609/// use std::sync::atomic::Ordering;
3610///
3611/// // A mutual exclusion primitive based on spinlock.
3612/// pub struct Mutex {
3613/// flag: AtomicBool,
3614/// }
3615///
3616/// impl Mutex {
3617/// pub fn new() -> Mutex {
3618/// Mutex {
3619/// flag: AtomicBool::new(false),
3620/// }
3621/// }
3622///
3623/// pub fn lock(&self) {
3624/// // Wait until the old value is `false`.
3625/// while self
3626/// .flag
3627/// .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
3628/// .is_err()
3629/// {}
3630/// // This fence synchronizes-with store in `unlock`.
3631/// fence(Ordering::Acquire);
3632/// }
3633///
3634/// pub fn unlock(&self) {
3635/// self.flag.store(false, Ordering::Release);
3636/// }
3637/// }
3638/// ```
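///
/// A minimal sketch of the release/acquire fence pairing from the diagram above,
/// using hypothetical `DATA` and `READY` statics (not part of this module):
///
/// ```
/// use std::sync::atomic::{fence, AtomicBool, AtomicUsize, Ordering};
/// use std::thread;
///
/// static DATA: AtomicUsize = AtomicUsize::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let producer = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     // A: sequenced before the store to READY (X).
///     fence(Ordering::Release);
///     READY.store(true, Ordering::Relaxed);
/// });
///
/// // Y: this load may or may not observe the store to READY.
/// if READY.load(Ordering::Relaxed) {
///     // B: synchronizes with A because Y observed X, so the write to DATA is visible.
///     fence(Ordering::Acquire);
///     assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// }
/// producer.join().unwrap();
/// ```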
3639#[inline]
3640#[stable(feature = "rust1", since = "1.0.0")]
3641#[rustc_diagnostic_item = "fence"]
3642#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3643pub fn fence(order: Ordering) {
3644 // SAFETY: using an atomic fence is safe.
3645 unsafe {
3646 match order {
3647 Acquire => intrinsics::atomic_fence_acquire(),
3648 Release => intrinsics::atomic_fence_release(),
3649 AcqRel => intrinsics::atomic_fence_acqrel(),
3650 SeqCst => intrinsics::atomic_fence_seqcst(),
3651 Relaxed => panic!("there is no such thing as a relaxed fence"),
3652 }
3653 }
3654}
3655
3656/// A compiler memory fence.
3657///
3658/// `compiler_fence` does not emit any machine code, but restricts the kinds
3659/// of memory re-ordering the compiler is allowed to do. Specifically, depending on
3660/// the given [`Ordering`] semantics, the compiler may be disallowed from moving reads
3661/// or writes from before or after the call to the other side of the call to
3662/// `compiler_fence`. Note that it does **not** prevent the *hardware*
/// from doing such re-ordering. This is not a problem in a single-threaded
/// execution context, but when other threads may modify memory at the same
3665/// time, stronger synchronization primitives such as [`fence`] are required.
3666///
/// The re-orderings prevented by the different ordering semantics are:
3668///
3669/// - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed.
3670/// - with [`Release`], preceding reads and writes cannot be moved past subsequent writes.
3671/// - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
3672/// - with [`AcqRel`], both of the above rules are enforced.
3673///
3674/// `compiler_fence` is generally only useful for preventing a thread from
/// racing *with itself*. That is, it is useful when a given thread is executing one piece
/// of code, is then interrupted, and starts executing code elsewhere
3677/// (while still in the same thread, and conceptually still on the same
3678/// core). In traditional programs, this can only occur when a signal
3679/// handler is registered. In more low-level code, such situations can also
3680/// arise when handling interrupts, when implementing green threads with
3681/// pre-emption, etc. Curious readers are encouraged to read the Linux kernel's
3682/// discussion of [memory barriers].
3683///
3684/// # Panics
3685///
3686/// Panics if `order` is [`Relaxed`].
3687///
3688/// # Examples
3689///
/// Without `compiler_fence`, the `assert_eq!` in the following code
3691/// is *not* guaranteed to succeed, despite everything happening in a single thread.
3692/// To see why, remember that the compiler is free to swap the stores to
3693/// `IMPORTANT_VARIABLE` and `IS_READY` since they are both
3694/// `Ordering::Relaxed`. If it does, and the signal handler is invoked right
3695/// after `IS_READY` is updated, then the signal handler will see
3696/// `IS_READY=1`, but `IMPORTANT_VARIABLE=0`.
3697/// Using a `compiler_fence` remedies this situation.
3698///
3699/// ```
3700/// use std::sync::atomic::{AtomicBool, AtomicUsize};
3701/// use std::sync::atomic::Ordering;
3702/// use std::sync::atomic::compiler_fence;
3703///
3704/// static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
3705/// static IS_READY: AtomicBool = AtomicBool::new(false);
3706///
3707/// fn main() {
3708/// IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
3709/// // prevent earlier writes from being moved beyond this point
3710/// compiler_fence(Ordering::Release);
3711/// IS_READY.store(true, Ordering::Relaxed);
3712/// }
3713///
3714/// fn signal_handler() {
3715/// if IS_READY.load(Ordering::Relaxed) {
3716/// assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
3717/// }
3718/// }
3719/// ```
3720///
3721/// [memory barriers]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
3722#[inline]
3723#[stable(feature = "compiler_fences", since = "1.21.0")]
3724#[rustc_diagnostic_item = "compiler_fence"]
3725#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3726pub fn compiler_fence(order: Ordering) {
3727 // SAFETY: using an atomic fence is safe.
3728 unsafe {
3729 match order {
3730 Acquire => intrinsics::atomic_singlethreadfence_acquire(),
3731 Release => intrinsics::atomic_singlethreadfence_release(),
3732 AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
3733 SeqCst => intrinsics::atomic_singlethreadfence_seqcst(),
3734 Relaxed => panic!("there is no such thing as a relaxed compiler fence"),
3735 }
3736 }
3737}
3738
3739#[cfg(target_has_atomic_load_store = "8")]
3740#[stable(feature = "atomic_debug", since = "1.3.0")]
3741impl fmt::Debug for AtomicBool {
3742 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
3744 }
3745}
3746
3747#[cfg(target_has_atomic_load_store = "ptr")]
3748#[stable(feature = "atomic_debug", since = "1.3.0")]
3749impl<T> fmt::Debug for AtomicPtr<T> {
3750 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
3752 }
3753}
3754
3755#[cfg(target_has_atomic_load_store = "ptr")]
3756#[stable(feature = "atomic_pointer", since = "1.24.0")]
3757impl<T> fmt::Pointer for AtomicPtr<T> {
3758 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&self.load(Ordering::SeqCst), f)
3760 }
3761}
3762
3763/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
3764///
3765/// This function is deprecated in favor of [`hint::spin_loop`].
3766///
3767/// [`hint::spin_loop`]: crate::hint::spin_loop
3768#[inline]
3769#[stable(feature = "spin_loop_hint", since = "1.24.0")]
3770#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
3771pub fn spin_loop_hint() {
3772 spin_loop()
3773}
3774