//! Atomic types
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent
//! types.
//!
//! This module defines atomic versions of a select number of primitive
//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
//! [`AtomicI8`], [`AtomicU16`], etc.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! Atomic variables are safe to share between threads (they implement [`Sync`])
//! but they do not themselves provide the mechanism for sharing and follow the
//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
//! atomically-reference-counted shared pointer).
//!
//! [arc]: ../../../std/sync/struct.Arc.html
//!
//! Atomic types may be stored in static variables, initialized using
//! the constant initializers like [`AtomicBool::new`]. Atomic statics
//! are often used for lazy global initialization.
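//!
//! For example, an atomic static can lazily cache a computed value. This is a
//! minimal sketch (`generate_id` is a placeholder for any idempotent computation
//! that never returns zero); racing threads may both compute the value, but they
//! agree on which result is kept:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static ID: AtomicUsize = AtomicUsize::new(0); // zero means "not yet initialized"
//!
//! fn generate_id() -> usize {
//!     // Stand-in for an expensive, idempotent computation.
//!     42
//! }
//!
//! fn get_id() -> usize {
//!     let id = ID.load(Ordering::Relaxed);
//!     if id != 0 {
//!         return id;
//!     }
//!     let new = generate_id();
//!     // Keep whichever value won the race to initialize.
//!     match ID.compare_exchange(0, new, Ordering::Relaxed, Ordering::Relaxed) {
//!         Ok(_) => new,
//!         Err(existing) => existing,
//!     }
//! }
//!
//! assert_eq!(get_id(), get_id());
//! ```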
//!
//! ## Memory model for atomic accesses
//!
//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically `atomic_ref`.
//! Basically, creating a *shared reference* to one of the Rust atomic types corresponds to creating
//! an `atomic_ref` in C++; the `atomic_ref` is destroyed when the lifetime of the shared reference
//! ends. (A Rust atomic type that is exclusively owned or behind a mutable reference does *not*
//! correspond to an "atomic object" in C++, since it can be accessed via non-atomic operations.)
//!
//! [cpp]: https://en.cppreference.com/w/cpp/atomic
//!
//! Each method takes an [`Ordering`] which represents the strength of
//! the memory barrier for that operation. These orderings are the
//! same as the [C++20 atomic orderings][1]. For more information see the [nomicon][2].
//!
//! [1]: https://en.cppreference.com/w/cpp/atomic/memory_order
//! [2]: ../../../nomicon/atomics.html
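//!
//! For example, the release/acquire pair is what makes simple message passing
//! work. A minimal sketch, using nothing beyond this module's own types:
//!
//! ```
//! use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
//! use std::thread;
//!
//! static DATA: AtomicU32 = AtomicU32::new(0);
//! static READY: AtomicBool = AtomicBool::new(false);
//!
//! thread::spawn(|| {
//!     DATA.store(42, Ordering::Relaxed);
//!     // The `Release` store "publishes" every write sequenced before it ...
//!     READY.store(true, Ordering::Release);
//! });
//!
//! // ... and the `Acquire` load that observes it is guaranteed to see them.
//! while !READY.load(Ordering::Acquire) {
//!     std::hint::spin_loop();
//! }
//! assert_eq!(DATA.load(Ordering::Relaxed), 42);
//! ```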
//!
//! Since C++ does not support mixing atomic and non-atomic accesses, or non-synchronized
//! different-sized accesses to the same data, Rust does not support those operations either.
//! Note that both of those restrictions only apply if the accesses are non-synchronized.
//!
//! ```rust,no_run undefined_behavior
//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
//! use std::mem::transmute;
//! use std::thread;
//!
//! let atomic = AtomicU16::new(0);
//!
//! thread::scope(|s| {
//!     // This is UB: mixing atomic and non-atomic accesses
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) });
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: even reads are not allowed to be mixed
//!     s.spawn(|| atomic.load(Ordering::Relaxed));
//!     s.spawn(|| unsafe { atomic.as_ptr().read() });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine, `join` synchronizes the code in a way such that atomic
//!     // and non-atomic accesses can't happen "at the same time"
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().unwrap();
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) });
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: using different-sized atomic accesses to the same data
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine, `join` synchronizes the code in a way such that
//!     // differently-sized accesses can't happen "at the same time"
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().unwrap();
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//! ```
//!
//! # Portability
//!
//! All atomic types in this module are guaranteed to be [lock-free] if they're
//! available. This means they don't internally acquire a global mutex. Atomic
//! types and operations are not guaranteed to be wait-free. This means that
//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
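//!
//! A sketch of what such a loop looks like (illustrative only; it is not
//! necessarily how the standard library implements `fetch_or` on your target):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! fn fetch_or(a: &AtomicUsize, val: usize, order: Ordering) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         // Lock-free: some thread always makes progress, but this particular
//!         // thread can be forced to retry arbitrarily often (not wait-free).
//!         match a.compare_exchange_weak(old, old | val, order, Ordering::Relaxed) {
//!             Ok(prev) => return prev,
//!             Err(prev) => old = prev,
//!         }
//!     }
//! }
//!
//! let a = AtomicUsize::new(0b0101);
//! assert_eq!(fetch_or(&a, 0b0011, Ordering::Relaxed), 0b0101);
//! assert_eq!(a.load(Ordering::Relaxed), 0b0111);
//! ```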
//!
//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! have an impact on the correctness of code; it's just something to be aware of.
//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * Non-Linux ARM platforms like `armv5te` only provide `load`
//!   and `store` operations, and do not support Compare and Swap (CAS)
//!   operations, such as `swap`, `fetch_add`, etc. On Linux, however,
//!   these CAS operations are implemented via [operating system support], which
//!   may come with a performance penalty.
//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
//!   and do not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc.
//!
//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
//!
//! Note that future platforms may be added that also do not have support for
//! some atomic operations. Maximally portable code will want to be careful
//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
//! generally the most portable, but even then they're not available everywhere.
//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
//! `core` does not.
//!
//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
//! compile based on the target's supported bit widths. It is a key-value
//! option set for each supported size, with values "8", "16", "32", "64",
//! "128", and "ptr" for pointer-sized atomics.
//!
//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
//!
//! # Atomic accesses to read-only memory
//!
//! In general, *all* atomic accesses on read-only memory are Undefined Behavior. For instance, attempting
//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
//! on read-only memory.
//!
//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
//! is read-write; the only exceptions are memory created by `const` items or `static` items without
//! interior mutability, and memory that was specifically marked as read-only by the operating
//! system via platform-specific APIs.
//!
//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
//! Undefined Behavior. The exact size limit for what makes a load "sufficiently small" varies
//! depending on the target:
//!
//! | `target_arch` | Size limit |
//! |---------------|------------|
//! | `x86`, `arm`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
//!
//! Atomic loads that are larger than this limit, atomic loads with orderings other than
//! `Relaxed`, and *all* atomic loads on targets not listed in the table might still work on
//! read-only memory under certain conditions, but that is not a stable guarantee and should not be
//! relied upon.
//!
//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
//! acquire fence instead.
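//!
//! A minimal sketch of that pattern (the fence upgrades the relaxed load to
//! acquire semantics without ever risking a write to the page):
//!
//! ```
//! use std::sync::atomic::{fence, AtomicU32, Ordering};
//!
//! fn acquire_load(a: &AtomicU32) -> u32 {
//!     // A 4-byte relaxed load is "sufficiently small" on the targets listed above.
//!     let v = a.load(Ordering::Relaxed);
//!     // The fence orders this load like an acquire load.
//!     fence(Ordering::Acquire);
//!     v
//! }
//!
//! let a = AtomicU32::new(7);
//! assert_eq!(acquire_load(&a), 7);
//! ```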
//!
//! # Examples
//!
//! A simple spinlock:
//!
//! ```
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::Release);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::Acquire) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
//!
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
//!
//! // Note that Relaxed ordering doesn't synchronize anything
//! // except the global thread counter itself.
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may not be accurate at the moment of printing
//! // because some other thread may have changed the static value already.
//! println!("live threads: {}", old_thread_count + 1);
//! ```

#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
#![rustc_diagnostic_item = "atomic_mod"]

use self::Ordering::*;

use crate::cell::UnsafeCell;
use crate::fmt;
use crate::intrinsics;

use crate::hint::spin_loop;

// Some architectures don't have byte-sized atomics, which results in LLVM
// emulating them using a LL/SC loop. However for AtomicBool we can take
// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
// instead, which LLVM can emulate using a larger atomic OR/AND operation.
//
// This list should only contain architectures which have word-sized atomic-or/
// atomic-and instructions but don't natively support byte-sized atomics.
#[cfg(target_has_atomic = "8")]
const EMULATE_ATOMIC_BOOL: bool =
    cfg!(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"));

/// A boolean type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a [`bool`].
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of `u8`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicBool"]
#[repr(C, align(1))]
pub struct AtomicBool {
    v: UnsafeCell<u8>,
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

// Send is implicitly implemented for AtomicBool.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for AtomicBool {}

/// A raw pointer type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a `*mut T`.
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of pointers. Its size depends on the target pointer's size.
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(not(test), rustc_diagnostic_item = "AtomicPtr")]
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
pub struct AtomicPtr<T> {
    p: UnsafeCell<*mut T>,
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Default for AtomicPtr<T> {
    /// Creates a null `AtomicPtr<T>`.
    fn default() -> AtomicPtr<T> {
        AtomicPtr::new(crate::ptr::null_mut())
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<T> Send for AtomicPtr<T> {}
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<T> Sync for AtomicPtr<T> {}

/// Atomic memory orderings
///
/// Memory orderings specify the way atomic operations synchronize memory.
/// In the weakest ordering, [`Ordering::Relaxed`], only the memory directly touched by the
/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
/// operations synchronizes other memory while additionally preserving a total order of such
/// operations across all threads.
///
/// Rust's memory orderings are [the same as those of
/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
///
/// For more information see the [nomicon].
///
/// [nomicon]: ../../../nomicon/atomics.html
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
#[non_exhaustive]
#[rustc_diagnostic_item = "Ordering"]
pub enum Ordering {
    /// No ordering constraints, only atomic operations.
    ///
    /// Corresponds to [`memory_order_relaxed`] in C++20.
    ///
    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Relaxed,
    /// When coupled with a store, all previous operations become ordered
    /// before any load of this value with [`Acquire`] (or stronger) ordering.
    /// In particular, all previous writes become visible to all threads
    /// that perform an [`Acquire`] (or stronger) load of this value.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] load operation!
    ///
    /// This ordering is only applicable for operations that can perform a store.
    ///
    /// Corresponds to [`memory_order_release`] in C++20.
    ///
    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Release,
    /// When coupled with a load, if the loaded value was written by a store operation with
    /// [`Release`] (or stronger) ordering, then all subsequent operations
    /// become ordered after that store. In particular, all subsequent loads will see data
    /// written before the store.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] store operation!
    ///
    /// This ordering is only applicable for operations that can perform a load.
    ///
    /// Corresponds to [`memory_order_acquire`] in C++20.
    ///
    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Acquire,
    /// Has the effects of both [`Acquire`] and [`Release`] together:
    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
    ///
    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
    /// not performing any store and hence it has just [`Acquire`] ordering. However,
    /// `AcqRel` will never perform [`Relaxed`] accesses.
    ///
    /// This ordering is only applicable for operations that combine both loads and stores.
    ///
    /// Corresponds to [`memory_order_acq_rel`] in C++20.
    ///
    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    AcqRel,
    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
    /// operations, respectively) with the additional guarantee that all threads see all
    /// sequentially consistent operations in the same order.
    ///
    /// Corresponds to [`memory_order_seq_cst`] in C++20.
    ///
    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    SeqCst,
}

/// An [`AtomicBool`] initialized to `false`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(
    since = "1.34.0",
    note = "the `new` function is now preferred",
    suggestion = "AtomicBool::new(false)"
)]
pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);

#[cfg(target_has_atomic_load_store = "8")]
impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    #[must_use]
    pub const fn new(v: bool) -> AtomicBool {
        AtomicBool { v: UnsafeCell::new(v as u8) }
    }

    /// Creates a new `AtomicBool` from a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(pointer_is_aligned)]
    /// use std::sync::atomic::{self, AtomicBool};
    /// use std::mem::align_of;
    ///
    /// // Get a pointer to an allocated value
    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
    ///
    /// assert!(ptr.is_aligned_to(align_of::<AtomicBool>()));
    ///
    /// {
    ///     // Create an atomic view of the allocated value
    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
    ///
    ///     // Use `atomic` for atomic operations, possibly share it with other threads
    ///     atomic.store(true, atomic::Ordering::Relaxed);
    /// }
    ///
    /// // It's ok to non-atomically access the value behind `ptr`,
    /// // since the reference to the atomic ended its lifetime in the block above
    /// assert_eq!(unsafe { *ptr }, true);
    ///
    /// // Deallocate the value
    /// unsafe { drop(Box::from_raw(ptr)) }
    /// ```
    ///
    /// # Safety
    ///
    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
    ///   be bigger than `align_of::<bool>()`).
    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
    ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
    ///   without synchronization.
    ///
    /// [valid]: crate::ptr#safety
    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
    #[rustc_const_unstable(feature = "const_atomic_from_ptr", issue = "108652")]
    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
        // SAFETY: guaranteed by the caller
        unsafe { &*ptr.cast() }
    }

    /// Returns a mutable reference to the underlying [`bool`].
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = AtomicBool::new(true);
    /// assert_eq!(*some_bool.get_mut(), true);
    /// *some_bool.get_mut() = false;
    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    pub fn get_mut(&mut self) -> &mut bool {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(self.v.get() as *mut bool) }
    }

    /// Get atomic access to a `&mut bool`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = true;
    /// let a = AtomicBool::from_mut(&mut some_bool);
    /// a.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool, false);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut(v: &mut bool) -> &mut Self {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut bool as *mut Self) }
    }

    /// Get non-atomic access to a `&mut [AtomicBool]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut, inline_const)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
    ///
    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
    /// assert_eq!(view, [false; 10]);
    /// view[..5].copy_from_slice(&[true; 5]);
    ///
    /// std::thread::scope(|s| {
    ///     for t in &some_bools[..5] {
    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
    ///     }
    ///
    ///     for f in &some_bools[5..] {
    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
    }

    /// Get atomic access to a `&mut [bool]` slice.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [false; 10];
    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
    ///     }
    /// });
    /// assert_eq!(some_bools, [true; 10]);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let some_bool = AtomicBool::new(true);
    /// assert_eq!(some_bool.into_inner(), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
    pub const fn into_inner(self) -> bool {
        self.v.into_inner() != 0
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> bool {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe { atomic_load(self.v.get(), order) != 0 }
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn store(&self, val: bool, order: Ordering) {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe {
            atomic_store(self.v.get(), val as u8, order);
        }
    }

    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        if EMULATE_ATOMIC_BOOL {
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is always the previous value. If it is equal to `current`, then the value
    /// was updated.
    ///
    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
    /// happens, and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
    ///
    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
    /// memory orderings:
    ///
    /// Original | Success | Failure
    /// -------- | ------- | -------
    /// Relaxed  | Relaxed | Relaxed
    /// Acquire  | Acquire | Acquire
    /// Release  | Release | Relaxed
    /// AcqRel   | AcqRel  | Acquire
    /// SeqCst   | SeqCst  | SeqCst
    ///
    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
    /// which allows the compiler to generate better assembly code when the compare and swap
    /// is used in a loop.
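    ///
    /// For example, a `SeqCst` `compare_and_swap` migrates as follows (a sketch on this
    /// type; the same pattern applies to the integer atomics):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let a = AtomicBool::new(false);
    /// // Before: let old = a.compare_and_swap(false, true, Ordering::SeqCst);
    /// // After: spell out the failure ordering per the table above, then collapse
    /// // the `Result` back into the plain previous value.
    /// let old = a.compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(old, false);
    /// ```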
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[deprecated(
        since = "1.50.0",
        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
    )]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
            Ok(x) => x,
            Err(x) => x,
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_exchange(true,
    ///                                       false,
    ///                                       Ordering::Acquire,
    ///                                       Ordering::Relaxed),
    ///            Ok(true));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_exchange(true, true,
    ///                                       Ordering::SeqCst,
    ///                                       Ordering::Acquire),
    ///            Err(false));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            // Pick the strongest ordering from success and failure.
            let order = match (success, failure) {
                (SeqCst, _) => SeqCst,
                (_, SeqCst) => SeqCst,
                (AcqRel, _) => AcqRel,
                (_, AcqRel) => {
                    panic!("there is no such thing as an acquire-release failure ordering")
                }
                (Release, Acquire) => AcqRel,
                (Acquire, _) => Acquire,
                (_, Acquire) => Acquire,
                (Release, Relaxed) => Release,
                (_, Release) => panic!("there is no such thing as a release failure ordering"),
                (Relaxed, Relaxed) => Relaxed,
            };
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            match unsafe {
                atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
            } {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            return self.compare_exchange(current, new, success, failure);
        }

        // SAFETY: data races are prevented by atomic intrinsics.
        match unsafe {
            atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
        } {
            Ok(x) => Ok(x != 0),
            Err(x) => Err(x != 0),
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // We can't use atomic_nand here because it can result in a bool with
        // an invalid value. This happens because the atomic operation is done
        // with an 8-bit integer internally, which would set the upper 7 bits.
        // So we just use fetch_xor or swap instead.
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "not" with a boolean value.
    ///
    /// Performs a logical "not" operation on the current value, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_bool_fetch_not)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[unstable(feature = "atomic_bool_fetch_not", issue = "98485")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_not(&self, order: Ordering) -> bool {
        self.fetch_xor(true, order)
    }

    /// Returns a mutable pointer to the underlying [`bool`].
    ///
    /// Doing non-atomic reads and writes on the resulting `bool` can be a data race.
    /// This method is mostly useful for FFI, where the function signature may use
    /// `*mut bool` instead of `&AtomicBool`.
    ///
    /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
    /// atomic types work with interior mutability. All modifications of an atomic change the value
    /// through a shared reference, and can do so safely as long as they use atomic operations. Any
    /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
    /// restriction: operations on it must be atomic.
    ///
    /// # Examples
    ///
    /// ```ignore (extern-declaration)
    /// # fn main() {
    /// use std::sync::atomic::AtomicBool;
    ///
    /// extern "C" {
    ///     fn my_atomic_op(arg: *mut bool);
    /// }
    ///
    /// let mut atomic = AtomicBool::new(true);
    /// unsafe {
    ///     my_atomic_op(atomic.as_ptr());
    /// }
    /// # }
    /// ```
    #[inline]
    #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
    #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
    #[rustc_never_returns_null_ptr]
    pub const fn as_ptr(&self) -> *mut bool {
        self.v.get().cast()
    }

    /// Fetches the value, and applies a function to it that returns an optional
    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
    /// returned `Some(_)`, else `Err(previous_value)`.
    ///
    /// Note: This may call the function multiple times if the value has been
    /// changed from other threads in the meantime, as long as the function
    /// returns `Some(_)`, but the function will have been applied only once to
    /// the stored value.
    ///
    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. The first describes the required ordering for
    /// when the operation finally succeeds while the second describes the
    /// required ordering for loads. These correspond to the success and failure
    /// orderings of [`AtomicBool::compare_exchange`] respectively.
    ///
    /// Using [`Acquire`] as success ordering makes the store part of this
    /// operation [`Relaxed`], and using [`Release`] makes the final successful
    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
    /// [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Considerations
    ///
    /// This method is not magic; it is not provided by the hardware.
    /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks.
    /// In particular, this method will not circumvent the [ABA Problem].
    ///
    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
    ///
    /// # Examples
    ///
    /// ```rust
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let x = AtomicBool::new(false);
    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
    /// assert_eq!(x.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_update<F>(
        &self,
        set_order: Ordering,
        fetch_order: Ordering,
        mut f: F,
    ) -> Result<bool, bool>
    where
        F: FnMut(bool) -> Option<bool>,
    {
        let mut prev = self.load(fetch_order);
        while let Some(next) = f(prev) {
            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                x @ Ok(_) => return x,
                Err(next_prev) => prev = next_prev,
            }
        }
        Err(prev)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
impl<T> AtomicPtr<T> {
    /// Creates a new `AtomicPtr`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// let ptr = &mut 5;
    /// let atomic_ptr = AtomicPtr::new(ptr);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    pub const fn new(p: *mut T) -> AtomicPtr<T> {
        AtomicPtr { p: UnsafeCell::new(p) }
    }

    /// Creates a new `AtomicPtr` from a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(pointer_is_aligned)]
    /// use std::sync::atomic::{self, AtomicPtr};
    /// use std::mem::align_of;
    ///
    /// // Get a pointer to an allocated value
    /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut()));
    ///
    /// assert!(ptr.is_aligned_to(align_of::<AtomicPtr<u8>>()));
    ///
    /// {
    ///     // Create an atomic view of the allocated value
    ///     let atomic = unsafe { AtomicPtr::from_ptr(ptr) };
    ///
    ///     // Use `atomic` for atomic operations, possibly share it with other threads
    ///     atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed);
    /// }
    ///
    /// // It's ok to non-atomically access the value behind `ptr`,
    /// // since the reference to the atomic ended its lifetime in the block above
    /// assert!(!unsafe { *ptr }.is_null());
    ///
    /// // Deallocate the value
    /// unsafe { drop(Box::from_raw(ptr)) }
    /// ```
    ///
    /// # Safety
    ///
    /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
    ///   can be bigger than `align_of::<*mut T>()`).
    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
    ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
    ///   without synchronization.
    ///
    /// [valid]: crate::ptr#safety
    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
    #[rustc_const_unstable(feature = "const_atomic_from_ptr", issue = "108652")]
    pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> {
        // SAFETY: guaranteed by the caller
        unsafe { &*ptr.cast() }
    }

    /// Returns a mutable reference to the underlying pointer.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut data = 10;
    /// let mut atomic_ptr = AtomicPtr::new(&mut data);
    /// let mut other_data = 5;
    /// *atomic_ptr.get_mut() = &mut other_data;
    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    pub fn get_mut(&mut self) -> &mut *mut T {
        self.p.get_mut()
    }

    /// Get atomic access to a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut data = 123;
    /// let mut some_ptr = &mut data as *mut i32;
    /// let a = AtomicPtr::from_mut(&mut some_ptr);
    /// let mut other_data = 456;
    /// a.store(&mut other_data, Ordering::Relaxed);
    /// assert_eq!(unsafe { *some_ptr }, 456);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "ptr")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut(v: &mut *mut T) -> &mut Self {
        use crate::mem::align_of;
        let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()];
        // SAFETY:
        //  - the mutable reference guarantees unique ownership.
        //  - the alignment of `*mut T` and `Self` is the same on all platforms
        //    supported by rust, as verified above.
        unsafe { &mut *(v as *mut *mut T as *mut Self) }
    }

    /// Get non-atomic access to a `&mut [AtomicPtr]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut, inline_const)]
    /// use std::ptr::null_mut;
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10];
    ///
    /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs);
    /// assert_eq!(view, [null_mut::<String>(); 10]);
    /// view
    ///     .iter_mut()
    ///     .enumerate()
    ///     .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}"))));
    ///
    /// std::thread::scope(|s| {
    ///     for ptr in &some_ptrs {
    ///         s.spawn(move || {
    ///             let ptr = ptr.load(Ordering::Relaxed);
    ///             assert!(!ptr.is_null());
    ///
    ///             let name = unsafe { Box::from_raw(ptr) };
    ///             println!("Hello, {name}!");
    ///         });
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) }
    }

    /// Get atomic access to a slice of pointers.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::ptr::null_mut;
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let mut some_ptrs = [null_mut::<String>(); 10];
    /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || {
    ///             let name = Box::new(format!("thread{i}"));
    ///             a[i].store(Box::into_raw(name), Ordering::Relaxed);
    ///         });
    ///     }
    /// });
    /// for p in some_ptrs {
    ///     assert!(!p.is_null());
    ///     let name = unsafe { Box::from_raw(p) };
    ///     println!("Hello, {name}!");
    /// }
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "ptr")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] {
        // SAFETY:
        //  - the mutable reference guarantees unique ownership.
        //  - the alignment of `*mut T` and `Self` is the same on all platforms
        //    supported by rust, as verified above.
        unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// let mut data = 5;
    /// let atomic_ptr = AtomicPtr::new(&mut data);
    /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
    pub const fn into_inner(self) -> *mut T {
        self.p.into_inner()
    }

    /// Loads a value from the pointer.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let value = some_ptr.load(Ordering::Relaxed);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_load(self.p.get(), order) }
    }

    /// Stores a value into the pointer.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let other_ptr = &mut 10;
    ///
    /// some_ptr.store(other_ptr, Ordering::Relaxed);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn store(&self, ptr: *mut T, order: Ordering) {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe {
            atomic_store(self.p.get(), ptr, order);
        }
    }

    /// Stores a value into the pointer, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on pointers.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let other_ptr = &mut 10;
    ///
    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "ptr")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_swap(self.p.get(), ptr, order) }
    }
1492
1493 /// Stores a value into the pointer if the current value is the same as the `current` value.
1494 ///
1495 /// The return value is always the previous value. If it is equal to `current`, then the value
1496 /// was updated.
1497 ///
1498 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
1499 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
1500 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
1501 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
1502 /// happens, and using [`Release`] makes the load part [`Relaxed`].
1503 ///
1504 /// **Note:** This method is only available on platforms that support atomic
1505 /// operations on pointers.
1506 ///
1507 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
1508 ///
1509 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
1510 /// memory orderings:
1511 ///
1512 /// Original | Success | Failure
1513 /// -------- | ------- | -------
1514 /// Relaxed | Relaxed | Relaxed
1515 /// Acquire | Acquire | Acquire
1516 /// Release | Release | Relaxed
1517 /// AcqRel | AcqRel | Acquire
1518 /// SeqCst | SeqCst | SeqCst
1519 ///
1520 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
1521 /// which allows the compiler to generate better assembly code when the compare and swap
1522 /// is used in a loop.
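    ///
    /// For example, a `compare_and_swap` call using [`AcqRel`] can be migrated as in
    /// the following sketch (illustrative only, applying the mapping above):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let ptr = &mut 5;
    /// let some_ptr = AtomicPtr::new(ptr);
    ///
    /// let other_ptr = &mut 10;
    ///
    /// // Before: `some_ptr.compare_and_swap(ptr, other_ptr, Ordering::AcqRel)`
    /// // After (success: `AcqRel`, failure: `Acquire`):
    /// let value = match some_ptr.compare_exchange(ptr, other_ptr, Ordering::AcqRel, Ordering::Acquire) {
    ///     Ok(x) => x,
    ///     Err(x) => x,
    /// };
    /// ```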
1523 ///
1524 /// # Examples
1525 ///
1526 /// ```
1527 /// use std::sync::atomic::{AtomicPtr, Ordering};
1528 ///
1529 /// let ptr = &mut 5;
1530 /// let some_ptr = AtomicPtr::new(ptr);
1531 ///
1532 /// let other_ptr = &mut 10;
1533 ///
1534 /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
1535 /// ```
1536 #[inline]
1537 #[stable(feature = "rust1", since = "1.0.0")]
1538 #[deprecated(
1539 since = "1.50.0",
1540 note = "Use `compare_exchange` or `compare_exchange_weak` instead"
1541 )]
1542 #[cfg(target_has_atomic = "ptr")]
1543 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1544 pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T {
1545 match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
1546 Ok(x) => x,
1547 Err(x) => x,
1548 }
1549 }
1550
1551 /// Stores a value into the pointer if the current value is the same as the `current` value.
1552 ///
1553 /// The return value is a result indicating whether the new value was written and containing
1554 /// the previous value. On success this value is guaranteed to be equal to `current`.
1555 ///
1556 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1557 /// ordering of this operation. `success` describes the required ordering for the
1558 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1559 /// `failure` describes the required ordering for the load operation that takes place when
1560 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1561 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1562 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1563 ///
1564 /// **Note:** This method is only available on platforms that support atomic
1565 /// operations on pointers.
1566 ///
1567 /// # Examples
1568 ///
1569 /// ```
1570 /// use std::sync::atomic::{AtomicPtr, Ordering};
1571 ///
1572 /// let ptr = &mut 5;
1573 /// let some_ptr = AtomicPtr::new(ptr);
1574 ///
1575 /// let other_ptr = &mut 10;
1576 ///
1577 /// let value = some_ptr.compare_exchange(ptr, other_ptr,
1578 /// Ordering::SeqCst, Ordering::Relaxed);
1579 /// ```
1580 #[inline]
1581 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1582 #[cfg(target_has_atomic = "ptr")]
1583 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1584 pub fn compare_exchange(
1585 &self,
1586 current: *mut T,
1587 new: *mut T,
1588 success: Ordering,
1589 failure: Ordering,
1590 ) -> Result<*mut T, *mut T> {
1591 // SAFETY: data races are prevented by atomic intrinsics.
1592 unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) }
1593 }
1594
1595 /// Stores a value into the pointer if the current value is the same as the `current` value.
1596 ///
1597 /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1598 /// comparison succeeds, which can result in more efficient code on some platforms. The
1599 /// return value is a result indicating whether the new value was written and containing the
1600 /// previous value.
1601 ///
1602 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1603 /// ordering of this operation. `success` describes the required ordering for the
1604 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1605 /// `failure` describes the required ordering for the load operation that takes place when
1606 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1607 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1608 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1609 ///
1610 /// **Note:** This method is only available on platforms that support atomic
1611 /// operations on pointers.
1612 ///
1613 /// # Examples
1614 ///
1615 /// ```
1616 /// use std::sync::atomic::{AtomicPtr, Ordering};
1617 ///
1618 /// let some_ptr = AtomicPtr::new(&mut 5);
1619 ///
1620 /// let new = &mut 10;
1621 /// let mut old = some_ptr.load(Ordering::Relaxed);
1622 /// loop {
1623 /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1624 /// Ok(_) => break,
1625 /// Err(x) => old = x,
1626 /// }
1627 /// }
1628 /// ```
1629 #[inline]
1630 #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
1631 #[cfg(target_has_atomic = "ptr")]
1632 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1633 pub fn compare_exchange_weak(
1634 &self,
1635 current: *mut T,
1636 new: *mut T,
1637 success: Ordering,
1638 failure: Ordering,
1639 ) -> Result<*mut T, *mut T> {
1640 // SAFETY: This intrinsic is unsafe because it operates on a raw pointer
1641 // but we know for sure that the pointer is valid (we just got it from
1642 // an `UnsafeCell` that we have by reference) and the atomic operation
1643 // itself allows us to safely mutate the `UnsafeCell` contents.
1644 unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) }
1645 }
1646
1647 /// Fetches the value, and applies a function to it that returns an optional
1648 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1649 /// returned `Some(_)`, else `Err(previous_value)`.
1650 ///
1651 /// Note: This may call the function multiple times if the value has been
1652 /// changed from other threads in the meantime, as long as the function
1653 /// returns `Some(_)`, but the function will have been applied only once to
1654 /// the stored value.
1655 ///
1656 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1657 /// ordering of this operation. The first describes the required ordering for
1658 /// when the operation finally succeeds while the second describes the
1659 /// required ordering for loads. These correspond to the success and failure
1660 /// orderings of [`AtomicPtr::compare_exchange`] respectively.
1661 ///
1662 /// Using [`Acquire`] as success ordering makes the store part of this
1663 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1664 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1665 /// [`Acquire`] or [`Relaxed`].
1666 ///
1667 /// **Note:** This method is only available on platforms that support atomic
1668 /// operations on pointers.
1669 ///
1670 /// # Considerations
1671 ///
1672 /// This method is not magic; it is not provided by the hardware.
1673 /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks.
1674 /// In particular, this method will not circumvent the [ABA Problem].
1675 ///
1676 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1677 ///
1678 /// # Examples
1679 ///
1680 /// ```rust
1681 /// use std::sync::atomic::{AtomicPtr, Ordering};
1682 ///
1683 /// let ptr: *mut _ = &mut 5;
1684 /// let some_ptr = AtomicPtr::new(ptr);
1685 ///
1686 /// let new: *mut _ = &mut 10;
1687 /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1688 /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1689 /// if x == ptr {
1690 /// Some(new)
1691 /// } else {
1692 /// None
1693 /// }
1694 /// });
1695 /// assert_eq!(result, Ok(ptr));
1696 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1697 /// ```
1698 #[inline]
1699 #[stable(feature = "atomic_fetch_update", since = "1.53.0")]
1700 #[cfg(target_has_atomic = "ptr")]
1701 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1702 pub fn fetch_update<F>(
1703 &self,
1704 set_order: Ordering,
1705 fetch_order: Ordering,
1706 mut f: F,
1707 ) -> Result<*mut T, *mut T>
1708 where
1709 F: FnMut(*mut T) -> Option<*mut T>,
1710 {
1711 let mut prev = self.load(fetch_order);
1712 while let Some(next) = f(prev) {
1713 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1714 x @ Ok(_) => return x,
1715 Err(next_prev) => prev = next_prev,
1716 }
1717 }
1718 Err(prev)
1719 }
1720
1721 /// Offsets the pointer's address by adding `val` (in units of `T`),
1722 /// returning the previous pointer.
1723 ///
1724 /// This is equivalent to using [`wrapping_add`] to atomically perform the
1725 /// equivalent of `ptr = ptr.wrapping_add(val);`.
1726 ///
1727 /// This method operates in units of `T`, which means that it cannot be used
1728 /// to offset the pointer by an amount which is not a multiple of
1729 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1730 /// work with a deliberately misaligned pointer. In such cases, you may use
1731 /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
1732 ///
1733 /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
1734 /// memory ordering of this operation. All ordering modes are possible. Note
1735 /// that using [`Acquire`] makes the store part of this operation
1736 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1737 ///
1738 /// **Note**: This method is only available on platforms that support atomic
1739 /// operations on [`AtomicPtr`].
1740 ///
1741 /// [`wrapping_add`]: pointer::wrapping_add
1742 ///
1743 /// # Examples
1744 ///
1745 /// ```
1746 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1747 /// use core::sync::atomic::{AtomicPtr, Ordering};
1748 ///
1749 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1750 /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
1751 /// // Note: units of `size_of::<i64>()`.
1752 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
1753 /// ```
1754 #[inline]
1755 #[cfg(target_has_atomic = "ptr")]
1756 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1757 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1758 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
1759 self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
1760 }
1761
1762 /// Offsets the pointer's address by subtracting `val` (in units of `T`),
1763 /// returning the previous pointer.
1764 ///
1765 /// This is equivalent to using [`wrapping_sub`] to atomically perform the
1766 /// equivalent of `ptr = ptr.wrapping_sub(val);`.
1767 ///
1768 /// This method operates in units of `T`, which means that it cannot be used
1769 /// to offset the pointer by an amount which is not a multiple of
1770 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1771 /// work with a deliberately misaligned pointer. In such cases, you may use
1772 /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
1773 ///
1774 /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
1775 /// ordering of this operation. All ordering modes are possible. Note that
1776 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1777 /// and using [`Release`] makes the load part [`Relaxed`].
1778 ///
1779 /// **Note**: This method is only available on platforms that support atomic
1780 /// operations on [`AtomicPtr`].
1781 ///
1782 /// [`wrapping_sub`]: pointer::wrapping_sub
1783 ///
1784 /// # Examples
1785 ///
1786 /// ```
1787 /// #![feature(strict_provenance_atomic_ptr)]
1788 /// use core::sync::atomic::{AtomicPtr, Ordering};
1789 ///
1790 /// let array = [1i32, 2i32];
1791 /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
1792 ///
1793 /// assert!(core::ptr::eq(
1794 /// atom.fetch_ptr_sub(1, Ordering::Relaxed),
1795 /// &array[1],
1796 /// ));
1797 /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
1798 /// ```
1799 #[inline]
1800 #[cfg(target_has_atomic = "ptr")]
1801 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1802 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1803 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
1804 self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
1805 }
1806
1807 /// Offsets the pointer's address by adding `val` *bytes*, returning the
1808 /// previous pointer.
1809 ///
1810 /// This is equivalent to using [`wrapping_byte_add`] to atomically
1811 /// perform `ptr = ptr.wrapping_byte_add(val)`.
1812 ///
1813 /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
1814 /// memory ordering of this operation. All ordering modes are possible. Note
1815 /// that using [`Acquire`] makes the store part of this operation
1816 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1817 ///
1818 /// **Note**: This method is only available on platforms that support atomic
1819 /// operations on [`AtomicPtr`].
1820 ///
1821 /// [`wrapping_byte_add`]: pointer::wrapping_byte_add
1822 ///
1823 /// # Examples
1824 ///
1825 /// ```
1826 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1827 /// use core::sync::atomic::{AtomicPtr, Ordering};
1828 ///
1829 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1830 /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
1831 /// // Note: in units of bytes, not `size_of::<i64>()`.
1832 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
1833 /// ```
1834 #[inline]
1835 #[cfg(target_has_atomic = "ptr")]
1836 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1837 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1838 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
1839 // SAFETY: data races are prevented by atomic intrinsics.
1840 unsafe { atomic_add(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1841 }
1842
1843 /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
1844 /// previous pointer.
1845 ///
1846 /// This is equivalent to using [`wrapping_byte_sub`] to atomically
1847 /// perform `ptr = ptr.wrapping_byte_sub(val)`.
1848 ///
1849 /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
1850 /// memory ordering of this operation. All ordering modes are possible. Note
1851 /// that using [`Acquire`] makes the store part of this operation
1852 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1853 ///
1854 /// **Note**: This method is only available on platforms that support atomic
1855 /// operations on [`AtomicPtr`].
1856 ///
1857 /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub
1858 ///
1859 /// # Examples
1860 ///
1861 /// ```
1862 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1863 /// use core::sync::atomic::{AtomicPtr, Ordering};
1864 ///
1865 /// let atom = AtomicPtr::<i64>::new(core::ptr::invalid_mut(1));
1866 /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
1867 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
1868 /// ```
1869 #[inline]
1870 #[cfg(target_has_atomic = "ptr")]
1871 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1872 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1873 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
1874 // SAFETY: data races are prevented by atomic intrinsics.
1875 unsafe { atomic_sub(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1876 }
1877
1878 /// Performs a bitwise "or" operation on the address of the current pointer,
1879 /// and the argument `val`, and stores a pointer with provenance of the
1880 /// current pointer and the resulting address.
1881 ///
1882 /// This is equivalent to using [`map_addr`] to atomically perform
1883 /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
1884 /// pointer schemes to atomically set tag bits.
1885 ///
1886 /// **Caveat**: This operation returns the previous value. To compute the
1887 /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_or(val, ordering).map_addr(|a| a | val)`.
1889 ///
1890 /// `fetch_or` takes an [`Ordering`] argument which describes the memory
1891 /// ordering of this operation. All ordering modes are possible. Note that
1892 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1893 /// and using [`Release`] makes the load part [`Relaxed`].
1894 ///
1895 /// **Note**: This method is only available on platforms that support atomic
1896 /// operations on [`AtomicPtr`].
1897 ///
1898 /// This API and its claimed semantics are part of the Strict Provenance
1899 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
1900 /// details.
1901 ///
1902 /// [`map_addr`]: pointer::map_addr
1903 ///
1904 /// # Examples
1905 ///
1906 /// ```
1907 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1908 /// use core::sync::atomic::{AtomicPtr, Ordering};
1909 ///
1910 /// let pointer = &mut 3i64 as *mut i64;
1911 ///
1912 /// let atom = AtomicPtr::<i64>::new(pointer);
1913 /// // Tag the bottom bit of the pointer.
1914 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
1915 /// // Extract and untag.
1916 /// let tagged = atom.load(Ordering::Relaxed);
1917 /// assert_eq!(tagged.addr() & 1, 1);
1918 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
1919 /// ```
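    ///
    /// A sketch of the caveat above, computing the stored (tagged) value without
    /// losing provenance:
    ///
    /// ```
    /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
    /// use core::sync::atomic::{AtomicPtr, Ordering};
    ///
    /// let pointer = &mut 3i64 as *mut i64;
    /// let atom = AtomicPtr::<i64>::new(pointer);
    /// // `stored` is the value `atom` holds after the `fetch_or`, with the
    /// // provenance of `pointer`.
    /// let stored = atom.fetch_or(1, Ordering::Relaxed).map_addr(|a| a | 1);
    /// assert_eq!(stored.addr(), atom.load(Ordering::Relaxed).addr());
    /// ```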
1920 #[inline]
1921 #[cfg(target_has_atomic = "ptr")]
1922 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1923 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1924 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
1925 // SAFETY: data races are prevented by atomic intrinsics.
1926 unsafe { atomic_or(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1927 }
1928
1929 /// Performs a bitwise "and" operation on the address of the current
1930 /// pointer, and the argument `val`, and stores a pointer with provenance of
1931 /// the current pointer and the resulting address.
1932 ///
1933 /// This is equivalent to using [`map_addr`] to atomically perform
1934 /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
1935 /// pointer schemes to atomically unset tag bits.
1936 ///
1937 /// **Caveat**: This operation returns the previous value. To compute the
1938 /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_and(val, ordering).map_addr(|a| a & val)`.
1940 ///
1941 /// `fetch_and` takes an [`Ordering`] argument which describes the memory
1942 /// ordering of this operation. All ordering modes are possible. Note that
1943 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1944 /// and using [`Release`] makes the load part [`Relaxed`].
1945 ///
1946 /// **Note**: This method is only available on platforms that support atomic
1947 /// operations on [`AtomicPtr`].
1948 ///
1949 /// This API and its claimed semantics are part of the Strict Provenance
1950 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
1951 /// details.
1952 ///
1953 /// [`map_addr`]: pointer::map_addr
1954 ///
1955 /// # Examples
1956 ///
1957 /// ```
1958 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
1959 /// use core::sync::atomic::{AtomicPtr, Ordering};
1960 ///
1961 /// let pointer = &mut 3i64 as *mut i64;
1962 /// // A tagged pointer
1963 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
1964 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
1965 /// // Untag, and extract the previously tagged pointer.
1966 /// let untagged = atom.fetch_and(!1, Ordering::Relaxed)
1967 /// .map_addr(|a| a & !1);
1968 /// assert_eq!(untagged, pointer);
1969 /// ```
1970 #[inline]
1971 #[cfg(target_has_atomic = "ptr")]
1972 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
1973 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1974 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
1975 // SAFETY: data races are prevented by atomic intrinsics.
1976 unsafe { atomic_and(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
1977 }
1978
1979 /// Performs a bitwise "xor" operation on the address of the current
1980 /// pointer, and the argument `val`, and stores a pointer with provenance of
1981 /// the current pointer and the resulting address.
1982 ///
1983 /// This is equivalent to using [`map_addr`] to atomically perform
1984 /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
1985 /// pointer schemes to atomically toggle tag bits.
1986 ///
1987 /// **Caveat**: This operation returns the previous value. To compute the
1988 /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_xor(val, ordering).map_addr(|a| a ^ val)`.
1990 ///
1991 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
1992 /// ordering of this operation. All ordering modes are possible. Note that
1993 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1994 /// and using [`Release`] makes the load part [`Relaxed`].
1995 ///
1996 /// **Note**: This method is only available on platforms that support atomic
1997 /// operations on [`AtomicPtr`].
1998 ///
1999 /// This API and its claimed semantics are part of the Strict Provenance
2000 /// experiment, see the [module documentation for `ptr`][crate::ptr] for
2001 /// details.
2002 ///
2003 /// [`map_addr`]: pointer::map_addr
2004 ///
2005 /// # Examples
2006 ///
2007 /// ```
2008 /// #![feature(strict_provenance_atomic_ptr, strict_provenance)]
2009 /// use core::sync::atomic::{AtomicPtr, Ordering};
2010 ///
2011 /// let pointer = &mut 3i64 as *mut i64;
2012 /// let atom = AtomicPtr::<i64>::new(pointer);
2013 ///
2014 /// // Toggle a tag bit on the pointer.
2015 /// atom.fetch_xor(1, Ordering::Relaxed);
2016 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2017 /// ```
2018 #[inline]
2019 #[cfg(target_has_atomic = "ptr")]
2020 #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")]
2021 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2022 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2023 // SAFETY: data races are prevented by atomic intrinsics.
2024 unsafe { atomic_xor(self.p.get(), core::ptr::invalid_mut(val), order).cast() }
2025 }
2026
2027 /// Returns a mutable pointer to the underlying pointer.
2028 ///
    /// Doing non-atomic reads and writes on the resulting pointer can be a data race.
2030 /// This method is mostly useful for FFI, where the function signature may use
2031 /// `*mut *mut T` instead of `&AtomicPtr<T>`.
2032 ///
2033 /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
2034 /// atomic types work with interior mutability. All modifications of an atomic change the value
2035 /// through a shared reference, and can do so safely as long as they use atomic operations. Any
2036 /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
2037 /// restriction: operations on it must be atomic.
2038 ///
2039 /// # Examples
2040 ///
2041 /// ```ignore (extern-declaration)
2042 /// use std::sync::atomic::AtomicPtr;
2043 ///
2044 /// extern "C" {
2045 /// fn my_atomic_op(arg: *mut *mut u32);
2046 /// }
2047 ///
2048 /// let mut value = 17;
2049 /// let atomic = AtomicPtr::new(&mut value);
2050 ///
2051 /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
2052 /// unsafe {
2053 /// my_atomic_op(atomic.as_ptr());
2054 /// }
2055 /// ```
2056 #[inline]
2057 #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
2058 #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
2059 #[rustc_never_returns_null_ptr]
2060 pub const fn as_ptr(&self) -> *mut *mut T {
2061 self.p.get()
2062 }
2063}
2064
2065#[cfg(target_has_atomic_load_store = "8")]
2066#[stable(feature = "atomic_bool_from", since = "1.24.0")]
2067impl From<bool> for AtomicBool {
2068 /// Converts a `bool` into an `AtomicBool`.
2069 ///
2070 /// # Examples
2071 ///
2072 /// ```
2073 /// use std::sync::atomic::AtomicBool;
2074 /// let atomic_bool = AtomicBool::from(true);
2075 /// assert_eq!(format!("{atomic_bool:?}"), "true")
2076 /// ```
2077 #[inline]
2078 fn from(b: bool) -> Self {
2079 Self::new(b)
2080 }
2081}
2082
2083#[cfg(target_has_atomic_load_store = "ptr")]
2084#[stable(feature = "atomic_from", since = "1.23.0")]
2085impl<T> From<*mut T> for AtomicPtr<T> {
2086 /// Converts a `*mut T` into an `AtomicPtr<T>`.
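    ///
    /// # Examples
    ///
    /// A minimal usage sketch:
    ///
    /// ```
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// let mut data = 5;
    /// let atomic = AtomicPtr::from(&mut data as *mut i32);
    /// // The wrapped pointer still points at `data`.
    /// assert_eq!(unsafe { *atomic.into_inner() }, 5);
    /// ```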
2087 #[inline]
2088 fn from(p: *mut T) -> Self {
2089 Self::new(p)
2090 }
2091}
2092
2093#[allow(unused_macros)] // This macro ends up being unused on some architectures.
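// Emits an empty doc string for `u8`/`i8` and the given tokens for all other
// integer types; used to suppress notes that do not apply to 8-bit atomics.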
2094macro_rules! if_not_8_bit {
2095 (u8, $($tt:tt)*) => { "" };
2096 (i8, $($tt:tt)*) => { "" };
2097 ($_:ident, $($tt:tt)*) => { $($tt)* };
2098}
2099
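// Generates an atomic integer type and its inherent impl. The arguments, in
// order: the cfg gate for CAS operations; the cfg gate for equal alignment
// (used by `from_mut`); stability attributes for the type itself and for the
// compare_exchange, Debug, access, From, and nand method groups;
// const-stability for `new`; the rustc diagnostic item; the integer type's
// name as a doc string; extra `#![feature]` lines for doc examples; the
// min/max intrinsic names; the type's alignment; and the integer and atomic
// type identifiers.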
2100#[cfg(target_has_atomic_load_store)]
2101macro_rules! atomic_int {
2102 ($cfg_cas:meta,
2103 $cfg_align:meta,
2104 $stable:meta,
2105 $stable_cxchg:meta,
2106 $stable_debug:meta,
2107 $stable_access:meta,
2108 $stable_from:meta,
2109 $stable_nand:meta,
2110 $const_stable:meta,
2111 $diagnostic_item:meta,
2112 $s_int_type:literal,
2113 $extra_feature:expr,
2114 $min_fn:ident, $max_fn:ident,
2115 $align:expr,
2116 $int_type:ident $atomic_type:ident) => {
2117 /// An integer type which can be safely shared between threads.
2118 ///
2119 /// This type has the same in-memory representation as the underlying
2120 /// integer type, [`
2121 #[doc = $s_int_type]
2122 /// `]. For more about the differences between atomic types and
2123 /// non-atomic types as well as information about the portability of
2124 /// this type, please see the [module-level documentation].
2125 ///
2126 /// **Note:** This type is only available on platforms that support
2127 /// atomic loads and stores of [`
2128 #[doc = $s_int_type]
2129 /// `].
2130 ///
2131 /// [module-level documentation]: crate::sync::atomic
2132 #[$stable]
2133 #[$diagnostic_item]
2134 #[repr(C, align($align))]
2135 pub struct $atomic_type {
2136 v: UnsafeCell<$int_type>,
2137 }
2138
2139 #[$stable]
2140 impl Default for $atomic_type {
2141 #[inline]
2142 fn default() -> Self {
2143 Self::new(Default::default())
2144 }
2145 }
2146
2147 #[$stable_from]
2148 impl From<$int_type> for $atomic_type {
2149 #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")]
2150 #[inline]
2151 fn from(v: $int_type) -> Self { Self::new(v) }
2152 }
2153
2154 #[$stable_debug]
2155 impl fmt::Debug for $atomic_type {
2156 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
2157 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
2158 }
2159 }
2160
2161 // Send is implicitly implemented.
2162 #[$stable]
2163 unsafe impl Sync for $atomic_type {}
2164
2165 impl $atomic_type {
2166 /// Creates a new atomic integer.
2167 ///
2168 /// # Examples
2169 ///
2170 /// ```
2171 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2172 ///
2173 #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")]
2174 /// ```
2175 #[inline]
2176 #[$stable]
2177 #[$const_stable]
2178 #[must_use]
2179 pub const fn new(v: $int_type) -> Self {
2180 Self {v: UnsafeCell::new(v)}
2181 }
2182
2183 /// Creates a new reference to an atomic integer from a pointer.
2184 ///
2185 /// # Examples
2186 ///
2187 /// ```
2188 /// #![feature(pointer_is_aligned)]
2189 #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")]
2190 /// use std::mem::align_of;
2191 ///
2192 /// // Get a pointer to an allocated value
2193 #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")]
2194 ///
2195 #[doc = concat!("assert!(ptr.is_aligned_to(align_of::<", stringify!($atomic_type), ">()));")]
2196 ///
2197 /// {
2198 /// // Create an atomic view of the allocated value
2199 // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above)
2200 #[doc = concat!(" let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")]
2201 ///
2202 /// // Use `atomic` for atomic operations, possibly share it with other threads
2203 /// atomic.store(1, atomic::Ordering::Relaxed);
2204 /// }
2205 ///
2206 /// // It's ok to non-atomically access the value behind `ptr`,
2207 /// // since the reference to the atomic ended its lifetime in the block above
2208 /// assert_eq!(unsafe { *ptr }, 1);
2209 ///
2210 /// // Deallocate the value
2211 /// unsafe { drop(Box::from_raw(ptr)) }
2212 /// ```
2213 ///
2214 /// # Safety
2215 ///
2216 #[doc = concat!(" * `ptr` must be aligned to \
2217 `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this \
2218 can be bigger than `align_of::<", stringify!($int_type), ">()`).")]
2219 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2220 /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
2221 /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
2222 /// without synchronization.
2223 ///
2224 /// [valid]: crate::ptr#safety
2225 /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
2226 #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
2227 #[rustc_const_unstable(feature = "const_atomic_from_ptr", issue = "108652")]
2228 pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type {
2229 // SAFETY: guaranteed by the caller
2230 unsafe { &*ptr.cast() }
2231 }
2234 /// Returns a mutable reference to the underlying integer.
2235 ///
2236 /// This is safe because the mutable reference guarantees that no other threads are
2237 /// concurrently accessing the atomic data.
2238 ///
2239 /// # Examples
2240 ///
2241 /// ```
2242 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2243 ///
2244 #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")]
2245 /// assert_eq!(*some_var.get_mut(), 10);
2246 /// *some_var.get_mut() = 5;
2247 /// assert_eq!(some_var.load(Ordering::SeqCst), 5);
2248 /// ```
2249 #[inline]
2250 #[$stable_access]
2251 pub fn get_mut(&mut self) -> &mut $int_type {
2252 self.v.get_mut()
2253 }
2254
2255 #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")]
2256 ///
2257 #[doc = if_not_8_bit! {
2258 $int_type,
2259 concat!(
2260 "**Note:** This function is only available on targets where `",
2261 stringify!($int_type), "` has an alignment of ", $align, " bytes."
2262 )
2263 }]
2264 ///
2265 /// # Examples
2266 ///
2267 /// ```
2268 /// #![feature(atomic_from_mut)]
2269 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2270 ///
2271 /// let mut some_int = 123;
2272 #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")]
2273 /// a.store(100, Ordering::Relaxed);
2274 /// assert_eq!(some_int, 100);
2275 /// ```
2277 #[inline]
2278 #[$cfg_align]
2279 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2280 pub fn from_mut(v: &mut $int_type) -> &mut Self {
2281 use crate::mem::align_of;
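                // Compile-time alignment check: this only typechecks when the array
                // length is zero, i.e. when `align_of::<Self>() == align_of::<$int_type>()`.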
2282 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2283 // SAFETY:
2284 // - the mutable reference guarantees unique ownership.
2285 // - the alignment of `$int_type` and `Self` is the
2286 // same, as promised by $cfg_align and verified above.
2287 unsafe { &mut *(v as *mut $int_type as *mut Self) }
2288 }
2289
2290 #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")]
2291 ///
2292 /// This is safe because the mutable reference guarantees that no other threads are
2293 /// concurrently accessing the atomic data.
2294 ///
2295 /// # Examples
2296 ///
2297 /// ```
2298 /// #![feature(atomic_from_mut, inline_const)]
2299 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2300 ///
2301 #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")]
2302 ///
2303 #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")]
2304 /// assert_eq!(view, [0; 10]);
2305 /// view
2306 /// .iter_mut()
2307 /// .enumerate()
2308 /// .for_each(|(idx, int)| *int = idx as _);
2309 ///
2310 /// std::thread::scope(|s| {
2311 /// some_ints
2312 /// .iter()
2313 /// .enumerate()
2314 /// .for_each(|(idx, int)| {
2315 /// s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _));
2316 /// })
2317 /// });
2318 /// ```
2319 #[inline]
2320 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2321 pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] {
2322 // SAFETY: the mutable reference guarantees unique ownership.
2323 unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) }
2324 }
2325
2326 #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")]
2327 ///
2328 /// # Examples
2329 ///
2330 /// ```
2331 /// #![feature(atomic_from_mut)]
2332 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2333 ///
2334 /// let mut some_ints = [0; 10];
2335 #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")]
2336 /// std::thread::scope(|s| {
2337 /// for i in 0..a.len() {
2338 /// s.spawn(move || a[i].store(i as _, Ordering::Relaxed));
2339 /// }
2340 /// });
2341 /// for (i, n) in some_ints.into_iter().enumerate() {
2342 /// assert_eq!(i, n as usize);
2343 /// }
2344 /// ```
2345 #[inline]
2346 #[$cfg_align]
2347 #[unstable(feature = "atomic_from_mut", issue = "76314")]
2348 pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] {
2349 use crate::mem::align_of;
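                // Same compile-time alignment check as in `from_mut` above.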
2350 let [] = [(); align_of::<Self>() - align_of::<$int_type>()];
2351 // SAFETY:
2352 // - the mutable reference guarantees unique ownership.
2353 // - the alignment of `$int_type` and `Self` is the
2354 // same, as promised by $cfg_align and verified above.
2355 unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) }
2356 }
2357
2358 /// Consumes the atomic and returns the contained value.
2359 ///
2360 /// This is safe because passing `self` by value guarantees that no other threads are
2361 /// concurrently accessing the atomic data.
2362 ///
2363 /// # Examples
2364 ///
2365 /// ```
2366 #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
2367 ///
2368 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2369 /// assert_eq!(some_var.into_inner(), 5);
2370 /// ```
2371 #[inline]
2372 #[$stable_access]
2373 #[rustc_const_unstable(feature = "const_cell_into_inner", issue = "78729")]
2374 pub const fn into_inner(self) -> $int_type {
2375 self.v.into_inner()
2376 }
2377
2378 /// Loads a value from the atomic integer.
2379 ///
2380 /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2381 /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2382 ///
2383 /// # Panics
2384 ///
2385 /// Panics if `order` is [`Release`] or [`AcqRel`].
2386 ///
2387 /// # Examples
2388 ///
2389 /// ```
2390 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2391 ///
2392 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2393 ///
2394 /// assert_eq!(some_var.load(Ordering::Relaxed), 5);
2395 /// ```
2396 #[inline]
2397 #[$stable]
2398 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2399 pub fn load(&self, order: Ordering) -> $int_type {
2400 // SAFETY: data races are prevented by atomic intrinsics.
2401 unsafe { atomic_load(self.v.get(), order) }
2402 }
2403
2404 /// Stores a value into the atomic integer.
2405 ///
2406 /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2407 /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2408 ///
2409 /// # Panics
2410 ///
2411 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
2412 ///
2413 /// # Examples
2414 ///
2415 /// ```
2416 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2417 ///
2418 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2419 ///
2420 /// some_var.store(10, Ordering::Relaxed);
2421 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2422 /// ```
2423 #[inline]
2424 #[$stable]
2425 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2426 pub fn store(&self, val: $int_type, order: Ordering) {
2427 // SAFETY: data races are prevented by atomic intrinsics.
2428 unsafe { atomic_store(self.v.get(), val, order); }
2429 }
2430
2431 /// Stores a value into the atomic integer, returning the previous value.
2432 ///
2433 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
2434 /// of this operation. All ordering modes are possible. Note that using
2435 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2436 /// using [`Release`] makes the load part [`Relaxed`].
2437 ///
2438 /// **Note**: This method is only available on platforms that support atomic operations on
2439 #[doc = concat!("[`", $s_int_type, "`].")]
2440 ///
2441 /// # Examples
2442 ///
2443 /// ```
2444 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2445 ///
2446 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2447 ///
2448 /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2449 /// ```
2450 #[inline]
2451 #[$stable]
2452 #[$cfg_cas]
2453 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2454 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2455 // SAFETY: data races are prevented by atomic intrinsics.
2456 unsafe { atomic_swap(self.v.get(), val, order) }
2457 }
2458
2459 /// Stores a value into the atomic integer if the current value is the same as
2460 /// the `current` value.
2461 ///
2462 /// The return value is always the previous value. If it is equal to `current`, then the
2463 /// value was updated.
2464 ///
2465 /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
2466 /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
2467 /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
2468 /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
2469 /// happens, and using [`Release`] makes the load part [`Relaxed`].
2470 ///
2471 /// **Note**: This method is only available on platforms that support atomic operations on
2472 #[doc = concat!("[`", $s_int_type, "`].")]
2473 ///
2474 /// # Migrating to `compare_exchange` and `compare_exchange_weak`
2475 ///
2476 /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
2477 /// memory orderings:
2478 ///
2479 /// Original | Success | Failure
2480 /// -------- | ------- | -------
2481 /// Relaxed | Relaxed | Relaxed
2482 /// Acquire | Acquire | Acquire
2483 /// Release | Release | Relaxed
2484 /// AcqRel | AcqRel | Acquire
2485 /// SeqCst | SeqCst | SeqCst
2486 ///
2487 /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
2488 /// which allows the compiler to generate better assembly code when the compare and swap
2489 /// is used in a loop.
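            ///
            /// For example, a `compare_and_swap` call using [`AcqRel`] can be migrated as in
            /// the following sketch (illustrative only, applying the mapping above):
            ///
            /// ```
            #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
            ///
            #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
            ///
            /// // Before: `some_var.compare_and_swap(5, 10, Ordering::AcqRel)`
            /// // After (success: `AcqRel`, failure: `Acquire`):
            /// let value = match some_var.compare_exchange(5, 10, Ordering::AcqRel, Ordering::Acquire) {
            ///     Ok(x) => x,
            ///     Err(x) => x,
            /// };
            /// assert_eq!(value, 5);
            /// ```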
2490 ///
2491 /// # Examples
2492 ///
2493 /// ```
2494 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2495 ///
2496 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2497 ///
2498 /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5);
2499 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2500 ///
2501 /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10);
2502 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2503 /// ```
2504 #[inline]
2505 #[$stable]
            #[deprecated(
                since = "1.50.0",
                note = "Use `compare_exchange` or `compare_exchange_weak` instead"
            )]
2510 #[$cfg_cas]
2511 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2512 pub fn compare_and_swap(&self,
2513 current: $int_type,
2514 new: $int_type,
2515 order: Ordering) -> $int_type {
2516 match self.compare_exchange(current,
2517 new,
2518 order,
2519 strongest_failure_ordering(order)) {
2520 Ok(x) => x,
2521 Err(x) => x,
2522 }
2523 }
2524
2525 /// Stores a value into the atomic integer if the current value is the same as
2526 /// the `current` value.
2527 ///
2528 /// The return value is a result indicating whether the new value was written and
2529 /// containing the previous value. On success this value is guaranteed to be equal to
2530 /// `current`.
2531 ///
2532 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
2533 /// ordering of this operation. `success` describes the required ordering for the
2534 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2535 /// `failure` describes the required ordering for the load operation that takes place when
2536 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2537 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2538 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2539 ///
2540 /// **Note**: This method is only available on platforms that support atomic operations on
2541 #[doc = concat!("[`", $s_int_type, "`].")]
2542 ///
2543 /// # Examples
2544 ///
2545 /// ```
2546 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2547 ///
2548 #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")]
2549 ///
2550 /// assert_eq!(some_var.compare_exchange(5, 10,
2551 /// Ordering::Acquire,
2552 /// Ordering::Relaxed),
2553 /// Ok(5));
2554 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2555 ///
2556 /// assert_eq!(some_var.compare_exchange(6, 12,
2557 /// Ordering::SeqCst,
2558 /// Ordering::Acquire),
2559 /// Err(10));
2560 /// assert_eq!(some_var.load(Ordering::Relaxed), 10);
2561 /// ```
2562 #[inline]
2563 #[$stable_cxchg]
2564 #[$cfg_cas]
2565 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2566 pub fn compare_exchange(&self,
2567 current: $int_type,
2568 new: $int_type,
2569 success: Ordering,
2570 failure: Ordering) -> Result<$int_type, $int_type> {
2571 // SAFETY: data races are prevented by atomic intrinsics.
2572 unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) }
2573 }
2574
2575 /// Stores a value into the atomic integer if the current value is the same as
2576 /// the `current` value.
2577 ///
2578 #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")]
2579 /// this function is allowed to spuriously fail even
2580 /// when the comparison succeeds, which can result in more efficient code on some
2581 /// platforms. The return value is a result indicating whether the new value was
2582 /// written and containing the previous value.
2583 ///
2584 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2585 /// ordering of this operation. `success` describes the required ordering for the
2586 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2587 /// `failure` describes the required ordering for the load operation that takes place when
2588 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2589 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2590 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2591 ///
2592 /// **Note**: This method is only available on platforms that support atomic operations on
2593 #[doc = concat!("[`", $s_int_type, "`].")]
2594 ///
2595 /// # Examples
2596 ///
2597 /// ```
2598 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2599 ///
2600 #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")]
2601 ///
2602 /// let mut old = val.load(Ordering::Relaxed);
2603 /// loop {
2604 /// let new = old * 2;
2605 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
2606 /// Ok(_) => break,
2607 /// Err(x) => old = x,
2608 /// }
2609 /// }
2610 /// ```
2611 #[inline]
2612 #[$stable_cxchg]
2613 #[$cfg_cas]
2614 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2615 pub fn compare_exchange_weak(&self,
2616 current: $int_type,
2617 new: $int_type,
2618 success: Ordering,
2619 failure: Ordering) -> Result<$int_type, $int_type> {
2620 // SAFETY: data races are prevented by atomic intrinsics.
2621 unsafe {
2622 atomic_compare_exchange_weak(self.v.get(), current, new, success, failure)
2623 }
2624 }
2625
2626 /// Adds to the current value, returning the previous value.
2627 ///
2628 /// This operation wraps around on overflow.
2629 ///
2630 /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
2631 /// of this operation. All ordering modes are possible. Note that using
2632 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2633 /// using [`Release`] makes the load part [`Relaxed`].
2634 ///
2635 /// **Note**: This method is only available on platforms that support atomic operations on
2636 #[doc = concat!("[`", $s_int_type, "`].")]
2637 ///
2638 /// # Examples
2639 ///
2640 /// ```
2641 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2642 ///
2643 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")]
2644 /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
2645 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
2646 /// ```
2647 #[inline]
2648 #[$stable]
2649 #[$cfg_cas]
2650 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2651 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
2652 // SAFETY: data races are prevented by atomic intrinsics.
2653 unsafe { atomic_add(self.v.get(), val, order) }
2654 }
2655
2656 /// Subtracts from the current value, returning the previous value.
2657 ///
2658 /// This operation wraps around on overflow.
2659 ///
2660 /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
2661 /// of this operation. All ordering modes are possible. Note that using
2662 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2663 /// using [`Release`] makes the load part [`Relaxed`].
2664 ///
2665 /// **Note**: This method is only available on platforms that support atomic operations on
2666 #[doc = concat!("[`", $s_int_type, "`].")]
2667 ///
2668 /// # Examples
2669 ///
2670 /// ```
2671 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2672 ///
2673 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")]
2674 /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
2675 /// assert_eq!(foo.load(Ordering::SeqCst), 10);
2676 /// ```
2677 #[inline]
2678 #[$stable]
2679 #[$cfg_cas]
2680 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2681 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
2682 // SAFETY: data races are prevented by atomic intrinsics.
2683 unsafe { atomic_sub(self.v.get(), val, order) }
2684 }
2685
2686 /// Bitwise "and" with the current value.
2687 ///
2688 /// Performs a bitwise "and" operation on the current value and the argument `val`, and
2689 /// sets the new value to the result.
2690 ///
2691 /// Returns the previous value.
2692 ///
2693 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
2694 /// of this operation. All ordering modes are possible. Note that using
2695 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2696 /// using [`Release`] makes the load part [`Relaxed`].
2697 ///
2698 /// **Note**: This method is only available on platforms that support atomic operations on
2699 #[doc = concat!("[`", $s_int_type, "`].")]
2700 ///
2701 /// # Examples
2702 ///
2703 /// ```
2704 #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
2705 ///
2706 #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
2707 /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
2708 /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
2709 /// ```
        #[inline]
        #[$stable]
        #[$cfg_cas]
        #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
        pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_and(self.v.get(), val, order) }
        }

        /// Bitwise "nand" with the current value.
        ///
        /// Performs a bitwise "nand" operation on the current value and the argument `val`, and
        /// sets the new value to the result.
        ///
        /// Returns the previous value.
        ///
        /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
        /// of this operation. All ordering modes are possible. Note that using
        /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
        /// using [`Release`] makes the load part [`Relaxed`].
        ///
        /// **Note**: This method is only available on platforms that support atomic operations on
        #[doc = concat!("[`", $s_int_type, "`].")]
        ///
        /// # Examples
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")]
        /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
        /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
        /// ```
        #[inline]
        #[$stable_nand]
        #[$cfg_cas]
        #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
        pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_nand(self.v.get(), val, order) }
        }

        /// Bitwise "or" with the current value.
        ///
        /// Performs a bitwise "or" operation on the current value and the argument `val`, and
        /// sets the new value to the result.
        ///
        /// Returns the previous value.
        ///
        /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
        /// of this operation. All ordering modes are possible. Note that using
        /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
        /// using [`Release`] makes the load part [`Relaxed`].
        ///
        /// **Note**: This method is only available on platforms that support atomic operations on
        #[doc = concat!("[`", $s_int_type, "`].")]
        ///
        /// # Examples
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
        /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
        /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
        /// ```
        #[inline]
        #[$stable]
        #[$cfg_cas]
        #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
        pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_or(self.v.get(), val, order) }
        }

        /// Bitwise "xor" with the current value.
        ///
        /// Performs a bitwise "xor" operation on the current value and the argument `val`, and
        /// sets the new value to the result.
        ///
        /// Returns the previous value.
        ///
        /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
        /// of this operation. All ordering modes are possible. Note that using
        /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
        /// using [`Release`] makes the load part [`Relaxed`].
        ///
        /// **Note**: This method is only available on platforms that support atomic operations on
        #[doc = concat!("[`", $s_int_type, "`].")]
        ///
        /// # Examples
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")]
        /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
        /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
        /// ```
        #[inline]
        #[$stable]
        #[$cfg_cas]
        #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
        pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_xor(self.v.get(), val, order) }
        }

        /// Fetches the value, and applies a function to it that returns an optional
        /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
        /// `Err(previous_value)`.
        ///
        /// Note: This may call the function multiple times if the value has been changed from other threads in
        /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
        /// only once to the stored value.
        ///
        /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
        /// The first describes the required ordering for when the operation finally succeeds while the second
        /// describes the required ordering for loads. These correspond to the success and failure orderings of
        #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")]
        /// respectively.
        ///
        /// Using [`Acquire`] as success ordering makes the store part
        /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
        /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
        ///
        /// **Note**: This method is only available on platforms that support atomic operations on
        #[doc = concat!("[`", $s_int_type, "`].")]
        ///
        /// # Considerations
        ///
        /// This method is not magic; it is not provided by the hardware.
        /// It is implemented in terms of
        #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")]
        /// and suffers from the same drawbacks.
        /// In particular, this method will not circumvent the [ABA Problem].
        ///
        /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
        ///
        /// # Examples
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")]
        /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
        /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
        /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
        /// assert_eq!(x.load(Ordering::SeqCst), 9);
        /// ```
        #[inline]
        #[stable(feature = "no_more_cas", since = "1.45.0")]
        #[$cfg_cas]
        #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
        pub fn fetch_update<F>(
            &self,
            set_order: Ordering,
            fetch_order: Ordering,
            mut f: F,
        ) -> Result<$int_type, $int_type>
        where
            F: FnMut($int_type) -> Option<$int_type>,
        {
            let mut prev = self.load(fetch_order);
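            // Keep applying `f` and trying to publish its result with a weak
            // compare-exchange, re-reading the value whenever another thread
            // raced us (or the exchange failed spuriously), until the exchange
            // succeeds or `f` returns `None`.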
            while let Some(next) = f(prev) {
                match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                    x @ Ok(_) => return x,
                    Err(next_prev) => prev = next_prev,
                }
            }
            Err(prev)
        }

        /// Maximum with the current value.
        ///
        /// Finds the maximum of the current value and the argument `val`, and
        /// sets the new value to the result.
        ///
        /// Returns the previous value.
        ///
        /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
        /// of this operation. All ordering modes are possible. Note that using
        /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
        /// using [`Release`] makes the load part [`Relaxed`].
        ///
        /// **Note**: This method is only available on platforms that support atomic operations on
        #[doc = concat!("[`", $s_int_type, "`].")]
        ///
        /// # Examples
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
        /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
        /// assert_eq!(foo.load(Ordering::SeqCst), 42);
        /// ```
        ///
        /// If you want to obtain the maximum value in one step, you can use the following:
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
        /// let bar = 42;
        /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
        /// assert_eq!(max_foo, 42);
        /// ```
        #[inline]
        #[stable(feature = "atomic_min_max", since = "1.45.0")]
        #[$cfg_cas]
        #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
        pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { $max_fn(self.v.get(), val, order) }
        }

        /// Minimum with the current value.
        ///
        /// Finds the minimum of the current value and the argument `val`, and
        /// sets the new value to the result.
        ///
        /// Returns the previous value.
        ///
        /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
        /// of this operation. All ordering modes are possible. Note that using
        /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
        /// using [`Release`] makes the load part [`Relaxed`].
        ///
        /// **Note**: This method is only available on platforms that support atomic operations on
        #[doc = concat!("[`", $s_int_type, "`].")]
        ///
        /// # Examples
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
        /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
        /// assert_eq!(foo.load(Ordering::Relaxed), 23);
        /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
        /// assert_eq!(foo.load(Ordering::Relaxed), 22);
        /// ```
        ///
        /// If you want to obtain the minimum value in one step, you can use the following:
        ///
        /// ```
        #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
        ///
        #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")]
        /// let bar = 12;
        /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
        /// assert_eq!(min_foo, 12);
        /// ```
        #[inline]
        #[stable(feature = "atomic_min_max", since = "1.45.0")]
        #[$cfg_cas]
        #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
        pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { $min_fn(self.v.get(), val, order) }
        }

        /// Returns a mutable pointer to the underlying integer.
        ///
        /// Doing non-atomic reads and writes on the resulting integer can be a data race.
        /// This method is mostly useful for FFI, where the function signature may use
        #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")]
        ///
        /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the
        /// atomic types work with interior mutability. All modifications of an atomic change the value
        /// through a shared reference, and can do so safely as long as they use atomic operations. Any
        /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same
        /// restriction: operations on it must be atomic.
        ///
        /// # Examples
        ///
        /// ```ignore (extern-declaration)
        /// # fn main() {
        #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")]
        ///
        /// extern "C" {
        #[doc = concat!("    fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")]
        /// }
        ///
        #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")]
        ///
        /// // SAFETY: Safe as long as `my_atomic_op` is atomic.
        /// unsafe {
        ///     my_atomic_op(atomic.as_ptr());
        /// }
        /// # }
        /// ```
        #[inline]
        #[stable(feature = "atomic_as_ptr", since = "1.70.0")]
        #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")]
        #[rustc_never_returns_null_ptr]
        pub const fn as_ptr(&self) -> *mut $int_type {
            self.v.get()
        }
        }
    }
}

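// Each `atomic_int!` invocation below supplies, in order: a `cfg` gate for the
// methods that require compare-and-swap support, a `cfg` gate for the type
// having the same alignment as the underlying integer, six stability
// attributes covering the type's various method groups (`$stable`,
// `$stable_nand`, etc.), the const-stability attribute for `new`, the rustc
// diagnostic item, the integer type name as a string for the docs, an extra
// `#![feature]` line prepended to doc examples (empty for stable types), the
// signed or unsigned min/max intrinsics, the type's alignment in bytes, and
// the integer and atomic type names; see the `atomic_int!` definition above.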
#[cfg(target_has_atomic_load_store = "8")]
atomic_int! {
    cfg(target_has_atomic = "8"),
    cfg(target_has_atomic_equal_alignment = "8"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI8"),
    "i8",
    "",
    atomic_min, atomic_max,
    1,
    i8 AtomicI8
}
#[cfg(target_has_atomic_load_store = "8")]
atomic_int! {
    cfg(target_has_atomic = "8"),
    cfg(target_has_atomic_equal_alignment = "8"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU8"),
    "u8",
    "",
    atomic_umin, atomic_umax,
    1,
    u8 AtomicU8
}
#[cfg(target_has_atomic_load_store = "16")]
atomic_int! {
    cfg(target_has_atomic = "16"),
    cfg(target_has_atomic_equal_alignment = "16"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI16"),
    "i16",
    "",
    atomic_min, atomic_max,
    2,
    i16 AtomicI16
}
#[cfg(target_has_atomic_load_store = "16")]
atomic_int! {
    cfg(target_has_atomic = "16"),
    cfg(target_has_atomic_equal_alignment = "16"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU16"),
    "u16",
    "",
    atomic_umin, atomic_umax,
    2,
    u16 AtomicU16
}
#[cfg(target_has_atomic_load_store = "32")]
atomic_int! {
    cfg(target_has_atomic = "32"),
    cfg(target_has_atomic_equal_alignment = "32"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI32"),
    "i32",
    "",
    atomic_min, atomic_max,
    4,
    i32 AtomicI32
}
#[cfg(target_has_atomic_load_store = "32")]
atomic_int! {
    cfg(target_has_atomic = "32"),
    cfg(target_has_atomic_equal_alignment = "32"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU32"),
    "u32",
    "",
    atomic_umin, atomic_umax,
    4,
    u32 AtomicU32
}
#[cfg(target_has_atomic_load_store = "64")]
atomic_int! {
    cfg(target_has_atomic = "64"),
    cfg(target_has_atomic_equal_alignment = "64"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI64"),
    "i64",
    "",
    atomic_min, atomic_max,
    8,
    i64 AtomicI64
}
#[cfg(target_has_atomic_load_store = "64")]
atomic_int! {
    cfg(target_has_atomic = "64"),
    cfg(target_has_atomic_equal_alignment = "64"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    stable(feature = "integer_atomics_stable", since = "1.34.0"),
    rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU64"),
    "u64",
    "",
    atomic_umin, atomic_umax,
    8,
    u64 AtomicU64
}
#[cfg(target_has_atomic_load_store = "128")]
atomic_int! {
    cfg(target_has_atomic = "128"),
    cfg(target_has_atomic_equal_alignment = "128"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicI128"),
    "i128",
    "#![feature(integer_atomics)]\n\n",
    atomic_min, atomic_max,
    16,
    i128 AtomicI128
}
#[cfg(target_has_atomic_load_store = "128")]
atomic_int! {
    cfg(target_has_atomic = "128"),
    cfg(target_has_atomic_equal_alignment = "128"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    unstable(feature = "integer_atomics", issue = "99069"),
    rustc_const_unstable(feature = "integer_atomics", issue = "99069"),
    cfg_attr(not(test), rustc_diagnostic_item = "AtomicU128"),
    "u128",
    "#![feature(integer_atomics)]\n\n",
    atomic_umin, atomic_umax,
    16,
    u128 AtomicU128
}

#[cfg(target_has_atomic_load_store = "ptr")]
macro_rules! atomic_int_ptr_sized {
    ( $($target_pointer_width:literal $align:literal)* ) => { $(
        #[cfg(target_pointer_width = $target_pointer_width)]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            cfg_attr(not(test), rustc_diagnostic_item = "AtomicIsize"),
            "isize",
            "",
            atomic_min, atomic_max,
            $align,
            isize AtomicIsize
        }
        #[cfg(target_pointer_width = $target_pointer_width)]
        atomic_int! {
            cfg(target_has_atomic = "ptr"),
            cfg(target_has_atomic_equal_alignment = "ptr"),
            stable(feature = "rust1", since = "1.0.0"),
            stable(feature = "extended_compare_and_swap", since = "1.10.0"),
            stable(feature = "atomic_debug", since = "1.3.0"),
            stable(feature = "atomic_access", since = "1.15.0"),
            stable(feature = "atomic_from", since = "1.23.0"),
            stable(feature = "atomic_nand", since = "1.27.0"),
            rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"),
            cfg_attr(not(test), rustc_diagnostic_item = "AtomicUsize"),
            "usize",
            "",
            atomic_umin, atomic_umax,
            $align,
            usize AtomicUsize
        }

        /// An [`AtomicIsize`] initialized to `0`.
        #[cfg(target_pointer_width = $target_pointer_width)]
        #[stable(feature = "rust1", since = "1.0.0")]
        #[deprecated(
            since = "1.34.0",
            note = "the `new` function is now preferred",
            suggestion = "AtomicIsize::new(0)",
        )]
        pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0);

        /// An [`AtomicUsize`] initialized to `0`.
        #[cfg(target_pointer_width = $target_pointer_width)]
        #[stable(feature = "rust1", since = "1.0.0")]
        #[deprecated(
            since = "1.34.0",
            note = "the `new` function is now preferred",
            suggestion = "AtomicUsize::new(0)",
        )]
        pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0);
    )* };
}

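// Instantiate `AtomicIsize`/`AtomicUsize` (plus their deprecated `*_INIT`
// constants) for the current target: each pair below is a pointer width in
// bits and the matching alignment in bytes.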
#[cfg(target_has_atomic_load_store = "ptr")]
atomic_int_ptr_sized! {
    "16" 2
    "32" 4
    "64" 8
}

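/// Maps a compare-exchange success ordering to the strongest failure ordering
/// that may legally accompany it (failure orderings cannot be `Release` or
/// `AcqRel`, since a failed exchange performs no store).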
#[inline]
#[cfg(target_has_atomic)]
fn strongest_failure_ordering(order: Ordering) -> Ordering {
    match order {
        Release => Relaxed,
        Relaxed => Relaxed,
        SeqCst => SeqCst,
        Acquire => Acquire,
        AcqRel => Acquire,
    }
}

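/// Performs an atomic store; `Acquire` and `AcqRel` are rejected at runtime
/// because a plain store has no load half to acquire with.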
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_store_relaxed(dst, val),
            Release => intrinsics::atomic_store_release(dst, val),
            SeqCst => intrinsics::atomic_store_seqcst(dst, val),
            Acquire => panic!("there is no such thing as an acquire store"),
            AcqRel => panic!("there is no such thing as an acquire-release store"),
        }
    }
}

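/// Performs an atomic load; `Release` and `AcqRel` are rejected at runtime
/// because a plain load has no store half to release.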
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_load_relaxed(dst),
            Acquire => intrinsics::atomic_load_acquire(dst),
            SeqCst => intrinsics::atomic_load_seqcst(dst),
            Release => panic!("there is no such thing as a release load"),
            AcqRel => panic!("there is no such thing as an acquire-release load"),
        }
    }
}

#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xchg_relaxed(dst, val),
            Acquire => intrinsics::atomic_xchg_acquire(dst, val),
            Release => intrinsics::atomic_xchg_release(dst, val),
            AcqRel => intrinsics::atomic_xchg_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xchg_seqcst(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_add).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xadd_relaxed(dst, val),
            Acquire => intrinsics::atomic_xadd_acquire(dst, val),
            Release => intrinsics::atomic_xadd_release(dst, val),
            AcqRel => intrinsics::atomic_xadd_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xadd_seqcst(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_sub).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xsub_relaxed(dst, val),
            Acquire => intrinsics::atomic_xsub_acquire(dst, val),
            Release => intrinsics::atomic_xsub_release(dst, val),
            AcqRel => intrinsics::atomic_xsub_acqrel(dst, val),
            SeqCst => intrinsics::atomic_xsub_seqcst(dst, val),
        }
    }
}

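/// Backs `compare_exchange`: returns `Ok(previous)` if the value at `dst` was
/// `old` and has been replaced with `new`, and `Err(previous)` otherwise.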
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => intrinsics::atomic_cxchg_relaxed_relaxed(dst, old, new),
            (Relaxed, Acquire) => intrinsics::atomic_cxchg_relaxed_acquire(dst, old, new),
            (Relaxed, SeqCst) => intrinsics::atomic_cxchg_relaxed_seqcst(dst, old, new),
            (Acquire, Relaxed) => intrinsics::atomic_cxchg_acquire_relaxed(dst, old, new),
            (Acquire, Acquire) => intrinsics::atomic_cxchg_acquire_acquire(dst, old, new),
            (Acquire, SeqCst) => intrinsics::atomic_cxchg_acquire_seqcst(dst, old, new),
            (Release, Relaxed) => intrinsics::atomic_cxchg_release_relaxed(dst, old, new),
            (Release, Acquire) => intrinsics::atomic_cxchg_release_acquire(dst, old, new),
            (Release, SeqCst) => intrinsics::atomic_cxchg_release_seqcst(dst, old, new),
            (AcqRel, Relaxed) => intrinsics::atomic_cxchg_acqrel_relaxed(dst, old, new),
            (AcqRel, Acquire) => intrinsics::atomic_cxchg_acqrel_acquire(dst, old, new),
            (AcqRel, SeqCst) => intrinsics::atomic_cxchg_acqrel_seqcst(dst, old, new),
            (SeqCst, Relaxed) => intrinsics::atomic_cxchg_seqcst_relaxed(dst, old, new),
            (SeqCst, Acquire) => intrinsics::atomic_cxchg_seqcst_acquire(dst, old, new),
            (SeqCst, SeqCst) => intrinsics::atomic_cxchg_seqcst_seqcst(dst, old, new),
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

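/// Backs `compare_exchange_weak`: like `atomic_compare_exchange`, but may also
/// fail spuriously even when the comparison succeeds.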
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange_weak<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => intrinsics::atomic_cxchgweak_relaxed_relaxed(dst, old, new),
            (Relaxed, Acquire) => intrinsics::atomic_cxchgweak_relaxed_acquire(dst, old, new),
            (Relaxed, SeqCst) => intrinsics::atomic_cxchgweak_relaxed_seqcst(dst, old, new),
            (Acquire, Relaxed) => intrinsics::atomic_cxchgweak_acquire_relaxed(dst, old, new),
            (Acquire, Acquire) => intrinsics::atomic_cxchgweak_acquire_acquire(dst, old, new),
            (Acquire, SeqCst) => intrinsics::atomic_cxchgweak_acquire_seqcst(dst, old, new),
            (Release, Relaxed) => intrinsics::atomic_cxchgweak_release_relaxed(dst, old, new),
            (Release, Acquire) => intrinsics::atomic_cxchgweak_release_acquire(dst, old, new),
            (Release, SeqCst) => intrinsics::atomic_cxchgweak_release_seqcst(dst, old, new),
            (AcqRel, Relaxed) => intrinsics::atomic_cxchgweak_acqrel_relaxed(dst, old, new),
            (AcqRel, Acquire) => intrinsics::atomic_cxchgweak_acqrel_acquire(dst, old, new),
            (AcqRel, SeqCst) => intrinsics::atomic_cxchgweak_acqrel_seqcst(dst, old, new),
            (SeqCst, Relaxed) => intrinsics::atomic_cxchgweak_seqcst_relaxed(dst, old, new),
            (SeqCst, Acquire) => intrinsics::atomic_cxchgweak_seqcst_acquire(dst, old, new),
            (SeqCst, SeqCst) => intrinsics::atomic_cxchgweak_seqcst_seqcst(dst, old, new),
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_and`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_and_relaxed(dst, val),
            Acquire => intrinsics::atomic_and_acquire(dst, val),
            Release => intrinsics::atomic_and_release(dst, val),
            AcqRel => intrinsics::atomic_and_acqrel(dst, val),
            SeqCst => intrinsics::atomic_and_seqcst(dst, val),
        }
    }
}

#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_nand`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_nand_relaxed(dst, val),
            Acquire => intrinsics::atomic_nand_acquire(dst, val),
            Release => intrinsics::atomic_nand_release(dst, val),
            AcqRel => intrinsics::atomic_nand_acqrel(dst, val),
            SeqCst => intrinsics::atomic_nand_seqcst(dst, val),
        }
    }
}

#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_or`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_or_seqcst(dst, val),
            Acquire => intrinsics::atomic_or_acquire(dst, val),
            Release => intrinsics::atomic_or_release(dst, val),
            AcqRel => intrinsics::atomic_or_acqrel(dst, val),
            Relaxed => intrinsics::atomic_or_relaxed(dst, val),
        }
    }
}

#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_xor`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_xor_seqcst(dst, val),
            Acquire => intrinsics::atomic_xor_acquire(dst, val),
            Release => intrinsics::atomic_xor_release(dst, val),
            AcqRel => intrinsics::atomic_xor_acqrel(dst, val),
            Relaxed => intrinsics::atomic_xor_relaxed(dst, val),
        }
    }
}

/// Returns the max value (signed comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_max`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_max_relaxed(dst, val),
            Acquire => intrinsics::atomic_max_acquire(dst, val),
            Release => intrinsics::atomic_max_release(dst, val),
            AcqRel => intrinsics::atomic_max_acqrel(dst, val),
            SeqCst => intrinsics::atomic_max_seqcst(dst, val),
        }
    }
}

/// Returns the min value (signed comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_min`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_min_relaxed(dst, val),
            Acquire => intrinsics::atomic_min_acquire(dst, val),
            Release => intrinsics::atomic_min_release(dst, val),
            AcqRel => intrinsics::atomic_min_acqrel(dst, val),
            SeqCst => intrinsics::atomic_min_seqcst(dst, val),
        }
    }
}

/// Returns the max value (unsigned comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umax`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umax_relaxed(dst, val),
            Acquire => intrinsics::atomic_umax_acquire(dst, val),
            Release => intrinsics::atomic_umax_release(dst, val),
            AcqRel => intrinsics::atomic_umax_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umax_seqcst(dst, val),
        }
    }
}

/// Returns the min value (unsigned comparison).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umin`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umin_relaxed(dst, val),
            Acquire => intrinsics::atomic_umin_acquire(dst, val),
            Release => intrinsics::atomic_umin_release(dst, val),
            AcqRel => intrinsics::atomic_umin_acqrel(dst, val),
            SeqCst => intrinsics::atomic_umin_seqcst(dst, val),
        }
    }
}


/// An atomic fence.
///
/// Depending on the specified order, a fence prevents the compiler and CPU from
/// reordering certain types of memory operations around it.
/// That creates synchronizes-with relationships between it and atomic operations
/// or fences in other threads.
///
/// A fence 'A' which has (at least) [`Release`] ordering semantics synchronizes
/// with a fence 'B' with (at least) [`Acquire`] semantics, if and only if there
/// exist operations X and Y, both operating on some atomic object 'M' such
/// that A is sequenced before X, Y is sequenced before B and Y observes
/// the change to M. This provides a happens-before dependence between A and B.
///
/// ```text
/// Thread 1                                          Thread 2
///
/// fence(Release);      A --------------
/// x.store(3, Relaxed); X ---------    |
///                                |    |
///                                |    |
///                                -------------> Y  if x.load(Relaxed) == 3 {
///                                     |-------> B      fence(Acquire);
///                                                      ...
///                                                  }
/// ```
///
/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
/// with a fence.
///
/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
/// and [`Release`] semantics, participates in the global program order of the
/// other [`SeqCst`] operations and/or fences.
///
/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::fence;
/// use std::sync::atomic::Ordering;
///
/// // A mutual exclusion primitive based on a spinlock.
/// pub struct Mutex {
///     flag: AtomicBool,
/// }
///
/// impl Mutex {
///     pub fn new() -> Mutex {
///         Mutex {
///             flag: AtomicBool::new(false),
///         }
///     }
///
///     pub fn lock(&self) {
///         // Wait until the old value is `false`.
///         while self
///             .flag
///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
///             .is_err()
///         {}
///         // This fence synchronizes-with the store in `unlock`.
///         fence(Ordering::Acquire);
///     }
///
///     pub fn unlock(&self) {
///         self.flag.store(false, Ordering::Release);
///     }
/// }
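///
/// // A quick usage sketch (not part of the original example): two threads
/// // taking turns through the spinlock, assuming `std::sync::Arc` and
/// // `std::thread`.
/// # use std::sync::Arc;
/// # use std::thread;
/// let mutex = Arc::new(Mutex::new());
/// let m2 = Arc::clone(&mutex);
/// let t = thread::spawn(move || {
///     m2.lock();
///     // ...critical section...
///     m2.unlock();
/// });
/// mutex.lock();
/// // ...critical section...
/// mutex.unlock();
/// t.join().unwrap();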
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_fence_acquire(),
            Release => intrinsics::atomic_fence_release(),
            AcqRel => intrinsics::atomic_fence_acqrel(),
            SeqCst => intrinsics::atomic_fence_seqcst(),
            Relaxed => panic!("there is no such thing as a relaxed fence"),
        }
    }
}

/// A compiler memory fence.
///
/// `compiler_fence` does not emit any machine code, but restricts the kinds
/// of memory re-ordering the compiler is allowed to do. Specifically, depending on
/// the given [`Ordering`] semantics, the compiler may be disallowed from moving reads
/// or writes from before or after the call to the other side of the call to
/// `compiler_fence`. Note that it does **not** prevent the *hardware*
/// from doing such re-ordering. This is not a problem in a single-threaded
/// execution context, but when other threads may modify memory at the same
/// time, stronger synchronization primitives such as [`fence`] are required.
///
/// The re-orderings prevented by the different ordering semantics are:
///
/// - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed.
/// - with [`Release`], preceding reads and writes cannot be moved past subsequent writes.
/// - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads.
/// - with [`AcqRel`], both of the above rules are enforced.
///
/// `compiler_fence` is generally only useful for preventing a thread from
/// racing *with itself*. That is, it matters when a given thread is executing one piece
/// of code, is then interrupted, and starts executing code elsewhere
/// (while still in the same thread, and conceptually still on the same
/// core). In traditional programs, this can only occur when a signal
/// handler is registered. In more low-level code, such situations can also
/// arise when handling interrupts, when implementing green threads with
/// pre-emption, etc. Curious readers are encouraged to read the Linux kernel's
/// discussion of [memory barriers].
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// Without `compiler_fence`, the `assert_eq!` in the following code
/// is *not* guaranteed to succeed, despite everything happening in a single thread.
/// To see why, remember that the compiler is free to swap the stores to
/// `IMPORTANT_VARIABLE` and `IS_READY` since they are both
/// `Ordering::Relaxed`. If it does, and the signal handler is invoked right
/// after `IS_READY` is updated, then the signal handler will see
/// `IS_READY=1`, but `IMPORTANT_VARIABLE=0`.
/// Using a `compiler_fence` remedies this situation.
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicUsize};
/// use std::sync::atomic::Ordering;
/// use std::sync::atomic::compiler_fence;
///
/// static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
/// static IS_READY: AtomicBool = AtomicBool::new(false);
///
/// fn main() {
///     IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
///     // prevent earlier writes from being moved beyond this point
///     compiler_fence(Ordering::Release);
///     IS_READY.store(true, Ordering::Relaxed);
/// }
///
/// fn signal_handler() {
///     if IS_READY.load(Ordering::Relaxed) {
///         assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
///     }
/// }
/// ```
///
/// [memory barriers]: https://www.kernel.org/doc/Documentation/memory-barriers.txt
#[inline]
#[stable(feature = "compiler_fences", since = "1.21.0")]
#[rustc_diagnostic_item = "compiler_fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn compiler_fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_singlethreadfence_acquire(),
            Release => intrinsics::atomic_singlethreadfence_release(),
            AcqRel => intrinsics::atomic_singlethreadfence_acqrel(),
            SeqCst => intrinsics::atomic_singlethreadfence_seqcst(),
            Relaxed => panic!("there is no such thing as a relaxed compiler fence"),
        }
    }
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl fmt::Debug for AtomicBool {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl<T> fmt::Debug for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_pointer", since = "1.24.0")]
impl<T> fmt::Pointer for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&self.load(Ordering::SeqCst), f)
    }
}

/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
///
/// This function is deprecated in favor of [`hint::spin_loop`].
///
/// [`hint::spin_loop`]: crate::hint::spin_loop
#[inline]
#[stable(feature = "spin_loop_hint", since = "1.24.0")]
#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
pub fn spin_loop_hint() {
    spin_loop()
}
