//! Atomic types
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent
//! types.
//!
//! This module defines atomic versions of a select number of primitive
//! types, including [`AtomicBool`], [`AtomicIsize`], [`AtomicUsize`],
//! [`AtomicI8`], [`AtomicU16`], etc.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! Atomic variables are safe to share between threads (they implement [`Sync`])
//! but they do not themselves provide the mechanism for sharing and follow the
//! [threading model](../../../std/thread/index.html#the-threading-model) of Rust.
//! The most common way to share an atomic variable is to put it into an [`Arc`][arc] (an
//! atomically-reference-counted shared pointer).
//!
//! [arc]: ../../../std/sync/struct.Arc.html
//!
//! Atomic types may be stored in static variables, initialized using
//! the constant initializers like [`AtomicBool::new`]. Atomic statics
//! are often used for lazy global initialization.
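//!
//! For example, a static atomic can serve as a simple one-time-initialization
//! flag. This is a minimal sketch (the `INIT` name and `initialize` function are
//! illustrative; real code would typically use `std::sync::Once` or `OnceLock`,
//! which also make other threads wait until initialization finishes):
//!
//! ```
//! use std::sync::atomic::{AtomicBool, Ordering};
//!
//! static INIT: AtomicBool = AtomicBool::new(false);
//!
//! fn initialize() { /* one-time setup */ }
//!
//! // `swap` returns the previous value, so exactly one caller observes
//! // `false` and runs `initialize`.
//! if !INIT.swap(true, Ordering::AcqRel) {
//!     initialize();
//! }
//! ```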
//!
//! ## Memory model for atomic accesses
//!
//! Rust atomics currently follow the same rules as [C++20 atomics][cpp], specifically the rules
//! from the [`intro.races`][cpp-intro.races] section, without the "consume" memory ordering. Since
//! C++ uses an object-based memory model whereas Rust is access-based, a bit of translation work
//! has to be done to apply the C++ rules to Rust: whenever C++ talks about "the value of an
//! object", we understand that to mean the resulting bytes obtained when doing a read. When the C++
//! standard talks about "the value of an atomic object", this refers to the result of doing an
//! atomic load (via the operations provided in this module). A "modification of an atomic object"
//! refers to an atomic store.
//!
//! The end result is *almost* equivalent to saying that creating a *shared reference* to one of the
//! Rust atomic types corresponds to creating an `atomic_ref` in C++, with the `atomic_ref` being
//! destroyed when the lifetime of the shared reference ends. The main difference is that Rust
//! permits concurrent atomic and non-atomic reads to the same memory, as those cause no issue in
//! the C++ memory model; they are forbidden in C++ only because memory is partitioned into "atomic
//! objects" and "non-atomic objects" (with `atomic_ref` temporarily converting a non-atomic object
//! into an atomic object).
//!
//! The most important aspect of this model is that *data races* are undefined behavior. A data race
//! is defined as conflicting non-synchronized accesses where at least one of the accesses is
//! non-atomic. Here, accesses are *conflicting* if they affect overlapping regions of memory and at
//! least one of them is a write. (A `compare_exchange` or `compare_exchange_weak` that does not
//! succeed is not considered a write.) They are *non-synchronized* if neither of them
//! *happens-before* the other, according to the happens-before order of the memory model.
//!
//! The other possible cause of undefined behavior in the memory model is mixed-size accesses: Rust
//! inherits the C++ limitation that non-synchronized conflicting atomic accesses may not partially
//! overlap. In other words, every pair of non-synchronized atomic accesses must be either disjoint,
//! access the exact same memory (including using the same access size), or both be reads.
//!
//! Each atomic access takes an [`Ordering`] which defines how the operation interacts with the
//! happens-before order. These orderings behave the same as the corresponding [C++20 atomic
//! orderings][cpp_memory_order]. For more information, see the [nomicon].
//!
//! [cpp]: https://en.cppreference.com/w/cpp/atomic
//! [cpp-intro.races]: https://timsong-cpp.github.io/cppwp/n4868/intro.multithread#intro.races
//! [cpp_memory_order]: https://en.cppreference.com/w/cpp/atomic/memory_order
//! [nomicon]: ../../../nomicon/atomics.html
//!
//! ```rust,no_run undefined_behavior
//! use std::sync::atomic::{AtomicU16, AtomicU8, Ordering};
//! use std::mem::transmute;
//! use std::thread;
//!
//! let atomic = AtomicU16::new(0);
//!
//! thread::scope(|s| {
//!     // This is UB: conflicting non-synchronized accesses, at least one of which is non-atomic.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: the accesses do not conflict (as none of them performs any modification).
//!     // In C++ this would be disallowed since creating an `atomic_ref` precludes
//!     // further non-atomic accesses, but Rust does not have that limitation.
//!     s.spawn(|| atomic.load(Ordering::Relaxed)); // atomic load
//!     s.spawn(|| unsafe { atomic.as_ptr().read() }); // non-atomic read
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that the atomic
//!     // store happens-before the non-atomic write.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed)); // atomic store
//!     handle.join().expect("thread won't panic"); // synchronize
//!     s.spawn(|| unsafe { atomic.as_ptr().write(2) }); // non-atomic write
//! });
//!
//! thread::scope(|s| {
//!     // This is UB: non-synchronized conflicting differently-sized atomic accesses.
//!     s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//!
//! thread::scope(|s| {
//!     // This is fine: `join` synchronizes the code in a way such that
//!     // the 1-byte store happens-before the 2-byte store.
//!     let handle = s.spawn(|| atomic.store(1, Ordering::Relaxed));
//!     handle.join().expect("thread won't panic");
//!     s.spawn(|| unsafe {
//!         let differently_sized = transmute::<&AtomicU16, &AtomicU8>(&atomic);
//!         differently_sized.store(2, Ordering::Relaxed);
//!     });
//! });
//! ```
//!
//! # Portability
//!
//! All atomic types in this module are guaranteed to be [lock-free] if they're
//! available. This means they don't internally acquire a global mutex. Atomic
//! types and operations are not guaranteed to be wait-free. This means that
//! operations like `fetch_or` may be implemented with a compare-and-swap loop.
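//!
//! The following sketch shows how an operation like `fetch_or` can be expressed
//! as a `compare_exchange_weak` loop; this is an illustration, not necessarily
//! how any particular platform implements it:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! // Equivalent to `a.fetch_or(val, Ordering::Relaxed)`, written as a CAS loop.
//! fn fetch_or_via_cas(a: &AtomicUsize, val: usize) -> usize {
//!     let mut old = a.load(Ordering::Relaxed);
//!     loop {
//!         match a.compare_exchange_weak(old, old | val, Ordering::Relaxed, Ordering::Relaxed) {
//!             Ok(prev) => return prev, // stored successfully; return previous value
//!             Err(prev) => old = prev, // raced with another thread (or spurious failure); retry
//!         }
//!     }
//! }
//! # let a = AtomicUsize::new(0b01);
//! # assert_eq!(fetch_or_via_cas(&a, 0b10), 0b01);
//! # assert_eq!(a.load(Ordering::Relaxed), 0b11);
//! ```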
//!
//! Atomic operations may be implemented at the instruction layer with
//! larger-size atomics. For example, some platforms use 4-byte atomic
//! instructions to implement `AtomicI8`. Note that this emulation should not
//! have an impact on the correctness of code; it's just something to be aware of.
//!
//! The atomic types in this module might not be available on all platforms. The
//! atomic types here are all widely available, however, and can generally be
//! relied upon to exist. Some notable exceptions are:
//!
//! * PowerPC and MIPS platforms with 32-bit pointers do not have `AtomicU64` or
//!   `AtomicI64` types.
//! * ARM platforms like `armv5te` that aren't for Linux only provide `load`
//!   and `store` operations, and do not support Compare and Swap (CAS)
//!   operations, such as `swap`, `fetch_add`, etc. Additionally on Linux,
//!   these CAS operations are implemented via [operating system support], which
//!   may come with a performance penalty.
//! * ARM targets with `thumbv6m` only provide `load` and `store` operations,
//!   and do not support Compare and Swap (CAS) operations, such as `swap`,
//!   `fetch_add`, etc.
//!
//! [operating system support]: https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt
//!
//! Note that future platforms may be added that also do not have support for
//! some atomic operations. Maximally portable code will want to be careful
//! about which atomic types are used. `AtomicUsize` and `AtomicIsize` are
//! generally the most portable, but even then they're not available everywhere.
//! For reference, the `std` library requires `AtomicBool`s and pointer-sized atomics, although
//! `core` does not.
//!
//! The `#[cfg(target_has_atomic)]` attribute can be used to conditionally
//! compile based on the target's supported bit widths. It is a key-value
//! option set for each supported size, with values "8", "16", "32", "64",
//! "128", and "ptr" for pointer-sized atomics.
//!
//! [lock-free]: https://en.wikipedia.org/wiki/Non-blocking_algorithm
//!
//! # Atomic accesses to read-only memory
//!
//! In general, *all* atomic accesses on read-only memory are undefined behavior. For instance, attempting
//! to do a `compare_exchange` that will definitely fail (making it conceptually a read-only
//! operation) can still cause a segmentation fault if the underlying memory page is mapped read-only. Since
//! atomic `load`s might be implemented using compare-exchange operations, even a `load` can fault
//! on read-only memory.
//!
//! For the purpose of this section, "read-only memory" is defined as memory that is read-only in
//! the underlying target, i.e., the pages are mapped with a read-only flag and any attempt to write
//! will cause a page fault. In particular, an `&u128` reference that points to memory that is
//! read-write mapped is *not* considered to point to "read-only memory". In Rust, almost all memory
//! is read-write; the only exceptions are memory created by `const` items or `static` items without
//! interior mutability, and memory that was specifically marked as read-only by the operating
//! system via platform-specific APIs.
//!
//! As an exception from the general rule stated above, "sufficiently small" atomic loads with
//! `Ordering::Relaxed` are implemented in a way that works on read-only memory, and are hence not
//! undefined behavior. The exact size limit for what makes a load "sufficiently small" varies
//! depending on the target:
//!
//! | `target_arch` | Size limit |
//! |---------------|------------|
//! | `x86`, `arm`, `loongarch32`, `mips`, `mips32r6`, `powerpc`, `riscv32`, `sparc`, `hexagon` | 4 bytes |
//! | `x86_64`, `aarch64`, `loongarch64`, `mips64`, `mips64r6`, `powerpc64`, `riscv64`, `sparc64`, `s390x` | 8 bytes |
//!
//! Atomic loads that are larger than this limit, atomic loads with orderings other than
//! `Relaxed`, and *all* atomic loads on targets not listed in the table might still work on
//! read-only memory under certain conditions, but that is not a stable guarantee and should not be
//! relied upon.
//!
//! If you need to do an acquire load on read-only memory, you can do a relaxed load followed by an
//! acquire fence instead.
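//!
//! A minimal sketch of that pattern:
//!
//! ```
//! use std::sync::atomic::{AtomicU32, Ordering, fence};
//!
//! fn acquire_load_from_readonly(a: &AtomicU32) -> u32 {
//!     // The relaxed load is guaranteed to work on read-only memory
//!     // (within the size limits above); the fence upgrades it to
//!     // acquire semantics without writing to the memory.
//!     let v = a.load(Ordering::Relaxed);
//!     fence(Ordering::Acquire);
//!     v
//! }
//! # let a = AtomicU32::new(1);
//! # assert_eq!(acquire_load_from_readonly(&a), 1);
//! ```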
//!
//! # Examples
//!
//! A simple spinlock:
//!
//! ```ignore-wasm
//! use std::sync::Arc;
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::{hint, thread};
//!
//! fn main() {
//!     let spinlock = Arc::new(AtomicUsize::new(1));
//!
//!     let spinlock_clone = Arc::clone(&spinlock);
//!
//!     let thread = thread::spawn(move || {
//!         spinlock_clone.store(0, Ordering::Release);
//!     });
//!
//!     // Wait for the other thread to release the lock
//!     while spinlock.load(Ordering::Acquire) != 0 {
//!         hint::spin_loop();
//!     }
//!
//!     if let Err(panic) = thread.join() {
//!         println!("Thread had an error: {panic:?}");
//!     }
//! }
//! ```
//!
//! Keep a global count of live threads:
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! static GLOBAL_THREAD_COUNT: AtomicUsize = AtomicUsize::new(0);
//!
//! // Note that Relaxed ordering doesn't synchronize anything
//! // except the global thread counter itself.
//! let old_thread_count = GLOBAL_THREAD_COUNT.fetch_add(1, Ordering::Relaxed);
//! // Note that this number may not be true at the moment of printing
//! // because some other thread may have changed the static value already.
//! println!("live threads: {}", old_thread_count + 1);
//! ```

#![stable(feature = "rust1", since = "1.0.0")]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(dead_code))]
#![cfg_attr(not(target_has_atomic_load_store = "8"), allow(unused_imports))]
#![rustc_diagnostic_item = "atomic_mod"]
// Clippy complains about the pattern of "safe function calling unsafe function taking pointers".
// This happens with AtomicPtr intrinsics but is fine, as the pointers clippy is concerned about
// are just normal values that get loaded/stored, but not dereferenced.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use self::Ordering::*;
use crate::cell::UnsafeCell;
use crate::hint::spin_loop;
use crate::intrinsics::AtomicOrdering as AO;
use crate::{fmt, intrinsics};

trait Sealed {}

/// A marker trait for primitive types which can be modified atomically.
///
/// This is an implementation detail for <code>[Atomic]\<T></code> which may disappear or be replaced at any time.
///
/// # Safety
///
/// Types implementing this trait must be primitives that can be modified atomically.
///
/// The associated `Self::AtomicInner` type must have the same size and bit validity as `Self`,
/// but may have a higher alignment requirement, so the following `transmute`s are sound:
///
/// - `&mut Self::AtomicInner` as `&mut Self`
/// - `Self` as `Self::AtomicInner` or the reverse
#[unstable(
    feature = "atomic_internals",
    reason = "implementation detail which may disappear or be replaced at any time",
    issue = "none"
)]
#[expect(private_bounds)]
pub unsafe trait AtomicPrimitive: Sized + Copy + Sealed {
    /// Temporary implementation detail.
    type AtomicInner: Sized;
}

macro impl_atomic_primitive(
    $Atom:ident $(<$T:ident>)? ($Primitive:ty),
    size($size:literal),
    align($align:literal) $(,)?
) {
    impl $(<$T>)? Sealed for $Primitive {}

    #[unstable(
        feature = "atomic_internals",
        reason = "implementation detail which may disappear or be replaced at any time",
        issue = "none"
    )]
    #[cfg(target_has_atomic_load_store = $size)]
    unsafe impl $(<$T>)? AtomicPrimitive for $Primitive {
        type AtomicInner = $Atom $(<$T>)?;
    }
}

impl_atomic_primitive!(AtomicBool(bool), size("8"), align(1));
impl_atomic_primitive!(AtomicI8(i8), size("8"), align(1));
impl_atomic_primitive!(AtomicU8(u8), size("8"), align(1));
impl_atomic_primitive!(AtomicI16(i16), size("16"), align(2));
impl_atomic_primitive!(AtomicU16(u16), size("16"), align(2));
impl_atomic_primitive!(AtomicI32(i32), size("32"), align(4));
impl_atomic_primitive!(AtomicU32(u32), size("32"), align(4));
impl_atomic_primitive!(AtomicI64(i64), size("64"), align(8));
impl_atomic_primitive!(AtomicU64(u64), size("64"), align(8));
impl_atomic_primitive!(AtomicI128(i128), size("128"), align(16));
impl_atomic_primitive!(AtomicU128(u128), size("128"), align(16));

#[cfg(target_pointer_width = "16")]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
impl_atomic_primitive!(AtomicIsize(isize), size("ptr"), align(8));

#[cfg(target_pointer_width = "16")]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
impl_atomic_primitive!(AtomicUsize(usize), size("ptr"), align(8));

#[cfg(target_pointer_width = "16")]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(2));
#[cfg(target_pointer_width = "32")]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(4));
#[cfg(target_pointer_width = "64")]
impl_atomic_primitive!(AtomicPtr<T>(*mut T), size("ptr"), align(8));

/// A memory location which can be safely modified from multiple threads.
///
/// This has the same size and bit validity as the underlying type `T`. However,
/// the alignment of this type is always equal to its size, even on targets where
/// `T` has alignment less than its size.
///
/// For more about the differences between atomic types and non-atomic types as
/// well as information about the portability of this type, please see the
/// [module-level documentation].
///
/// **Note:** This type is only available on platforms that support atomic loads
/// and stores of `T`.
///
/// [module-level documentation]: crate::sync::atomic
#[unstable(feature = "generic_atomic", issue = "130539")]
pub type Atomic<T> = <T as AtomicPrimitive>::AtomicInner;

// Some architectures don't have byte-sized atomics, which results in LLVM
// emulating them using a LL/SC loop. However for AtomicBool we can take
// advantage of the fact that it only ever contains 0 or 1 and use atomic OR/AND
// instead, which LLVM can emulate using a larger atomic OR/AND operation.
//
// This list should only contain architectures which have word-sized atomic-or/
// atomic-and instructions but don't natively support byte-sized atomics.
#[cfg(target_has_atomic = "8")]
const EMULATE_ATOMIC_BOOL: bool = cfg!(any(
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "loongarch32",
    target_arch = "loongarch64"
));

/// A boolean type which can be safely shared between threads.
///
/// This type has the same size, alignment, and bit validity as a [`bool`].
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of `u8`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicBool"]
#[repr(C, align(1))]
pub struct AtomicBool {
    v: UnsafeCell<u8>,
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

// Send is implicitly implemented for AtomicBool.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for AtomicBool {}

/// A raw pointer type which can be safely shared between threads.
///
/// This type has the same size and bit validity as a `*mut T`.
///
/// **Note**: This type is only available on platforms that support atomic
/// loads and stores of pointers. Its size depends on the target pointer's size.
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "AtomicPtr"]
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
pub struct AtomicPtr<T> {
    p: UnsafeCell<*mut T>,
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Default for AtomicPtr<T> {
    /// Creates a null `AtomicPtr<T>`.
    fn default() -> AtomicPtr<T> {
        AtomicPtr::new(crate::ptr::null_mut())
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<T> Send for AtomicPtr<T> {}
#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<T> Sync for AtomicPtr<T> {}

/// Atomic memory orderings
///
/// Memory orderings specify the way atomic operations synchronize memory.
/// In its weakest [`Ordering::Relaxed`], only the memory directly touched by the
/// operation is synchronized. On the other hand, a store-load pair of [`Ordering::SeqCst`]
/// operations synchronize other memory while additionally preserving a total order of such
/// operations across all threads.
///
/// Rust's memory orderings are [the same as those of
/// C++20](https://en.cppreference.com/w/cpp/atomic/memory_order).
///
/// For more information see the [nomicon].
///
/// [nomicon]: ../../../nomicon/atomics.html
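///
/// For example, a [`Release`] store paired with an [`Acquire`] load can publish
/// data between threads (a minimal sketch using `std` threads):
///
/// ```
/// use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
/// use std::thread;
///
/// static DATA: AtomicU32 = AtomicU32::new(0);
/// static READY: AtomicBool = AtomicBool::new(false);
///
/// let t = thread::spawn(|| {
///     DATA.store(42, Ordering::Relaxed);
///     READY.store(true, Ordering::Release); // publish
/// });
/// while !READY.load(Ordering::Acquire) {} // wait for the flag
/// // The acquire load synchronized with the release store,
/// // so the write to DATA is visible here.
/// assert_eq!(DATA.load(Ordering::Relaxed), 42);
/// t.join().unwrap();
/// ```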
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
#[non_exhaustive]
#[rustc_diagnostic_item = "Ordering"]
pub enum Ordering {
    /// No ordering constraints, only atomic operations.
    ///
    /// Corresponds to [`memory_order_relaxed`] in C++20.
    ///
    /// [`memory_order_relaxed`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Relaxed_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Relaxed,
    /// When coupled with a store, all previous operations become ordered
    /// before any load of this value with [`Acquire`] (or stronger) ordering.
    /// In particular, all previous writes become visible to all threads
    /// that perform an [`Acquire`] (or stronger) load of this value.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] load operation!
    ///
    /// This ordering is only applicable for operations that can perform a store.
    ///
    /// Corresponds to [`memory_order_release`] in C++20.
    ///
    /// [`memory_order_release`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Release,
    /// When coupled with a load, if the loaded value was written by a store operation with
    /// [`Release`] (or stronger) ordering, then all subsequent operations
    /// become ordered after that store. In particular, all subsequent loads will see data
    /// written before the store.
    ///
    /// Notice that using this ordering for an operation that combines loads
    /// and stores leads to a [`Relaxed`] store operation!
    ///
    /// This ordering is only applicable for operations that can perform a load.
    ///
    /// Corresponds to [`memory_order_acquire`] in C++20.
    ///
    /// [`memory_order_acquire`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    Acquire,
    /// Has the effects of both [`Acquire`] and [`Release`] together:
    /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering.
    ///
    /// Notice that in the case of `compare_and_swap`, it is possible that the operation ends up
    /// not performing any store and hence it has just [`Acquire`] ordering. However,
    /// `AcqRel` will never perform [`Relaxed`] accesses.
    ///
    /// This ordering is only applicable for operations that combine both loads and stores.
    ///
    /// Corresponds to [`memory_order_acq_rel`] in C++20.
    ///
    /// [`memory_order_acq_rel`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release-Acquire_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    AcqRel,
    /// Like [`Acquire`]/[`Release`]/[`AcqRel`] (for load, store, and load-with-store
    /// operations, respectively) with the additional guarantee that all threads see all
    /// sequentially consistent operations in the same order.
    ///
    /// Corresponds to [`memory_order_seq_cst`] in C++20.
    ///
    /// [`memory_order_seq_cst`]: https://en.cppreference.com/w/cpp/atomic/memory_order#Sequentially-consistent_ordering
    #[stable(feature = "rust1", since = "1.0.0")]
    SeqCst,
}

/// An [`AtomicBool`] initialized to `false`.
#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(
    since = "1.34.0",
    note = "the `new` function is now preferred",
    suggestion = "AtomicBool::new(false)"
)]
pub const ATOMIC_BOOL_INIT: AtomicBool = AtomicBool::new(false);

#[cfg(target_has_atomic_load_store = "8")]
impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")]
    #[must_use]
    pub const fn new(v: bool) -> AtomicBool {
        AtomicBool { v: UnsafeCell::new(v as u8) }
    }

    /// Creates a new `AtomicBool` from a pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{self, AtomicBool};
    ///
    /// // Get a pointer to an allocated value
    /// let ptr: *mut bool = Box::into_raw(Box::new(false));
    ///
    /// assert!(ptr.cast::<AtomicBool>().is_aligned());
    ///
    /// {
    ///     // Create an atomic view of the allocated value
    ///     let atomic = unsafe { AtomicBool::from_ptr(ptr) };
    ///
    ///     // Use `atomic` for atomic operations, possibly share it with other threads
    ///     atomic.store(true, atomic::Ordering::Relaxed);
    /// }
    ///
    /// // It's ok to non-atomically access the value behind `ptr`,
    /// // since the reference to the atomic ended its lifetime in the block above
    /// assert_eq!(unsafe { *ptr }, true);
    ///
    /// // Deallocate the value
    /// unsafe { drop(Box::from_raw(ptr)) }
    /// ```
    ///
    /// # Safety
    ///
    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that this is always true, since
    ///   `align_of::<AtomicBool>() == 1`).
    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
    /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not
    ///   allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes,
    ///   without synchronization.
    ///
    /// [valid]: crate::ptr#safety
    /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
    #[inline]
    #[stable(feature = "atomic_from_ptr", since = "1.75.0")]
    #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")]
    pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a AtomicBool {
        // SAFETY: guaranteed by the caller
        unsafe { &*ptr.cast() }
    }

    /// Returns a mutable reference to the underlying [`bool`].
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = AtomicBool::new(true);
    /// assert_eq!(*some_bool.get_mut(), true);
    /// *some_bool.get_mut() = false;
    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    pub fn get_mut(&mut self) -> &mut bool {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(self.v.get() as *mut bool) }
    }

    /// Gets atomic access to a `&mut bool`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = true;
    /// let a = AtomicBool::from_mut(&mut some_bool);
    /// a.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool, false);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut(v: &mut bool) -> &mut Self {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut bool as *mut Self) }
    }

    /// Gets non-atomic access to a `&mut [AtomicBool]` slice.
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [const { AtomicBool::new(false) }; 10];
    ///
    /// let view: &mut [bool] = AtomicBool::get_mut_slice(&mut some_bools);
    /// assert_eq!(view, [false; 10]);
    /// view[..5].copy_from_slice(&[true; 5]);
    ///
    /// std::thread::scope(|s| {
    ///     for t in &some_bools[..5] {
    ///         s.spawn(move || assert_eq!(t.load(Ordering::Relaxed), true));
    ///     }
    ///
    ///     for f in &some_bools[5..] {
    ///         s.spawn(move || assert_eq!(f.load(Ordering::Relaxed), false));
    ///     }
    /// });
    /// ```
    #[inline]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn get_mut_slice(this: &mut [Self]) -> &mut [bool] {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(this as *mut [Self] as *mut [bool]) }
    }

    /// Gets atomic access to a `&mut [bool]` slice.
    ///
    /// # Examples
    ///
    /// ```rust,ignore-wasm
    /// #![feature(atomic_from_mut)]
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bools = [false; 10];
    /// let a = &*AtomicBool::from_mut_slice(&mut some_bools);
    /// std::thread::scope(|s| {
    ///     for i in 0..a.len() {
    ///         s.spawn(move || a[i].store(true, Ordering::Relaxed));
    ///     }
    /// });
    /// assert_eq!(some_bools, [true; 10]);
    /// ```
    #[inline]
    #[cfg(target_has_atomic_equal_alignment = "8")]
    #[unstable(feature = "atomic_from_mut", issue = "76314")]
    pub fn from_mut_slice(v: &mut [bool]) -> &mut [Self] {
        // SAFETY: the mutable reference guarantees unique ownership, and
        // alignment of both `bool` and `Self` is 1.
        unsafe { &mut *(v as *mut [bool] as *mut [Self]) }
    }

    /// Consumes the atomic and returns the contained value.
    ///
    /// This is safe because passing `self` by value guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::AtomicBool;
    ///
    /// let some_bool = AtomicBool::new(true);
    /// assert_eq!(some_bool.into_inner(), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_access", since = "1.15.0")]
    #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")]
    pub const fn into_inner(self) -> bool {
        self.v.into_inner() != 0
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn load(&self, order: Ordering) -> bool {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe { atomic_load(self.v.get(), order) != 0 }
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn store(&self, val: bool, order: Ordering) {
        // SAFETY: any data races are prevented by atomic intrinsics and the raw
        // pointer passed in is valid because we got it from a reference.
        unsafe {
            atomic_store(self.v.get(), val as u8, order);
        }
    }

    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        if EMULATE_ATOMIC_BOOL {
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            unsafe { atomic_swap(self.v.get(), val as u8, order) != 0 }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is always the previous value. If it is equal to `current`, then the value
    /// was updated.
    ///
    /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory
    /// ordering of this operation. Notice that even when using [`AcqRel`], the operation
    /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics.
    /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it
    /// happens, and using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Migrating to `compare_exchange` and `compare_exchange_weak`
    ///
    /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for
    /// memory orderings:
    ///
    /// Original | Success | Failure
    /// -------- | ------- | -------
    /// Relaxed  | Relaxed | Relaxed
    /// Acquire  | Acquire | Acquire
    /// Release  | Release | Relaxed
    /// AcqRel   | AcqRel  | Acquire
    /// SeqCst   | SeqCst  | SeqCst
    ///
    /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use
    /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`,
    /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err`
    /// rather than to infer success vs failure based on the value that was read.
    ///
    /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead.
    /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds,
    /// which allows the compiler to generate better assembly code when the compare and swap
    /// is used in a loop.
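    ///
    /// A sketch of the migration for a boolean flag (same behavior, new API):
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let flag = AtomicBool::new(false);
    ///
    /// // Old: let prev = flag.compare_and_swap(false, true, Ordering::AcqRel);
    /// // New: the same operation, with the failure ordering mapped per the table above.
    /// let prev = flag
    ///     .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
    ///     .unwrap_or_else(|x| x);
    /// assert_eq!(prev, false);
    /// ```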
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_and_swap(true, true, Ordering::Relaxed), false);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[deprecated(
        since = "1.50.0",
        note = "Use `compare_exchange` or `compare_exchange_weak` instead"
    )]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_and_swap(&self, current: bool, new: bool, order: Ordering) -> bool {
        match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) {
            Ok(x) => x,
            Err(x) => x,
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.compare_exchange(true,
    ///                                       false,
    ///                                       Ordering::Acquire,
    ///                                       Ordering::Relaxed),
    ///            Ok(true));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(some_bool.compare_exchange(true, true,
    ///                                       Ordering::SeqCst,
    ///                                       Ordering::Acquire),
    ///            Err(false));
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            // Pick the strongest ordering from success and failure.
            let order = match (success, failure) {
                (SeqCst, _) => SeqCst,
                (_, SeqCst) => SeqCst,
                (AcqRel, _) => AcqRel,
                (_, AcqRel) => {
                    panic!("there is no such thing as an acquire-release failure ordering")
                }
                (Release, Acquire) => AcqRel,
                (Acquire, _) => Acquire,
                (_, Acquire) => Acquire,
                (Release, Relaxed) => Release,
                (_, Release) => panic!("there is no such thing as a release failure ordering"),
                (Relaxed, Relaxed) => Relaxed,
            };
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        } else {
            // SAFETY: data races are prevented by atomic intrinsics.
            match unsafe {
                atomic_compare_exchange(self.v.get(), current as u8, new as u8, success, failure)
            } {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[inline]
    #[stable(feature = "extended_compare_and_swap", since = "1.10.0")]
    #[doc(alias = "compare_and_swap")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        if EMULATE_ATOMIC_BOOL {
            return self.compare_exchange(current, new, success, failure);
        }

        // SAFETY: data races are prevented by atomic intrinsics.
        match unsafe {
            atomic_compare_exchange_weak(self.v.get(), current as u8, new as u8, success, failure)
        } {
            Ok(x) => Ok(x != 0),
            Err(x) => Err(x != 0),
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_and(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // We can't use atomic_nand here because it can result in a bool with
        // an invalid value. This happens because the atomic operation is done
        // with an 8-bit integer internally, which would set the upper 7 bits.
        // So we just use fetch_xor or swap instead.
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_or(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[stable(feature = "rust1", since = "1.0.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        // SAFETY: data races are prevented by atomic intrinsics.
        unsafe { atomic_xor(self.v.get(), val as u8, order) != 0 }
    }

    /// Logical "not" with a boolean value.
    ///
    /// Performs a logical "not" operation on the current value, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// **Note:** This method is only available on platforms that support atomic
    /// operations on `u8`.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[stable(feature = "atomic_bool_fetch_not", since = "1.81.0")]
    #[cfg(target_has_atomic = "8")]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_not(&self, order: Ordering) -> bool {
        self.fetch_xor(true, order)
    }
1212 | |
1213 | /// Returns a mutable pointer to the underlying [`bool`]. |
1214 | /// |
1215 | /// Doing non-atomic reads and writes on the resulting boolean can be a data race. |
1216 | /// This method is mostly useful for FFI, where the function signature may use |
1217 | /// `*mut bool` instead of `&AtomicBool`. |
1218 | /// |
1219 | /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the |
1220 | /// atomic types work with interior mutability. All modifications of an atomic change the value |
1221 | /// through a shared reference, and can do so safely as long as they use atomic operations. Any |
1222 | /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same |
1223 | /// restriction: operations on it must be atomic. |
1224 | /// |
1225 | /// # Examples |
1226 | /// |
1227 | /// ```ignore (extern-declaration) |
1228 | /// # fn main() { |
1229 | /// use std::sync::atomic::AtomicBool; |
1230 | /// |
/// extern "C" {
1232 | /// fn my_atomic_op(arg: *mut bool); |
1233 | /// } |
1234 | /// |
1235 | /// let mut atomic = AtomicBool::new(true); |
1236 | /// unsafe { |
1237 | /// my_atomic_op(atomic.as_ptr()); |
1238 | /// } |
1239 | /// # } |
1240 | /// ``` |
1241 | #[inline] |
1242 | #[stable(feature = "atomic_as_ptr", since = "1.70.0")] |
1243 | #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")] |
1244 | #[rustc_never_returns_null_ptr] |
1245 | pub const fn as_ptr(&self) -> *mut bool { |
1246 | self.v.get().cast() |
1247 | } |
1248 | |
1249 | /// Fetches the value, and applies a function to it that returns an optional |
1250 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function |
1251 | /// returned `Some(_)`, else `Err(previous_value)`. |
1252 | /// |
1253 | /// Note: This may call the function multiple times if the value has been |
1254 | /// changed from other threads in the meantime, as long as the function |
1255 | /// returns `Some(_)`, but the function will have been applied only once to |
1256 | /// the stored value. |
1257 | /// |
1258 | /// `fetch_update` takes two [`Ordering`] arguments to describe the memory |
1259 | /// ordering of this operation. The first describes the required ordering for |
1260 | /// when the operation finally succeeds while the second describes the |
1261 | /// required ordering for loads. These correspond to the success and failure |
1262 | /// orderings of [`AtomicBool::compare_exchange`] respectively. |
1263 | /// |
1264 | /// Using [`Acquire`] as success ordering makes the store part of this |
1265 | /// operation [`Relaxed`], and using [`Release`] makes the final successful |
1266 | /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], |
1267 | /// [`Acquire`] or [`Relaxed`]. |
1268 | /// |
1269 | /// **Note:** This method is only available on platforms that support atomic |
1270 | /// operations on `u8`. |
1271 | /// |
1272 | /// # Considerations |
1273 | /// |
1274 | /// This method is not magic; it is not provided by the hardware. |
1275 | /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks. |
1276 | /// In particular, this method will not circumvent the [ABA Problem]. |
1277 | /// |
1278 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
1279 | /// |
1280 | /// # Examples |
1281 | /// |
1282 | /// ```rust |
1283 | /// use std::sync::atomic::{AtomicBool, Ordering}; |
1284 | /// |
1285 | /// let x = AtomicBool::new(false); |
1286 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false)); |
1287 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false)); |
1288 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true)); |
1289 | /// assert_eq!(x.load(Ordering::SeqCst), false); |
1290 | /// ``` |
1291 | #[inline] |
1292 | #[stable(feature = "atomic_fetch_update", since = "1.53.0")] |
1293 | #[cfg(target_has_atomic = "8")] |
1294 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1295 | pub fn fetch_update<F>( |
1296 | &self, |
1297 | set_order: Ordering, |
1298 | fetch_order: Ordering, |
1299 | mut f: F, |
1300 | ) -> Result<bool, bool> |
1301 | where |
1302 | F: FnMut(bool) -> Option<bool>, |
1303 | { |
1304 | let mut prev = self.load(fetch_order); |
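// Retry the CAS with the freshly observed value until it succeeds, or return
// `Err` with the current value as soon as the closure declines to produce a
// new one.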
1305 | while let Some(next) = f(prev) { |
1306 | match self.compare_exchange_weak(prev, next, set_order, fetch_order) { |
1307 | x @ Ok(_) => return x, |
1308 | Err(next_prev) => prev = next_prev, |
1309 | } |
1310 | } |
1311 | Err(prev) |
1312 | } |
1313 | |
1314 | /// Fetches the value, and applies a function to it that returns an optional |
1315 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function |
1316 | /// returned `Some(_)`, else `Err(previous_value)`. |
1317 | /// |
1318 | /// See also: [`update`](`AtomicBool::update`). |
1319 | /// |
1320 | /// Note: This may call the function multiple times if the value has been |
1321 | /// changed from other threads in the meantime, as long as the function |
1322 | /// returns `Some(_)`, but the function will have been applied only once to |
1323 | /// the stored value. |
1324 | /// |
1325 | /// `try_update` takes two [`Ordering`] arguments to describe the memory |
1326 | /// ordering of this operation. The first describes the required ordering for |
1327 | /// when the operation finally succeeds while the second describes the |
1328 | /// required ordering for loads. These correspond to the success and failure |
1329 | /// orderings of [`AtomicBool::compare_exchange`] respectively. |
1330 | /// |
1331 | /// Using [`Acquire`] as success ordering makes the store part of this |
1332 | /// operation [`Relaxed`], and using [`Release`] makes the final successful |
1333 | /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], |
1334 | /// [`Acquire`] or [`Relaxed`]. |
1335 | /// |
1336 | /// **Note:** This method is only available on platforms that support atomic |
1337 | /// operations on `u8`. |
1338 | /// |
1339 | /// # Considerations |
1340 | /// |
1341 | /// This method is not magic; it is not provided by the hardware. |
1342 | /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks. |
1343 | /// In particular, this method will not circumvent the [ABA Problem]. |
1344 | /// |
1345 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
1346 | /// |
1347 | /// # Examples |
1348 | /// |
1349 | /// ```rust |
1350 | /// #![feature(atomic_try_update)] |
1351 | /// use std::sync::atomic::{AtomicBool, Ordering}; |
1352 | /// |
1353 | /// let x = AtomicBool::new(false); |
1354 | /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false)); |
1355 | /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false)); |
1356 | /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true)); |
1357 | /// assert_eq!(x.load(Ordering::SeqCst), false); |
1358 | /// ``` |
1359 | #[inline] |
1360 | #[unstable(feature = "atomic_try_update", issue = "135894")] |
1361 | #[cfg(target_has_atomic = "8")] |
1362 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1363 | pub fn try_update( |
1364 | &self, |
1365 | set_order: Ordering, |
1366 | fetch_order: Ordering, |
1367 | f: impl FnMut(bool) -> Option<bool>, |
1368 | ) -> Result<bool, bool> { |
1369 | // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`; |
1370 | // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`. |
1371 | self.fetch_update(set_order, fetch_order, f) |
1372 | } |
1373 | |
/// Fetches the value, and applies a function to it that returns a new value.
1375 | /// The new value is stored and the old value is returned. |
1376 | /// |
1377 | /// See also: [`try_update`](`AtomicBool::try_update`). |
1378 | /// |
1379 | /// Note: This may call the function multiple times if the value has been changed from other threads in |
1380 | /// the meantime, but the function will have been applied only once to the stored value. |
1381 | /// |
1382 | /// `update` takes two [`Ordering`] arguments to describe the memory |
1383 | /// ordering of this operation. The first describes the required ordering for |
1384 | /// when the operation finally succeeds while the second describes the |
1385 | /// required ordering for loads. These correspond to the success and failure |
1386 | /// orderings of [`AtomicBool::compare_exchange`] respectively. |
1387 | /// |
1388 | /// Using [`Acquire`] as success ordering makes the store part |
1389 | /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load |
1390 | /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
1391 | /// |
1392 | /// **Note:** This method is only available on platforms that support atomic operations on `u8`. |
1393 | /// |
1394 | /// # Considerations |
1395 | /// |
1396 | /// This method is not magic; it is not provided by the hardware. |
1397 | /// It is implemented in terms of [`AtomicBool::compare_exchange_weak`], and suffers from the same drawbacks. |
1398 | /// In particular, this method will not circumvent the [ABA Problem]. |
1399 | /// |
1400 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
1401 | /// |
1402 | /// # Examples |
1403 | /// |
1404 | /// ```rust |
1405 | /// #![feature(atomic_try_update)] |
1406 | /// |
1407 | /// use std::sync::atomic::{AtomicBool, Ordering}; |
1408 | /// |
1409 | /// let x = AtomicBool::new(false); |
1410 | /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), false); |
1411 | /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| !x), true); |
1412 | /// assert_eq!(x.load(Ordering::SeqCst), false); |
1413 | /// ``` |
1414 | #[inline] |
1415 | #[unstable(feature = "atomic_try_update", issue = "135894")] |
1416 | #[cfg(target_has_atomic = "8")] |
1417 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1418 | pub fn update( |
1419 | &self, |
1420 | set_order: Ordering, |
1421 | fetch_order: Ordering, |
1422 | mut f: impl FnMut(bool) -> bool, |
1423 | ) -> bool { |
1424 | let mut prev = self.load(fetch_order); |
1425 | loop { |
1426 | match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) { |
1427 | Ok(x) => break x, |
1428 | Err(next_prev) => prev = next_prev, |
1429 | } |
1430 | } |
1431 | } |
1432 | } |
1433 | |
1434 | #[cfg(target_has_atomic_load_store = "ptr")] |
1435 | impl<T> AtomicPtr<T> { |
1436 | /// Creates a new `AtomicPtr`. |
1437 | /// |
1438 | /// # Examples |
1439 | /// |
1440 | /// ``` |
1441 | /// use std::sync::atomic::AtomicPtr; |
1442 | /// |
1443 | /// let ptr = &mut 5; |
1444 | /// let atomic_ptr = AtomicPtr::new(ptr); |
1445 | /// ``` |
1446 | #[inline] |
1447 | #[stable(feature = "rust1", since = "1.0.0")] |
1448 | #[rustc_const_stable(feature = "const_atomic_new", since = "1.24.0")] |
1449 | pub const fn new(p: *mut T) -> AtomicPtr<T> { |
1450 | AtomicPtr { p: UnsafeCell::new(p) } |
1451 | } |
1452 | |
1453 | /// Creates a new `AtomicPtr` from a pointer. |
1454 | /// |
1455 | /// # Examples |
1456 | /// |
1457 | /// ``` |
1458 | /// use std::sync::atomic::{self, AtomicPtr}; |
1459 | /// |
1460 | /// // Get a pointer to an allocated value |
1461 | /// let ptr: *mut *mut u8 = Box::into_raw(Box::new(std::ptr::null_mut())); |
1462 | /// |
1463 | /// assert!(ptr.cast::<AtomicPtr<u8>>().is_aligned()); |
1464 | /// |
1465 | /// { |
1466 | /// // Create an atomic view of the allocated value |
1467 | /// let atomic = unsafe { AtomicPtr::from_ptr(ptr) }; |
1468 | /// |
1469 | /// // Use `atomic` for atomic operations, possibly share it with other threads |
1470 | /// atomic.store(std::ptr::NonNull::dangling().as_ptr(), atomic::Ordering::Relaxed); |
1471 | /// } |
1472 | /// |
1473 | /// // It's ok to non-atomically access the value behind `ptr`, |
1474 | /// // since the reference to the atomic ended its lifetime in the block above |
1475 | /// assert!(!unsafe { *ptr }.is_null()); |
1476 | /// |
1477 | /// // Deallocate the value |
1478 | /// unsafe { drop(Box::from_raw(ptr)) } |
1479 | /// ``` |
1480 | /// |
1481 | /// # Safety |
1482 | /// |
1483 | /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this |
1484 | /// can be bigger than `align_of::<*mut T>()`). |
1485 | /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. |
1486 | /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not |
1487 | /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes, |
1488 | /// without synchronization. |
1489 | /// |
1490 | /// [valid]: crate::ptr#safety |
1491 | /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses |
1492 | #[inline] |
1493 | #[stable(feature = "atomic_from_ptr", since = "1.75.0")] |
1494 | #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")] |
1495 | pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a AtomicPtr<T> { |
1496 | // SAFETY: guaranteed by the caller |
1497 | unsafe { &*ptr.cast() } |
1498 | } |
1499 | |
1500 | /// Returns a mutable reference to the underlying pointer. |
1501 | /// |
1502 | /// This is safe because the mutable reference guarantees that no other threads are |
1503 | /// concurrently accessing the atomic data. |
1504 | /// |
1505 | /// # Examples |
1506 | /// |
1507 | /// ``` |
1508 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1509 | /// |
1510 | /// let mut data = 10; |
1511 | /// let mut atomic_ptr = AtomicPtr::new(&mut data); |
1512 | /// let mut other_data = 5; |
1513 | /// *atomic_ptr.get_mut() = &mut other_data; |
1514 | /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5); |
1515 | /// ``` |
1516 | #[inline] |
1517 | #[stable(feature = "atomic_access", since = "1.15.0")] |
1518 | pub fn get_mut(&mut self) -> &mut *mut T { |
1519 | self.p.get_mut() |
1520 | } |
1521 | |
1522 | /// Gets atomic access to a pointer. |
1523 | /// |
1524 | /// # Examples |
1525 | /// |
1526 | /// ``` |
1527 | /// #![feature(atomic_from_mut)] |
1528 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1529 | /// |
1530 | /// let mut data = 123; |
1531 | /// let mut some_ptr = &mut data as *mut i32; |
1532 | /// let a = AtomicPtr::from_mut(&mut some_ptr); |
1533 | /// let mut other_data = 456; |
1534 | /// a.store(&mut other_data, Ordering::Relaxed); |
1535 | /// assert_eq!(unsafe { *some_ptr }, 456); |
1536 | /// ``` |
1537 | #[inline] |
1538 | #[cfg(target_has_atomic_equal_alignment = "ptr")] |
1539 | #[unstable(feature = "atomic_from_mut", issue = "76314")] |
1540 | pub fn from_mut(v: &mut *mut T) -> &mut Self { |
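// Compile-time alignment check: if `AtomicPtr` required stricter alignment
// than `*mut T`, this array would have a nonzero length and the empty pattern
// would fail to match, producing a compile error.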
1541 | let [] = [(); align_of::<AtomicPtr<()>>() - align_of::<*mut ()>()]; |
1542 | // SAFETY: |
1543 | // - the mutable reference guarantees unique ownership. |
1544 | // - the alignment of `*mut T` and `Self` is the same on all platforms |
1545 | // supported by rust, as verified above. |
1546 | unsafe { &mut *(v as *mut *mut T as *mut Self) } |
1547 | } |
1548 | |
1549 | /// Gets non-atomic access to a `&mut [AtomicPtr]` slice. |
1550 | /// |
1551 | /// This is safe because the mutable reference guarantees that no other threads are |
1552 | /// concurrently accessing the atomic data. |
1553 | /// |
1554 | /// # Examples |
1555 | /// |
1556 | /// ```ignore-wasm |
1557 | /// #![feature(atomic_from_mut)] |
1558 | /// use std::ptr::null_mut; |
1559 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1560 | /// |
1561 | /// let mut some_ptrs = [const { AtomicPtr::new(null_mut::<String>()) }; 10]; |
1562 | /// |
1563 | /// let view: &mut [*mut String] = AtomicPtr::get_mut_slice(&mut some_ptrs); |
1564 | /// assert_eq!(view, [null_mut::<String>(); 10]); |
1565 | /// view |
1566 | /// .iter_mut() |
1567 | /// .enumerate() |
1568 | /// .for_each(|(i, ptr)| *ptr = Box::into_raw(Box::new(format!("iteration#{i}")))); |
1569 | /// |
1570 | /// std::thread::scope(|s| { |
1571 | /// for ptr in &some_ptrs { |
1572 | /// s.spawn(move || { |
1573 | /// let ptr = ptr.load(Ordering::Relaxed); |
1574 | /// assert!(!ptr.is_null()); |
1575 | /// |
1576 | /// let name = unsafe { Box::from_raw(ptr) }; |
1577 | /// println!("Hello, {name}!"); |
1578 | /// }); |
1579 | /// } |
1580 | /// }); |
1581 | /// ``` |
1582 | #[inline] |
1583 | #[unstable(feature = "atomic_from_mut", issue = "76314")] |
1584 | pub fn get_mut_slice(this: &mut [Self]) -> &mut [*mut T] { |
1585 | // SAFETY: the mutable reference guarantees unique ownership. |
1586 | unsafe { &mut *(this as *mut [Self] as *mut [*mut T]) } |
1587 | } |
1588 | |
1589 | /// Gets atomic access to a slice of pointers. |
1590 | /// |
1591 | /// # Examples |
1592 | /// |
1593 | /// ```ignore-wasm |
1594 | /// #![feature(atomic_from_mut)] |
1595 | /// use std::ptr::null_mut; |
1596 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1597 | /// |
1598 | /// let mut some_ptrs = [null_mut::<String>(); 10]; |
1599 | /// let a = &*AtomicPtr::from_mut_slice(&mut some_ptrs); |
1600 | /// std::thread::scope(|s| { |
1601 | /// for i in 0..a.len() { |
1602 | /// s.spawn(move || { |
1603 | /// let name = Box::new(format!("thread{i}")); |
1604 | /// a[i].store(Box::into_raw(name), Ordering::Relaxed); |
1605 | /// }); |
1606 | /// } |
1607 | /// }); |
1608 | /// for p in some_ptrs { |
1609 | /// assert!(!p.is_null()); |
1610 | /// let name = unsafe { Box::from_raw(p) }; |
1611 | /// println!("Hello, {name}!"); |
1612 | /// } |
1613 | /// ``` |
1614 | #[inline] |
1615 | #[cfg(target_has_atomic_equal_alignment = "ptr")] |
1616 | #[unstable(feature = "atomic_from_mut", issue = "76314")] |
1617 | pub fn from_mut_slice(v: &mut [*mut T]) -> &mut [Self] { |
1618 | // SAFETY: |
1619 | // - the mutable reference guarantees unique ownership. |
// - the alignment of `*mut T` and `Self` is the same on all platforms
//   supported by rust, as guaranteed by the `target_has_atomic_equal_alignment`
//   `cfg` on this method.
1622 | unsafe { &mut *(v as *mut [*mut T] as *mut [Self]) } |
1623 | } |
1624 | |
1625 | /// Consumes the atomic and returns the contained value. |
1626 | /// |
1627 | /// This is safe because passing `self` by value guarantees that no other threads are |
1628 | /// concurrently accessing the atomic data. |
1629 | /// |
1630 | /// # Examples |
1631 | /// |
1632 | /// ``` |
1633 | /// use std::sync::atomic::AtomicPtr; |
1634 | /// |
1635 | /// let mut data = 5; |
1636 | /// let atomic_ptr = AtomicPtr::new(&mut data); |
1637 | /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5); |
1638 | /// ``` |
1639 | #[inline] |
1640 | #[stable(feature = "atomic_access", since = "1.15.0")] |
1641 | #[rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0")] |
1642 | pub const fn into_inner(self) -> *mut T { |
1643 | self.p.into_inner() |
1644 | } |
1645 | |
1646 | /// Loads a value from the pointer. |
1647 | /// |
1648 | /// `load` takes an [`Ordering`] argument which describes the memory ordering |
1649 | /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`]. |
1650 | /// |
1651 | /// # Panics |
1652 | /// |
1653 | /// Panics if `order` is [`Release`] or [`AcqRel`]. |
1654 | /// |
1655 | /// # Examples |
1656 | /// |
1657 | /// ``` |
1658 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1659 | /// |
1660 | /// let ptr = &mut 5; |
1661 | /// let some_ptr = AtomicPtr::new(ptr); |
1662 | /// |
1663 | /// let value = some_ptr.load(Ordering::Relaxed); |
1664 | /// ``` |
1665 | #[inline] |
1666 | #[stable(feature = "rust1", since = "1.0.0")] |
1667 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1668 | pub fn load(&self, order: Ordering) -> *mut T { |
1669 | // SAFETY: data races are prevented by atomic intrinsics. |
1670 | unsafe { atomic_load(self.p.get(), order) } |
1671 | } |
1672 | |
1673 | /// Stores a value into the pointer. |
1674 | /// |
1675 | /// `store` takes an [`Ordering`] argument which describes the memory ordering |
1676 | /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`]. |
1677 | /// |
1678 | /// # Panics |
1679 | /// |
1680 | /// Panics if `order` is [`Acquire`] or [`AcqRel`]. |
1681 | /// |
1682 | /// # Examples |
1683 | /// |
1684 | /// ``` |
1685 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1686 | /// |
1687 | /// let ptr = &mut 5; |
1688 | /// let some_ptr = AtomicPtr::new(ptr); |
1689 | /// |
1690 | /// let other_ptr = &mut 10; |
1691 | /// |
1692 | /// some_ptr.store(other_ptr, Ordering::Relaxed); |
1693 | /// ``` |
1694 | #[inline] |
1695 | #[stable(feature = "rust1", since = "1.0.0")] |
1696 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1697 | pub fn store(&self, ptr: *mut T, order: Ordering) { |
1698 | // SAFETY: data races are prevented by atomic intrinsics. |
1699 | unsafe { |
1700 | atomic_store(self.p.get(), ptr, order); |
1701 | } |
1702 | } |
1703 | |
1704 | /// Stores a value into the pointer, returning the previous value. |
1705 | /// |
1706 | /// `swap` takes an [`Ordering`] argument which describes the memory ordering |
1707 | /// of this operation. All ordering modes are possible. Note that using |
1708 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1709 | /// using [`Release`] makes the load part [`Relaxed`]. |
1710 | /// |
1711 | /// **Note:** This method is only available on platforms that support atomic |
1712 | /// operations on pointers. |
1713 | /// |
1714 | /// # Examples |
1715 | /// |
1716 | /// ``` |
1717 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1718 | /// |
1719 | /// let ptr = &mut 5; |
1720 | /// let some_ptr = AtomicPtr::new(ptr); |
1721 | /// |
1722 | /// let other_ptr = &mut 10; |
1723 | /// |
1724 | /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed); |
1725 | /// ``` |
1726 | #[inline] |
1727 | #[stable(feature = "rust1", since = "1.0.0")] |
1728 | #[cfg(target_has_atomic = "ptr")] |
1729 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1730 | pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T { |
1731 | // SAFETY: data races are prevented by atomic intrinsics. |
1732 | unsafe { atomic_swap(self.p.get(), ptr, order) } |
1733 | } |
1734 | |
1735 | /// Stores a value into the pointer if the current value is the same as the `current` value. |
1736 | /// |
1737 | /// The return value is always the previous value. If it is equal to `current`, then the value |
1738 | /// was updated. |
1739 | /// |
1740 | /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory |
1741 | /// ordering of this operation. Notice that even when using [`AcqRel`], the operation |
1742 | /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics. |
1743 | /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it |
1744 | /// happens, and using [`Release`] makes the load part [`Relaxed`]. |
1745 | /// |
1746 | /// **Note:** This method is only available on platforms that support atomic |
1747 | /// operations on pointers. |
1748 | /// |
1749 | /// # Migrating to `compare_exchange` and `compare_exchange_weak` |
1750 | /// |
1751 | /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for |
1752 | /// memory orderings: |
1753 | /// |
1754 | /// Original | Success | Failure |
1755 | /// -------- | ------- | ------- |
1756 | /// Relaxed | Relaxed | Relaxed |
1757 | /// Acquire | Acquire | Acquire |
1758 | /// Release | Release | Relaxed |
1759 | /// AcqRel | AcqRel | Acquire |
1760 | /// SeqCst | SeqCst | SeqCst |
1761 | /// |
1762 | /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use |
1763 | /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`, |
1764 | /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err` |
1765 | /// rather than to infer success vs failure based on the value that was read. |
1766 | /// |
1767 | /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead. |
1768 | /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds, |
1769 | /// which allows the compiler to generate better assembly code when the compare and swap |
1770 | /// is used in a loop. |
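///
/// For example, a call that used [`AcqRel`] might migrate as in the sketch below
/// (a minimal illustration of the ordering table above, not a drop-in replacement
/// for every call site):
///
/// ```
/// use std::sync::atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let some_ptr = AtomicPtr::new(ptr);
/// let other_ptr = &mut 10;
///
/// // Before: `some_ptr.compare_and_swap(ptr, other_ptr, Ordering::AcqRel)`.
/// // After: success ordering `AcqRel`, failure ordering `Acquire` (per the table),
/// // with `unwrap_or_else` recovering the previous-value return convention.
/// let value = some_ptr
///     .compare_exchange(ptr, other_ptr, Ordering::AcqRel, Ordering::Acquire)
///     .unwrap_or_else(|x| x);
/// ```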
1771 | /// |
1772 | /// # Examples |
1773 | /// |
1774 | /// ``` |
1775 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1776 | /// |
1777 | /// let ptr = &mut 5; |
1778 | /// let some_ptr = AtomicPtr::new(ptr); |
1779 | /// |
1780 | /// let other_ptr = &mut 10; |
1781 | /// |
1782 | /// let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed); |
1783 | /// ``` |
1784 | #[inline] |
1785 | #[stable(feature = "rust1", since = "1.0.0")] |
1786 | #[deprecated( |
1787 | since = "1.50.0", |
1788 | note = "Use `compare_exchange` or `compare_exchange_weak` instead" |
1789 | )] |
1790 | #[cfg(target_has_atomic = "ptr")] |
1791 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1792 | pub fn compare_and_swap(&self, current: *mut T, new: *mut T, order: Ordering) -> *mut T { |
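// Delegate to `compare_exchange`, using the strongest failure ordering allowed
// for `order` (this reproduces the success/failure mapping in the table above),
// then collapse `Ok`/`Err` into the plain previous value.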
1793 | match self.compare_exchange(current, new, order, strongest_failure_ordering(order)) { |
1794 | Ok(x) => x, |
1795 | Err(x) => x, |
1796 | } |
1797 | } |
1798 | |
1799 | /// Stores a value into the pointer if the current value is the same as the `current` value. |
1800 | /// |
1801 | /// The return value is a result indicating whether the new value was written and containing |
1802 | /// the previous value. On success this value is guaranteed to be equal to `current`. |
1803 | /// |
1804 | /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory |
1805 | /// ordering of this operation. `success` describes the required ordering for the |
1806 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
1807 | /// `failure` describes the required ordering for the load operation that takes place when |
1808 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
1809 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
1810 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
1811 | /// |
1812 | /// **Note:** This method is only available on platforms that support atomic |
1813 | /// operations on pointers. |
1814 | /// |
1815 | /// # Examples |
1816 | /// |
1817 | /// ``` |
1818 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1819 | /// |
1820 | /// let ptr = &mut 5; |
1821 | /// let some_ptr = AtomicPtr::new(ptr); |
1822 | /// |
1823 | /// let other_ptr = &mut 10; |
1824 | /// |
1825 | /// let value = some_ptr.compare_exchange(ptr, other_ptr, |
1826 | /// Ordering::SeqCst, Ordering::Relaxed); |
1827 | /// ``` |
1828 | #[inline] |
1829 | #[stable(feature = "extended_compare_and_swap", since = "1.10.0")] |
1830 | #[cfg(target_has_atomic = "ptr")] |
1831 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1832 | pub fn compare_exchange( |
1833 | &self, |
1834 | current: *mut T, |
1835 | new: *mut T, |
1836 | success: Ordering, |
1837 | failure: Ordering, |
1838 | ) -> Result<*mut T, *mut T> { |
1839 | // SAFETY: data races are prevented by atomic intrinsics. |
1840 | unsafe { atomic_compare_exchange(self.p.get(), current, new, success, failure) } |
1841 | } |
1842 | |
1843 | /// Stores a value into the pointer if the current value is the same as the `current` value. |
1844 | /// |
1845 | /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the |
1846 | /// comparison succeeds, which can result in more efficient code on some platforms. The |
1847 | /// return value is a result indicating whether the new value was written and containing the |
1848 | /// previous value. |
1849 | /// |
1850 | /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory |
1851 | /// ordering of this operation. `success` describes the required ordering for the |
1852 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
1853 | /// `failure` describes the required ordering for the load operation that takes place when |
1854 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
1855 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
1856 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
1857 | /// |
1858 | /// **Note:** This method is only available on platforms that support atomic |
1859 | /// operations on pointers. |
1860 | /// |
1861 | /// # Examples |
1862 | /// |
1863 | /// ``` |
1864 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1865 | /// |
1866 | /// let some_ptr = AtomicPtr::new(&mut 5); |
1867 | /// |
1868 | /// let new = &mut 10; |
1869 | /// let mut old = some_ptr.load(Ordering::Relaxed); |
1870 | /// loop { |
1871 | /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) { |
1872 | /// Ok(_) => break, |
1873 | /// Err(x) => old = x, |
1874 | /// } |
1875 | /// } |
1876 | /// ``` |
1877 | #[inline] |
1878 | #[stable(feature = "extended_compare_and_swap", since = "1.10.0")] |
1879 | #[cfg(target_has_atomic = "ptr")] |
1880 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1881 | pub fn compare_exchange_weak( |
1882 | &self, |
1883 | current: *mut T, |
1884 | new: *mut T, |
1885 | success: Ordering, |
1886 | failure: Ordering, |
1887 | ) -> Result<*mut T, *mut T> { |
1888 | // SAFETY: This intrinsic is unsafe because it operates on a raw pointer |
1889 | // but we know for sure that the pointer is valid (we just got it from |
1890 | // an `UnsafeCell` that we have by reference) and the atomic operation |
1891 | // itself allows us to safely mutate the `UnsafeCell` contents. |
1892 | unsafe { atomic_compare_exchange_weak(self.p.get(), current, new, success, failure) } |
1893 | } |
1894 | |
1895 | /// Fetches the value, and applies a function to it that returns an optional |
1896 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function |
1897 | /// returned `Some(_)`, else `Err(previous_value)`. |
1898 | /// |
1899 | /// Note: This may call the function multiple times if the value has been |
1900 | /// changed from other threads in the meantime, as long as the function |
1901 | /// returns `Some(_)`, but the function will have been applied only once to |
1902 | /// the stored value. |
1903 | /// |
1904 | /// `fetch_update` takes two [`Ordering`] arguments to describe the memory |
1905 | /// ordering of this operation. The first describes the required ordering for |
1906 | /// when the operation finally succeeds while the second describes the |
1907 | /// required ordering for loads. These correspond to the success and failure |
1908 | /// orderings of [`AtomicPtr::compare_exchange`] respectively. |
1909 | /// |
1910 | /// Using [`Acquire`] as success ordering makes the store part of this |
1911 | /// operation [`Relaxed`], and using [`Release`] makes the final successful |
1912 | /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], |
1913 | /// [`Acquire`] or [`Relaxed`]. |
1914 | /// |
1915 | /// **Note:** This method is only available on platforms that support atomic |
1916 | /// operations on pointers. |
1917 | /// |
1918 | /// # Considerations |
1919 | /// |
1920 | /// This method is not magic; it is not provided by the hardware. |
1921 | /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks. |
1922 | /// In particular, this method will not circumvent the [ABA Problem]. |
1923 | /// |
1924 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
1925 | /// |
1926 | /// # Examples |
1927 | /// |
1928 | /// ```rust |
1929 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
1930 | /// |
1931 | /// let ptr: *mut _ = &mut 5; |
1932 | /// let some_ptr = AtomicPtr::new(ptr); |
1933 | /// |
1934 | /// let new: *mut _ = &mut 10; |
1935 | /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr)); |
1936 | /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| { |
1937 | /// if x == ptr { |
1938 | /// Some(new) |
1939 | /// } else { |
1940 | /// None |
1941 | /// } |
1942 | /// }); |
1943 | /// assert_eq!(result, Ok(ptr)); |
1944 | /// assert_eq!(some_ptr.load(Ordering::SeqCst), new); |
1945 | /// ``` |
1946 | #[inline] |
1947 | #[stable(feature = "atomic_fetch_update", since = "1.53.0")] |
1948 | #[cfg(target_has_atomic = "ptr")] |
1949 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
1950 | pub fn fetch_update<F>( |
1951 | &self, |
1952 | set_order: Ordering, |
1953 | fetch_order: Ordering, |
1954 | mut f: F, |
1955 | ) -> Result<*mut T, *mut T> |
1956 | where |
1957 | F: FnMut(*mut T) -> Option<*mut T>, |
1958 | { |
1959 | let mut prev = self.load(fetch_order); |
1960 | while let Some(next) = f(prev) { |
1961 | match self.compare_exchange_weak(prev, next, set_order, fetch_order) { |
1962 | x @ Ok(_) => return x, |
1963 | Err(next_prev) => prev = next_prev, |
1964 | } |
1965 | } |
1966 | Err(prev) |
1967 | } |

/// Fetches the value, and applies a function to it that returns an optional
1969 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function |
1970 | /// returned `Some(_)`, else `Err(previous_value)`. |
1971 | /// |
1972 | /// See also: [`update`](`AtomicPtr::update`). |
1973 | /// |
1974 | /// Note: This may call the function multiple times if the value has been |
1975 | /// changed from other threads in the meantime, as long as the function |
1976 | /// returns `Some(_)`, but the function will have been applied only once to |
1977 | /// the stored value. |
1978 | /// |
1979 | /// `try_update` takes two [`Ordering`] arguments to describe the memory |
1980 | /// ordering of this operation. The first describes the required ordering for |
1981 | /// when the operation finally succeeds while the second describes the |
1982 | /// required ordering for loads. These correspond to the success and failure |
1983 | /// orderings of [`AtomicPtr::compare_exchange`] respectively. |
1984 | /// |
1985 | /// Using [`Acquire`] as success ordering makes the store part of this |
1986 | /// operation [`Relaxed`], and using [`Release`] makes the final successful |
1987 | /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], |
1988 | /// [`Acquire`] or [`Relaxed`]. |
1989 | /// |
1990 | /// **Note:** This method is only available on platforms that support atomic |
1991 | /// operations on pointers. |
1992 | /// |
1993 | /// # Considerations |
1994 | /// |
1995 | /// This method is not magic; it is not provided by the hardware. |
1996 | /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks. |
1997 | /// In particular, this method will not circumvent the [ABA Problem]. |
1998 | /// |
1999 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
2000 | /// |
2001 | /// # Examples |
2002 | /// |
2003 | /// ```rust |
2004 | /// #![feature(atomic_try_update)] |
2005 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
2006 | /// |
2007 | /// let ptr: *mut _ = &mut 5; |
2008 | /// let some_ptr = AtomicPtr::new(ptr); |
2009 | /// |
2010 | /// let new: *mut _ = &mut 10; |
2011 | /// assert_eq!(some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr)); |
2012 | /// let result = some_ptr.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| { |
2013 | /// if x == ptr { |
2014 | /// Some(new) |
2015 | /// } else { |
2016 | /// None |
2017 | /// } |
2018 | /// }); |
2019 | /// assert_eq!(result, Ok(ptr)); |
2020 | /// assert_eq!(some_ptr.load(Ordering::SeqCst), new); |
2021 | /// ``` |
2022 | #[inline] |
2023 | #[unstable(feature = "atomic_try_update", issue = "135894")] |
2024 | #[cfg(target_has_atomic = "ptr")] |
2025 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2026 | pub fn try_update( |
2027 | &self, |
2028 | set_order: Ordering, |
2029 | fetch_order: Ordering, |
2030 | f: impl FnMut(*mut T) -> Option<*mut T>, |
2031 | ) -> Result<*mut T, *mut T> { |
2032 | // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`; |
2033 | // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`. |
2034 | self.fetch_update(set_order, fetch_order, f) |
2035 | } |
2036 | |
/// Fetches the value, and applies a function to it that returns a new value.
2038 | /// The new value is stored and the old value is returned. |
2039 | /// |
2040 | /// See also: [`try_update`](`AtomicPtr::try_update`). |
2041 | /// |
2042 | /// Note: This may call the function multiple times if the value has been changed from other threads in |
2043 | /// the meantime, but the function will have been applied only once to the stored value. |
2044 | /// |
2045 | /// `update` takes two [`Ordering`] arguments to describe the memory |
2046 | /// ordering of this operation. The first describes the required ordering for |
2047 | /// when the operation finally succeeds while the second describes the |
2048 | /// required ordering for loads. These correspond to the success and failure |
2049 | /// orderings of [`AtomicPtr::compare_exchange`] respectively. |
2050 | /// |
2051 | /// Using [`Acquire`] as success ordering makes the store part |
2052 | /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load |
2053 | /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
2054 | /// |
2055 | /// **Note:** This method is only available on platforms that support atomic |
2056 | /// operations on pointers. |
2057 | /// |
2058 | /// # Considerations |
2059 | /// |
2060 | /// This method is not magic; it is not provided by the hardware. |
2061 | /// It is implemented in terms of [`AtomicPtr::compare_exchange_weak`], and suffers from the same drawbacks. |
2062 | /// In particular, this method will not circumvent the [ABA Problem]. |
2063 | /// |
2064 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
2065 | /// |
2066 | /// # Examples |
2067 | /// |
2068 | /// ```rust |
2069 | /// #![feature(atomic_try_update)] |
2070 | /// |
2071 | /// use std::sync::atomic::{AtomicPtr, Ordering}; |
2072 | /// |
2073 | /// let ptr: *mut _ = &mut 5; |
2074 | /// let some_ptr = AtomicPtr::new(ptr); |
2075 | /// |
2076 | /// let new: *mut _ = &mut 10; |
2077 | /// let result = some_ptr.update(Ordering::SeqCst, Ordering::SeqCst, |_| new); |
2078 | /// assert_eq!(result, ptr); |
2079 | /// assert_eq!(some_ptr.load(Ordering::SeqCst), new); |
2080 | /// ``` |
2081 | #[inline] |
2082 | #[unstable(feature = "atomic_try_update", issue = "135894")] |
#[cfg(target_has_atomic = "ptr")]
2084 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2085 | pub fn update( |
2086 | &self, |
2087 | set_order: Ordering, |
2088 | fetch_order: Ordering, |
2089 | mut f: impl FnMut(*mut T) -> *mut T, |
2090 | ) -> *mut T { |
2091 | let mut prev = self.load(fetch_order); |
2092 | loop { |
2093 | match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) { |
2094 | Ok(x) => break x, |
2095 | Err(next_prev) => prev = next_prev, |
2096 | } |
2097 | } |
2098 | } |
2099 | |
2100 | /// Offsets the pointer's address by adding `val` (in units of `T`), |
2101 | /// returning the previous pointer. |
2102 | /// |
/// This is equivalent to using [`wrapping_add`] to atomically
/// perform `ptr = ptr.wrapping_add(val)`.
2105 | /// |
2106 | /// This method operates in units of `T`, which means that it cannot be used |
2107 | /// to offset the pointer by an amount which is not a multiple of |
2108 | /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to |
2109 | /// work with a deliberately misaligned pointer. In such cases, you may use |
2110 | /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead. |
2111 | /// |
2112 | /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the |
2113 | /// memory ordering of this operation. All ordering modes are possible. Note |
2114 | /// that using [`Acquire`] makes the store part of this operation |
2115 | /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`]. |
2116 | /// |
2117 | /// **Note**: This method is only available on platforms that support atomic |
2118 | /// operations on [`AtomicPtr`]. |
2119 | /// |
2120 | /// [`wrapping_add`]: pointer::wrapping_add |
2121 | /// |
2122 | /// # Examples |
2123 | /// |
2124 | /// ``` |
2125 | /// #![feature(strict_provenance_atomic_ptr)] |
2126 | /// use core::sync::atomic::{AtomicPtr, Ordering}; |
2127 | /// |
2128 | /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut()); |
2129 | /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0); |
2130 | /// // Note: units of `size_of::<i64>()`. |
2131 | /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8); |
2132 | /// ``` |
2133 | #[inline] |
2134 | #[cfg(target_has_atomic = "ptr")] |
2135 | #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")] |
2136 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2137 | pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T { |
2138 | self.fetch_byte_add(val.wrapping_mul(size_of::<T>()), order) |
2139 | } |
2140 | |
2141 | /// Offsets the pointer's address by subtracting `val` (in units of `T`), |
2142 | /// returning the previous pointer. |
2143 | /// |
/// This is equivalent to using [`wrapping_sub`] to atomically
/// perform `ptr = ptr.wrapping_sub(val)`.
2146 | /// |
2147 | /// This method operates in units of `T`, which means that it cannot be used |
2148 | /// to offset the pointer by an amount which is not a multiple of |
2149 | /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to |
2150 | /// work with a deliberately misaligned pointer. In such cases, you may use |
2151 | /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead. |
2152 | /// |
2153 | /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory |
2154 | /// ordering of this operation. All ordering modes are possible. Note that |
2155 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
2156 | /// and using [`Release`] makes the load part [`Relaxed`]. |
2157 | /// |
2158 | /// **Note**: This method is only available on platforms that support atomic |
2159 | /// operations on [`AtomicPtr`]. |
2160 | /// |
2161 | /// [`wrapping_sub`]: pointer::wrapping_sub |
2162 | /// |
2163 | /// # Examples |
2164 | /// |
2165 | /// ``` |
2166 | /// #![feature(strict_provenance_atomic_ptr)] |
2167 | /// use core::sync::atomic::{AtomicPtr, Ordering}; |
2168 | /// |
2169 | /// let array = [1i32, 2i32]; |
2170 | /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _); |
2171 | /// |
2172 | /// assert!(core::ptr::eq( |
2173 | /// atom.fetch_ptr_sub(1, Ordering::Relaxed), |
2174 | /// &array[1], |
2175 | /// )); |
2176 | /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0])); |
2177 | /// ``` |
2178 | #[inline] |
2179 | #[cfg(target_has_atomic = "ptr")] |
2180 | #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")] |
2181 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2182 | pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T { |
2183 | self.fetch_byte_sub(val.wrapping_mul(size_of::<T>()), order) |
2184 | } |
2185 | |
2186 | /// Offsets the pointer's address by adding `val` *bytes*, returning the |
2187 | /// previous pointer. |
2188 | /// |
2189 | /// This is equivalent to using [`wrapping_byte_add`] to atomically |
2190 | /// perform `ptr = ptr.wrapping_byte_add(val)`. |
2191 | /// |
2192 | /// `fetch_byte_add` takes an [`Ordering`] argument which describes the |
2193 | /// memory ordering of this operation. All ordering modes are possible. Note |
2194 | /// that using [`Acquire`] makes the store part of this operation |
2195 | /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`]. |
2196 | /// |
2197 | /// **Note**: This method is only available on platforms that support atomic |
2198 | /// operations on [`AtomicPtr`]. |
2199 | /// |
2200 | /// [`wrapping_byte_add`]: pointer::wrapping_byte_add |
2201 | /// |
2202 | /// # Examples |
2203 | /// |
2204 | /// ``` |
2205 | /// #![feature(strict_provenance_atomic_ptr)] |
2206 | /// use core::sync::atomic::{AtomicPtr, Ordering}; |
2207 | /// |
2208 | /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut()); |
2209 | /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0); |
2210 | /// // Note: in units of bytes, not `size_of::<i64>()`. |
2211 | /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1); |
2212 | /// ``` |
2213 | #[inline] |
2214 | #[cfg(target_has_atomic = "ptr")] |
2215 | #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")] |
2216 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2217 | pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T { |
2218 | // SAFETY: data races are prevented by atomic intrinsics. |
2219 | unsafe { atomic_add(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() } |
2220 | } |
2221 | |
2222 | /// Offsets the pointer's address by subtracting `val` *bytes*, returning the |
2223 | /// previous pointer. |
2224 | /// |
2225 | /// This is equivalent to using [`wrapping_byte_sub`] to atomically |
2226 | /// perform `ptr = ptr.wrapping_byte_sub(val)`. |
2227 | /// |
2228 | /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the |
2229 | /// memory ordering of this operation. All ordering modes are possible. Note |
2230 | /// that using [`Acquire`] makes the store part of this operation |
2231 | /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`]. |
2232 | /// |
2233 | /// **Note**: This method is only available on platforms that support atomic |
2234 | /// operations on [`AtomicPtr`]. |
2235 | /// |
2236 | /// [`wrapping_byte_sub`]: pointer::wrapping_byte_sub |
2237 | /// |
2238 | /// # Examples |
2239 | /// |
2240 | /// ``` |
2241 | /// #![feature(strict_provenance_atomic_ptr)] |
2242 | /// use core::sync::atomic::{AtomicPtr, Ordering}; |
2243 | /// |
2244 | /// let atom = AtomicPtr::<i64>::new(core::ptr::without_provenance_mut(1)); |
2245 | /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1); |
2246 | /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0); |
2247 | /// ``` |
2248 | #[inline] |
2249 | #[cfg(target_has_atomic = "ptr")] |
2250 | #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")] |
2251 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2252 | pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T { |
2253 | // SAFETY: data races are prevented by atomic intrinsics. |
2254 | unsafe { atomic_sub(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() } |
2255 | } |
2256 | |
2257 | /// Performs a bitwise "or" operation on the address of the current pointer, |
2258 | /// and the argument `val`, and stores a pointer with provenance of the |
2259 | /// current pointer and the resulting address. |
2260 | /// |
2261 | /// This is equivalent to using [`map_addr`] to atomically perform |
2262 | /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged |
2263 | /// pointer schemes to atomically set tag bits. |
2264 | /// |
2265 | /// **Caveat**: This operation returns the previous value. To compute the |
2266 | /// stored value without losing provenance, you may use [`map_addr`]. For |
/// example: `a.fetch_or(val, order).map_addr(|a| a | val)`.
2268 | /// |
2269 | /// `fetch_or` takes an [`Ordering`] argument which describes the memory |
2270 | /// ordering of this operation. All ordering modes are possible. Note that |
2271 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
2272 | /// and using [`Release`] makes the load part [`Relaxed`]. |
2273 | /// |
2274 | /// **Note**: This method is only available on platforms that support atomic |
2275 | /// operations on [`AtomicPtr`]. |
2276 | /// |
2277 | /// This API and its claimed semantics are part of the Strict Provenance |
2278 | /// experiment, see the [module documentation for `ptr`][crate::ptr] for |
2279 | /// details. |
2280 | /// |
2281 | /// [`map_addr`]: pointer::map_addr |
2282 | /// |
2283 | /// # Examples |
2284 | /// |
2285 | /// ``` |
2286 | /// #![feature(strict_provenance_atomic_ptr)] |
2287 | /// use core::sync::atomic::{AtomicPtr, Ordering}; |
2288 | /// |
2289 | /// let pointer = &mut 3i64 as *mut i64; |
2290 | /// |
2291 | /// let atom = AtomicPtr::<i64>::new(pointer); |
2292 | /// // Tag the bottom bit of the pointer. |
2293 | /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0); |
2294 | /// // Extract and untag. |
2295 | /// let tagged = atom.load(Ordering::Relaxed); |
2296 | /// assert_eq!(tagged.addr() & 1, 1); |
2297 | /// assert_eq!(tagged.map_addr(|p| p & !1), pointer); |
2298 | /// ``` |
2299 | #[inline] |
2300 | #[cfg(target_has_atomic = "ptr")] |
2301 | #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")] |
2302 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2303 | pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T { |
2304 | // SAFETY: data races are prevented by atomic intrinsics. |
2305 | unsafe { atomic_or(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() } |
2306 | } |
2307 | |
2308 | /// Performs a bitwise "and" operation on the address of the current |
2309 | /// pointer, and the argument `val`, and stores a pointer with provenance of |
2310 | /// the current pointer and the resulting address. |
2311 | /// |
2312 | /// This is equivalent to using [`map_addr`] to atomically perform |
2313 | /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged |
2314 | /// pointer schemes to atomically unset tag bits. |
2315 | /// |
2316 | /// **Caveat**: This operation returns the previous value. To compute the |
2317 | /// stored value without losing provenance, you may use [`map_addr`]. For |
/// example: `a.fetch_and(val, order).map_addr(|a| a & val)`.
2319 | /// |
2320 | /// `fetch_and` takes an [`Ordering`] argument which describes the memory |
2321 | /// ordering of this operation. All ordering modes are possible. Note that |
2322 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
2323 | /// and using [`Release`] makes the load part [`Relaxed`]. |
2324 | /// |
2325 | /// **Note**: This method is only available on platforms that support atomic |
2326 | /// operations on [`AtomicPtr`]. |
2327 | /// |
2328 | /// This API and its claimed semantics are part of the Strict Provenance |
2329 | /// experiment, see the [module documentation for `ptr`][crate::ptr] for |
2330 | /// details. |
2331 | /// |
2332 | /// [`map_addr`]: pointer::map_addr |
2333 | /// |
2334 | /// # Examples |
2335 | /// |
2336 | /// ``` |
2337 | /// #![feature(strict_provenance_atomic_ptr)] |
2338 | /// use core::sync::atomic::{AtomicPtr, Ordering}; |
2339 | /// |
2340 | /// let pointer = &mut 3i64 as *mut i64; |
2341 | /// // A tagged pointer |
2342 | /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1)); |
2343 | /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1); |
2344 | /// // Untag, and extract the previously tagged pointer. |
2345 | /// let untagged = atom.fetch_and(!1, Ordering::Relaxed) |
2346 | /// .map_addr(|a| a & !1); |
2347 | /// assert_eq!(untagged, pointer); |
2348 | /// ``` |
2349 | #[inline] |
2350 | #[cfg(target_has_atomic = "ptr")] |
2351 | #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")] |
2352 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2353 | pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T { |
2354 | // SAFETY: data races are prevented by atomic intrinsics. |
2355 | unsafe { atomic_and(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() } |
2356 | } |
2357 | |
2358 | /// Performs a bitwise "xor" operation on the address of the current |
2359 | /// pointer, and the argument `val`, and stores a pointer with provenance of |
2360 | /// the current pointer and the resulting address. |
2361 | /// |
2362 | /// This is equivalent to using [`map_addr`] to atomically perform |
2363 | /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged |
2364 | /// pointer schemes to atomically toggle tag bits. |
2365 | /// |
2366 | /// **Caveat**: This operation returns the previous value. To compute the |
2367 | /// stored value without losing provenance, you may use [`map_addr`]. For |
/// example: `a.fetch_xor(val, order).map_addr(|a| a ^ val)`.
2369 | /// |
2370 | /// `fetch_xor` takes an [`Ordering`] argument which describes the memory |
2371 | /// ordering of this operation. All ordering modes are possible. Note that |
2372 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
2373 | /// and using [`Release`] makes the load part [`Relaxed`]. |
2374 | /// |
2375 | /// **Note**: This method is only available on platforms that support atomic |
2376 | /// operations on [`AtomicPtr`]. |
2377 | /// |
2378 | /// This API and its claimed semantics are part of the Strict Provenance |
2379 | /// experiment, see the [module documentation for `ptr`][crate::ptr] for |
2380 | /// details. |
2381 | /// |
2382 | /// [`map_addr`]: pointer::map_addr |
2383 | /// |
2384 | /// # Examples |
2385 | /// |
2386 | /// ``` |
2387 | /// #![feature(strict_provenance_atomic_ptr)] |
2388 | /// use core::sync::atomic::{AtomicPtr, Ordering}; |
2389 | /// |
2390 | /// let pointer = &mut 3i64 as *mut i64; |
2391 | /// let atom = AtomicPtr::<i64>::new(pointer); |
2392 | /// |
2393 | /// // Toggle a tag bit on the pointer. |
2394 | /// atom.fetch_xor(1, Ordering::Relaxed); |
2395 | /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1); |
2396 | /// ``` |
2397 | #[inline] |
2398 | #[cfg(target_has_atomic = "ptr")] |
2399 | #[unstable(feature = "strict_provenance_atomic_ptr", issue = "99108")] |
2400 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2401 | pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T { |
2402 | // SAFETY: data races are prevented by atomic intrinsics. |
2403 | unsafe { atomic_xor(self.p.get(), core::ptr::without_provenance_mut(val), order).cast() } |
2404 | } |
2405 | |
2406 | /// Returns a mutable pointer to the underlying pointer. |
2407 | /// |
2408 | /// Doing non-atomic reads and writes on the resulting pointer can be a data race. |
2409 | /// This method is mostly useful for FFI, where the function signature may use |
2410 | /// `*mut *mut T` instead of `&AtomicPtr<T>`. |
2411 | /// |
2412 | /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the |
2413 | /// atomic types work with interior mutability. All modifications of an atomic change the value |
2414 | /// through a shared reference, and can do so safely as long as they use atomic operations. Any |
2415 | /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same |
2416 | /// restriction: operations on it must be atomic. |
2417 | /// |
2418 | /// # Examples |
2419 | /// |
2420 | /// ```ignore (extern-declaration) |
2421 | /// use std::sync::atomic::AtomicPtr; |
2422 | /// |
/// extern "C" {
2424 | /// fn my_atomic_op(arg: *mut *mut u32); |
2425 | /// } |
2426 | /// |
2427 | /// let mut value = 17; |
2428 | /// let atomic = AtomicPtr::new(&mut value); |
2429 | /// |
2430 | /// // SAFETY: Safe as long as `my_atomic_op` is atomic. |
2431 | /// unsafe { |
2432 | /// my_atomic_op(atomic.as_ptr()); |
2433 | /// } |
2434 | /// ``` |
2435 | #[inline] |
2436 | #[stable(feature = "atomic_as_ptr", since = "1.70.0")] |
2437 | #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")] |
2438 | #[rustc_never_returns_null_ptr] |
2439 | pub const fn as_ptr(&self) -> *mut *mut T { |
2440 | self.p.get() |
2441 | } |
2442 | } |
2443 | |
2444 | #[cfg(target_has_atomic_load_store = "8")] |
2445 | #[stable(feature = "atomic_bool_from", since = "1.24.0")] |
2446 | impl From<bool> for AtomicBool { |
2447 | /// Converts a `bool` into an `AtomicBool`. |
2448 | /// |
2449 | /// # Examples |
2450 | /// |
2451 | /// ``` |
2452 | /// use std::sync::atomic::AtomicBool; |
2453 | /// let atomic_bool = AtomicBool::from(true); |
2454 | /// assert_eq!(format!("{atomic_bool:?}"), "true") |
2455 | /// ``` |
2456 | #[inline] |
2457 | fn from(b: bool) -> Self { |
2458 | Self::new(b) |
2459 | } |
2460 | } |
2461 | |
2462 | #[cfg(target_has_atomic_load_store = "ptr")] |
2463 | #[stable(feature = "atomic_from", since = "1.23.0")] |
2464 | impl<T> From<*mut T> for AtomicPtr<T> { |
2465 | /// Converts a `*mut T` into an `AtomicPtr<T>`. |
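    ///
    /// # Examples
    ///
    /// A minimal round-trip through the conversion:
    ///
    /// ```
    /// use std::sync::atomic::AtomicPtr;
    ///
    /// let mut value = 5;
    /// let ptr: *mut i32 = &mut value;
    /// let atomic_ptr = AtomicPtr::from(ptr);
    /// assert_eq!(atomic_ptr.into_inner(), ptr);
    /// ```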
2466 | #[inline] |
2467 | fn from(p: *mut T) -> Self { |
2468 | Self::new(p) |
2469 | } |
2470 | } |
2471 | |
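// Expands to the `yes = [...]` tokens when the given integer type is `u8` or
// `i8`, and to the `no = [...]` tokens otherwise. Used below to vary the
// generated documentation for the 8-bit atomics.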
2472 | #[allow(unused_macros)] // This macro ends up being unused on some architectures. |
2473 | macro_rules! if_8_bit { |
2474 | (u8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) }; |
2475 | (i8, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($yes)*)?) }; |
2476 | ($_:ident, $( yes = [$($yes:tt)*], )? $( no = [$($no:tt)*], )? ) => { concat!("", $($($no)*)?) }; |
2477 | } |
2478 | |
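// Generates one atomic integer type (e.g. `AtomicI8`) together with its full
// method surface. The arguments are, in order: the cfg gate for CAS-style
// operations, the cfg gate for equal alignment with the integer type, six
// stability attributes for the various method groups, two const-stability
// attributes (for `new` and `into_inner`), the diagnostic item, the name of
// the underlying integer type as a string (for docs), extra doc-example
// prelude text, the min/max intrinsic wrappers, the alignment, and finally
// the integer and atomic type identifiers.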
2479 | #[cfg(target_has_atomic_load_store)] |
2480 | macro_rules! atomic_int { |
2481 | ($cfg_cas:meta, |
2482 | $cfg_align:meta, |
2483 | $stable:meta, |
2484 | $stable_cxchg:meta, |
2485 | $stable_debug:meta, |
2486 | $stable_access:meta, |
2487 | $stable_from:meta, |
2488 | $stable_nand:meta, |
2489 | $const_stable_new:meta, |
2490 | $const_stable_into_inner:meta, |
2491 | $diagnostic_item:meta, |
2492 | $s_int_type:literal, |
2493 | $extra_feature:expr, |
2494 | $min_fn:ident, $max_fn:ident, |
2495 | $align:expr, |
2496 | $int_type:ident $atomic_type:ident) => { |
2497 | /// An integer type which can be safely shared between threads. |
2498 | /// |
2499 | /// This type has the same |
2500 | #[doc = if_8_bit!( |
2501 | $int_type, |
2502 | yes = ["size, alignment, and bit validity"], |
2503 | no = ["size and bit validity"], |
2504 | )] |
2505 | /// as the underlying integer type, [` |
2506 | #[doc = $s_int_type] |
2507 | /// `]. |
2508 | #[doc = if_8_bit! { |
2509 | $int_type, |
2510 | no = [ |
2511 | "However, the alignment of this type is always equal to its ", |
2512 | "size, even on targets where [`", $s_int_type, "`] has a ", |
2513 | "lesser alignment." |
2514 | ], |
2515 | }] |
2516 | /// |
2517 | /// For more about the differences between atomic types and |
2518 | /// non-atomic types as well as information about the portability of |
2519 | /// this type, please see the [module-level documentation]. |
2520 | /// |
2521 | /// **Note:** This type is only available on platforms that support |
2522 | /// atomic loads and stores of [` |
2523 | #[doc = $s_int_type] |
2524 | /// `]. |
2525 | /// |
2526 | /// [module-level documentation]: crate::sync::atomic |
2527 | #[$stable] |
2528 | #[$diagnostic_item] |
2529 | #[repr(C, align($align))] |
2530 | pub struct $atomic_type { |
2531 | v: UnsafeCell<$int_type>, |
2532 | } |
2533 | |
2534 | #[$stable] |
2535 | impl Default for $atomic_type { |
2536 | #[inline] |
2537 | fn default() -> Self { |
2538 | Self::new(Default::default()) |
2539 | } |
2540 | } |
2541 | |
2542 | #[$stable_from] |
2543 | impl From<$int_type> for $atomic_type { |
2544 | #[doc = concat!("Converts an `", stringify!($int_type), "` into an `", stringify!($atomic_type), "`.")] |
2545 | #[inline] |
2546 | fn from(v: $int_type) -> Self { Self::new(v) } |
2547 | } |
2548 | |
2549 | #[$stable_debug] |
2550 | impl fmt::Debug for $atomic_type { |
2551 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { |
2552 | fmt::Debug::fmt(&self.load(Ordering::Relaxed), f) |
2553 | } |
2554 | } |
2555 | |
2556 | // Send is implicitly implemented. |
2557 | #[$stable] |
2558 | unsafe impl Sync for $atomic_type {} |
2559 | |
2560 | impl $atomic_type { |
2561 | /// Creates a new atomic integer. |
2562 | /// |
2563 | /// # Examples |
2564 | /// |
2565 | /// ``` |
2566 | #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")] |
2567 | /// |
2568 | #[doc = concat!("let atomic_forty_two = ", stringify!($atomic_type), "::new(42);")] |
2569 | /// ``` |
2570 | #[inline] |
2571 | #[$stable] |
2572 | #[$const_stable_new] |
2573 | #[must_use] |
2574 | pub const fn new(v: $int_type) -> Self { |
Self { v: UnsafeCell::new(v) }
2576 | } |
2577 | |
2578 | /// Creates a new reference to an atomic integer from a pointer. |
2579 | /// |
2580 | /// # Examples |
2581 | /// |
2582 | /// ``` |
2583 | #[doc = concat!($extra_feature, "use std::sync::atomic::{self, ", stringify!($atomic_type), "};")] |
2584 | /// |
2585 | /// // Get a pointer to an allocated value |
2586 | #[doc = concat!("let ptr: *mut ", stringify!($int_type), " = Box::into_raw(Box::new(0));")] |
2587 | /// |
2588 | #[doc = concat!("assert!(ptr.cast::<", stringify!($atomic_type), ">().is_aligned());")] |
2589 | /// |
2590 | /// { |
2591 | /// // Create an atomic view of the allocated value |
2592 | // SAFETY: this is a doc comment, tidy, it can't hurt you (also guaranteed by the construction of `ptr` and the assert above) |
2593 | #[doc = concat!(" let atomic = unsafe {", stringify!($atomic_type), "::from_ptr(ptr) };")] |
2594 | /// |
2595 | /// // Use `atomic` for atomic operations, possibly share it with other threads |
2596 | /// atomic.store(1, atomic::Ordering::Relaxed); |
2597 | /// } |
2598 | /// |
2599 | /// // It's ok to non-atomically access the value behind `ptr`, |
2600 | /// // since the reference to the atomic ended its lifetime in the block above |
2601 | /// assert_eq!(unsafe { *ptr }, 1); |
2602 | /// |
2603 | /// // Deallocate the value |
2604 | /// unsafe { drop(Box::from_raw(ptr)) } |
2605 | /// ``` |
2606 | /// |
2607 | /// # Safety |
2608 | /// |
2609 | /// * `ptr` must be aligned to |
2610 | #[doc = concat!(" `align_of::<", stringify!($atomic_type), ">()`")] |
2611 | #[doc = if_8_bit!{ |
2612 | $int_type, |
2613 | yes = [ |
2614 | " (note that this is always true, since `align_of::<", |
2615 | stringify!($atomic_type), ">() == 1`)." |
2616 | ], |
2617 | no = [ |
2618 | " (note that on some platforms this can be bigger than `align_of::<", |
2619 | stringify!($int_type), ">()`)." |
2620 | ], |
2621 | }] |
2622 | /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. |
2623 | /// * You must adhere to the [Memory model for atomic accesses]. In particular, it is not |
2624 | /// allowed to mix atomic and non-atomic accesses, or atomic accesses of different sizes, |
2625 | /// without synchronization. |
2626 | /// |
2627 | /// [valid]: crate::ptr#safety |
2628 | /// [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses |
2629 | #[inline] |
2630 | #[stable(feature = "atomic_from_ptr", since = "1.75.0")] |
2631 | #[rustc_const_stable(feature = "const_atomic_from_ptr", since = "1.84.0")] |
2632 | pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a $atomic_type { |
2633 | // SAFETY: guaranteed by the caller |
2634 | unsafe { &*ptr.cast() } |
2635 | } |
2636 | |
2637 | |
2638 | /// Returns a mutable reference to the underlying integer. |
2639 | /// |
2640 | /// This is safe because the mutable reference guarantees that no other threads are |
2641 | /// concurrently accessing the atomic data. |
2642 | /// |
2643 | /// # Examples |
2644 | /// |
2645 | /// ``` |
2646 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2647 | /// |
2648 | #[doc = concat!("let mut some_var = ", stringify!($atomic_type), "::new(10);")] |
2649 | /// assert_eq!(*some_var.get_mut(), 10); |
2650 | /// *some_var.get_mut() = 5; |
2651 | /// assert_eq!(some_var.load(Ordering::SeqCst), 5); |
2652 | /// ``` |
2653 | #[inline] |
2654 | #[$stable_access] |
2655 | pub fn get_mut(&mut self) -> &mut $int_type { |
2656 | self.v.get_mut() |
2657 | } |
2658 | |
2659 | #[doc = concat!("Get atomic access to a `&mut ", stringify!($int_type), "`.")] |
2660 | /// |
2661 | #[doc = if_8_bit! { |
2662 | $int_type, |
2663 | no = [ |
2664 | "**Note:** This function is only available on targets where `", |
2665 | stringify!($atomic_type), "` has the same alignment as `", stringify!($int_type), "`." |
2666 | ], |
2667 | }] |
2668 | /// |
2669 | /// # Examples |
2670 | /// |
2671 | /// ``` |
2672 | /// #![feature(atomic_from_mut)] |
2673 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2674 | /// |
2675 | /// let mut some_int = 123; |
2676 | #[doc = concat!("let a = ", stringify!($atomic_type), "::from_mut(&mut some_int);")] |
2677 | /// a.store(100, Ordering::Relaxed); |
2678 | /// assert_eq!(some_int, 100); |
2679 | /// ``` |
2680 | /// |
2681 | #[inline] |
2682 | #[$cfg_align] |
2683 | #[unstable(feature = "atomic_from_mut", issue = "76314")] |
2684 | pub fn from_mut(v: &mut $int_type) -> &mut Self { |
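// Compile-time alignment check: the `[]` pattern only matches an array
// of length zero, so this line compiles exactly when
// `align_of::<Self>() == align_of::<$int_type>()`.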
2685 | let [] = [(); align_of::<Self>() - align_of::<$int_type>()]; |
2686 | // SAFETY: |
2687 | // - the mutable reference guarantees unique ownership. |
2688 | // - the alignment of `$int_type` and `Self` is the |
2689 | // same, as promised by $cfg_align and verified above. |
2690 | unsafe { &mut *(v as *mut $int_type as *mut Self) } |
2691 | } |
2692 | |
2693 | #[doc = concat!("Get non-atomic access to a `&mut [", stringify!($atomic_type), "]` slice")] |
2694 | /// |
2695 | /// This is safe because the mutable reference guarantees that no other threads are |
2696 | /// concurrently accessing the atomic data. |
2697 | /// |
2698 | /// # Examples |
2699 | /// |
2700 | /// ```ignore-wasm |
2701 | /// #![feature(atomic_from_mut)] |
2702 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2703 | /// |
2704 | #[doc = concat!("let mut some_ints = [const { ", stringify!($atomic_type), "::new(0) }; 10];")] |
2705 | /// |
2706 | #[doc = concat!("let view: &mut [", stringify!($int_type), "] = ", stringify!($atomic_type), "::get_mut_slice(&mut some_ints);")] |
2707 | /// assert_eq!(view, [0; 10]); |
2708 | /// view |
2709 | /// .iter_mut() |
2710 | /// .enumerate() |
2711 | /// .for_each(|(idx, int)| *int = idx as _); |
2712 | /// |
2713 | /// std::thread::scope(|s| { |
2714 | /// some_ints |
2715 | /// .iter() |
2716 | /// .enumerate() |
2717 | /// .for_each(|(idx, int)| { |
2718 | /// s.spawn(move || assert_eq!(int.load(Ordering::Relaxed), idx as _)); |
2719 | /// }) |
2720 | /// }); |
2721 | /// ``` |
2722 | #[inline] |
2723 | #[unstable(feature = "atomic_from_mut", issue = "76314")] |
2724 | pub fn get_mut_slice(this: &mut [Self]) -> &mut [$int_type] { |
2725 | // SAFETY: the mutable reference guarantees unique ownership. |
2726 | unsafe { &mut *(this as *mut [Self] as *mut [$int_type]) } |
2727 | } |
2728 | |
2729 | #[doc = concat!("Get atomic access to a `&mut [", stringify!($int_type), "]` slice.")] |
2730 | /// |
2731 | /// # Examples |
2732 | /// |
2733 | /// ```ignore-wasm |
2734 | /// #![feature(atomic_from_mut)] |
2735 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2736 | /// |
2737 | /// let mut some_ints = [0; 10]; |
2738 | #[doc = concat!("let a = &*", stringify!($atomic_type), "::from_mut_slice(&mut some_ints);")] |
2739 | /// std::thread::scope(|s| { |
2740 | /// for i in 0..a.len() { |
2741 | /// s.spawn(move || a[i].store(i as _, Ordering::Relaxed)); |
2742 | /// } |
2743 | /// }); |
2744 | /// for (i, n) in some_ints.into_iter().enumerate() { |
2745 | /// assert_eq!(i, n as usize); |
2746 | /// } |
2747 | /// ``` |
2748 | #[inline] |
2749 | #[$cfg_align] |
2750 | #[unstable(feature = "atomic_from_mut", issue = "76314")] |
2751 | pub fn from_mut_slice(v: &mut [$int_type]) -> &mut [Self] { |
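// Same compile-time alignment check as in `from_mut` above.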
2752 | let [] = [(); align_of::<Self>() - align_of::<$int_type>()]; |
2753 | // SAFETY: |
2754 | // - the mutable reference guarantees unique ownership. |
2755 | // - the alignment of `$int_type` and `Self` is the |
2756 | // same, as promised by $cfg_align and verified above. |
2757 | unsafe { &mut *(v as *mut [$int_type] as *mut [Self]) } |
2758 | } |
2759 | |
2760 | /// Consumes the atomic and returns the contained value. |
2761 | /// |
2762 | /// This is safe because passing `self` by value guarantees that no other threads are |
2763 | /// concurrently accessing the atomic data. |
2764 | /// |
2765 | /// # Examples |
2766 | /// |
2767 | /// ``` |
2768 | #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")] |
2769 | /// |
2770 | #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")] |
2771 | /// assert_eq!(some_var.into_inner(), 5); |
2772 | /// ``` |
2773 | #[inline] |
2774 | #[$stable_access] |
2775 | #[$const_stable_into_inner] |
2776 | pub const fn into_inner(self) -> $int_type { |
2777 | self.v.into_inner() |
2778 | } |
2779 | |
2780 | /// Loads a value from the atomic integer. |
2781 | /// |
2782 | /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation. |
2783 | /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`]. |
2784 | /// |
2785 | /// # Panics |
2786 | /// |
2787 | /// Panics if `order` is [`Release`] or [`AcqRel`]. |
2788 | /// |
2789 | /// # Examples |
2790 | /// |
2791 | /// ``` |
2792 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2793 | /// |
2794 | #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")] |
2795 | /// |
2796 | /// assert_eq!(some_var.load(Ordering::Relaxed), 5); |
2797 | /// ``` |
2798 | #[inline] |
2799 | #[$stable] |
2800 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2801 | pub fn load(&self, order: Ordering) -> $int_type { |
2802 | // SAFETY: data races are prevented by atomic intrinsics. |
2803 | unsafe { atomic_load(self.v.get(), order) } |
2804 | } |
2805 | |
2806 | /// Stores a value into the atomic integer. |
2807 | /// |
2808 | /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation. |
2809 | /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`]. |
2810 | /// |
2811 | /// # Panics |
2812 | /// |
2813 | /// Panics if `order` is [`Acquire`] or [`AcqRel`]. |
2814 | /// |
2815 | /// # Examples |
2816 | /// |
2817 | /// ``` |
2818 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2819 | /// |
2820 | #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")] |
2821 | /// |
2822 | /// some_var.store(10, Ordering::Relaxed); |
2823 | /// assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2824 | /// ``` |
2825 | #[inline] |
2826 | #[$stable] |
2827 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2828 | pub fn store(&self, val: $int_type, order: Ordering) { |
2829 | // SAFETY: data races are prevented by atomic intrinsics. |
2830 | unsafe { atomic_store(self.v.get(), val, order); } |
2831 | } |
2832 | |
2833 | /// Stores a value into the atomic integer, returning the previous value. |
2834 | /// |
2835 | /// `swap` takes an [`Ordering`] argument which describes the memory ordering |
2836 | /// of this operation. All ordering modes are possible. Note that using |
2837 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2838 | /// using [`Release`] makes the load part [`Relaxed`]. |
2839 | /// |
2840 | /// **Note**: This method is only available on platforms that support atomic operations on |
2841 | #[doc = concat!("[`", $s_int_type, "`].")] |
2842 | /// |
2843 | /// # Examples |
2844 | /// |
2845 | /// ``` |
2846 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2847 | /// |
2848 | #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")] |
2849 | /// |
2850 | /// assert_eq!(some_var.swap(10, Ordering::Relaxed), 5); |
2851 | /// ``` |
2852 | #[inline] |
2853 | #[$stable] |
2854 | #[$cfg_cas] |
2855 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2856 | pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type { |
2857 | // SAFETY: data races are prevented by atomic intrinsics. |
2858 | unsafe { atomic_swap(self.v.get(), val, order) } |
2859 | } |
2860 | |
2861 | /// Stores a value into the atomic integer if the current value is the same as |
2862 | /// the `current` value. |
2863 | /// |
2864 | /// The return value is always the previous value. If it is equal to `current`, then the |
2865 | /// value was updated. |
2866 | /// |
2867 | /// `compare_and_swap` also takes an [`Ordering`] argument which describes the memory |
2868 | /// ordering of this operation. Notice that even when using [`AcqRel`], the operation |
2869 | /// might fail and hence just perform an `Acquire` load, but not have `Release` semantics. |
2870 | /// Using [`Acquire`] makes the store part of this operation [`Relaxed`] if it |
2871 | /// happens, and using [`Release`] makes the load part [`Relaxed`]. |
2872 | /// |
2873 | /// **Note**: This method is only available on platforms that support atomic operations on |
2874 | #[doc = concat!("[`", $s_int_type, "`].")] |
2875 | /// |
2876 | /// # Migrating to `compare_exchange` and `compare_exchange_weak` |
2877 | /// |
2878 | /// `compare_and_swap` is equivalent to `compare_exchange` with the following mapping for |
2879 | /// memory orderings: |
2880 | /// |
2881 | /// Original | Success | Failure |
2882 | /// -------- | ------- | ------- |
2883 | /// Relaxed | Relaxed | Relaxed |
2884 | /// Acquire | Acquire | Acquire |
2885 | /// Release | Release | Relaxed |
2886 | /// AcqRel | AcqRel | Acquire |
2887 | /// SeqCst | SeqCst | SeqCst |
2888 | /// |
2889 | /// `compare_and_swap` and `compare_exchange` also differ in their return type. You can use |
2890 | /// `compare_exchange(...).unwrap_or_else(|x| x)` to recover the behavior of `compare_and_swap`, |
2891 | /// but in most cases it is more idiomatic to check whether the return value is `Ok` or `Err` |
2892 | /// rather than to infer success vs failure based on the value that was read. |
2893 | /// |
2894 | /// During migration, consider whether it makes sense to use `compare_exchange_weak` instead. |
2895 | /// `compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds, |
2896 | /// which allows the compiler to generate better assembly code when the compare and swap |
2897 | /// is used in a loop. |
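///
/// As an illustrative sketch of the migration (using the `Relaxed` row of
/// the table above; pick the row matching your original ordering):
///
/// ```
#[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")]
///
#[doc = concat!("let a = ", stringify!($atomic_type), "::new(5);")]
/// // Before: let prev = a.compare_and_swap(5, 10, Ordering::Relaxed);
/// let prev = a.compare_exchange(5, 10, Ordering::Relaxed, Ordering::Relaxed)
///     .unwrap_or_else(|x| x);
/// assert_eq!(prev, 5);
/// ```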
2898 | /// |
2899 | /// # Examples |
2900 | /// |
2901 | /// ``` |
2902 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2903 | /// |
2904 | #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")] |
2905 | /// |
2906 | /// assert_eq!(some_var.compare_and_swap(5, 10, Ordering::Relaxed), 5); |
2907 | /// assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2908 | /// |
2909 | /// assert_eq!(some_var.compare_and_swap(6, 12, Ordering::Relaxed), 10); |
2910 | /// assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2911 | /// ``` |
2912 | #[inline] |
2913 | #[$stable] |
2914 | #[deprecated( |
2915 | since = "1.50.0", |
2916 | note = "Use `compare_exchange` or `compare_exchange_weak` instead") |
2917 | ] |
2918 | #[$cfg_cas] |
2919 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2920 | pub fn compare_and_swap(&self, |
2921 | current: $int_type, |
2922 | new: $int_type, |
2923 | order: Ordering) -> $int_type { |
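// Delegate to `compare_exchange`, deriving the failure ordering from
// `order` per the migration table in the documentation above.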
2924 | match self.compare_exchange(current, |
2925 | new, |
2926 | order, |
2927 | strongest_failure_ordering(order)) { |
2928 | Ok(x) => x, |
2929 | Err(x) => x, |
2930 | } |
2931 | } |
2932 | |
2933 | /// Stores a value into the atomic integer if the current value is the same as |
2934 | /// the `current` value. |
2935 | /// |
2936 | /// The return value is a result indicating whether the new value was written and |
2937 | /// containing the previous value. On success this value is guaranteed to be equal to |
2938 | /// `current`. |
2939 | /// |
2940 | /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory |
2941 | /// ordering of this operation. `success` describes the required ordering for the |
2942 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
2943 | /// `failure` describes the required ordering for the load operation that takes place when |
2944 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
2945 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
2946 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
2947 | /// |
2948 | /// **Note**: This method is only available on platforms that support atomic operations on |
2949 | #[doc = concat!("[`", $s_int_type, "`].")] |
2950 | /// |
2951 | /// # Examples |
2952 | /// |
2953 | /// ``` |
2954 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
2955 | /// |
2956 | #[doc = concat!("let some_var = ", stringify!($atomic_type), "::new(5);")] |
2957 | /// |
2958 | /// assert_eq!(some_var.compare_exchange(5, 10, |
2959 | /// Ordering::Acquire, |
2960 | /// Ordering::Relaxed), |
2961 | /// Ok(5)); |
2962 | /// assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2963 | /// |
2964 | /// assert_eq!(some_var.compare_exchange(6, 12, |
2965 | /// Ordering::SeqCst, |
2966 | /// Ordering::Acquire), |
2967 | /// Err(10)); |
2968 | /// assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2969 | /// ``` |
2970 | #[inline] |
2971 | #[$stable_cxchg] |
2972 | #[$cfg_cas] |
2973 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2974 | pub fn compare_exchange(&self, |
2975 | current: $int_type, |
2976 | new: $int_type, |
2977 | success: Ordering, |
2978 | failure: Ordering) -> Result<$int_type, $int_type> { |
2979 | // SAFETY: data races are prevented by atomic intrinsics. |
2980 | unsafe { atomic_compare_exchange(self.v.get(), current, new, success, failure) } |
2981 | } |
2982 | |
2983 | /// Stores a value into the atomic integer if the current value is the same as |
2984 | /// the `current` value. |
2985 | /// |
2986 | #[doc = concat!("Unlike [`", stringify!($atomic_type), "::compare_exchange`],")] |
2987 | /// this function is allowed to spuriously fail even |
2988 | /// when the comparison succeeds, which can result in more efficient code on some |
2989 | /// platforms. The return value is a result indicating whether the new value was |
2990 | /// written and containing the previous value. |
2991 | /// |
2992 | /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory |
2993 | /// ordering of this operation. `success` describes the required ordering for the |
2994 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
2995 | /// `failure` describes the required ordering for the load operation that takes place when |
2996 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
2997 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
2998 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
2999 | /// |
3000 | /// **Note**: This method is only available on platforms that support atomic operations on |
3001 | #[doc = concat!("[`", $s_int_type, "`].")] |
3002 | /// |
3003 | /// # Examples |
3004 | /// |
3005 | /// ``` |
3006 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3007 | /// |
3008 | #[doc = concat!("let val = ", stringify!($atomic_type), "::new(4);")] |
3009 | /// |
3010 | /// let mut old = val.load(Ordering::Relaxed); |
3011 | /// loop { |
3012 | /// let new = old * 2; |
3013 | /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) { |
3014 | /// Ok(_) => break, |
3015 | /// Err(x) => old = x, |
3016 | /// } |
3017 | /// } |
3018 | /// ``` |
3019 | #[inline] |
3020 | #[$stable_cxchg] |
3021 | #[$cfg_cas] |
3022 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3023 | pub fn compare_exchange_weak(&self, |
3024 | current: $int_type, |
3025 | new: $int_type, |
3026 | success: Ordering, |
3027 | failure: Ordering) -> Result<$int_type, $int_type> { |
3028 | // SAFETY: data races are prevented by atomic intrinsics. |
3029 | unsafe { |
3030 | atomic_compare_exchange_weak(self.v.get(), current, new, success, failure) |
3031 | } |
3032 | } |
3033 | |
3034 | /// Adds to the current value, returning the previous value. |
3035 | /// |
3036 | /// This operation wraps around on overflow. |
3037 | /// |
3038 | /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering |
3039 | /// of this operation. All ordering modes are possible. Note that using |
3040 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3041 | /// using [`Release`] makes the load part [`Relaxed`]. |
3042 | /// |
3043 | /// **Note**: This method is only available on platforms that support atomic operations on |
3044 | #[doc = concat!("[`", $s_int_type, "`].")] |
3045 | /// |
3046 | /// # Examples |
3047 | /// |
3048 | /// ``` |
3049 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3050 | /// |
3051 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0);")] |
3052 | /// assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0); |
3053 | /// assert_eq!(foo.load(Ordering::SeqCst), 10); |
3054 | /// ``` |
3055 | #[inline] |
3056 | #[$stable] |
3057 | #[$cfg_cas] |
3058 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3059 | pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type { |
3060 | // SAFETY: data races are prevented by atomic intrinsics. |
3061 | unsafe { atomic_add(self.v.get(), val, order) } |
3062 | } |
3063 | |
3064 | /// Subtracts from the current value, returning the previous value. |
3065 | /// |
3066 | /// This operation wraps around on overflow. |
3067 | /// |
3068 | /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering |
3069 | /// of this operation. All ordering modes are possible. Note that using |
3070 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3071 | /// using [`Release`] makes the load part [`Relaxed`]. |
3072 | /// |
3073 | /// **Note**: This method is only available on platforms that support atomic operations on |
3074 | #[doc = concat!("[`", $s_int_type, "`].")] |
3075 | /// |
3076 | /// # Examples |
3077 | /// |
3078 | /// ``` |
3079 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3080 | /// |
3081 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(20);")] |
3082 | /// assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20); |
3083 | /// assert_eq!(foo.load(Ordering::SeqCst), 10); |
3084 | /// ``` |
3085 | #[inline] |
3086 | #[$stable] |
3087 | #[$cfg_cas] |
3088 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3089 | pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type { |
3090 | // SAFETY: data races are prevented by atomic intrinsics. |
3091 | unsafe { atomic_sub(self.v.get(), val, order) } |
3092 | } |
3093 | |
3094 | /// Bitwise "and" with the current value. |
3095 | /// |
3096 | /// Performs a bitwise "and" operation on the current value and the argument `val`, and |
3097 | /// sets the new value to the result. |
3098 | /// |
3099 | /// Returns the previous value. |
3100 | /// |
3101 | /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering |
3102 | /// of this operation. All ordering modes are possible. Note that using |
3103 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3104 | /// using [`Release`] makes the load part [`Relaxed`]. |
3105 | /// |
3106 | /// **Note**: This method is only available on platforms that support atomic operations on |
3107 | #[doc = concat!("[`", $s_int_type, "`].")] |
3108 | /// |
3109 | /// # Examples |
3110 | /// |
3111 | /// ``` |
3112 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3113 | /// |
3114 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")] |
3115 | /// assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101); |
3116 | /// assert_eq!(foo.load(Ordering::SeqCst), 0b100001); |
3117 | /// ``` |
3118 | #[inline] |
3119 | #[$stable] |
3120 | #[$cfg_cas] |
3121 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3122 | pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type { |
3123 | // SAFETY: data races are prevented by atomic intrinsics. |
3124 | unsafe { atomic_and(self.v.get(), val, order) } |
3125 | } |
3126 | |
3127 | /// Bitwise "nand" with the current value. |
3128 | /// |
3129 | /// Performs a bitwise "nand" operation on the current value and the argument `val`, and |
3130 | /// sets the new value to the result. |
3131 | /// |
3132 | /// Returns the previous value. |
3133 | /// |
3134 | /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering |
3135 | /// of this operation. All ordering modes are possible. Note that using |
3136 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3137 | /// using [`Release`] makes the load part [`Relaxed`]. |
3138 | /// |
3139 | /// **Note**: This method is only available on platforms that support atomic operations on |
3140 | #[doc = concat!("[`", $s_int_type, "`].")] |
3141 | /// |
3142 | /// # Examples |
3143 | /// |
3144 | /// ``` |
3145 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3146 | /// |
3147 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0x13);")] |
3148 | /// assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13); |
3149 | /// assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31)); |
3150 | /// ``` |
3151 | #[inline] |
3152 | #[$stable_nand] |
3153 | #[$cfg_cas] |
3154 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3155 | pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type { |
3156 | // SAFETY: data races are prevented by atomic intrinsics. |
3157 | unsafe { atomic_nand(self.v.get(), val, order) } |
3158 | } |
3159 | |
3160 | /// Bitwise "or" with the current value. |
3161 | /// |
3162 | /// Performs a bitwise "or" operation on the current value and the argument `val`, and |
3163 | /// sets the new value to the result. |
3164 | /// |
3165 | /// Returns the previous value. |
3166 | /// |
3167 | /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering |
3168 | /// of this operation. All ordering modes are possible. Note that using |
3169 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3170 | /// using [`Release`] makes the load part [`Relaxed`]. |
3171 | /// |
3172 | /// **Note**: This method is only available on platforms that support atomic operations on |
3173 | #[doc = concat!("[`", $s_int_type, "`].")] |
3174 | /// |
3175 | /// # Examples |
3176 | /// |
3177 | /// ``` |
3178 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3179 | /// |
3180 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")] |
3181 | /// assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101); |
3182 | /// assert_eq!(foo.load(Ordering::SeqCst), 0b111111); |
3183 | /// ``` |
3184 | #[inline] |
3185 | #[$stable] |
3186 | #[$cfg_cas] |
3187 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3188 | pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type { |
3189 | // SAFETY: data races are prevented by atomic intrinsics. |
3190 | unsafe { atomic_or(self.v.get(), val, order) } |
3191 | } |
3192 | |
3193 | /// Bitwise "xor" with the current value. |
3194 | /// |
3195 | /// Performs a bitwise "xor" operation on the current value and the argument `val`, and |
3196 | /// sets the new value to the result. |
3197 | /// |
3198 | /// Returns the previous value. |
3199 | /// |
3200 | /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering |
3201 | /// of this operation. All ordering modes are possible. Note that using |
3202 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3203 | /// using [`Release`] makes the load part [`Relaxed`]. |
3204 | /// |
3205 | /// **Note**: This method is only available on platforms that support atomic operations on |
3206 | #[doc = concat!("[`", $s_int_type, "`].")] |
3207 | /// |
3208 | /// # Examples |
3209 | /// |
3210 | /// ``` |
3211 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3212 | /// |
3213 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(0b101101);")] |
3214 | /// assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101); |
3215 | /// assert_eq!(foo.load(Ordering::SeqCst), 0b011110); |
3216 | /// ``` |
3217 | #[inline] |
3218 | #[$stable] |
3219 | #[$cfg_cas] |
3220 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3221 | pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type { |
3222 | // SAFETY: data races are prevented by atomic intrinsics. |
3223 | unsafe { atomic_xor(self.v.get(), val, order) } |
3224 | } |
3225 | |
3226 | /// Fetches the value, and applies a function to it that returns an optional |
3227 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else |
3228 | /// `Err(previous_value)`. |
3229 | /// |
3230 | /// Note: This may call the function multiple times if the value has been changed from other threads in |
3231 | /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied |
3232 | /// only once to the stored value. |
3233 | /// |
3234 | /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation. |
3235 | /// The first describes the required ordering for when the operation finally succeeds while the second |
3236 | /// describes the required ordering for loads. These correspond to the success and failure orderings of |
3237 | #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")] |
3238 | /// respectively. |
3239 | /// |
3240 | /// Using [`Acquire`] as success ordering makes the store part |
3241 | /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load |
3242 | /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
3243 | /// |
3244 | /// **Note**: This method is only available on platforms that support atomic operations on |
3245 | #[doc = concat!("[`", $s_int_type, "`].")] |
3246 | /// |
3247 | /// # Considerations |
3248 | /// |
3249 | /// This method is not magic; it is not provided by the hardware. |
3250 | /// It is implemented in terms of |
3251 | #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")] |
3252 | /// and suffers from the same drawbacks. |
3253 | /// In particular, this method will not circumvent the [ABA Problem]. |
3254 | /// |
3255 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
3256 | /// |
3257 | /// # Examples |
3258 | /// |
3259 | /// ```rust |
3260 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3261 | /// |
3262 | #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")] |
3263 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7)); |
3264 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7)); |
3265 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8)); |
3266 | /// assert_eq!(x.load(Ordering::SeqCst), 9); |
3267 | /// ``` |
3268 | #[inline] |
3269 | #[stable(feature = "no_more_cas", since = "1.45.0")] |
3270 | #[$cfg_cas] |
3271 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3272 | pub fn fetch_update<F>(&self, |
3273 | set_order: Ordering, |
3274 | fetch_order: Ordering, |
3275 | mut f: F) -> Result<$int_type, $int_type> |
3276 | where F: FnMut($int_type) -> Option<$int_type> { |
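// CAS loop: keep retrying as long as `f` returns `Some(_)`. A failed
// (possibly spurious) `compare_exchange_weak` yields the freshly observed
// value, which is fed back into `f` on the next iteration.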
3277 | let mut prev = self.load(fetch_order); |
3278 | while let Some(next) = f(prev) { |
3279 | match self.compare_exchange_weak(prev, next, set_order, fetch_order) { |
3280 | x @ Ok(_) => return x, |
Err(next_prev) => prev = next_prev,
3282 | } |
3283 | } |
3284 | Err(prev) |
3285 | } |
3286 | |
3287 | /// Fetches the value, and applies a function to it that returns an optional |
3288 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else |
3289 | /// `Err(previous_value)`. |
3290 | /// |
3291 | #[doc = concat!("See also: [`update`](`", stringify!($atomic_type), "::update`).")] |
3292 | /// |
3293 | /// Note: This may call the function multiple times if the value has been changed from other threads in |
3294 | /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied |
3295 | /// only once to the stored value. |
3296 | /// |
3297 | /// `try_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation. |
3298 | /// The first describes the required ordering for when the operation finally succeeds while the second |
3299 | /// describes the required ordering for loads. These correspond to the success and failure orderings of |
3300 | #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")] |
3301 | /// respectively. |
3302 | /// |
3303 | /// Using [`Acquire`] as success ordering makes the store part |
3304 | /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load |
3305 | /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
3306 | /// |
3307 | /// **Note**: This method is only available on platforms that support atomic operations on |
3308 | #[doc = concat!("[`", $s_int_type, "`].")] |
3309 | /// |
3310 | /// # Considerations |
3311 | /// |
3312 | /// This method is not magic; it is not provided by the hardware. |
3313 | /// It is implemented in terms of |
3314 | #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")] |
3315 | /// and suffers from the same drawbacks. |
3316 | /// In particular, this method will not circumvent the [ABA Problem]. |
3317 | /// |
3318 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
3319 | /// |
3320 | /// # Examples |
3321 | /// |
3322 | /// ```rust |
3323 | /// #![feature(atomic_try_update)] |
3324 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3325 | /// |
3326 | #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")] |
3327 | /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7)); |
3328 | /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7)); |
3329 | /// assert_eq!(x.try_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8)); |
3330 | /// assert_eq!(x.load(Ordering::SeqCst), 9); |
3331 | /// ``` |
3332 | #[inline] |
3333 | #[unstable(feature = "atomic_try_update", issue = "135894")] |
3334 | #[$cfg_cas] |
3335 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3336 | pub fn try_update( |
3337 | &self, |
3338 | set_order: Ordering, |
3339 | fetch_order: Ordering, |
3340 | f: impl FnMut($int_type) -> Option<$int_type>, |
3341 | ) -> Result<$int_type, $int_type> { |
3342 | // FIXME(atomic_try_update): this is currently an unstable alias to `fetch_update`; |
3343 | // when stabilizing, turn `fetch_update` into a deprecated alias to `try_update`. |
3344 | self.fetch_update(set_order, fetch_order, f) |
3345 | } |
3346 | |
/// Fetches the value, and applies a function to it that returns a new value.
3348 | /// The new value is stored and the old value is returned. |
3349 | /// |
3350 | #[doc = concat!("See also: [`try_update`](`", stringify!($atomic_type), "::try_update`).")] |
3351 | /// |
3352 | /// Note: This may call the function multiple times if the value has been changed from other threads in |
3353 | /// the meantime, but the function will have been applied only once to the stored value. |
3354 | /// |
3355 | /// `update` takes two [`Ordering`] arguments to describe the memory ordering of this operation. |
3356 | /// The first describes the required ordering for when the operation finally succeeds while the second |
3357 | /// describes the required ordering for loads. These correspond to the success and failure orderings of |
3358 | #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange`]")] |
3359 | /// respectively. |
3360 | /// |
3361 | /// Using [`Acquire`] as success ordering makes the store part |
3362 | /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load |
3363 | /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
3364 | /// |
3365 | /// **Note**: This method is only available on platforms that support atomic operations on |
3366 | #[doc = concat!("[`", $s_int_type, "`].")] |
3367 | /// |
3368 | /// # Considerations |
3369 | /// |
3370 | /// This method is not magic; it is not provided by the hardware. |
3371 | /// It is implemented in terms of |
3372 | #[doc = concat!("[`", stringify!($atomic_type), "::compare_exchange_weak`],")] |
3373 | /// and suffers from the same drawbacks. |
3374 | /// In particular, this method will not circumvent the [ABA Problem]. |
3375 | /// |
3376 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
3377 | /// |
3378 | /// # Examples |
3379 | /// |
3380 | /// ```rust |
3381 | /// #![feature(atomic_try_update)] |
3382 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3383 | /// |
3384 | #[doc = concat!("let x = ", stringify!($atomic_type), "::new(7);")] |
3385 | /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 7); |
3386 | /// assert_eq!(x.update(Ordering::SeqCst, Ordering::SeqCst, |x| x + 1), 8); |
3387 | /// assert_eq!(x.load(Ordering::SeqCst), 9); |
3388 | /// ``` |
3389 | #[inline] |
3390 | #[unstable(feature = "atomic_try_update", issue = "135894")] |
3391 | #[$cfg_cas] |
3392 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3393 | pub fn update( |
3394 | &self, |
3395 | set_order: Ordering, |
3396 | fetch_order: Ordering, |
3397 | mut f: impl FnMut($int_type) -> $int_type, |
3398 | ) -> $int_type { |
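// Same CAS loop as `fetch_update`, but `f` is infallible, so the loop
// only terminates once the exchange succeeds.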
3399 | let mut prev = self.load(fetch_order); |
3400 | loop { |
3401 | match self.compare_exchange_weak(prev, f(prev), set_order, fetch_order) { |
3402 | Ok(x) => break x, |
3403 | Err(next_prev) => prev = next_prev, |
3404 | } |
3405 | } |
3406 | } |
3407 | |
3408 | /// Maximum with the current value. |
3409 | /// |
3410 | /// Finds the maximum of the current value and the argument `val`, and |
3411 | /// sets the new value to the result. |
3412 | /// |
3413 | /// Returns the previous value. |
3414 | /// |
3415 | /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering |
3416 | /// of this operation. All ordering modes are possible. Note that using |
3417 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3418 | /// using [`Release`] makes the load part [`Relaxed`]. |
3419 | /// |
3420 | /// **Note**: This method is only available on platforms that support atomic operations on |
3421 | #[doc = concat!("[`", $s_int_type, "`].")] |
3422 | /// |
3423 | /// # Examples |
3424 | /// |
3425 | /// ``` |
3426 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3427 | /// |
3428 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")] |
3429 | /// assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23); |
3430 | /// assert_eq!(foo.load(Ordering::SeqCst), 42); |
3431 | /// ``` |
3432 | /// |
3433 | /// If you want to obtain the maximum value in one step, you can use the following: |
3434 | /// |
3435 | /// ``` |
3436 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3437 | /// |
3438 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")] |
3439 | /// let bar = 42; |
3440 | /// let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar); |
/// assert_eq!(max_foo, 42);
3442 | /// ``` |
3443 | #[inline] |
3444 | #[stable(feature = "atomic_min_max", since = "1.45.0")] |
3445 | #[$cfg_cas] |
3446 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3447 | pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type { |
3448 | // SAFETY: data races are prevented by atomic intrinsics. |
3449 | unsafe { $max_fn(self.v.get(), val, order) } |
3450 | } |
3451 | |
3452 | /// Minimum with the current value. |
3453 | /// |
3454 | /// Finds the minimum of the current value and the argument `val`, and |
3455 | /// sets the new value to the result. |
3456 | /// |
3457 | /// Returns the previous value. |
3458 | /// |
3459 | /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering |
3460 | /// of this operation. All ordering modes are possible. Note that using |
3461 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3462 | /// using [`Release`] makes the load part [`Relaxed`]. |
3463 | /// |
3464 | /// **Note**: This method is only available on platforms that support atomic operations on |
3465 | #[doc = concat!("[`", $s_int_type, "`].")] |
3466 | /// |
3467 | /// # Examples |
3468 | /// |
3469 | /// ``` |
3470 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3471 | /// |
3472 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")] |
3473 | /// assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23); |
3474 | /// assert_eq!(foo.load(Ordering::Relaxed), 23); |
3475 | /// assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23); |
3476 | /// assert_eq!(foo.load(Ordering::Relaxed), 22); |
3477 | /// ``` |
3478 | /// |
3479 | /// If you want to obtain the minimum value in one step, you can use the following: |
3480 | /// |
3481 | /// ``` |
3482 | #[doc = concat!($extra_feature, "use std::sync::atomic::{", stringify!($atomic_type), ", Ordering};")] |
3483 | /// |
3484 | #[doc = concat!("let foo = ", stringify!($atomic_type), "::new(23);")] |
3485 | /// let bar = 12; |
3486 | /// let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar); |
3487 | /// assert_eq!(min_foo, 12); |
3488 | /// ``` |
3489 | #[inline] |
3490 | #[stable(feature = "atomic_min_max", since = "1.45.0")] |
3491 | #[$cfg_cas] |
3492 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3493 | pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type { |
3494 | // SAFETY: data races are prevented by atomic intrinsics. |
3495 | unsafe { $min_fn(self.v.get(), val, order) } |
3496 | } |
3497 | |
3498 | /// Returns a mutable pointer to the underlying integer. |
3499 | /// |
3500 | /// Doing non-atomic reads and writes on the resulting integer can be a data race. |
3501 | /// This method is mostly useful for FFI, where the function signature may use |
3502 | #[doc = concat!("`*mut ", stringify!($int_type), "` instead of `&", stringify!($atomic_type), "`.")] |
3503 | /// |
3504 | /// Returning an `*mut` pointer from a shared reference to this atomic is safe because the |
3505 | /// atomic types work with interior mutability. All modifications of an atomic change the value |
3506 | /// through a shared reference, and can do so safely as long as they use atomic operations. Any |
3507 | /// use of the returned raw pointer requires an `unsafe` block and still has to uphold the same |
3508 | /// restriction: operations on it must be atomic. |
3509 | /// |
3510 | /// # Examples |
3511 | /// |
3512 | /// ```ignore (extern-declaration) |
3513 | /// # fn main() { |
3514 | #[doc = concat!($extra_feature, "use std::sync::atomic::", stringify!($atomic_type), ";")] |
3515 | /// |
3516 | /// extern "C" { |
3517 | #[doc = concat!(" fn my_atomic_op(arg: *mut ", stringify!($int_type), ");")] |
3518 | /// } |
3519 | /// |
3520 | #[doc = concat!("let atomic = ", stringify!($atomic_type), "::new(1);")] |
3521 | /// |
3522 | /// // SAFETY: Safe as long as `my_atomic_op` is atomic. |
3523 | /// unsafe { |
3524 | /// my_atomic_op(atomic.as_ptr()); |
3525 | /// } |
3526 | /// # } |
3527 | /// ``` |
3528 | #[inline] |
3529 | #[stable(feature = "atomic_as_ptr", since = "1.70.0")] |
3530 | #[rustc_const_stable(feature = "atomic_as_ptr", since = "1.70.0")] |
3531 | #[rustc_never_returns_null_ptr] |
3532 | pub const fn as_ptr(&self) -> *mut $int_type { |
3533 | self.v.get() |
3534 | } |
3535 | } |
3536 | } |
3537 | } |
3538 | |
3539 | #[cfg(target_has_atomic_load_store = "8")] |
3540 | atomic_int! { |
3541 | cfg(target_has_atomic = "8"), |
3542 | cfg(target_has_atomic_equal_alignment = "8"), |
3543 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3544 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3545 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3546 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3547 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3548 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3549 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3550 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
rustc_diagnostic_item = "AtomicI8",
3552 | "i8", |
3553 | "", |
3554 | atomic_min, atomic_max, |
3555 | 1, |
3556 | i8 AtomicI8 |
3557 | } |
3558 | #[cfg(target_has_atomic_load_store = "8")] |
3559 | atomic_int! { |
3560 | cfg(target_has_atomic = "8"), |
3561 | cfg(target_has_atomic_equal_alignment = "8"), |
3562 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3563 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3564 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3565 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3566 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3567 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3568 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3569 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
rustc_diagnostic_item = "AtomicU8",
3571 | "u8", |
3572 | "", |
3573 | atomic_umin, atomic_umax, |
3574 | 1, |
3575 | u8 AtomicU8 |
3576 | } |
3577 | #[cfg(target_has_atomic_load_store = "16")] |
3578 | atomic_int! { |
3579 | cfg(target_has_atomic = "16"), |
3580 | cfg(target_has_atomic_equal_alignment = "16"), |
3581 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3582 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3583 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3584 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3585 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3586 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3587 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3588 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
rustc_diagnostic_item = "AtomicI16",
3590 | "i16", |
3591 | "", |
3592 | atomic_min, atomic_max, |
3593 | 2, |
3594 | i16 AtomicI16 |
3595 | } |
3596 | #[cfg(target_has_atomic_load_store = "16")] |
3597 | atomic_int! { |
3598 | cfg(target_has_atomic = "16"), |
3599 | cfg(target_has_atomic_equal_alignment = "16"), |
3600 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3601 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3602 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3603 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3604 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3605 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3606 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3607 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
rustc_diagnostic_item = "AtomicU16",
3609 | "u16", |
3610 | "", |
3611 | atomic_umin, atomic_umax, |
3612 | 2, |
3613 | u16 AtomicU16 |
3614 | } |
3615 | #[cfg(target_has_atomic_load_store = "32")] |
3616 | atomic_int! { |
3617 | cfg(target_has_atomic = "32"), |
3618 | cfg(target_has_atomic_equal_alignment = "32"), |
3619 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3620 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3621 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3622 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3623 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3624 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3625 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3626 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
rustc_diagnostic_item = "AtomicI32",
3628 | "i32", |
3629 | "", |
3630 | atomic_min, atomic_max, |
3631 | 4, |
3632 | i32 AtomicI32 |
3633 | } |
3634 | #[cfg(target_has_atomic_load_store = "32")] |
3635 | atomic_int! { |
3636 | cfg(target_has_atomic = "32"), |
3637 | cfg(target_has_atomic_equal_alignment = "32"), |
3638 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3639 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3640 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3641 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3642 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3643 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3644 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3645 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
rustc_diagnostic_item = "AtomicU32",
3647 | "u32", |
3648 | "", |
3649 | atomic_umin, atomic_umax, |
3650 | 4, |
3651 | u32 AtomicU32 |
3652 | } |
3653 | #[cfg(target_has_atomic_load_store = "64")] |
3654 | atomic_int! { |
3655 | cfg(target_has_atomic = "64"), |
3656 | cfg(target_has_atomic_equal_alignment = "64"), |
3657 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3658 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3659 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3660 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3661 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3662 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3663 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3664 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
rustc_diagnostic_item = "AtomicI64",
3666 | "i64", |
3667 | "", |
3668 | atomic_min, atomic_max, |
3669 | 8, |
3670 | i64 AtomicI64 |
3671 | } |
3672 | #[cfg(target_has_atomic_load_store = "64")] |
3673 | atomic_int! { |
3674 | cfg(target_has_atomic = "64"), |
3675 | cfg(target_has_atomic_equal_alignment = "64"), |
3676 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3677 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3678 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3679 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3680 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3681 | stable(feature = "integer_atomics_stable", since = "1.34.0"), |
3682 | rustc_const_stable(feature = "const_integer_atomics", since = "1.34.0"), |
3683 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
3684 | rustc_diagnostic_item= "AtomicU64", |
3685 | "u64", |
3686 | "", |
3687 | atomic_umin, atomic_umax, |
3688 | 8, |
3689 | u64 AtomicU64 |
3690 | } |
3691 | #[cfg(target_has_atomic_load_store = "128")] |
3692 | atomic_int! { |
3693 | cfg(target_has_atomic = "128"), |
3694 | cfg(target_has_atomic_equal_alignment = "128"), |
3695 | unstable(feature = "integer_atomics", issue = "99069"), |
3696 | unstable(feature = "integer_atomics", issue = "99069"), |
3697 | unstable(feature = "integer_atomics", issue = "99069"), |
3698 | unstable(feature = "integer_atomics", issue = "99069"), |
3699 | unstable(feature = "integer_atomics", issue = "99069"), |
3700 | unstable(feature = "integer_atomics", issue = "99069"), |
3701 | rustc_const_unstable(feature = "integer_atomics", issue = "99069"), |
3702 | rustc_const_unstable(feature = "integer_atomics", issue = "99069"), |
3703 | rustc_diagnostic_item= "AtomicI128", |
3704 | "i128", |
3705 | "#![feature(integer_atomics)]\n\n ", |
3706 | atomic_min, atomic_max, |
3707 | 16, |
3708 | i128 AtomicI128 |
3709 | } |
3710 | #[cfg(target_has_atomic_load_store = "128")] |
3711 | atomic_int! { |
3712 | cfg(target_has_atomic = "128"), |
3713 | cfg(target_has_atomic_equal_alignment = "128"), |
3714 | unstable(feature = "integer_atomics", issue = "99069"), |
3715 | unstable(feature = "integer_atomics", issue = "99069"), |
3716 | unstable(feature = "integer_atomics", issue = "99069"), |
3717 | unstable(feature = "integer_atomics", issue = "99069"), |
3718 | unstable(feature = "integer_atomics", issue = "99069"), |
3719 | unstable(feature = "integer_atomics", issue = "99069"), |
3720 | rustc_const_unstable(feature = "integer_atomics", issue = "99069"), |
3721 | rustc_const_unstable(feature = "integer_atomics", issue = "99069"), |
3722 | rustc_diagnostic_item= "AtomicU128", |
3723 | "u128", |
3724 | "#![feature(integer_atomics)]\n\n ", |
3725 | atomic_umin, atomic_umax, |
3726 | 16, |
3727 | u128 AtomicU128 |
3728 | } |
3729 | |
3730 | #[cfg(target_has_atomic_load_store = "ptr")] |
3731 | macro_rules! atomic_int_ptr_sized { |
3732 | ( $($target_pointer_width:literal $align:literal)* ) => { $( |
3733 | #[cfg(target_pointer_width = $target_pointer_width)] |
3734 | atomic_int! { |
3735 | cfg(target_has_atomic = "ptr"), |
3736 | cfg(target_has_atomic_equal_alignment = "ptr"), |
3737 | stable(feature = "rust1", since = "1.0.0"), |
3738 | stable(feature = "extended_compare_and_swap", since = "1.10.0"), |
3739 | stable(feature = "atomic_debug", since = "1.3.0"), |
3740 | stable(feature = "atomic_access", since = "1.15.0"), |
3741 | stable(feature = "atomic_from", since = "1.23.0"), |
3742 | stable(feature = "atomic_nand", since = "1.27.0"), |
3743 | rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"), |
3744 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
3745 | rustc_diagnostic_item = "AtomicIsize", |
3746 | "isize", |
3747 | "", |
3748 | atomic_min, atomic_max, |
3749 | $align, |
3750 | isize AtomicIsize |
3751 | } |
3752 | #[cfg(target_pointer_width = $target_pointer_width)] |
3753 | atomic_int! { |
3754 | cfg(target_has_atomic = "ptr"), |
3755 | cfg(target_has_atomic_equal_alignment = "ptr"), |
3756 | stable(feature = "rust1", since = "1.0.0"), |
3757 | stable(feature = "extended_compare_and_swap", since = "1.10.0"), |
3758 | stable(feature = "atomic_debug", since = "1.3.0"), |
3759 | stable(feature = "atomic_access", since = "1.15.0"), |
3760 | stable(feature = "atomic_from", since = "1.23.0"), |
3761 | stable(feature = "atomic_nand", since = "1.27.0"), |
3762 | rustc_const_stable(feature = "const_ptr_sized_atomics", since = "1.24.0"), |
3763 | rustc_const_stable(feature = "const_atomic_into_inner", since = "1.79.0"), |
3764 | rustc_diagnostic_item = "AtomicUsize", |
3765 | "usize", |
3766 | "", |
3767 | atomic_umin, atomic_umax, |
3768 | $align, |
3769 | usize AtomicUsize |
3770 | } |
3771 | |
3772 | /// An [`AtomicIsize`] initialized to `0`. |
3773 | #[cfg(target_pointer_width = $target_pointer_width)] |
3774 | #[stable(feature = "rust1", since = "1.0.0")] |
3775 | #[deprecated( |
3776 | since = "1.34.0", |
3777 | note = "the `new` function is now preferred", |
3778 | suggestion = "AtomicIsize::new(0)", |
3779 | )] |
3780 | pub const ATOMIC_ISIZE_INIT: AtomicIsize = AtomicIsize::new(0); |
3781 | |
3782 | /// An [`AtomicUsize`] initialized to `0`. |
3783 | #[cfg(target_pointer_width = $target_pointer_width)] |
3784 | #[stable(feature = "rust1", since = "1.0.0")] |
3785 | #[deprecated( |
3786 | since = "1.34.0", |
3787 | note = "the `new` function is now preferred", |
3788 | suggestion = "AtomicUsize::new(0)", |
3789 | )] |
3790 | pub const ATOMIC_USIZE_INIT: AtomicUsize = AtomicUsize::new(0); |
3791 | )* }; |
3792 | } |
3793 | |
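// Each pair passed below is `(target_pointer_width, alignment in bytes)`, so
// e.g. on a 64-bit target the pointer-sized atomics are 8-byte aligned.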
#[cfg(target_has_atomic_load_store = "ptr")]
atomic_int_ptr_sized! {
    "16" 2
    "32" 4
    "64" 8
}

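/// Maps a `compare_exchange` success ordering to the strongest ordering that is
/// valid on failure: the acquire half (`Acquire`/`SeqCst`) is kept, while any
/// release component is dropped, since a failed operation performs no store.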
#[inline]
#[cfg(target_has_atomic)]
fn strongest_failure_ordering(order: Ordering) -> Ordering {
    match order {
        Release => Relaxed,
        Relaxed => Relaxed,
        SeqCst => SeqCst,
        Acquire => Acquire,
        AcqRel => Acquire,
    }
}

#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_store<T: Copy>(dst: *mut T, val: T, order: Ordering) {
    // SAFETY: the caller must uphold the safety contract for `atomic_store`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_store::<T, { AO::Relaxed }>(dst, val),
            Release => intrinsics::atomic_store::<T, { AO::Release }>(dst, val),
            SeqCst => intrinsics::atomic_store::<T, { AO::SeqCst }>(dst, val),
            Acquire => panic!("there is no such thing as an acquire store"),
            AcqRel => panic!("there is no such thing as an acquire-release store"),
        }
    }
}

#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_load<T: Copy>(dst: *const T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_load`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_load::<T, { AO::Relaxed }>(dst),
            Acquire => intrinsics::atomic_load::<T, { AO::Acquire }>(dst),
            SeqCst => intrinsics::atomic_load::<T, { AO::SeqCst }>(dst),
            Release => panic!("there is no such thing as a release load"),
            AcqRel => panic!("there is no such thing as an acquire-release load"),
        }
    }
}

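/// Returns the previous value.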
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_swap<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_swap`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xchg::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_xchg::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xchg::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xchg::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_xchg::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_add).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_add`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xadd::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_xadd::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xadd::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xadd::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_xadd::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Returns the previous value (like __sync_fetch_and_sub).
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_sub`.
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_xsub::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_xsub::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xsub::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xsub::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_xsub::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Publicly exposed for stdarch; nobody else should use this.
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
#[unstable(feature = "core_intrinsics", issue = "none")]
#[doc(hidden)]
pub unsafe fn atomic_compare_exchange<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
            }
            (Relaxed, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
            }
            (Relaxed, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
            }
            (Acquire, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
            }
            (Acquire, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
            }
            (Acquire, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
            }
            (Release, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
            }
            (Release, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
            }
            (Release, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
            }
            (AcqRel, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
            }
            (AcqRel, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
            }
            (AcqRel, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
            }
            (SeqCst, Relaxed) => {
                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
            }
            (SeqCst, Acquire) => {
                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
            }
            (SeqCst, SeqCst) => {
                intrinsics::atomic_cxchg::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
            }
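            // A failed compare-exchange performs no store, so the failure
            // ordering cannot include a release component.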
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_compare_exchange_weak<T: Copy>(
    dst: *mut T,
    old: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T> {
    // SAFETY: the caller must uphold the safety contract for `atomic_compare_exchange_weak`.
    let (val, ok) = unsafe {
        match (success, failure) {
            (Relaxed, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Relaxed }>(dst, old, new)
            }
            (Relaxed, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::Acquire }>(dst, old, new)
            }
            (Relaxed, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Relaxed }, { AO::SeqCst }>(dst, old, new)
            }
            (Acquire, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Relaxed }>(dst, old, new)
            }
            (Acquire, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::Acquire }>(dst, old, new)
            }
            (Acquire, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Acquire }, { AO::SeqCst }>(dst, old, new)
            }
            (Release, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Relaxed }>(dst, old, new)
            }
            (Release, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::Acquire }>(dst, old, new)
            }
            (Release, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::Release }, { AO::SeqCst }>(dst, old, new)
            }
            (AcqRel, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Relaxed }>(dst, old, new)
            }
            (AcqRel, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::Acquire }>(dst, old, new)
            }
            (AcqRel, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::AcqRel }, { AO::SeqCst }>(dst, old, new)
            }
            (SeqCst, Relaxed) => {
                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Relaxed }>(dst, old, new)
            }
            (SeqCst, Acquire) => {
                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::Acquire }>(dst, old, new)
            }
            (SeqCst, SeqCst) => {
                intrinsics::atomic_cxchgweak::<T, { AO::SeqCst }, { AO::SeqCst }>(dst, old, new)
            }
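            // As above: a failed exchange performs no store, so release
            // failure orderings are rejected.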
            (_, AcqRel) => panic!("there is no such thing as an acquire-release failure ordering"),
            (_, Release) => panic!("there is no such thing as a release failure ordering"),
        }
    };
    if ok { Ok(val) } else { Err(val) }
}

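/// Returns the previous value (like __sync_fetch_and_and).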
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_and<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_and`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_and::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_and::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_and::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_and::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_and::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

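/// Returns the previous value (like __sync_fetch_and_nand).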
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_nand<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_nand`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_nand::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_nand::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_nand::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_nand::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_nand::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

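/// Returns the previous value (like __sync_fetch_and_or).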
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_or<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_or`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_or::<T, { AO::SeqCst }>(dst, val),
            Acquire => intrinsics::atomic_or::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_or::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_or::<T, { AO::AcqRel }>(dst, val),
            Relaxed => intrinsics::atomic_or::<T, { AO::Relaxed }>(dst, val),
        }
    }
}

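/// Returns the previous value (like __sync_fetch_and_xor).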
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_xor<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_xor`
    unsafe {
        match order {
            SeqCst => intrinsics::atomic_xor::<T, { AO::SeqCst }>(dst, val),
            Acquire => intrinsics::atomic_xor::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_xor::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_xor::<T, { AO::AcqRel }>(dst, val),
            Relaxed => intrinsics::atomic_xor::<T, { AO::Relaxed }>(dst, val),
        }
    }
}

/// Updates `*dst` to the max value of `val` and the old value (signed comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_max<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_max`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_max::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_max::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_max::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_max::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_max::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Updates `*dst` to the min value of `val` and the old value (signed comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_min<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_min`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_min::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_min::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_min::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_min::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_min::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Updates `*dst` to the max value of `val` and the old value (unsigned comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umax<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umax`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umax::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_umax::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_umax::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_umax::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_umax::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// Updates `*dst` to the min value of `val` and the old value (unsigned comparison)
#[inline]
#[cfg(target_has_atomic)]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
unsafe fn atomic_umin<T: Copy>(dst: *mut T, val: T, order: Ordering) -> T {
    // SAFETY: the caller must uphold the safety contract for `atomic_umin`
    unsafe {
        match order {
            Relaxed => intrinsics::atomic_umin::<T, { AO::Relaxed }>(dst, val),
            Acquire => intrinsics::atomic_umin::<T, { AO::Acquire }>(dst, val),
            Release => intrinsics::atomic_umin::<T, { AO::Release }>(dst, val),
            AcqRel => intrinsics::atomic_umin::<T, { AO::AcqRel }>(dst, val),
            SeqCst => intrinsics::atomic_umin::<T, { AO::SeqCst }>(dst, val),
        }
    }
}

/// An atomic fence.
///
/// Fences create synchronization between themselves and atomic operations or fences in other
/// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of
/// memory operations around it.
///
/// A fence 'A' which has (at least) [`Release`] ordering semantics, synchronizes
/// with a fence 'B' with (at least) [`Acquire`] semantics, if and only if there
/// exist operations X and Y, both operating on some atomic object 'm' such
/// that A is sequenced before X, Y is sequenced before B and Y observes
/// the change to m. This provides a happens-before dependence between A and B.
///
/// ```text
/// Thread 1                                          Thread 2
///
/// fence(Release);      A --------------
/// m.store(3, Relaxed); X ---------    |
///                                |    |
///                                |    |
///                                -------------> Y  if m.load(Relaxed) == 3 {
///                                     |-------> B      fence(Acquire);
///                                                      ...
///                                                  }
/// ```
///
/// Note that in the example above, it is crucial that the accesses to `m` are atomic. Fences cannot
/// be used to establish synchronization among non-atomic accesses in different threads. However,
/// thanks to the happens-before relationship between A and B, any non-atomic accesses that
/// happen-before A are now also properly synchronized with any non-atomic accesses that
/// happen-after B.
///
/// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize
/// with a fence.
///
/// A fence which has [`SeqCst`] ordering, in addition to having both [`Acquire`]
/// and [`Release`] semantics, participates in the global program order of the
/// other [`SeqCst`] operations and/or fences.
///
/// Accepts [`Acquire`], [`Release`], [`AcqRel`] and [`SeqCst`] orderings.
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::fence;
/// use std::sync::atomic::Ordering;
///
/// // A mutual exclusion primitive based on a spinlock.
/// pub struct Mutex {
///     flag: AtomicBool,
/// }
///
/// impl Mutex {
///     pub fn new() -> Mutex {
///         Mutex {
///             flag: AtomicBool::new(false),
///         }
///     }
///
///     pub fn lock(&self) {
///         // Wait until the old value is `false`.
///         while self
///             .flag
///             .compare_exchange_weak(false, true, Ordering::Relaxed, Ordering::Relaxed)
///             .is_err()
///         {}
///         // This fence synchronizes-with the store in `unlock`.
///         fence(Ordering::Acquire);
///     }
///
///     pub fn unlock(&self) {
///         self.flag.store(false, Ordering::Release);
///     }
/// }
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_diagnostic_item = "fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_fence::<{ AO::Acquire }>(),
            Release => intrinsics::atomic_fence::<{ AO::Release }>(),
            AcqRel => intrinsics::atomic_fence::<{ AO::AcqRel }>(),
            SeqCst => intrinsics::atomic_fence::<{ AO::SeqCst }>(),
            Relaxed => panic!("there is no such thing as a relaxed fence"),
        }
    }
}

/// A "compiler-only" atomic fence.
///
/// Like [`fence`], this function establishes synchronization with other atomic operations and
/// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with
/// operations *in the same thread*. This may at first sound rather useless, since code within a
/// thread is typically already totally ordered and does not need any further synchronization.
/// However, there are cases where code can run on the same thread without being ordered:
/// - The most common case is that of a *signal handler*: a signal handler runs in the same thread
///   as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence`
///   can be used to establish synchronization between a thread and its signal handler, the same way
///   that `fence` can be used to establish synchronization across threads.
/// - Similar situations can arise in embedded programming with interrupt handlers, or in custom
///   implementations of preemptive green threads. In general, `compiler_fence` can establish
///   synchronization with code that is guaranteed to run on the same hardware CPU.
///
/// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like
/// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is
/// not possible to perform synchronization entirely with fences and non-atomic operations.
///
/// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering
/// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and
/// C++.
///
/// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence
///
/// # Panics
///
/// Panics if `order` is [`Relaxed`].
///
/// # Examples
///
/// Without the two `compiler_fence` calls, the read of `IMPORTANT_VARIABLE` in `signal_handler`
/// is *undefined behavior* due to a data race, despite everything happening in a single thread.
/// This is because the signal handler is considered to run concurrently with its associated
/// thread, and explicit synchronization is required to pass data between a thread and its
/// signal handler. The code below uses two `compiler_fence` calls to establish the usual
/// release-acquire synchronization pattern (see [`fence`] for an image).
///
/// ```
/// use std::sync::atomic::AtomicBool;
/// use std::sync::atomic::Ordering;
/// use std::sync::atomic::compiler_fence;
///
/// static mut IMPORTANT_VARIABLE: usize = 0;
/// static IS_READY: AtomicBool = AtomicBool::new(false);
///
/// fn main() {
///     unsafe { IMPORTANT_VARIABLE = 42 };
///     // Marks earlier writes as being released with future relaxed stores.
///     compiler_fence(Ordering::Release);
///     IS_READY.store(true, Ordering::Relaxed);
/// }
///
/// fn signal_handler() {
///     if IS_READY.load(Ordering::Relaxed) {
///         // Acquires writes that were released with relaxed stores that we read from.
///         compiler_fence(Ordering::Acquire);
///         assert_eq!(unsafe { IMPORTANT_VARIABLE }, 42);
///     }
/// }
/// ```
#[inline]
#[stable(feature = "compiler_fences", since = "1.21.0")]
#[rustc_diagnostic_item = "compiler_fence"]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
pub fn compiler_fence(order: Ordering) {
    // SAFETY: using an atomic fence is safe.
    unsafe {
        match order {
            Acquire => intrinsics::atomic_singlethreadfence::<{ AO::Acquire }>(),
            Release => intrinsics::atomic_singlethreadfence::<{ AO::Release }>(),
            AcqRel => intrinsics::atomic_singlethreadfence::<{ AO::AcqRel }>(),
            SeqCst => intrinsics::atomic_singlethreadfence::<{ AO::SeqCst }>(),
            Relaxed => panic!("there is no such thing as a relaxed fence"),
        }
    }
}

#[cfg(target_has_atomic_load_store = "8")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl fmt::Debug for AtomicBool {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_debug", since = "1.3.0")]
impl<T> fmt::Debug for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
    }
}

#[cfg(target_has_atomic_load_store = "ptr")]
#[stable(feature = "atomic_pointer", since = "1.24.0")]
impl<T> fmt::Pointer for AtomicPtr<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
    }
}

/// Signals the processor that it is inside a busy-wait spin-loop ("spin lock").
///
/// This function is deprecated in favor of [`hint::spin_loop`].
///
/// [`hint::spin_loop`]: crate::hint::spin_loop
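///
/// # Examples
///
/// A minimal sketch of the replacement, spinning on an [`AtomicBool`] flag
/// (pre-set here so the loop exits immediately):
///
/// ```
/// use std::hint;
/// use std::sync::atomic::{AtomicBool, Ordering};
///
/// let ready = AtomicBool::new(true);
/// // Busy-wait until `ready` is observed as `true`, hinting the CPU on each spin.
/// while !ready.load(Ordering::Acquire) {
///     hint::spin_loop();
/// }
/// ```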
#[inline]
#[stable(feature = "spin_loop_hint", since = "1.24.0")]
#[deprecated(since = "1.51.0", note = "use hint::spin_loop instead")]
pub fn spin_loop_hint() {
    spin_loop()
}