1// SPDX-License-Identifier: Apache-2.0 OR MIT
2
3/*!
4<!-- tidy:crate-doc:start -->
5Portable atomic types including support for 128-bit atomics, atomic float, etc.
6
7- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
8- Provide `AtomicI128` and `AtomicU128`.
9- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
- Provide atomic load/store for targets where atomics are not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
11- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 ARM, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
- Provide stable equivalents of the standard library's atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108) and [`AtomicBool::fetch_not`](https://github.com/rust-lang/rust/issues/98485).
13- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr) and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+.
- Provide workarounds for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 causing an LLVM error, etc.
15
16<!-- TODO:
17- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
18- mention portable-atomic-util crate
19-->
20
21## Usage
22
23Add this to your `Cargo.toml`:
24
25```toml
26[dependencies]
27portable-atomic = "1"
28```
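
Once the dependency is added, the atomic types can be used much like those in `core::sync::atomic`; a short sketch (assuming the default features and a target with atomic CAS):

```rust
use portable_atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

COUNTER.fetch_add(1, Ordering::Relaxed);
assert_eq!(COUNTER.load(Ordering::Relaxed), 1);
```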
29
30The default features are mainly for users who use atomics larger than the pointer width.
31If you don't need them, disabling the default features may reduce code size and compile time slightly.
32
33```toml
34[dependencies]
35portable-atomic = { version = "1", default-features = false }
36```
37
If your crate supports no-std environments and requires atomic CAS, enabling the `require-cas` feature will allow `portable-atomic` to display helpful error messages to users on targets that require additional action on the user side to provide atomic CAS.
39
40```toml
41[dependencies]
42portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
43```
44
45*Compiler support: requires rustc 1.34+*
46
47## 128-bit atomics support
48
49Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), aarch64 (Rust 1.59+), powerpc64 (nightly only), and s390x (nightly only), otherwise the fallback implementation is used.
50
On x86_64, even if `cmpxchg16b` is not available at compile-time (note: the `cmpxchg16b` target feature is enabled by default only on Apple targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is available at neither compile-time nor run-time, the fallback implementation is used. See also the [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.
52
They are usually implemented using inline assembly. When using Miri or ThreadSanitizer, which do not support inline assembly, core intrinsics are used instead where possible.
54
55See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.
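
For example, `AtomicU128` can be used on any supported target; where native 128-bit instructions are unavailable, the fallback implementation is used instead (a short sketch, assuming the default `fallback` feature):

```rust
use portable_atomic::{AtomicU128, Ordering};

let x = AtomicU128::new(0);
x.store(u128::MAX, Ordering::Release);
assert_eq!(x.swap(1, Ordering::AcqRel), u128::MAX);
assert_eq!(x.load(Ordering::Acquire), 1);
```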
56
57## Optional features
58
59- **`fallback`** *(enabled by default)*<br>
60 Enable fallback implementations.
61
62 Disabling this allows only atomic types for which the platform natively supports atomic operations.
63
64- <a name="optional-features-float"></a>**`float`**<br>
65 Provide `AtomicF{32,64}`.
66
  Note that most `fetch_*` operations on atomic floats are implemented using CAS loops, which can be slower than the equivalent operations on atomic integers. ([GPU targets have atomic instructions for float, so we plan to use these instructions for GPU targets in the future.](https://github.com/taiki-e/portable-atomic/issues/34))
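
  For example, a short sketch (assuming the `float` feature is enabled):

  ```rust
  use portable_atomic::{AtomicF32, Ordering};

  let f = AtomicF32::new(1.5);
  // `fetch_add` on atomic floats is implemented with a CAS loop internally.
  assert_eq!(f.fetch_add(0.5, Ordering::Relaxed), 1.5);
  assert_eq!(f.load(Ordering::Relaxed), 2.0);
  ```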
68
69- **`std`**<br>
70 Use `std`.
71
72- <a name="optional-features-require-cas"></a>**`require-cas`**<br>
73 Emit compile error if atomic CAS is not available. See [Usage](#usage) section and [#100](https://github.com/taiki-e/portable-atomic/pull/100) for more.
74
75- <a name="optional-features-serde"></a>**`serde`**<br>
76 Implement `serde::{Serialize,Deserialize}` for atomic types.
77
78 Note:
79 - The MSRV when this feature is enabled depends on the MSRV of [serde].
80
81- <a name="optional-features-critical-section"></a>**`critical-section`**<br>
  When this feature is enabled, this crate uses [critical-section] to provide atomic CAS for targets where
  it is not natively available. When enabling it, you should provide a suitable critical section implementation
  for the current target; see the [critical-section] documentation for details on how to do so.
85
  `critical-section` support is useful to get atomic CAS when the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) can't be used,
  such as on multi-core targets, in unprivileged code running under some RTOS, or in environments where disabling
  interrupts needs extra care due to, e.g., real-time requirements.
89
  Note that with the `critical-section` feature, critical sections are taken for all atomic operations, while with the
  [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) some operations don't require disabling interrupts (loads and stores, and
  additionally `add`, `sub`, `and`, `or`, `xor`, and `not` on MSP430). Therefore, for better performance, if
  all the `critical-section` implementation for your target does is disable interrupts, prefer using the
  `unsafe-assume-single-core` feature instead.
95
96 Note:
97 - The MSRV when this feature is enabled depends on the MSRV of [critical-section].
  - It is usually *not* recommended to unconditionally enable this feature in libraries that depend on portable-atomic.
99
    Enabling this feature in a library will prevent the end user from taking advantage of other (potentially more) efficient implementations ([implementations provided by the `unsafe-assume-single-core` feature, the default implementations on MSP430 and AVR](#optional-features-unsafe-assume-single-core), the implementation proposed in [#60], etc.; other systems may also be supported in the future).
101
102 The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)
103
  As an example, the end user's `Cargo.toml` that uses a crate providing a critical-section implementation and a crate that optionally depends on portable-atomic would be expected to look like this:
105
106 ```toml
107 [dependencies]
108 portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
109 crate-provides-critical-section-impl = "..."
110 crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
111 ```
112
113- <a name="optional-features-unsafe-assume-single-core"></a>**`unsafe-assume-single-core`**<br>
114 Assume that the target is single-core.
115 When this feature is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.
116
117 This feature is `unsafe`, and note the following safety requirements:
118 - Enabling this feature for multi-core systems is always **unsound**.
  - This uses privileged instructions to disable interrupts, so it usually doesn't work in unprivileged mode.
    Enabling this feature in an environment where privileged instructions are not available, or where the
    instructions used are not sufficient to disable interrupts in the system, is also usually considered
    **unsound**, although the details are system-dependent.
121
122 The following are known cases:
  - On pre-v6 ARM, this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the
    system needs to disable both IRQs and FIQs, you also need to enable the `disable-fiq` feature.
  - On RISC-V without A-extension, this generates code for machine-mode (M-mode) by default. If you also
    enable the `s-mode` feature, this generates code for supervisor-mode (S-mode). In particular,
    `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware,
    which runs in M-mode, so code running on top of it runs in S-mode.
125
126 See also [the `interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).
127
128 Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature.
129
130 It is **very strongly discouraged** to enable this feature in libraries that depend on `portable-atomic`. The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)
131
132 ARMv6-M (thumbv6m), pre-v6 ARM (e.g., thumbv4t, thumbv5te), RISC-V without A-extension, and Xtensa are currently supported.
133
  Since all MSP430 and AVR chips are single-core, we always provide atomic CAS for them without this feature.
135
136 Enabling this feature for targets that have atomic CAS will result in a compile error.
137
138 Feel free to submit an issue if your target is not supported yet.
139
140## Optional cfg
141
142One of the ways to enable cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):
143
144```toml
145# .cargo/config.toml
146[target.<target>]
147rustflags = ["--cfg", "portable_atomic_no_outline_atomics"]
148```
149
150Or set environment variable:
151
152```sh
153RUSTFLAGS="--cfg portable_atomic_no_outline_atomics" cargo ...
154```
155
156- <a name="optional-cfg-unsafe-assume-single-core"></a>**`--cfg portable_atomic_unsafe_assume_single_core`**<br>
157 Since 1.4.0, this cfg is an alias of [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core).
158
159 Originally, we were providing these as cfgs instead of features, but based on a strong request from the embedded ecosystem, we have agreed to provide them as features as well. See [#94](https://github.com/taiki-e/portable-atomic/pull/94) for more.
160
161- <a name="optional-cfg-no-outline-atomics"></a>**`--cfg portable_atomic_no_outline_atomics`**<br>
162 Disable dynamic dispatching by run-time CPU feature detection.
163
  If dynamic dispatching by run-time CPU feature detection is enabled, it allows maintaining support for older CPUs while using features that those CPUs do not support, such as CMPXCHG16B (x86_64) and FEAT_LSE (aarch64).
165
166 Note:
167 - Dynamic detection is currently only enabled in Rust 1.59+ for aarch64, in Rust 1.59+ (AVX) or 1.69+ (CMPXCHG16B) for x86_64, nightly only for powerpc64 (disabled by default), otherwise it works the same as when this cfg is set.
168 - If the required target features are enabled at compile-time, the atomic operations are inlined.
169 - This is compatible with no-std (as with all features except `std`).
  - On some targets, run-time detection is disabled by default mainly for compatibility with older versions of operating systems or incomplete build environments, and can be enabled by `--cfg portable_atomic_outline_atomics`. (When both cfgs are enabled, the `*_no_*` cfg takes precedence.)
171 - Some aarch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)
172
173 See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).
174
175## Related Projects
176
177- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
178- [atomic-memcpy]: Byte-wise atomic memcpy.
179
180[#60]: https://github.com/taiki-e/portable-atomic/issues/60
181[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
182[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
183[critical-section]: https://github.com/rust-embedded/critical-section
184[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
185[serde]: https://github.com/serde-rs/serde
186
187<!-- tidy:crate-doc:end -->
188*/
189
190#![no_std]
191#![doc(test(
192 no_crate_inject,
193 attr(
194 deny(warnings, rust_2018_idioms, single_use_lifetimes),
195 allow(dead_code, unused_variables)
196 )
197))]
198#![warn(
199 rust_2018_idioms,
200 single_use_lifetimes,
201 unreachable_pub,
202 clippy::pedantic,
203 // Lints that may help when writing public library.
204 missing_debug_implementations,
205 missing_docs,
206 clippy::alloc_instead_of_core,
207 clippy::exhaustive_enums,
208 clippy::exhaustive_structs,
209 clippy::impl_trait_in_params,
210 clippy::missing_inline_in_public_items,
211 clippy::std_instead_of_alloc,
212 clippy::std_instead_of_core,
213 // Lints that may help when writing unsafe code.
214 improper_ctypes,
215 // improper_ctypes_definitions, // requires Rust 1.46
216 // unsafe_op_in_unsafe_fn, // set conditionally since it requires Rust 1.52
217 clippy::as_ptr_cast_mut,
218 clippy::default_union_representation,
219 clippy::inline_asm_x86_att_syntax,
220 clippy::trailing_empty_array,
221 clippy::transmute_undefined_repr,
222 clippy::undocumented_unsafe_blocks,
223)]
224#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
225#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
226#![allow(
227 clippy::cast_lossless,
228 clippy::doc_markdown,
229 clippy::float_cmp,
230 clippy::inline_always,
231 clippy::missing_errors_doc,
232 clippy::module_inception,
233 clippy::naive_bytecount,
234 clippy::similar_names,
235 clippy::single_match,
236 clippy::too_many_lines,
237 clippy::type_complexity,
238 clippy::unreadable_literal
239)]
240// asm_experimental_arch
241// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
242// On tier 2 platforms (powerpc64 and s390x), we use cfg set by build script to
243// determine whether this feature is available or not.
244#![cfg_attr(
245 all(
246 not(portable_atomic_no_asm),
247 any(
248 target_arch = "avr",
249 target_arch = "msp430",
250 all(target_arch = "xtensa", portable_atomic_unsafe_assume_single_core),
251 all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
252 all(target_arch = "s390x", portable_atomic_unstable_asm_experimental_arch),
253 ),
254 ),
255 feature(asm_experimental_arch)
256)]
257// Old nightly only
258// These features are already stabilized or have already been removed from compilers,
259// and can safely be enabled for old nightly as long as version detection works.
260// - cfg(target_has_atomic)
261// - #[target_feature(enable = "cmpxchg16b")] on x86_64
262// - asm! on ARM, AArch64, RISC-V, x86_64
263// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
264// - #[instruction_set] on non-Linux/Android pre-v6 ARM (tier 3)
265#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
266#![cfg_attr(
267 all(
268 target_arch = "x86_64",
269 portable_atomic_unstable_cmpxchg16b_target_feature,
270 not(portable_atomic_no_outline_atomics),
271 not(any(target_env = "sgx", miri)),
272 feature = "fallback",
273 ),
274 feature(cmpxchg16b_target_feature)
275)]
276#![cfg_attr(
277 all(
278 portable_atomic_unstable_asm,
279 any(
280 target_arch = "aarch64",
281 target_arch = "arm",
282 target_arch = "riscv32",
283 target_arch = "riscv64",
284 target_arch = "x86_64",
285 ),
286 ),
287 feature(asm)
288)]
289#![cfg_attr(
290 all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
291 feature(llvm_asm)
292)]
293#![cfg_attr(
294 all(
295 target_arch = "arm",
296 portable_atomic_unstable_isa_attribute,
297 any(test, portable_atomic_unsafe_assume_single_core),
298 not(any(target_feature = "v6", portable_atomic_target_feature = "v6")),
299 not(target_has_atomic = "ptr"),
300 ),
301 feature(isa_attribute)
302)]
303// Miri and/or ThreadSanitizer only
304// They do not support inline assembly, so we need to use unstable features instead.
305// Since they require nightly compilers anyway, we can use the unstable features.
306#![cfg_attr(
307 all(
308 any(target_arch = "aarch64", target_arch = "powerpc64", target_arch = "s390x"),
309 any(miri, portable_atomic_sanitize_thread),
310 ),
311 feature(core_intrinsics)
312)]
313// This feature is only enabled for old nightly because cmpxchg16b_intrinsic has been stabilized.
314#![cfg_attr(
315 all(
316 target_arch = "x86_64",
317 portable_atomic_unstable_cmpxchg16b_intrinsic,
318 any(miri, portable_atomic_sanitize_thread),
319 ),
320 feature(stdsimd)
321)]
322// docs.rs only
323#![cfg_attr(docsrs, feature(doc_cfg))]
324#![cfg_attr(
325 all(
326 portable_atomic_no_atomic_load_store,
327 not(any(
328 target_arch = "avr",
329 target_arch = "bpf",
330 target_arch = "msp430",
331 target_arch = "riscv32",
332 target_arch = "riscv64",
333 feature = "critical-section",
334 )),
335 ),
336 allow(unused_imports, unused_macros)
337)]
338
339// There are currently no 8-bit, 128-bit, or higher builtin targets.
340// (Although some of our generic code is written with the future
341// addition of 128-bit targets in mind.)
// Note that Rust (and C99) pointers must be at least 16 bits: https://github.com/rust-lang/rust/pull/49305
343#[cfg(not(any(
344 target_pointer_width = "16",
345 target_pointer_width = "32",
346 target_pointer_width = "64",
347)))]
348compile_error!(
349 "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
350 if you need support for others, \
351 please submit an issue at <https://github.com/taiki-e/portable-atomic>"
352);
353
354#[cfg(portable_atomic_unsafe_assume_single_core)]
355#[cfg_attr(
356 portable_atomic_no_cfg_target_has_atomic,
357 cfg(any(
358 not(portable_atomic_no_atomic_cas),
359 not(any(
360 target_arch = "arm",
361 target_arch = "avr",
362 target_arch = "msp430",
363 target_arch = "riscv32",
364 target_arch = "riscv64",
365 target_arch = "xtensa",
366 )),
367 ))
368)]
369#[cfg_attr(
370 not(portable_atomic_no_cfg_target_has_atomic),
371 cfg(any(
372 target_has_atomic = "ptr",
373 not(any(
374 target_arch = "arm",
375 target_arch = "avr",
376 target_arch = "msp430",
377 target_arch = "riscv32",
378 target_arch = "riscv64",
379 target_arch = "xtensa",
380 )),
381 ))
382)]
compile_error!(
    "cfg(portable_atomic_unsafe_assume_single_core) is not compatible with this target;\n\
     if you need cfg(portable_atomic_unsafe_assume_single_core) support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);
388
389#[cfg(portable_atomic_no_outline_atomics)]
390#[cfg(not(any(
391 target_arch = "aarch64",
392 target_arch = "arm",
393 target_arch = "powerpc64",
394 target_arch = "x86_64",
395)))]
compile_error!("cfg(portable_atomic_no_outline_atomics) is not compatible with this target");
397#[cfg(portable_atomic_outline_atomics)]
398#[cfg(not(any(target_arch = "aarch64", target_arch = "powerpc64")))]
compile_error!("cfg(portable_atomic_outline_atomics) is not compatible with this target");
400#[cfg(portable_atomic_disable_fiq)]
401#[cfg(not(all(
402 target_arch = "arm",
403 not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
404)))]
compile_error!("cfg(portable_atomic_disable_fiq) is not compatible with this target");
406#[cfg(portable_atomic_s_mode)]
407#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("cfg(portable_atomic_s_mode) is not compatible with this target");
409#[cfg(portable_atomic_force_amo)]
410#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("cfg(portable_atomic_force_amo) is not compatible with this target");
412
413#[cfg(portable_atomic_disable_fiq)]
414#[cfg(not(portable_atomic_unsafe_assume_single_core))]
415compile_error!(
416 "cfg(portable_atomic_disable_fiq) may only be used together with cfg(portable_atomic_unsafe_assume_single_core)"
417);
418#[cfg(portable_atomic_s_mode)]
419#[cfg(not(portable_atomic_unsafe_assume_single_core))]
420compile_error!(
421 "cfg(portable_atomic_s_mode) may only be used together with cfg(portable_atomic_unsafe_assume_single_core)"
422);
423#[cfg(portable_atomic_force_amo)]
424#[cfg(not(portable_atomic_unsafe_assume_single_core))]
425compile_error!(
426 "cfg(portable_atomic_force_amo) may only be used together with cfg(portable_atomic_unsafe_assume_single_core)"
427);
428
429#[cfg(all(portable_atomic_unsafe_assume_single_core, feature = "critical-section"))]
430compile_error!(
431 "you may not enable feature `critical-section` and cfg(portable_atomic_unsafe_assume_single_core) at the same time"
432);
433
434#[cfg(feature = "require-cas")]
435#[cfg_attr(
436 portable_atomic_no_cfg_target_has_atomic,
437 cfg(not(any(
438 not(portable_atomic_no_atomic_cas),
439 portable_atomic_unsafe_assume_single_core,
440 feature = "critical-section",
441 target_arch = "avr",
442 target_arch = "msp430",
443 )))
444)]
445#[cfg_attr(
446 not(portable_atomic_no_cfg_target_has_atomic),
447 cfg(not(any(
448 target_has_atomic = "ptr",
449 portable_atomic_unsafe_assume_single_core,
450 feature = "critical-section",
451 target_arch = "avr",
452 target_arch = "msp430",
453 )))
454)]
compile_error!(
    "dependents require atomic CAS but it is not available on this target by default;\n\
     consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features.\n\
     see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
);
460
461#[cfg(any(test, feature = "std"))]
462extern crate std;
463
464#[macro_use]
465mod utils;
466
467#[cfg(test)]
468#[macro_use]
469mod tests;
470
471#[doc(no_inline)]
472pub use core::sync::atomic::Ordering;
473
474#[doc(no_inline)]
475// LLVM doesn't support fence/compiler_fence for MSP430.
476#[cfg(not(target_arch = "msp430"))]
477pub use core::sync::atomic::{compiler_fence, fence};
478#[cfg(target_arch = "msp430")]
479pub use imp::msp430::{compiler_fence, fence};
480
481mod imp;
482
483pub mod hint {
484 //! Re-export of the [`core::hint`] module.
485 //!
486 //! The only difference from the [`core::hint`] module is that [`spin_loop`]
    //! is available in all Rust versions that this crate supports.
488 //!
489 //! ```
490 //! use portable_atomic::hint;
491 //!
492 //! hint::spin_loop();
493 //! ```
494
495 #[doc(no_inline)]
496 pub use core::hint::*;
497
498 /// Emits a machine instruction to signal the processor that it is running in
499 /// a busy-wait spin-loop ("spin lock").
500 ///
501 /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
502 /// for example, saving power or switching hyper-threads.
503 ///
504 /// This function is different from [`thread::yield_now`] which directly
505 /// yields to the system's scheduler, whereas `spin_loop` does not interact
506 /// with the operating system.
507 ///
508 /// A common use case for `spin_loop` is implementing bounded optimistic
509 /// spinning in a CAS loop in synchronization primitives. To avoid problems
510 /// like priority inversion, it is strongly recommended that the spin loop is
511 /// terminated after a finite amount of iterations and an appropriate blocking
512 /// syscall is made.
513 ///
514 /// **Note:** On platforms that do not support receiving spin-loop hints this
515 /// function does not do anything at all.
516 ///
517 /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
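    ///
    /// # Examples
    ///
    /// A short sketch of bounded spinning on a flag (the `spin_wait` helper here is purely
    /// illustrative):
    ///
    /// ```
    /// use portable_atomic::{hint, AtomicBool, Ordering};
    ///
    /// // Spin for at most `max_iters` iterations, hinting the processor on each iteration.
    /// fn spin_wait(flag: &AtomicBool, max_iters: usize) -> bool {
    ///     for _ in 0..max_iters {
    ///         if flag.load(Ordering::Acquire) {
    ///             return true;
    ///         }
    ///         hint::spin_loop();
    ///     }
    ///     false
    /// }
    ///
    /// let ready = AtomicBool::new(true);
    /// assert!(spin_wait(&ready, 100));
    /// ```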
518 #[inline]
519 pub fn spin_loop() {
520 #[allow(deprecated)]
521 core::sync::atomic::spin_loop_hint();
522 }
523}
524
525#[cfg(doc)]
526use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
527use core::{fmt, ptr};
528
529#[cfg(miri)]
530use crate::utils::strict;
531
532cfg_has_atomic_8! {
533cfg_has_atomic_cas! {
534// See https://github.com/rust-lang/rust/pull/114034 for details.
535// https://github.com/rust-lang/rust/blob/9339f446a5302cd5041d3f3b5e59761f36699167/library/core/src/sync/atomic.rs#L134
536// https://godbolt.org/z/5W85abT58
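// Note: on RISC-V and LoongArch64, sub-word swap/compare_exchange is lowered to an LR/SC
// (or masked CAS) loop, while sub-word fetch_or/fetch_and can use a single word-sized AMO
// instruction (or-ing zeros / and-ing ones leaves the neighboring bytes unchanged), so
// emulating AtomicBool's swap/CAS via fetch_or/fetch_and generates better code there.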
537#[cfg(portable_atomic_no_cfg_target_has_atomic)]
538const EMULATE_ATOMIC_BOOL: bool = cfg!(all(
539 not(portable_atomic_no_atomic_cas),
540 any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"),
541));
542#[cfg(not(portable_atomic_no_cfg_target_has_atomic))]
543const EMULATE_ATOMIC_BOOL: bool = cfg!(all(
544 target_has_atomic = "8",
545 any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"),
546));
547} // cfg_has_atomic_cas!
548
549/// A boolean type which can be safely shared between threads.
550///
551/// This type has the same in-memory representation as a [`bool`].
552///
553/// If the compiler and the platform support atomic loads and stores of `u8`,
554/// this type is a wrapper for the standard library's
555/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
556/// but the compiler does not, atomic operations are implemented using inline
557/// assembly.
558#[repr(C, align(1))]
559pub struct AtomicBool {
560 v: core::cell::UnsafeCell<u8>,
561}
562
563impl Default for AtomicBool {
564 /// Creates an `AtomicBool` initialized to `false`.
565 #[inline]
566 fn default() -> Self {
567 Self::new(false)
568 }
569}
570
571impl From<bool> for AtomicBool {
572 /// Converts a `bool` into an `AtomicBool`.
573 #[inline]
574 fn from(b: bool) -> Self {
575 Self::new(b)
576 }
577}
578
579// Send is implicitly implemented.
580// SAFETY: any data races are prevented by disabling interrupts or
581// atomic intrinsics (see module-level comments).
582unsafe impl Sync for AtomicBool {}
583
584// UnwindSafe is implicitly implemented.
585#[cfg(not(portable_atomic_no_core_unwind_safe))]
586impl core::panic::RefUnwindSafe for AtomicBool {}
587#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
588impl std::panic::RefUnwindSafe for AtomicBool {}
589
590impl_debug_and_serde!(AtomicBool);
591
592impl AtomicBool {
593 /// Creates a new `AtomicBool`.
594 ///
595 /// # Examples
596 ///
597 /// ```
598 /// use portable_atomic::AtomicBool;
599 ///
600 /// let atomic_true = AtomicBool::new(true);
601 /// let atomic_false = AtomicBool::new(false);
602 /// ```
603 #[inline]
604 #[must_use]
605 pub const fn new(v: bool) -> Self {
606 static_assert_layout!(AtomicBool, bool);
607 Self { v: core::cell::UnsafeCell::new(v as u8) }
608 }
609
610 /// Creates a new `AtomicBool` from a pointer.
611 ///
612 /// # Safety
613 ///
614 /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
615 /// be bigger than `align_of::<bool>()`).
616 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
617 /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
618 /// behind `ptr` must have a happens-before relationship with atomic accesses via the returned
619 /// value (or vice-versa).
620 /// * In other words, time periods where the value is accessed atomically may not overlap
621 /// with periods where the value is accessed non-atomically.
622 /// * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
623 /// duration of lifetime `'a`. Most use cases should be able to follow this guideline.
624 /// * This requirement is also trivially satisfied if all accesses (atomic or not) are done
625 /// from the same thread.
626 /// * If this atomic type is *not* lock-free:
627 /// * Any accesses to the value behind `ptr` must have a happens-before relationship
628 /// with accesses via the returned value (or vice-versa).
629 /// * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
630 /// be compatible with operations performed by this atomic type.
631 /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
632 /// these are not supported by the memory model.
633 ///
634 /// [valid]: core::ptr#safety
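    ///
    /// # Examples
    ///
    /// A short single-threaded sketch, which trivially satisfies the requirements above:
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let mut data = false;
    /// let ptr: *mut bool = &mut data;
    /// // SAFETY: `ptr` is valid, properly aligned, and not accessed non-atomically
    /// // while the returned reference is in use.
    /// let atomic = unsafe { AtomicBool::from_ptr(ptr) };
    /// atomic.store(true, Ordering::Relaxed);
    /// assert_eq!(atomic.load(Ordering::Relaxed), true);
    /// ```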
635 #[inline]
636 #[must_use]
637 pub unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
638 #[allow(clippy::cast_ptr_alignment)]
639 // SAFETY: guaranteed by the caller
640 unsafe { &*(ptr as *mut Self) }
641 }
642
643 /// Returns `true` if operations on values of this type are lock-free.
644 ///
645 /// If the compiler or the platform doesn't support the necessary
646 /// atomic instructions, global locks for every potentially
647 /// concurrent atomic operation will be used.
648 ///
649 /// # Examples
650 ///
651 /// ```
652 /// use portable_atomic::AtomicBool;
653 ///
654 /// let is_lock_free = AtomicBool::is_lock_free();
655 /// ```
656 #[inline]
657 #[must_use]
658 pub fn is_lock_free() -> bool {
659 imp::AtomicU8::is_lock_free()
660 }
661
662 /// Returns `true` if operations on values of this type are lock-free.
663 ///
664 /// If the compiler or the platform doesn't support the necessary
665 /// atomic instructions, global locks for every potentially
666 /// concurrent atomic operation will be used.
667 ///
668 /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
669 /// this type may be lock-free even if the function returns false.
670 ///
671 /// # Examples
672 ///
673 /// ```
674 /// use portable_atomic::AtomicBool;
675 ///
676 /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
677 /// ```
678 #[inline]
679 #[must_use]
680 pub const fn is_always_lock_free() -> bool {
681 imp::AtomicU8::is_always_lock_free()
682 }
683
684 /// Returns a mutable reference to the underlying [`bool`].
685 ///
686 /// This is safe because the mutable reference guarantees that no other threads are
687 /// concurrently accessing the atomic data.
688 ///
689 /// # Examples
690 ///
691 /// ```
692 /// use portable_atomic::{AtomicBool, Ordering};
693 ///
694 /// let mut some_bool = AtomicBool::new(true);
695 /// assert_eq!(*some_bool.get_mut(), true);
696 /// *some_bool.get_mut() = false;
697 /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
698 /// ```
699 #[inline]
700 pub fn get_mut(&mut self) -> &mut bool {
701 // SAFETY: the mutable reference guarantees unique ownership.
702 unsafe { &mut *(self.v.get() as *mut bool) }
703 }
704
705 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
706 // https://github.com/rust-lang/rust/issues/76314
707
708 /// Consumes the atomic and returns the contained value.
709 ///
710 /// This is safe because passing `self` by value guarantees that no other threads are
711 /// concurrently accessing the atomic data.
712 ///
713 /// # Examples
714 ///
715 /// ```
716 /// use portable_atomic::AtomicBool;
717 ///
718 /// let some_bool = AtomicBool::new(true);
719 /// assert_eq!(some_bool.into_inner(), true);
720 /// ```
721 #[inline]
722 pub fn into_inner(self) -> bool {
723 self.v.into_inner() != 0
724 }
725
726 /// Loads a value from the bool.
727 ///
728 /// `load` takes an [`Ordering`] argument which describes the memory ordering
729 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
730 ///
731 /// # Panics
732 ///
733 /// Panics if `order` is [`Release`] or [`AcqRel`].
734 ///
735 /// # Examples
736 ///
737 /// ```
738 /// use portable_atomic::{AtomicBool, Ordering};
739 ///
740 /// let some_bool = AtomicBool::new(true);
741 ///
742 /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
743 /// ```
744 #[inline]
745 #[cfg_attr(
746 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
747 track_caller
748 )]
749 pub fn load(&self, order: Ordering) -> bool {
750 self.as_atomic_u8().load(order) != 0
751 }
752
753 /// Stores a value into the bool.
754 ///
755 /// `store` takes an [`Ordering`] argument which describes the memory ordering
756 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
757 ///
758 /// # Panics
759 ///
760 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
761 ///
762 /// # Examples
763 ///
764 /// ```
765 /// use portable_atomic::{AtomicBool, Ordering};
766 ///
767 /// let some_bool = AtomicBool::new(true);
768 ///
769 /// some_bool.store(false, Ordering::Relaxed);
770 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
771 /// ```
772 #[inline]
773 #[cfg_attr(
774 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
775 track_caller
776 )]
777 pub fn store(&self, val: bool, order: Ordering) {
778 self.as_atomic_u8().store(val as u8, order);
779 }
780
781 cfg_has_atomic_cas! {
782 /// Stores a value into the bool, returning the previous value.
783 ///
784 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
785 /// of this operation. All ordering modes are possible. Note that using
786 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
787 /// using [`Release`] makes the load part [`Relaxed`].
788 ///
789 /// # Examples
790 ///
791 /// ```
792 /// use portable_atomic::{AtomicBool, Ordering};
793 ///
794 /// let some_bool = AtomicBool::new(true);
795 ///
796 /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
797 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
798 /// ```
799 #[inline]
800 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
801 pub fn swap(&self, val: bool, order: Ordering) -> bool {
802 if EMULATE_ATOMIC_BOOL {
803 if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
804 } else {
805 self.as_atomic_u8().swap(val as u8, order) != 0
806 }
807 }
808
809 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
810 ///
811 /// The return value is a result indicating whether the new value was written and containing
812 /// the previous value. On success this value is guaranteed to be equal to `current`.
813 ///
814 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
815 /// ordering of this operation. `success` describes the required ordering for the
816 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
817 /// `failure` describes the required ordering for the load operation that takes place when
818 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
819 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
820 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
821 ///
822 /// # Panics
823 ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
825 ///
826 /// # Examples
827 ///
828 /// ```
829 /// use portable_atomic::{AtomicBool, Ordering};
830 ///
831 /// let some_bool = AtomicBool::new(true);
832 ///
833 /// assert_eq!(
834 /// some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
835 /// Ok(true)
836 /// );
837 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
838 ///
839 /// assert_eq!(
840 /// some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
841 /// Err(false)
842 /// );
843 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
844 /// ```
845 #[inline]
846 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
847 #[cfg_attr(
848 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
849 track_caller
850 )]
851 pub fn compare_exchange(
852 &self,
853 current: bool,
854 new: bool,
855 success: Ordering,
856 failure: Ordering,
857 ) -> Result<bool, bool> {
858 if EMULATE_ATOMIC_BOOL {
859 crate::utils::assert_compare_exchange_ordering(success, failure);
860 let order = crate::utils::upgrade_success_ordering(success, failure);
861 let old = if current == new {
862 // This is a no-op, but we still need to perform the operation
863 // for memory ordering reasons.
864 self.fetch_or(false, order)
865 } else {
866 // This sets the value to the new one and returns the old one.
867 self.swap(new, order)
868 };
869 if old == current { Ok(old) } else { Err(old) }
870 } else {
871 match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
872 Ok(x) => Ok(x != 0),
873 Err(x) => Err(x != 0),
874 }
875 }
876 }
877
878 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
879 ///
880 /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
881 /// comparison succeeds, which can result in more efficient code on some platforms. The
882 /// return value is a result indicating whether the new value was written and containing the
883 /// previous value.
884 ///
885 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
886 /// ordering of this operation. `success` describes the required ordering for the
887 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
888 /// `failure` describes the required ordering for the load operation that takes place when
889 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
890 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
891 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
892 ///
893 /// # Panics
894 ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
896 ///
897 /// # Examples
898 ///
899 /// ```
900 /// use portable_atomic::{AtomicBool, Ordering};
901 ///
902 /// let val = AtomicBool::new(false);
903 ///
904 /// let new = true;
905 /// let mut old = val.load(Ordering::Relaxed);
906 /// loop {
907 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
908 /// Ok(_) => break,
909 /// Err(x) => old = x,
910 /// }
911 /// }
912 /// ```
913 #[inline]
914 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
915 #[cfg_attr(
916 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
917 track_caller
918 )]
919 pub fn compare_exchange_weak(
920 &self,
921 current: bool,
922 new: bool,
923 success: Ordering,
924 failure: Ordering,
925 ) -> Result<bool, bool> {
926 if EMULATE_ATOMIC_BOOL {
927 return self.compare_exchange(current, new, success, failure);
928 }
929
930 match self.as_atomic_u8().compare_exchange_weak(current as u8, new as u8, success, failure)
931 {
932 Ok(x) => Ok(x != 0),
933 Err(x) => Err(x != 0),
934 }
935 }
936
937 /// Logical "and" with a boolean value.
938 ///
939 /// Performs a logical "and" operation on the current value and the argument `val`, and sets
940 /// the new value to the result.
941 ///
942 /// Returns the previous value.
943 ///
944 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
945 /// of this operation. All ordering modes are possible. Note that using
946 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
947 /// using [`Release`] makes the load part [`Relaxed`].
948 ///
949 /// # Examples
950 ///
951 /// ```
952 /// use portable_atomic::{AtomicBool, Ordering};
953 ///
954 /// let foo = AtomicBool::new(true);
955 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
956 /// assert_eq!(foo.load(Ordering::SeqCst), false);
957 ///
958 /// let foo = AtomicBool::new(true);
959 /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
960 /// assert_eq!(foo.load(Ordering::SeqCst), true);
961 ///
962 /// let foo = AtomicBool::new(false);
963 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
964 /// assert_eq!(foo.load(Ordering::SeqCst), false);
965 /// ```
966 #[inline]
967 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
968 pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
969 self.as_atomic_u8().fetch_and(val as u8, order) != 0
970 }
971
972 /// Logical "and" with a boolean value.
973 ///
974 /// Performs a logical "and" operation on the current value and the argument `val`, and sets
975 /// the new value to the result.
976 ///
977 /// Unlike `fetch_and`, this does not return the previous value.
978 ///
979 /// `and` takes an [`Ordering`] argument which describes the memory ordering
980 /// of this operation. All ordering modes are possible. Note that using
981 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
982 /// using [`Release`] makes the load part [`Relaxed`].
983 ///
984 /// This function may generate more efficient code than `fetch_and` on some platforms.
985 ///
986 /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
987 /// - MSP430: `and` instead of disabling interrupts
988 ///
989 /// Note: On x86/x86_64, the use of either function should not usually
990 /// affect the generated code, because LLVM can properly optimize the case
991 /// where the result is unused.
992 ///
993 /// # Examples
994 ///
995 /// ```
996 /// use portable_atomic::{AtomicBool, Ordering};
997 ///
998 /// let foo = AtomicBool::new(true);
999 /// foo.and(false, Ordering::SeqCst);
1000 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1001 ///
1002 /// let foo = AtomicBool::new(true);
1003 /// foo.and(true, Ordering::SeqCst);
1004 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1005 ///
1006 /// let foo = AtomicBool::new(false);
1007 /// foo.and(false, Ordering::SeqCst);
1008 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1009 /// ```
1010 #[inline]
1011 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1012 pub fn and(&self, val: bool, order: Ordering) {
1013 self.as_atomic_u8().and(val as u8, order);
1014 }
1015
1016 /// Logical "nand" with a boolean value.
1017 ///
1018 /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
1019 /// the new value to the result.
1020 ///
1021 /// Returns the previous value.
1022 ///
1023 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
1024 /// of this operation. All ordering modes are possible. Note that using
1025 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1026 /// using [`Release`] makes the load part [`Relaxed`].
1027 ///
1028 /// # Examples
1029 ///
1030 /// ```
1031 /// use portable_atomic::{AtomicBool, Ordering};
1032 ///
1033 /// let foo = AtomicBool::new(true);
1034 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
1035 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1036 ///
1037 /// let foo = AtomicBool::new(true);
1038 /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1039 /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1040 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1041 ///
1042 /// let foo = AtomicBool::new(false);
1043 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1044 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1045 /// ```
1046 #[inline]
1047 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1048 pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1049 // https://github.com/rust-lang/rust/blob/1.70.0/library/core/src/sync/atomic.rs#L811-L825
1050 if val {
1051 // !(x & true) == !x
1052 // We must invert the bool.
1053 self.fetch_xor(true, order)
1054 } else {
1055 // !(x & false) == true
1056 // We must set the bool to true.
1057 self.swap(true, order)
1058 }
1059 }
1060
1061 /// Logical "or" with a boolean value.
1062 ///
1063 /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1064 /// new value to the result.
1065 ///
1066 /// Returns the previous value.
1067 ///
1068 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1069 /// of this operation. All ordering modes are possible. Note that using
1070 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1071 /// using [`Release`] makes the load part [`Relaxed`].
1072 ///
1073 /// # Examples
1074 ///
1075 /// ```
1076 /// use portable_atomic::{AtomicBool, Ordering};
1077 ///
1078 /// let foo = AtomicBool::new(true);
1079 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1080 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1081 ///
1082 /// let foo = AtomicBool::new(true);
1083 /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1084 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1085 ///
1086 /// let foo = AtomicBool::new(false);
1087 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1088 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1089 /// ```
1090 #[inline]
1091 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1092 pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1093 self.as_atomic_u8().fetch_or(val as u8, order) != 0
1094 }
1095
1096 /// Logical "or" with a boolean value.
1097 ///
1098 /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1099 /// new value to the result.
1100 ///
1101 /// Unlike `fetch_or`, this does not return the previous value.
1102 ///
1103 /// `or` takes an [`Ordering`] argument which describes the memory ordering
1104 /// of this operation. All ordering modes are possible. Note that using
1105 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1106 /// using [`Release`] makes the load part [`Relaxed`].
1107 ///
1108 /// This function may generate more efficient code than `fetch_or` on some platforms.
1109 ///
1110 /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
1111 /// - MSP430: `bis` instead of disabling interrupts
1112 ///
1113 /// Note: On x86/x86_64, the use of either function should not usually
1114 /// affect the generated code, because LLVM can properly optimize the case
1115 /// where the result is unused.
1116 ///
1117 /// # Examples
1118 ///
1119 /// ```
1120 /// use portable_atomic::{AtomicBool, Ordering};
1121 ///
1122 /// let foo = AtomicBool::new(true);
1123 /// foo.or(false, Ordering::SeqCst);
1124 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1125 ///
1126 /// let foo = AtomicBool::new(true);
1127 /// foo.or(true, Ordering::SeqCst);
1128 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1129 ///
1130 /// let foo = AtomicBool::new(false);
1131 /// foo.or(false, Ordering::SeqCst);
1132 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1133 /// ```
1134 #[inline]
1135 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1136 pub fn or(&self, val: bool, order: Ordering) {
1137 self.as_atomic_u8().or(val as u8, order);
1138 }
1139
1140 /// Logical "xor" with a boolean value.
1141 ///
1142 /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1143 /// the new value to the result.
1144 ///
1145 /// Returns the previous value.
1146 ///
1147 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1148 /// of this operation. All ordering modes are possible. Note that using
1149 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1150 /// using [`Release`] makes the load part [`Relaxed`].
1151 ///
1152 /// # Examples
1153 ///
1154 /// ```
1155 /// use portable_atomic::{AtomicBool, Ordering};
1156 ///
1157 /// let foo = AtomicBool::new(true);
1158 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1159 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1160 ///
1161 /// let foo = AtomicBool::new(true);
1162 /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1163 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1164 ///
1165 /// let foo = AtomicBool::new(false);
1166 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1167 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1168 /// ```
1169 #[inline]
1170 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1171 pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1172 self.as_atomic_u8().fetch_xor(val as u8, order) != 0
1173 }
1174
1175 /// Logical "xor" with a boolean value.
1176 ///
1177 /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1178 /// the new value to the result.
1179 ///
1180 /// Unlike `fetch_xor`, this does not return the previous value.
1181 ///
1182 /// `xor` takes an [`Ordering`] argument which describes the memory ordering
1183 /// of this operation. All ordering modes are possible. Note that using
1184 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1185 /// using [`Release`] makes the load part [`Relaxed`].
1186 ///
1187 /// This function may generate more efficient code than `fetch_xor` on some platforms.
1188 ///
1189 /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1190 /// - MSP430: `xor` instead of disabling interrupts
1191 ///
1192 /// Note: On x86/x86_64, the use of either function should not usually
1193 /// affect the generated code, because LLVM can properly optimize the case
1194 /// where the result is unused.
1195 ///
1196 /// # Examples
1197 ///
1198 /// ```
1199 /// use portable_atomic::{AtomicBool, Ordering};
1200 ///
1201 /// let foo = AtomicBool::new(true);
1202 /// foo.xor(false, Ordering::SeqCst);
1203 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1204 ///
1205 /// let foo = AtomicBool::new(true);
1206 /// foo.xor(true, Ordering::SeqCst);
1207 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1208 ///
1209 /// let foo = AtomicBool::new(false);
1210 /// foo.xor(false, Ordering::SeqCst);
1211 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1212 /// ```
1213 #[inline]
1214 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1215 pub fn xor(&self, val: bool, order: Ordering) {
1216 self.as_atomic_u8().xor(val as u8, order);
1217 }
1218
1219 /// Logical "not" with a boolean value.
1220 ///
1221 /// Performs a logical "not" operation on the current value, and sets
1222 /// the new value to the result.
1223 ///
1224 /// Returns the previous value.
1225 ///
1226 /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1227 /// of this operation. All ordering modes are possible. Note that using
1228 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1229 /// using [`Release`] makes the load part [`Relaxed`].
1230 ///
1231 /// # Examples
1232 ///
1233 /// ```
1234 /// use portable_atomic::{AtomicBool, Ordering};
1235 ///
1236 /// let foo = AtomicBool::new(true);
1237 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1238 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1239 ///
1240 /// let foo = AtomicBool::new(false);
1241 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1242 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1243 /// ```
1244 #[inline]
1245 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1246 pub fn fetch_not(&self, order: Ordering) -> bool {
1247 self.fetch_xor(true, order)
1248 }
1249
1250 /// Logical "not" with a boolean value.
1251 ///
1252 /// Performs a logical "not" operation on the current value, and sets
1253 /// the new value to the result.
1254 ///
1255 /// Unlike `fetch_not`, this does not return the previous value.
1256 ///
1257 /// `not` takes an [`Ordering`] argument which describes the memory ordering
1258 /// of this operation. All ordering modes are possible. Note that using
1259 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1260 /// using [`Release`] makes the load part [`Relaxed`].
1261 ///
1262 /// This function may generate more efficient code than `fetch_not` on some platforms.
1263 ///
1264 /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1265 /// - MSP430: `xor` instead of disabling interrupts
1266 ///
1267 /// Note: On x86/x86_64, the use of either function should not usually
1268 /// affect the generated code, because LLVM can properly optimize the case
1269 /// where the result is unused.
1270 ///
1271 /// # Examples
1272 ///
1273 /// ```
1274 /// use portable_atomic::{AtomicBool, Ordering};
1275 ///
1276 /// let foo = AtomicBool::new(true);
1277 /// foo.not(Ordering::SeqCst);
1278 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1279 ///
1280 /// let foo = AtomicBool::new(false);
1281 /// foo.not(Ordering::SeqCst);
1282 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1283 /// ```
1284 #[inline]
1285 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1286 pub fn not(&self, order: Ordering) {
1287 self.xor(true, order);
1288 }
1289
1290 /// Fetches the value, and applies a function to it that returns an optional
1291 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1292 /// returned `Some(_)`, else `Err(previous_value)`.
1293 ///
1294 /// Note: This may call the function multiple times if the value has been
1295 /// changed from other threads in the meantime, as long as the function
1296 /// returns `Some(_)`, but the function will have been applied only once to
1297 /// the stored value.
1298 ///
1299 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1300 /// ordering of this operation. The first describes the required ordering for
1301 /// when the operation finally succeeds while the second describes the
1302 /// required ordering for loads. These correspond to the success and failure
1303 /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1304 ///
1305 /// Using [`Acquire`] as success ordering makes the store part of this
1306 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1307 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1308 /// [`Acquire`] or [`Relaxed`].
1309 ///
1310 /// # Considerations
1311 ///
1312 /// This method is not magic; it is not provided by the hardware.
1313 /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1314 /// and suffers from the same drawbacks.
1315 /// In particular, this method will not circumvent the [ABA Problem].
1316 ///
1317 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1318 ///
1319 /// # Panics
1320 ///
    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1322 ///
1323 /// # Examples
1324 ///
1325 /// ```rust
1326 /// use portable_atomic::{AtomicBool, Ordering};
1327 ///
1328 /// let x = AtomicBool::new(false);
1329 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1330 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1331 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1332 /// assert_eq!(x.load(Ordering::SeqCst), false);
1333 /// ```
1334 #[inline]
1335 #[cfg_attr(
1336 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1337 track_caller
1338 )]
1339 pub fn fetch_update<F>(
1340 &self,
1341 set_order: Ordering,
1342 fetch_order: Ordering,
1343 mut f: F,
1344 ) -> Result<bool, bool>
1345 where
1346 F: FnMut(bool) -> Option<bool>,
1347 {
1348 let mut prev = self.load(fetch_order);
1349 while let Some(next) = f(prev) {
1350 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1351 x @ Ok(_) => return x,
1352 Err(next_prev) => prev = next_prev,
1353 }
1354 }
1355 Err(prev)
1356 }
1357 } // cfg_has_atomic_cas!
1358
1359 const_fn! {
        // This function is actually `const fn`-compatible on Rust 1.32+,
        // but is only made `const fn` on Rust 1.58+ to match the other atomic types.
1362 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1363 /// Returns a mutable pointer to the underlying [`bool`].
1364 ///
1365 /// Returning an `*mut` pointer from a shared reference to this atomic is
1366 /// safe because the atomic types work with interior mutability. Any use of
1367 /// the returned raw pointer requires an `unsafe` block and has to uphold
1368 /// the safety requirements. If there is concurrent access, note the following
1369 /// additional safety requirements:
1370 ///
1371 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1372 /// operations on it must be atomic.
1373 /// - Otherwise, any concurrent operations on it must be compatible with
1374 /// operations performed by this atomic type.
1375 ///
1376 /// This is `const fn` on Rust 1.58+.
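    ///
    /// # Examples
    ///
    /// A minimal usage sketch; there is no concurrent access here, so plain writes
    /// through the raw pointer are sound:
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let flag = AtomicBool::new(false);
    /// // SAFETY: no other thread accesses `flag` while we write through the raw pointer.
    /// unsafe { flag.as_ptr().write(true) };
    /// assert!(flag.load(Ordering::Relaxed));
    /// ```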
1377 #[inline]
1378 pub const fn as_ptr(&self) -> *mut bool {
1379 self.v.get() as *mut bool
1380 }
1381 }
1382
1383 #[inline]
1384 fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1385 // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1386 // and both access data in the same way.
1387 unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1388 }
1389}
1390} // cfg_has_atomic_8!
1391
1392cfg_has_atomic_ptr! {
1393/// A raw pointer type which can be safely shared between threads.
1394///
1395/// This type has the same in-memory representation as a `*mut T`.
1396///
1397/// If the compiler and the platform support atomic loads and stores of pointers,
1398/// this type is a wrapper for the standard library's
1399/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1400/// but the compiler does not, atomic operations are implemented using inline
1401/// assembly.
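///
/// # Examples
///
/// A minimal usage sketch:
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let atomic_ptr = AtomicPtr::new(ptr);
///
/// let other_ptr = &mut 10;
///
/// atomic_ptr.store(other_ptr, Ordering::Relaxed);
/// assert_eq!(unsafe { *atomic_ptr.load(Ordering::Relaxed) }, 10);
/// ```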
1402// We can use #[repr(transparent)] here, but #[repr(C, align(N))]
1403// will show clearer docs.
1404#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1405#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1406#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1407#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1408pub struct AtomicPtr<T> {
1409 inner: imp::AtomicPtr<T>,
1410}
1411
1412impl<T> Default for AtomicPtr<T> {
1413 /// Creates a null `AtomicPtr<T>`.
1414 #[inline]
1415 fn default() -> Self {
1416 Self::new(ptr::null_mut())
1417 }
1418}
1419
1420impl<T> From<*mut T> for AtomicPtr<T> {
1421 #[inline]
1422 fn from(p: *mut T) -> Self {
1423 Self::new(p)
1424 }
1425}
1426
1427impl<T> fmt::Debug for AtomicPtr<T> {
1428 #[allow(clippy::missing_inline_in_public_items)] // fmt is not hot path
1429 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1430 // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.70.0/library/core/src/sync/atomic.rs#L2024
1431 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1432 }
1433}
1434
1435impl<T> fmt::Pointer for AtomicPtr<T> {
1436 #[allow(clippy::missing_inline_in_public_items)] // fmt is not hot path
1437 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1438 // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.70.0/library/core/src/sync/atomic.rs#L2024
1439 fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1440 }
1441}
1442
1443// UnwindSafe is implicitly implemented.
1444#[cfg(not(portable_atomic_no_core_unwind_safe))]
1445impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1446#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1447impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1448
1449impl<T> AtomicPtr<T> {
1450 /// Creates a new `AtomicPtr`.
1451 ///
1452 /// # Examples
1453 ///
1454 /// ```
1455 /// use portable_atomic::AtomicPtr;
1456 ///
1457 /// let ptr = &mut 5;
1458 /// let atomic_ptr = AtomicPtr::new(ptr);
1459 /// ```
1460 #[inline]
1461 #[must_use]
1462 pub const fn new(p: *mut T) -> Self {
1463 static_assert_layout!(AtomicPtr<()>, *mut ());
1464 Self { inner: imp::AtomicPtr::new(p) }
1465 }
1466
1467 /// Creates a new `AtomicPtr` from a pointer.
1468 ///
1469 /// # Safety
1470 ///
1471 /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1472 /// can be bigger than `align_of::<*mut T>()`).
1473 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1474 /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1475 /// behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1476 /// value (or vice-versa).
1477 /// * In other words, time periods where the value is accessed atomically may not overlap
1478 /// with periods where the value is accessed non-atomically.
1479 /// * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1480 /// duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1481 /// * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1482 /// from the same thread.
1483 /// * If this atomic type is *not* lock-free:
1484 /// * Any accesses to the value behind `ptr` must have a happens-before relationship
1485 /// with accesses via the returned value (or vice-versa).
1486 /// * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1487 /// be compatible with operations performed by this atomic type.
1488 /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1489 /// these are not supported by the memory model.
1490 ///
1491 /// [valid]: core::ptr#safety
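    ///
    /// # Examples
    ///
    /// A minimal sketch; the location behind `ptr` is only accessed through the
    /// returned reference, so the requirements above are trivially satisfied:
    ///
    /// ```
    /// use portable_atomic::{AtomicPtr, Ordering};
    ///
    /// let mut data = 10;
    /// let mut ptr: *mut i32 = &mut data;
    /// // SAFETY: `ptr` is valid, suitably aligned, and only accessed atomically
    /// // through the returned reference.
    /// let atomic = unsafe { AtomicPtr::from_ptr(&mut ptr) };
    /// assert_eq!(unsafe { *atomic.load(Ordering::Relaxed) }, 10);
    /// ```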
1492 #[inline]
1493 #[must_use]
1494 pub unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1495 #[allow(clippy::cast_ptr_alignment)]
1496 // SAFETY: guaranteed by the caller
1497 unsafe { &*(ptr as *mut Self) }
1498 }
1499
1500 /// Returns `true` if operations on values of this type are lock-free.
1501 ///
1502 /// If the compiler or the platform doesn't support the necessary
1503 /// atomic instructions, global locks for every potentially
1504 /// concurrent atomic operation will be used.
1505 ///
1506 /// # Examples
1507 ///
1508 /// ```
1509 /// use portable_atomic::AtomicPtr;
1510 ///
1511 /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1512 /// ```
1513 #[inline]
1514 #[must_use]
1515 pub fn is_lock_free() -> bool {
1516 <imp::AtomicPtr<T>>::is_lock_free()
1517 }
1518
1519 /// Returns `true` if operations on values of this type are lock-free.
1520 ///
1521 /// If the compiler or the platform doesn't support the necessary
1522 /// atomic instructions, global locks for every potentially
1523 /// concurrent atomic operation will be used.
1524 ///
1525 /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1526 /// this type may be lock-free even if the function returns false.
1527 ///
1528 /// # Examples
1529 ///
1530 /// ```
1531 /// use portable_atomic::AtomicPtr;
1532 ///
1533 /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1534 /// ```
1535 #[inline]
1536 #[must_use]
1537 pub const fn is_always_lock_free() -> bool {
1538 <imp::AtomicPtr<T>>::is_always_lock_free()
1539 }
1540
1541 /// Returns a mutable reference to the underlying pointer.
1542 ///
1543 /// This is safe because the mutable reference guarantees that no other threads are
1544 /// concurrently accessing the atomic data.
1545 ///
1546 /// # Examples
1547 ///
1548 /// ```
1549 /// use portable_atomic::{AtomicPtr, Ordering};
1550 ///
1551 /// let mut data = 10;
1552 /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1553 /// let mut other_data = 5;
1554 /// *atomic_ptr.get_mut() = &mut other_data;
1555 /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1556 /// ```
1557 #[inline]
1558 pub fn get_mut(&mut self) -> &mut *mut T {
1559 self.inner.get_mut()
1560 }
1561
1562 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1563 // https://github.com/rust-lang/rust/issues/76314
1564
1565 /// Consumes the atomic and returns the contained value.
1566 ///
1567 /// This is safe because passing `self` by value guarantees that no other threads are
1568 /// concurrently accessing the atomic data.
1569 ///
1570 /// # Examples
1571 ///
1572 /// ```
1573 /// use portable_atomic::AtomicPtr;
1574 ///
1575 /// let mut data = 5;
1576 /// let atomic_ptr = AtomicPtr::new(&mut data);
1577 /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1578 /// ```
1579 #[inline]
1580 pub fn into_inner(self) -> *mut T {
1581 self.inner.into_inner()
1582 }
1583
1584 /// Loads a value from the pointer.
1585 ///
1586 /// `load` takes an [`Ordering`] argument which describes the memory ordering
1587 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1588 ///
1589 /// # Panics
1590 ///
1591 /// Panics if `order` is [`Release`] or [`AcqRel`].
1592 ///
1593 /// # Examples
1594 ///
1595 /// ```
1596 /// use portable_atomic::{AtomicPtr, Ordering};
1597 ///
1598 /// let ptr = &mut 5;
1599 /// let some_ptr = AtomicPtr::new(ptr);
1600 ///
1601 /// let value = some_ptr.load(Ordering::Relaxed);
1602 /// ```
1603 #[inline]
1604 #[cfg_attr(
1605 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1606 track_caller
1607 )]
1608 pub fn load(&self, order: Ordering) -> *mut T {
1609 self.inner.load(order)
1610 }
1611
1612 /// Stores a value into the pointer.
1613 ///
1614 /// `store` takes an [`Ordering`] argument which describes the memory ordering
1615 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1616 ///
1617 /// # Panics
1618 ///
1619 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1620 ///
1621 /// # Examples
1622 ///
1623 /// ```
1624 /// use portable_atomic::{AtomicPtr, Ordering};
1625 ///
1626 /// let ptr = &mut 5;
1627 /// let some_ptr = AtomicPtr::new(ptr);
1628 ///
1629 /// let other_ptr = &mut 10;
1630 ///
1631 /// some_ptr.store(other_ptr, Ordering::Relaxed);
1632 /// ```
1633 #[inline]
1634 #[cfg_attr(
1635 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1636 track_caller
1637 )]
1638 pub fn store(&self, ptr: *mut T, order: Ordering) {
1639 self.inner.store(ptr, order);
1640 }
1641
1642 cfg_has_atomic_cas! {
1643 /// Stores a value into the pointer, returning the previous value.
1644 ///
1645 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1646 /// of this operation. All ordering modes are possible. Note that using
1647 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1648 /// using [`Release`] makes the load part [`Relaxed`].
1649 ///
1650 /// # Examples
1651 ///
1652 /// ```
1653 /// use portable_atomic::{AtomicPtr, Ordering};
1654 ///
1655 /// let ptr = &mut 5;
1656 /// let some_ptr = AtomicPtr::new(ptr);
1657 ///
1658 /// let other_ptr = &mut 10;
1659 ///
1660 /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1661 /// ```
1662 #[inline]
1663 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1664 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1665 self.inner.swap(ptr, order)
1666 }
1667
1668 /// Stores a value into the pointer if the current value is the same as the `current` value.
1669 ///
1670 /// The return value is a result indicating whether the new value was written and containing
1671 /// the previous value. On success this value is guaranteed to be equal to `current`.
1672 ///
1673 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1674 /// ordering of this operation. `success` describes the required ordering for the
1675 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1676 /// `failure` describes the required ordering for the load operation that takes place when
1677 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1678 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1679 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1680 ///
1681 /// # Panics
1682 ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1684 ///
1685 /// # Examples
1686 ///
1687 /// ```
1688 /// use portable_atomic::{AtomicPtr, Ordering};
1689 ///
1690 /// let ptr = &mut 5;
1691 /// let some_ptr = AtomicPtr::new(ptr);
1692 ///
1693 /// let other_ptr = &mut 10;
1694 ///
1695 /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
1696 /// ```
1697 #[inline]
1698 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1699 #[cfg_attr(
1700 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1701 track_caller
1702 )]
1703 pub fn compare_exchange(
1704 &self,
1705 current: *mut T,
1706 new: *mut T,
1707 success: Ordering,
1708 failure: Ordering,
1709 ) -> Result<*mut T, *mut T> {
1710 self.inner.compare_exchange(current, new, success, failure)
1711 }
1712
1713 /// Stores a value into the pointer if the current value is the same as the `current` value.
1714 ///
1715 /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1716 /// comparison succeeds, which can result in more efficient code on some platforms. The
1717 /// return value is a result indicating whether the new value was written and containing the
1718 /// previous value.
1719 ///
1720 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1721 /// ordering of this operation. `success` describes the required ordering for the
1722 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1723 /// `failure` describes the required ordering for the load operation that takes place when
1724 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1725 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1726 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1727 ///
1728 /// # Panics
1729 ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1731 ///
1732 /// # Examples
1733 ///
1734 /// ```
1735 /// use portable_atomic::{AtomicPtr, Ordering};
1736 ///
1737 /// let some_ptr = AtomicPtr::new(&mut 5);
1738 ///
1739 /// let new = &mut 10;
1740 /// let mut old = some_ptr.load(Ordering::Relaxed);
1741 /// loop {
1742 /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1743 /// Ok(_) => break,
1744 /// Err(x) => old = x,
1745 /// }
1746 /// }
1747 /// ```
1748 #[inline]
1749 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1750 #[cfg_attr(
1751 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1752 track_caller
1753 )]
1754 pub fn compare_exchange_weak(
1755 &self,
1756 current: *mut T,
1757 new: *mut T,
1758 success: Ordering,
1759 failure: Ordering,
1760 ) -> Result<*mut T, *mut T> {
1761 self.inner.compare_exchange_weak(current, new, success, failure)
1762 }
1763
1764 /// Fetches the value, and applies a function to it that returns an optional
1765 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1766 /// returned `Some(_)`, else `Err(previous_value)`.
1767 ///
1768 /// Note: This may call the function multiple times if the value has been
1769 /// changed from other threads in the meantime, as long as the function
1770 /// returns `Some(_)`, but the function will have been applied only once to
1771 /// the stored value.
1772 ///
1773 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1774 /// ordering of this operation. The first describes the required ordering for
1775 /// when the operation finally succeeds while the second describes the
1776 /// required ordering for loads. These correspond to the success and failure
1777 /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1778 ///
1779 /// Using [`Acquire`] as success ordering makes the store part of this
1780 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1781 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1782 /// [`Acquire`] or [`Relaxed`].
1783 ///
1784 /// # Panics
1785 ///
    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1787 ///
1788 /// # Considerations
1789 ///
1790 /// This method is not magic; it is not provided by the hardware.
1791 /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1792 /// and suffers from the same drawbacks.
1793 /// In particular, this method will not circumvent the [ABA Problem].
1794 ///
1795 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1796 ///
1797 /// # Examples
1798 ///
1799 /// ```rust
1800 /// use portable_atomic::{AtomicPtr, Ordering};
1801 ///
1802 /// let ptr: *mut _ = &mut 5;
1803 /// let some_ptr = AtomicPtr::new(ptr);
1804 ///
1805 /// let new: *mut _ = &mut 10;
1806 /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1807 /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1808 /// if x == ptr {
1809 /// Some(new)
1810 /// } else {
1811 /// None
1812 /// }
1813 /// });
1814 /// assert_eq!(result, Ok(ptr));
1815 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1816 /// ```
1817 #[inline]
1818 #[cfg_attr(
1819 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1820 track_caller
1821 )]
1822 pub fn fetch_update<F>(
1823 &self,
1824 set_order: Ordering,
1825 fetch_order: Ordering,
1826 mut f: F,
1827 ) -> Result<*mut T, *mut T>
1828 where
1829 F: FnMut(*mut T) -> Option<*mut T>,
1830 {
1831 let mut prev = self.load(fetch_order);
1832 while let Some(next) = f(prev) {
1833 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1834 x @ Ok(_) => return x,
1835 Err(next_prev) => prev = next_prev,
1836 }
1837 }
1838 Err(prev)
1839 }
1840
1841 #[cfg(miri)]
1842 #[inline]
1843 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1844 fn fetch_update_<F>(&self, order: Ordering, mut f: F) -> *mut T
1845 where
1846 F: FnMut(*mut T) -> *mut T,
1847 {
1848 // This is a private function and all instances of `f` only operate on the value
1849 // loaded, so there is no need to synchronize the first load/failed CAS.
1850 let mut prev = self.load(Ordering::Relaxed);
1851 loop {
1852 let next = f(prev);
1853 match self.compare_exchange_weak(prev, next, order, Ordering::Relaxed) {
1854 Ok(x) => return x,
1855 Err(next_prev) => prev = next_prev,
1856 }
1857 }
1858 }
1859
1860 /// Offsets the pointer's address by adding `val` (in units of `T`),
1861 /// returning the previous pointer.
1862 ///
1863 /// This is equivalent to using [`wrapping_add`] to atomically perform the
1864 /// equivalent of `ptr = ptr.wrapping_add(val);`.
1865 ///
1866 /// This method operates in units of `T`, which means that it cannot be used
1867 /// to offset the pointer by an amount which is not a multiple of
1868 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1869 /// work with a deliberately misaligned pointer. In such cases, you may use
1870 /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
1871 ///
1872 /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
1873 /// memory ordering of this operation. All ordering modes are possible. Note
1874 /// that using [`Acquire`] makes the store part of this operation
1875 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1876 ///
1877 /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
1878 ///
1879 /// # Examples
1880 ///
1881 /// ```
1882 /// # #![allow(unstable_name_collisions)]
1883 /// use portable_atomic::{AtomicPtr, Ordering};
1884 /// use sptr::Strict; // stable polyfill for strict provenance
1885 ///
1886 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1887 /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
1888 /// // Note: units of `size_of::<i64>()`.
1889 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
1890 /// ```
1891 #[inline]
1892 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1893 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
1894 self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
1895 }
1896
1897 /// Offsets the pointer's address by subtracting `val` (in units of `T`),
1898 /// returning the previous pointer.
1899 ///
1900 /// This is equivalent to using [`wrapping_sub`] to atomically perform the
1901 /// equivalent of `ptr = ptr.wrapping_sub(val);`.
1902 ///
1903 /// This method operates in units of `T`, which means that it cannot be used
1904 /// to offset the pointer by an amount which is not a multiple of
1905 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
1906 /// work with a deliberately misaligned pointer. In such cases, you may use
1907 /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
1908 ///
1909 /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
1910 /// ordering of this operation. All ordering modes are possible. Note that
1911 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
1912 /// and using [`Release`] makes the load part [`Relaxed`].
1913 ///
1914 /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
1915 ///
1916 /// # Examples
1917 ///
1918 /// ```
1919 /// use portable_atomic::{AtomicPtr, Ordering};
1920 ///
1921 /// let array = [1i32, 2i32];
1922 /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
1923 ///
    /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
1925 /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
1926 /// ```
1927 #[inline]
1928 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1929 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
1930 self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
1931 }
1932
1933 /// Offsets the pointer's address by adding `val` *bytes*, returning the
1934 /// previous pointer.
1935 ///
1936 /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
1937 /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
1938 ///
1939 /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
1940 /// memory ordering of this operation. All ordering modes are possible. Note
1941 /// that using [`Acquire`] makes the store part of this operation
1942 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1943 ///
1944 /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
1945 /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
1946 ///
1947 /// # Examples
1948 ///
1949 /// ```
1950 /// # #![allow(unstable_name_collisions)]
1951 /// use portable_atomic::{AtomicPtr, Ordering};
1952 /// use sptr::Strict; // stable polyfill for strict provenance
1953 ///
1954 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
1955 /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
1956 /// // Note: in units of bytes, not `size_of::<i64>()`.
1957 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
1958 /// ```
1959 #[inline]
1960 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1961 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
1962 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
1963 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
1964 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
1965 // compatible and is sound.
1966 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
1967 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
1968 #[cfg(miri)]
1969 {
1970 self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_add(val)))
1971 }
1972 #[cfg(not(miri))]
1973 {
1974 self.as_atomic_usize().fetch_add(val, order) as *mut T
1975 }
1976 }
1977
1978 /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
1979 /// previous pointer.
1980 ///
1981 /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
1982 /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
1983 ///
1984 /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
1985 /// memory ordering of this operation. All ordering modes are possible. Note
1986 /// that using [`Acquire`] makes the store part of this operation
1987 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
1988 ///
1989 /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
1990 /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
1991 ///
1992 /// # Examples
1993 ///
1994 /// ```
1995 /// # #![allow(unstable_name_collisions)]
1996 /// use portable_atomic::{AtomicPtr, Ordering};
1997 /// use sptr::Strict; // stable polyfill for strict provenance
1998 ///
1999 /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2000 /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2001 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2002 /// ```
2003 #[inline]
2004 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2005 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2006 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2007 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2008 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2009 // compatible and is sound.
2010 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2011 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2012 #[cfg(miri)]
2013 {
2014 self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_sub(val)))
2015 }
2016 #[cfg(not(miri))]
2017 {
2018 self.as_atomic_usize().fetch_sub(val, order) as *mut T
2019 }
2020 }
2021
2022 /// Performs a bitwise "or" operation on the address of the current pointer,
2023 /// and the argument `val`, and stores a pointer with provenance of the
2024 /// current pointer and the resulting address.
2025 ///
2026 /// This is equivalent to using [`map_addr`] to atomically perform
2027 /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2028 /// pointer schemes to atomically set tag bits.
2029 ///
2030 /// **Caveat**: This operation returns the previous value. To compute the
2031 /// stored value without losing provenance, you may use [`map_addr`]. For
2032 /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2033 ///
2034 /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2035 /// ordering of this operation. All ordering modes are possible. Note that
2036 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2037 /// and using [`Release`] makes the load part [`Relaxed`].
2038 ///
2039 /// This API and its claimed semantics are part of the Strict Provenance
2040 /// experiment, see the [module documentation for `ptr`][core::ptr] for
2041 /// details.
2042 ///
2043 /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2044 ///
2045 /// # Examples
2046 ///
2047 /// ```
2048 /// # #![allow(unstable_name_collisions)]
2049 /// use portable_atomic::{AtomicPtr, Ordering};
2050 /// use sptr::Strict; // stable polyfill for strict provenance
2051 ///
2052 /// let pointer = &mut 3i64 as *mut i64;
2053 ///
2054 /// let atom = AtomicPtr::<i64>::new(pointer);
2055 /// // Tag the bottom bit of the pointer.
2056 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2057 /// // Extract and untag.
2058 /// let tagged = atom.load(Ordering::Relaxed);
2059 /// assert_eq!(tagged.addr() & 1, 1);
2060 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2061 /// ```
2062 #[inline]
2063 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2064 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2065 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2066 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2067 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2068 // compatible and is sound.
2069 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2070 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2071 #[cfg(miri)]
2072 {
2073 self.fetch_update_(order, |x| strict::map_addr(x, |x| x | val))
2074 }
2075 #[cfg(not(miri))]
2076 {
2077 self.as_atomic_usize().fetch_or(val, order) as *mut T
2078 }
2079 }
2080
2081 /// Performs a bitwise "and" operation on the address of the current
2082 /// pointer, and the argument `val`, and stores a pointer with provenance of
2083 /// the current pointer and the resulting address.
2084 ///
2085 /// This is equivalent to using [`map_addr`] to atomically perform
2086 /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2087 /// pointer schemes to atomically unset tag bits.
2088 ///
2089 /// **Caveat**: This operation returns the previous value. To compute the
2090 /// stored value without losing provenance, you may use [`map_addr`]. For
2091 /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2092 ///
2093 /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2094 /// ordering of this operation. All ordering modes are possible. Note that
2095 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2096 /// and using [`Release`] makes the load part [`Relaxed`].
2097 ///
2098 /// This API and its claimed semantics are part of the Strict Provenance
2099 /// experiment, see the [module documentation for `ptr`][core::ptr] for
2100 /// details.
2101 ///
2102 /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2103 ///
2104 /// # Examples
2105 ///
2106 /// ```
2107 /// # #![allow(unstable_name_collisions)]
2108 /// use portable_atomic::{AtomicPtr, Ordering};
2109 /// use sptr::Strict; // stable polyfill for strict provenance
2110 ///
2111 /// let pointer = &mut 3i64 as *mut i64;
2112 /// // A tagged pointer
2113 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2114 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2115 /// // Untag, and extract the previously tagged pointer.
2116 /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2117 /// assert_eq!(untagged, pointer);
2118 /// ```
2119 #[inline]
2120 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2121 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2122 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2123 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2124 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2125 // compatible and is sound.
2126 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2127 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2128 #[cfg(miri)]
2129 {
2130 self.fetch_update_(order, |x| strict::map_addr(x, |x| x & val))
2131 }
2132 #[cfg(not(miri))]
2133 {
2134 self.as_atomic_usize().fetch_and(val, order) as *mut T
2135 }
2136 }
2137
2138 /// Performs a bitwise "xor" operation on the address of the current
2139 /// pointer, and the argument `val`, and stores a pointer with provenance of
2140 /// the current pointer and the resulting address.
2141 ///
2142 /// This is equivalent to using [`map_addr`] to atomically perform
2143 /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2144 /// pointer schemes to atomically toggle tag bits.
2145 ///
2146 /// **Caveat**: This operation returns the previous value. To compute the
2147 /// stored value without losing provenance, you may use [`map_addr`]. For
2148 /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2149 ///
2150 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2151 /// ordering of this operation. All ordering modes are possible. Note that
2152 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2153 /// and using [`Release`] makes the load part [`Relaxed`].
2154 ///
2155 /// This API and its claimed semantics are part of the Strict Provenance
2156 /// experiment, see the [module documentation for `ptr`][core::ptr] for
2157 /// details.
2158 ///
2159 /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2160 ///
2161 /// # Examples
2162 ///
2163 /// ```
2164 /// # #![allow(unstable_name_collisions)]
2165 /// use portable_atomic::{AtomicPtr, Ordering};
2166 /// use sptr::Strict; // stable polyfill for strict provenance
2167 ///
2168 /// let pointer = &mut 3i64 as *mut i64;
2169 /// let atom = AtomicPtr::<i64>::new(pointer);
2170 ///
2171 /// // Toggle a tag bit on the pointer.
2172 /// atom.fetch_xor(1, Ordering::Relaxed);
2173 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2174 /// ```
2175 #[inline]
2176 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2177 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2178 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2179 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2180 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2181 // compatible and is sound.
2182 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2183 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2184 #[cfg(miri)]
2185 {
2186 self.fetch_update_(order, |x| strict::map_addr(x, |x| x ^ val))
2187 }
2188 #[cfg(not(miri))]
2189 {
2190 self.as_atomic_usize().fetch_xor(val, order) as *mut T
2191 }
2192 }
2193
2194 /// Sets the bit at the specified bit-position to 1.
2195 ///
2196 /// Returns `true` if the specified bit was previously set to 1.
2197 ///
2198 /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2199 /// of this operation. All ordering modes are possible. Note that using
2200 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2201 /// using [`Release`] makes the load part [`Relaxed`].
2202 ///
    /// This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
2204 ///
2205 /// # Examples
2206 ///
2207 /// ```
2208 /// # #![allow(unstable_name_collisions)]
2209 /// use portable_atomic::{AtomicPtr, Ordering};
2210 /// use sptr::Strict; // stable polyfill for strict provenance
2211 ///
2212 /// let pointer = &mut 3i64 as *mut i64;
2213 ///
2214 /// let atom = AtomicPtr::<i64>::new(pointer);
2215 /// // Tag the bottom bit of the pointer.
2216 /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2217 /// // Extract and untag.
2218 /// let tagged = atom.load(Ordering::Relaxed);
2219 /// assert_eq!(tagged.addr() & 1, 1);
2220 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2221 /// ```
2222 #[inline]
2223 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2224 pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2225 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2226 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2227 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2228 // compatible and is sound.
2229 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2230 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2231 #[cfg(miri)]
2232 {
2233 let mask = 1_usize.wrapping_shl(bit);
2234 self.fetch_or(mask, order) as usize & mask != 0
2235 }
2236 #[cfg(not(miri))]
2237 {
2238 self.as_atomic_usize().bit_set(bit, order)
2239 }
2240 }
2241
    /// Clears the bit at the specified bit-position to 0.
2243 ///
2244 /// Returns `true` if the specified bit was previously set to 1.
2245 ///
2246 /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2247 /// of this operation. All ordering modes are possible. Note that using
2248 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2249 /// using [`Release`] makes the load part [`Relaxed`].
2250 ///
    /// This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
2252 ///
2253 /// # Examples
2254 ///
2255 /// ```
2256 /// # #![allow(unstable_name_collisions)]
2257 /// use portable_atomic::{AtomicPtr, Ordering};
2258 /// use sptr::Strict; // stable polyfill for strict provenance
2259 ///
2260 /// let pointer = &mut 3i64 as *mut i64;
2261 /// // A tagged pointer
2262 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2263 /// assert!(atom.bit_set(0, Ordering::Relaxed));
2264 /// // Untag
2265 /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2266 /// ```
2267 #[inline]
2268 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2269 pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2270 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2271 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2272 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2273 // compatible and is sound.
2274 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2275 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2276 #[cfg(miri)]
2277 {
2278 let mask = 1_usize.wrapping_shl(bit);
2279 self.fetch_and(!mask, order) as usize & mask != 0
2280 }
2281 #[cfg(not(miri))]
2282 {
2283 self.as_atomic_usize().bit_clear(bit, order)
2284 }
2285 }
2286
2287 /// Toggles the bit at the specified bit-position.
2288 ///
2289 /// Returns `true` if the specified bit was previously set to 1.
2290 ///
2291 /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2292 /// of this operation. All ordering modes are possible. Note that using
2293 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2294 /// using [`Release`] makes the load part [`Relaxed`].
2295 ///
    /// This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
2297 ///
2298 /// # Examples
2299 ///
2300 /// ```
2301 /// # #![allow(unstable_name_collisions)]
2302 /// use portable_atomic::{AtomicPtr, Ordering};
2303 /// use sptr::Strict; // stable polyfill for strict provenance
2304 ///
2305 /// let pointer = &mut 3i64 as *mut i64;
2306 /// let atom = AtomicPtr::<i64>::new(pointer);
2307 ///
2308 /// // Toggle a tag bit on the pointer.
2309 /// atom.bit_toggle(0, Ordering::Relaxed);
2310 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2311 /// ```
2312 #[inline]
2313 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2314 pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2315 // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2316 // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2317 // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2318 // compatible and is sound.
2319 // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2320 // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2321 #[cfg(miri)]
2322 {
2323 let mask = 1_usize.wrapping_shl(bit);
2324 self.fetch_xor(mask, order) as usize & mask != 0
2325 }
2326 #[cfg(not(miri))]
2327 {
2328 self.as_atomic_usize().bit_toggle(bit, order)
2329 }
2330 }
2331
2332 #[cfg(not(miri))]
2333 #[inline]
2334 fn as_atomic_usize(&self) -> &AtomicUsize {
2335 static_assert!(
2336 core::mem::size_of::<AtomicPtr<()>>() == core::mem::size_of::<AtomicUsize>()
2337 );
2338 static_assert!(
2339 core::mem::align_of::<AtomicPtr<()>>() == core::mem::align_of::<AtomicUsize>()
2340 );
2341 // SAFETY: AtomicPtr and AtomicUsize have the same layout,
2342 // and both access data in the same way.
2343 unsafe { &*(self as *const Self as *const AtomicUsize) }
2344 }
2345 } // cfg_has_atomic_cas!
2346
2347 const_fn! {
2348 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2349 /// Returns a mutable pointer to the underlying pointer.
2350 ///
2351 /// Returning an `*mut` pointer from a shared reference to this atomic is
2352 /// safe because the atomic types work with interior mutability. Any use of
2353 /// the returned raw pointer requires an `unsafe` block and has to uphold
2354 /// the safety requirements. If there is concurrent access, note the following
2355 /// additional safety requirements:
2356 ///
2357 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2358 /// operations on it must be atomic.
2359 /// - Otherwise, any concurrent operations on it must be compatible with
2360 /// operations performed by this atomic type.
2361 ///
2362 /// This is `const fn` on Rust 1.58+.
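    ///
    /// # Examples
    ///
    /// A minimal usage sketch; there is no concurrent access here, so plain writes
    /// through the raw pointer are sound:
    ///
    /// ```
    /// use portable_atomic::{AtomicPtr, Ordering};
    ///
    /// let mut data = 10;
    /// let atomic_ptr = AtomicPtr::new(&mut data);
    /// let mut other_data = 5;
    /// // SAFETY: no other thread accesses `atomic_ptr` while we write through the raw pointer.
    /// unsafe { atomic_ptr.as_ptr().write(&mut other_data) };
    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
    /// ```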
2363 #[inline]
2364 pub const fn as_ptr(&self) -> *mut *mut T {
2365 self.inner.as_ptr()
2366 }
2367 }
2368}
2369} // cfg_has_atomic_ptr!
2370
2371macro_rules! atomic_int {
2372 (AtomicU32, $int_type:ident, $align:literal) => {
2373 atomic_int!(int, AtomicU32, $int_type, $align);
2374 #[cfg(feature = "float")]
2375 atomic_int!(float, AtomicF32, f32, AtomicU32, $int_type, $align);
2376 };
2377 (AtomicU64, $int_type:ident, $align:literal) => {
2378 atomic_int!(int, AtomicU64, $int_type, $align);
2379 #[cfg(feature = "float")]
2380 atomic_int!(float, AtomicF64, f64, AtomicU64, $int_type, $align);
2381 };
2382 ($atomic_type:ident, $int_type:ident, $align:literal) => {
2383 atomic_int!(int, $atomic_type, $int_type, $align);
2384 };
2385
2386 // Atomic{I,U}* impls
2387 (int, $atomic_type:ident, $int_type:ident, $align:literal) => {
2388 doc_comment! {
2389 concat!("An integer type which can be safely shared between threads.
2390
2391This type has the same in-memory representation as the underlying integer type,
2392[`", stringify!($int_type), "`].
2393
2394If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2395"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2396"`. If the platform supports it but the compiler does not, atomic operations are implemented using
inline assembly. Otherwise, this type synchronizes using global locks.
2398You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2399atomic instructions or locks will be used.
2400"
2401 ),
2402 // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
2403 // will show clearer docs.
2404 #[repr(C, align($align))]
2405 pub struct $atomic_type {
2406 inner: imp::$atomic_type,
2407 }
2408 }
2409
2410 impl Default for $atomic_type {
2411 #[inline]
2412 fn default() -> Self {
2413 Self::new($int_type::default())
2414 }
2415 }
2416
2417 impl From<$int_type> for $atomic_type {
2418 #[inline]
2419 fn from(v: $int_type) -> Self {
2420 Self::new(v)
2421 }
2422 }
2423
2424 // UnwindSafe is implicitly implemented.
2425 #[cfg(not(portable_atomic_no_core_unwind_safe))]
2426 impl core::panic::RefUnwindSafe for $atomic_type {}
2427 #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2428 impl std::panic::RefUnwindSafe for $atomic_type {}
2429
2430 impl_debug_and_serde!($atomic_type);
2431
2432 impl $atomic_type {
2433 doc_comment! {
2434 concat!(
2435 "Creates a new atomic integer.
2436
2437# Examples
2438
2439```
2440use portable_atomic::", stringify!($atomic_type), ";
2441
2442let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2443```"
2444 ),
2445 #[inline]
2446 #[must_use]
2447 pub const fn new(v: $int_type) -> Self {
2448 static_assert_layout!($atomic_type, $int_type);
2449 Self { inner: imp::$atomic_type::new(v) }
2450 }
2451 }
2452
2453 doc_comment! {
2454 concat!("Creates a new reference to an atomic integer from a pointer.
2455
2456# Safety
2457
2458* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2459 can be bigger than `align_of::<", stringify!($int_type), ">()`).
2460* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2461* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2462 behind `ptr` must have a happens-before relationship with atomic accesses via
2463 the returned value (or vice-versa).
2464 * In other words, time periods where the value is accessed atomically may not
2465 overlap with periods where the value is accessed non-atomically.
2466 * This requirement is trivially satisfied if `ptr` is never used non-atomically
2467 for the duration of lifetime `'a`. Most use cases should be able to follow
2468 this guideline.
2469 * This requirement is also trivially satisfied if all accesses (atomic or not) are
2470 done from the same thread.
2471* If this atomic type is *not* lock-free:
2472 * Any accesses to the value behind `ptr` must have a happens-before relationship
2473 with accesses via the returned value (or vice-versa).
2474 * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2475 be compatible with operations performed by this atomic type.
2476* This method must not be used to create overlapping or mixed-size atomic
2477 accesses, as these are not supported by the memory model.
2478
2479[valid]: core::ptr#safety"),
2480 #[inline]
2481 #[must_use]
2482 pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2483 #[allow(clippy::cast_ptr_alignment)]
2484 // SAFETY: guaranteed by the caller
2485 unsafe { &*(ptr as *mut Self) }
2486 }
2487 }
2488
2489 doc_comment! {
2490 concat!("Returns `true` if operations on values of this type are lock-free.
2491
2492If the compiler or the platform doesn't support the necessary
2493atomic instructions, global locks for every potentially
2494concurrent atomic operation will be used.
2495
2496# Examples
2497
2498```
2499use portable_atomic::", stringify!($atomic_type), ";
2500
2501let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2502```"),
2503 #[inline]
2504 #[must_use]
2505 pub fn is_lock_free() -> bool {
2506 <imp::$atomic_type>::is_lock_free()
2507 }
2508 }
2509
2510 doc_comment! {
2511 concat!("Returns `true` if operations on values of this type are lock-free.
2512
2513If the compiler or the platform doesn't support the necessary
2514atomic instructions, global locks for every potentially
2515concurrent atomic operation will be used.
2516
2517**Note:** If the atomic operation relies on dynamic CPU feature detection,
2518this type may be lock-free even if the function returns false.
2519
2520# Examples
2521
2522```
2523use portable_atomic::", stringify!($atomic_type), ";
2524
2525const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2526```"),
2527 #[inline]
2528 #[must_use]
2529 pub const fn is_always_lock_free() -> bool {
2530 <imp::$atomic_type>::is_always_lock_free()
2531 }
2532 }
2533
2534 doc_comment! {
                concat!("Returns a mutable reference to the underlying integer.

2536This is safe because the mutable reference guarantees that no other threads are
2537concurrently accessing the atomic data.
2538
2539# Examples
2540
2541```
2542use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2543
2544let mut some_var = ", stringify!($atomic_type), "::new(10);
2545assert_eq!(*some_var.get_mut(), 10);
2546*some_var.get_mut() = 5;
2547assert_eq!(some_var.load(Ordering::SeqCst), 5);
2548```"),
2549 #[inline]
2550 pub fn get_mut(&mut self) -> &mut $int_type {
2551 self.inner.get_mut()
2552 }
2553 }
2554
2555 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
2556 // https://github.com/rust-lang/rust/issues/76314
2557
2558 doc_comment! {
2559 concat!("Consumes the atomic and returns the contained value.
2560
2561This is safe because passing `self` by value guarantees that no other threads are
2562concurrently accessing the atomic data.
2563
2564# Examples
2565
2566```
2567use portable_atomic::", stringify!($atomic_type), ";
2568
2569let some_var = ", stringify!($atomic_type), "::new(5);
2570assert_eq!(some_var.into_inner(), 5);
2571```"),
2572 #[inline]
2573 pub fn into_inner(self) -> $int_type {
2574 self.inner.into_inner()
2575 }
2576 }
2577
2578 doc_comment! {
2579 concat!("Loads a value from the atomic integer.
2580
2581`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2582Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2583
2584# Panics
2585
2586Panics if `order` is [`Release`] or [`AcqRel`].
2587
2588# Examples
2589
2590```
2591use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2592
2593let some_var = ", stringify!($atomic_type), "::new(5);
2594
2595assert_eq!(some_var.load(Ordering::Relaxed), 5);
2596```"),
2597 #[inline]
2598 #[cfg_attr(
2599 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2600 track_caller
2601 )]
2602 pub fn load(&self, order: Ordering) -> $int_type {
2603 self.inner.load(order)
2604 }
2605 }
2606
2607 doc_comment! {
2608 concat!("Stores a value into the atomic integer.
2609
2610`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2611Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2612
2613# Panics
2614
2615Panics if `order` is [`Acquire`] or [`AcqRel`].
2616
2617# Examples
2618
2619```
2620use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2621
2622let some_var = ", stringify!($atomic_type), "::new(5);
2623
2624some_var.store(10, Ordering::Relaxed);
2625assert_eq!(some_var.load(Ordering::Relaxed), 10);
2626```"),
2627 #[inline]
2628 #[cfg_attr(
2629 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2630 track_caller
2631 )]
2632 pub fn store(&self, val: $int_type, order: Ordering) {
2633 self.inner.store(val, order)
2634 }
2635 }
2636
2637 cfg_has_atomic_cas! {
2638 doc_comment! {
2639 concat!("Stores a value into the atomic integer, returning the previous value.
2640
2641`swap` takes an [`Ordering`] argument which describes the memory ordering
2642of this operation. All ordering modes are possible. Note that using
2643[`Acquire`] makes the store part of this operation [`Relaxed`], and
2644using [`Release`] makes the load part [`Relaxed`].
2645
2646# Examples
2647
2648```
2649use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2650
2651let some_var = ", stringify!($atomic_type), "::new(5);
2652
2653assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2654```"),
2655 #[inline]
2656 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2657 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2658 self.inner.swap(val, order)
2659 }
2660 }
2661
2662 doc_comment! {
2663 concat!("Stores a value into the atomic integer if the current value is the same as
2664the `current` value.
2665
2666The return value is a result indicating whether the new value was written and
2667containing the previous value. On success this value is guaranteed to be equal to
2668`current`.
2669
2670`compare_exchange` takes two [`Ordering`] arguments to describe the memory
2671ordering of this operation. `success` describes the required ordering for the
2672read-modify-write operation that takes place if the comparison with `current` succeeds.
2673`failure` describes the required ordering for the load operation that takes place when
2674the comparison fails. Using [`Acquire`] as success ordering makes the store part
2675of this operation [`Relaxed`], and using [`Release`] makes the successful load
2676[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2677
2678# Panics
2679
Panics if `failure` is [`Release`] or [`AcqRel`].
2681
2682# Examples
2683
2684```
2685use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2686
2687let some_var = ", stringify!($atomic_type), "::new(5);
2688
2689assert_eq!(
2690 some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
2691 Ok(5),
2692);
2693assert_eq!(some_var.load(Ordering::Relaxed), 10);
2694
2695assert_eq!(
2696 some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
2697 Err(10),
2698);
2699assert_eq!(some_var.load(Ordering::Relaxed), 10);
2700```"),
2701 #[inline]
2702 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2703 #[cfg_attr(
2704 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2705 track_caller
2706 )]
2707 pub fn compare_exchange(
2708 &self,
2709 current: $int_type,
2710 new: $int_type,
2711 success: Ordering,
2712 failure: Ordering,
2713 ) -> Result<$int_type, $int_type> {
2714 self.inner.compare_exchange(current, new, success, failure)
2715 }
2716 }
2717
2718 doc_comment! {
2719 concat!("Stores a value into the atomic integer if the current value is the same as
the `current` value.

Unlike [`compare_exchange`](Self::compare_exchange),
this function is allowed to spuriously fail even
2723when the comparison succeeds, which can result in more efficient code on some
2724platforms. The return value is a result indicating whether the new value was
2725written and containing the previous value.
2726
2727`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2728ordering of this operation. `success` describes the required ordering for the
2729read-modify-write operation that takes place if the comparison with `current` succeeds.
2730`failure` describes the required ordering for the load operation that takes place when
2731the comparison fails. Using [`Acquire`] as success ordering makes the store part
2732of this operation [`Relaxed`], and using [`Release`] makes the successful load
2733[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2734
2735# Panics
2736
Panics if `failure` is [`Release`] or [`AcqRel`].
2738
2739# Examples
2740
2741```
2742use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2743
2744let val = ", stringify!($atomic_type), "::new(4);
2745
2746let mut old = val.load(Ordering::Relaxed);
2747loop {
2748 let new = old * 2;
2749 match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
2750 Ok(_) => break,
2751 Err(x) => old = x,
2752 }
2753}
2754```"),
2755 #[inline]
2756 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2757 #[cfg_attr(
2758 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2759 track_caller
2760 )]
2761 pub fn compare_exchange_weak(
2762 &self,
2763 current: $int_type,
2764 new: $int_type,
2765 success: Ordering,
2766 failure: Ordering,
2767 ) -> Result<$int_type, $int_type> {
2768 self.inner.compare_exchange_weak(current, new, success, failure)
2769 }
2770 }
2771
2772 doc_comment! {
2773 concat!("Adds to the current value, returning the previous value.
2774
2775This operation wraps around on overflow.
2776
2777`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
2778of this operation. All ordering modes are possible. Note that using
2779[`Acquire`] makes the store part of this operation [`Relaxed`], and
2780using [`Release`] makes the load part [`Relaxed`].
2781
2782# Examples
2783
2784```
2785use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2786
2787let foo = ", stringify!($atomic_type), "::new(0);
2788assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
2789assert_eq!(foo.load(Ordering::SeqCst), 10);
2790```"),
2791 #[inline]
2792 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2793 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
2794 self.inner.fetch_add(val, order)
2795 }
2796 }
2797
2798 doc_comment! {
2799 concat!("Adds to the current value.
2800
2801This operation wraps around on overflow.
2802
2803Unlike `fetch_add`, this does not return the previous value.
2804
2805`add` takes an [`Ordering`] argument which describes the memory ordering
2806of this operation. All ordering modes are possible. Note that using
2807[`Acquire`] makes the store part of this operation [`Relaxed`], and
2808using [`Release`] makes the load part [`Relaxed`].
2809
2810This function may generate more efficient code than `fetch_add` on some platforms.
2811
2812- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
2813
2814# Examples
2815
2816```
2817use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2818
2819let foo = ", stringify!($atomic_type), "::new(0);
2820foo.add(10, Ordering::SeqCst);
2821assert_eq!(foo.load(Ordering::SeqCst), 10);
2822```"),
2823 #[inline]
2824 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2825 pub fn add(&self, val: $int_type, order: Ordering) {
2826 self.inner.add(val, order);
2827 }
2828 }
2829
2830 doc_comment! {
2831 concat!("Subtracts from the current value, returning the previous value.
2832
2833This operation wraps around on overflow.
2834
2835`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
2836of this operation. All ordering modes are possible. Note that using
2837[`Acquire`] makes the store part of this operation [`Relaxed`], and
2838using [`Release`] makes the load part [`Relaxed`].
2839
2840# Examples
2841
2842```
2843use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2844
2845let foo = ", stringify!($atomic_type), "::new(20);
2846assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
2847assert_eq!(foo.load(Ordering::SeqCst), 10);
2848```"),
2849 #[inline]
2850 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2851 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
2852 self.inner.fetch_sub(val, order)
2853 }
2854 }
2855
2856 doc_comment! {
2857 concat!("Subtracts from the current value.
2858
2859This operation wraps around on overflow.
2860
2861Unlike `fetch_sub`, this does not return the previous value.
2862
2863`sub` takes an [`Ordering`] argument which describes the memory ordering
2864of this operation. All ordering modes are possible. Note that using
2865[`Acquire`] makes the store part of this operation [`Relaxed`], and
2866using [`Release`] makes the load part [`Relaxed`].
2867
2868This function may generate more efficient code than `fetch_sub` on some platforms.
2869
2870- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
2871
2872# Examples
2873
2874```
2875use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2876
2877let foo = ", stringify!($atomic_type), "::new(20);
2878foo.sub(10, Ordering::SeqCst);
2879assert_eq!(foo.load(Ordering::SeqCst), 10);
2880```"),
2881 #[inline]
2882 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2883 pub fn sub(&self, val: $int_type, order: Ordering) {
2884 self.inner.sub(val, order);
2885 }
2886 }
2887
2888 doc_comment! {
2889 concat!("Bitwise \"and\" with the current value.
2890
2891Performs a bitwise \"and\" operation on the current value and the argument `val`, and
2892sets the new value to the result.
2893
2894Returns the previous value.
2895
2896`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
2897of this operation. All ordering modes are possible. Note that using
2898[`Acquire`] makes the store part of this operation [`Relaxed`], and
2899using [`Release`] makes the load part [`Relaxed`].
2900
2901# Examples
2902
2903```
2904use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2905
2906let foo = ", stringify!($atomic_type), "::new(0b101101);
2907assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
2908assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
2909```"),
2910 #[inline]
2911 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2912 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
2913 self.inner.fetch_and(val, order)
2914 }
2915 }
2916
2917 doc_comment! {
2918 concat!("Bitwise \"and\" with the current value.
2919
2920Performs a bitwise \"and\" operation on the current value and the argument `val`, and
2921sets the new value to the result.
2922
2923Unlike `fetch_and`, this does not return the previous value.
2924
2925`and` takes an [`Ordering`] argument which describes the memory ordering
2926of this operation. All ordering modes are possible. Note that using
2927[`Acquire`] makes the store part of this operation [`Relaxed`], and
2928using [`Release`] makes the load part [`Relaxed`].
2929
2930This function may generate more efficient code than `fetch_and` on some platforms.
2931
2932- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
2933- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
2934
2935Note: On x86/x86_64, the use of either function should not usually
2936affect the generated code, because LLVM can properly optimize the case
2937where the result is unused.
2938
2939# Examples
2940
2941```
2942use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2943
2944let foo = ", stringify!($atomic_type), "::new(0b101101);
foo.and(0b110011, Ordering::SeqCst);
2946assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
2947```"),
2948 #[inline]
2949 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2950 pub fn and(&self, val: $int_type, order: Ordering) {
2951 self.inner.and(val, order);
2952 }
2953 }
2954
2955 doc_comment! {
2956 concat!("Bitwise \"nand\" with the current value.
2957
2958Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
2959sets the new value to the result.
2960
2961Returns the previous value.
2962
2963`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
2964of this operation. All ordering modes are possible. Note that using
2965[`Acquire`] makes the store part of this operation [`Relaxed`], and
2966using [`Release`] makes the load part [`Relaxed`].
2967
2968# Examples
2969
2970```
2971use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2972
2973let foo = ", stringify!($atomic_type), "::new(0x13);
2974assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
2975assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
2976```"),
2977 #[inline]
2978 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2979 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
2980 self.inner.fetch_nand(val, order)
2981 }
2982 }
2983
2984 doc_comment! {
2985 concat!("Bitwise \"or\" with the current value.
2986
2987Performs a bitwise \"or\" operation on the current value and the argument `val`, and
2988sets the new value to the result.
2989
2990Returns the previous value.
2991
2992`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
2993of this operation. All ordering modes are possible. Note that using
2994[`Acquire`] makes the store part of this operation [`Relaxed`], and
2995using [`Release`] makes the load part [`Relaxed`].
2996
2997# Examples
2998
2999```
3000use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3001
3002let foo = ", stringify!($atomic_type), "::new(0b101101);
3003assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3004assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3005```"),
3006 #[inline]
3007 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3008 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3009 self.inner.fetch_or(val, order)
3010 }
3011 }
3012
3013 doc_comment! {
3014 concat!("Bitwise \"or\" with the current value.
3015
3016Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3017sets the new value to the result.
3018
3019Unlike `fetch_or`, this does not return the previous value.
3020
3021`or` takes an [`Ordering`] argument which describes the memory ordering
3022of this operation. All ordering modes are possible. Note that using
3023[`Acquire`] makes the store part of this operation [`Relaxed`], and
3024using [`Release`] makes the load part [`Relaxed`].
3025
3026This function may generate more efficient code than `fetch_or` on some platforms.
3027
3028- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3029- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3030
3031Note: On x86/x86_64, the use of either function should not usually
3032affect the generated code, because LLVM can properly optimize the case
3033where the result is unused.
3034
3035# Examples
3036
3037```
3038use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3039
3040let foo = ", stringify!($atomic_type), "::new(0b101101);
foo.or(0b110011, Ordering::SeqCst);
3042assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3043```"),
3044 #[inline]
3045 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3046 pub fn or(&self, val: $int_type, order: Ordering) {
3047 self.inner.or(val, order);
3048 }
3049 }
3050
3051 doc_comment! {
3052 concat!("Bitwise \"xor\" with the current value.
3053
3054Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3055sets the new value to the result.
3056
3057Returns the previous value.
3058
3059`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3060of this operation. All ordering modes are possible. Note that using
3061[`Acquire`] makes the store part of this operation [`Relaxed`], and
3062using [`Release`] makes the load part [`Relaxed`].
3063
3064# Examples
3065
3066```
3067use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3068
3069let foo = ", stringify!($atomic_type), "::new(0b101101);
3070assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3071assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3072```"),
3073 #[inline]
3074 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3075 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3076 self.inner.fetch_xor(val, order)
3077 }
3078 }
3079
3080 doc_comment! {
3081 concat!("Bitwise \"xor\" with the current value.
3082
3083Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3084sets the new value to the result.
3085
3086Unlike `fetch_xor`, this does not return the previous value.
3087
3088`xor` takes an [`Ordering`] argument which describes the memory ordering
3089of this operation. All ordering modes are possible. Note that using
3090[`Acquire`] makes the store part of this operation [`Relaxed`], and
3091using [`Release`] makes the load part [`Relaxed`].
3092
3093This function may generate more efficient code than `fetch_xor` on some platforms.
3094
3095- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3096- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3097
3098Note: On x86/x86_64, the use of either function should not usually
3099affect the generated code, because LLVM can properly optimize the case
3100where the result is unused.
3101
3102# Examples
3103
3104```
3105use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3106
3107let foo = ", stringify!($atomic_type), "::new(0b101101);
3108foo.xor(0b110011, Ordering::SeqCst);
3109assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3110```"),
3111 #[inline]
3112 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3113 pub fn xor(&self, val: $int_type, order: Ordering) {
3114 self.inner.xor(val, order);
3115 }
3116 }
3117
3118 doc_comment! {
3119 concat!("Fetches the value, and applies a function to it that returns an optional
3120new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3121`Err(previous_value)`.
3122
3123Note: This may call the function multiple times if the value has been changed from other threads in
3124the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3125only once to the stored value.
3126
3127`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3128The first describes the required ordering for when the operation finally succeeds while the second
3129describes the required ordering for loads. These correspond to the success and failure orderings of
3130[`compare_exchange`](Self::compare_exchange) respectively.
3131
3132Using [`Acquire`] as success ordering makes the store part
3133of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3134[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3135
3136# Panics
3137
Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3139
3140# Considerations
3141
3142This method is not magic; it is not provided by the hardware.
3143It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3144and suffers from the same drawbacks.
3145In particular, this method will not circumvent the [ABA Problem].
3146
3147[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3148
3149# Examples
3150
3151```rust
3152use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3153
3154let x = ", stringify!($atomic_type), "::new(7);
3155assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3156assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3157assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3158assert_eq!(x.load(Ordering::SeqCst), 9);
3159```"),
3160 #[inline]
3161 #[cfg_attr(
3162 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3163 track_caller
3164 )]
3165 pub fn fetch_update<F>(
3166 &self,
3167 set_order: Ordering,
3168 fetch_order: Ordering,
3169 mut f: F,
3170 ) -> Result<$int_type, $int_type>
3171 where
3172 F: FnMut($int_type) -> Option<$int_type>,
3173 {
3174 let mut prev = self.load(fetch_order);
3175 while let Some(next) = f(prev) {
3176 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3177 x @ Ok(_) => return x,
3178 Err(next_prev) => prev = next_prev,
3179 }
3180 }
3181 Err(prev)
3182 }
3183 }
3184
3185 doc_comment! {
3186 concat!("Maximum with the current value.
3187
3188Finds the maximum of the current value and the argument `val`, and
3189sets the new value to the result.
3190
3191Returns the previous value.
3192
3193`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3194of this operation. All ordering modes are possible. Note that using
3195[`Acquire`] makes the store part of this operation [`Relaxed`], and
3196using [`Release`] makes the load part [`Relaxed`].
3197
3198# Examples
3199
3200```
3201use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3202
3203let foo = ", stringify!($atomic_type), "::new(23);
3204assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3205assert_eq!(foo.load(Ordering::SeqCst), 42);
3206```
3207
3208If you want to obtain the maximum value in one step, you can use the following:
3209
3210```
3211use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3212
3213let foo = ", stringify!($atomic_type), "::new(23);
3214let bar = 42;
3215let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3216assert!(max_foo == 42);
3217```"),
3218 #[inline]
3219 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3220 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3221 self.inner.fetch_max(val, order)
3222 }
3223 }
3224
3225 doc_comment! {
3226 concat!("Minimum with the current value.
3227
3228Finds the minimum of the current value and the argument `val`, and
3229sets the new value to the result.
3230
3231Returns the previous value.
3232
3233`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3234of this operation. All ordering modes are possible. Note that using
3235[`Acquire`] makes the store part of this operation [`Relaxed`], and
3236using [`Release`] makes the load part [`Relaxed`].
3237
3238# Examples
3239
3240```
3241use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3242
3243let foo = ", stringify!($atomic_type), "::new(23);
3244assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3245assert_eq!(foo.load(Ordering::Relaxed), 23);
3246assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3247assert_eq!(foo.load(Ordering::Relaxed), 22);
3248```
3249
3250If you want to obtain the minimum value in one step, you can use the following:
3251
3252```
3253use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3254
3255let foo = ", stringify!($atomic_type), "::new(23);
3256let bar = 12;
3257let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3258assert_eq!(min_foo, 12);
3259```"),
3260 #[inline]
3261 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3262 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3263 self.inner.fetch_min(val, order)
3264 }
3265 }
3266
3267 doc_comment! {
3268 concat!("Sets the bit at the specified bit-position to 1.
3269
3270Returns `true` if the specified bit was previously set to 1.
3271
3272`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3273of this operation. All ordering modes are possible. Note that using
3274[`Acquire`] makes the store part of this operation [`Relaxed`], and
3275using [`Release`] makes the load part [`Relaxed`].
3276
This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
3278
3279# Examples
3280
3281```
3282use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3283
3284let foo = ", stringify!($atomic_type), "::new(0b0000);
3285assert!(!foo.bit_set(0, Ordering::Relaxed));
3286assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3287assert!(foo.bit_set(0, Ordering::Relaxed));
3288assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3289```"),
3290 #[inline]
3291 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3292 pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3293 self.inner.bit_set(bit, order)
3294 }
3295 }
3296
3297 doc_comment! {
            concat!("Clears the bit at the specified bit-position to 0.
3299
3300Returns `true` if the specified bit was previously set to 1.
3301
3302`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3303of this operation. All ordering modes are possible. Note that using
3304[`Acquire`] makes the store part of this operation [`Relaxed`], and
3305using [`Release`] makes the load part [`Relaxed`].
3306
This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
3308
3309# Examples
3310
3311```
3312use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3313
3314let foo = ", stringify!($atomic_type), "::new(0b0001);
3315assert!(foo.bit_clear(0, Ordering::Relaxed));
3316assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3317```"),
3318 #[inline]
3319 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3320 pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3321 self.inner.bit_clear(bit, order)
3322 }
3323 }
3324
3325 doc_comment! {
3326 concat!("Toggles the bit at the specified bit-position.
3327
3328Returns `true` if the specified bit was previously set to 1.
3329
3330`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3331of this operation. All ordering modes are possible. Note that using
3332[`Acquire`] makes the store part of this operation [`Relaxed`], and
3333using [`Release`] makes the load part [`Relaxed`].
3334
This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
3336
3337# Examples
3338
3339```
3340use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3341
3342let foo = ", stringify!($atomic_type), "::new(0b0000);
3343assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3344assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3345assert!(foo.bit_toggle(0, Ordering::Relaxed));
3346assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3347```"),
3348 #[inline]
3349 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3350 pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3351 self.inner.bit_toggle(bit, order)
3352 }
3353 }
3354
3355 doc_comment! {
            concat!("Logically negates the current value, and sets the new value to the result.
3357
3358Returns the previous value.
3359
3360`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3361of this operation. All ordering modes are possible. Note that using
3362[`Acquire`] makes the store part of this operation [`Relaxed`], and
3363using [`Release`] makes the load part [`Relaxed`].
3364
3365# Examples
3366
3367```
3368use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3369
3370let foo = ", stringify!($atomic_type), "::new(0);
3371assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3372assert_eq!(foo.load(Ordering::Relaxed), !0);
3373```"),
3374 #[inline]
3375 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3376 pub fn fetch_not(&self, order: Ordering) -> $int_type {
3377 self.inner.fetch_not(order)
3378 }
        }

3380 doc_comment! {
                concat!("Logically negates the current value, and sets the new value to the result.
3382
3383Unlike `fetch_not`, this does not return the previous value.
3384
3385`not` takes an [`Ordering`] argument which describes the memory ordering
3386of this operation. All ordering modes are possible. Note that using
3387[`Acquire`] makes the store part of this operation [`Relaxed`], and
3388using [`Release`] makes the load part [`Relaxed`].
3389
3390This function may generate more efficient code than `fetch_not` on some platforms.
3391
3392- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3393- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3394
3395# Examples
3396
3397```
3398use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3399
3400let foo = ", stringify!($atomic_type), "::new(0);
3401foo.not(Ordering::Relaxed);
3402assert_eq!(foo.load(Ordering::Relaxed), !0);
3403```"),
3404 #[inline]
3405 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3406 pub fn not(&self, order: Ordering) {
3407 self.inner.not(order);
3408 }
3409 }
3411
3412 doc_comment! {
3413 concat!("Negates the current value, and sets the new value to the result.
3414
3415Returns the previous value.
3416
3417`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
3418of this operation. All ordering modes are possible. Note that using
3419[`Acquire`] makes the store part of this operation [`Relaxed`], and
3420using [`Release`] makes the load part [`Relaxed`].
3421
3422# Examples
3423
3424```
3425use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3426
3427let foo = ", stringify!($atomic_type), "::new(5);
3428assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
3429assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3430assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3431assert_eq!(foo.load(Ordering::Relaxed), 5);
3432```"),
3433 #[inline]
3434 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3435 pub fn fetch_neg(&self, order: Ordering) -> $int_type {
3436 self.inner.fetch_neg(order)
3437 }
        }

3439 doc_comment! {
3440 concat!("Negates the current value, and sets the new value to the result.
3441
3442Unlike `fetch_neg`, this does not return the previous value.
3443
3444`neg` takes an [`Ordering`] argument which describes the memory ordering
3445of this operation. All ordering modes are possible. Note that using
3446[`Acquire`] makes the store part of this operation [`Relaxed`], and
3447using [`Release`] makes the load part [`Relaxed`].
3448
3449This function may generate more efficient code than `fetch_neg` on some platforms.
3450
3451- x86/x86_64: `lock neg` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3452
3453# Examples
3454
3455```
3456use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3457
3458let foo = ", stringify!($atomic_type), "::new(5);
3459foo.neg(Ordering::Relaxed);
3460assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3461foo.neg(Ordering::Relaxed);
3462assert_eq!(foo.load(Ordering::Relaxed), 5);
3463```"),
3464 #[inline]
3465 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3466 pub fn neg(&self, order: Ordering) {
3467 self.inner.neg(order);
3468 }
3469 }
3471 } // cfg_has_atomic_cas!
3472
3473 const_fn! {
3474 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
3475 /// Returns a mutable pointer to the underlying integer.
3476 ///
3477 /// Returning an `*mut` pointer from a shared reference to this atomic is
3478 /// safe because the atomic types work with interior mutability. Any use of
3479 /// the returned raw pointer requires an `unsafe` block and has to uphold
3480 /// the safety requirements. If there is concurrent access, note the following
3481 /// additional safety requirements:
3482 ///
3483 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
3484 /// operations on it must be atomic.
3485 /// - Otherwise, any concurrent operations on it must be compatible with
3486 /// operations performed by this atomic type.
3487 ///
3488 /// This is `const fn` on Rust 1.58+.
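            ///
            /// # Examples
            ///
            /// A sketch of writing through the raw pointer while no other thread has access
            /// (shown with `AtomicUsize` for illustration; every atomic integer type
            /// generated by this macro has the same method):
            ///
            /// ```
            /// use portable_atomic::{AtomicUsize, Ordering};
            ///
            /// let a = AtomicUsize::new(0);
            /// // SAFETY: there is no concurrent access to `a` while we write through the
            /// // raw pointer, so the non-atomic write cannot race with anything.
            /// unsafe { a.as_ptr().write(42) };
            /// assert_eq!(a.load(Ordering::Relaxed), 42);
            /// ```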
3489 #[inline]
3490 pub const fn as_ptr(&self) -> *mut $int_type {
3491 self.inner.as_ptr()
3492 }
3493 }
3494 }
3495 };
3496
3497 // AtomicF* impls
3498 (float,
3499 $atomic_type:ident,
3500 $float_type:ident,
3501 $atomic_int_type:ident,
3502 $int_type:ident,
3503 $align:literal
3504 ) => {
3505 doc_comment! {
3506 concat!("A floating point type which can be safely shared between threads.
3507
3508This type has the same in-memory representation as the underlying floating point type,
3509[`", stringify!($float_type), "`].
3510"
3511 ),
3512 #[cfg_attr(docsrs, doc(cfg(feature = "float")))]
3513 // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
3514 // will show clearer docs.
3515 #[repr(C, align($align))]
3516 pub struct $atomic_type {
3517 inner: imp::float::$atomic_type,
3518 }
3519 }
3520
3521 impl Default for $atomic_type {
3522 #[inline]
3523 fn default() -> Self {
3524 Self::new($float_type::default())
3525 }
3526 }
3527
3528 impl From<$float_type> for $atomic_type {
3529 #[inline]
3530 fn from(v: $float_type) -> Self {
3531 Self::new(v)
3532 }
3533 }
3534
3535 // UnwindSafe is implicitly implemented.
3536 #[cfg(not(portable_atomic_no_core_unwind_safe))]
3537 impl core::panic::RefUnwindSafe for $atomic_type {}
3538 #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
3539 impl std::panic::RefUnwindSafe for $atomic_type {}
3540
3541 impl_debug_and_serde!($atomic_type);
3542
3543 impl $atomic_type {
3544 /// Creates a new atomic float.
3545 #[inline]
3546 #[must_use]
3547 pub const fn new(v: $float_type) -> Self {
3548 static_assert_layout!($atomic_type, $float_type);
3549 Self { inner: imp::float::$atomic_type::new(v) }
3550 }
3551
3552 doc_comment! {
3553 concat!("Creates a new reference to an atomic float from a pointer.
3554
3555# Safety
3556
3557* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
3558 can be bigger than `align_of::<", stringify!($float_type), ">()`).
3559* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
3560* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
3561 behind `ptr` must have a happens-before relationship with atomic accesses via
3562 the returned value (or vice-versa).
3563 * In other words, time periods where the value is accessed atomically may not
3564 overlap with periods where the value is accessed non-atomically.
3565 * This requirement is trivially satisfied if `ptr` is never used non-atomically
3566 for the duration of lifetime `'a`. Most use cases should be able to follow
3567 this guideline.
3568 * This requirement is also trivially satisfied if all accesses (atomic or not) are
3569 done from the same thread.
3570* If this atomic type is *not* lock-free:
3571 * Any accesses to the value behind `ptr` must have a happens-before relationship
3572 with accesses via the returned value (or vice-versa).
3573 * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
3574 be compatible with operations performed by this atomic type.
3575* This method must not be used to create overlapping or mixed-size atomic
3576 accesses, as these are not supported by the memory model.
3577
3578[valid]: core::ptr#safety"),
3579 #[inline]
3580 #[must_use]
3581 pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
3582 #[allow(clippy::cast_ptr_alignment)]
3583 // SAFETY: guaranteed by the caller
3584 unsafe { &*(ptr as *mut Self) }
3585 }
3586 }
3587
3588 /// Returns `true` if operations on values of this type are lock-free.
3589 ///
3590 /// If the compiler or the platform doesn't support the necessary
3591 /// atomic instructions, global locks for every potentially
3592 /// concurrent atomic operation will be used.
3593 #[inline]
3594 #[must_use]
3595 pub fn is_lock_free() -> bool {
3596 <imp::float::$atomic_type>::is_lock_free()
3597 }
3598
3599 /// Returns `true` if operations on values of this type are lock-free.
3600 ///
3601 /// If the compiler or the platform doesn't support the necessary
3602 /// atomic instructions, global locks for every potentially
3603 /// concurrent atomic operation will be used.
3604 ///
3605 /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
3606 /// this type may be lock-free even if the function returns false.
3607 #[inline]
3608 #[must_use]
3609 pub const fn is_always_lock_free() -> bool {
3610 <imp::float::$atomic_type>::is_always_lock_free()
3611 }
3612
3613 /// Returns a mutable reference to the underlying float.
3614 ///
3615 /// This is safe because the mutable reference guarantees that no other threads are
3616 /// concurrently accessing the atomic data.
3617 #[inline]
3618 pub fn get_mut(&mut self) -> &mut $float_type {
3619 self.inner.get_mut()
3620 }
3621
3622 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
3623 // https://github.com/rust-lang/rust/issues/76314
3624
3625 /// Consumes the atomic and returns the contained value.
3626 ///
3627 /// This is safe because passing `self` by value guarantees that no other threads are
3628 /// concurrently accessing the atomic data.
3629 #[inline]
3630 pub fn into_inner(self) -> $float_type {
3631 self.inner.into_inner()
3632 }
3633
3634 /// Loads a value from the atomic float.
3635 ///
3636 /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3637 /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
3638 ///
3639 /// # Panics
3640 ///
3641 /// Panics if `order` is [`Release`] or [`AcqRel`].
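            ///
            /// # Examples
            ///
            /// A minimal usage sketch (illustrated with `AtomicF32`; the same method is
            /// generated for every atomic float type):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let some_var = AtomicF32::new(5.0);
            /// assert_eq!(some_var.load(Ordering::Relaxed), 5.0);
            /// ```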
3642 #[inline]
3643 #[cfg_attr(
3644 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3645 track_caller
3646 )]
3647 pub fn load(&self, order: Ordering) -> $float_type {
3648 self.inner.load(order)
3649 }
3650
3651 /// Stores a value into the atomic float.
3652 ///
3653 /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3654 /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
3655 ///
3656 /// # Panics
3657 ///
3658 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
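            ///
            /// # Examples
            ///
            /// A short sketch (using `AtomicF32` for illustration):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let some_var = AtomicF32::new(5.0);
            /// some_var.store(10.0, Ordering::Relaxed);
            /// assert_eq!(some_var.load(Ordering::Relaxed), 10.0);
            /// ```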
3659 #[inline]
3660 #[cfg_attr(
3661 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3662 track_caller
3663 )]
3664 pub fn store(&self, val: $float_type, order: Ordering) {
3665 self.inner.store(val, order)
3666 }
3667
3668 cfg_has_atomic_cas! {
3669 /// Stores a value into the atomic float, returning the previous value.
3670 ///
3671 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
3672 /// of this operation. All ordering modes are possible. Note that using
3673 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3674 /// using [`Release`] makes the load part [`Relaxed`].
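                ///
                /// # Examples
                ///
                /// A short sketch (using `AtomicF32` for illustration):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let some_var = AtomicF32::new(5.0);
                /// assert_eq!(some_var.swap(10.0, Ordering::Relaxed), 5.0);
                /// assert_eq!(some_var.load(Ordering::Relaxed), 10.0);
                /// ```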
3675 #[inline]
3676 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3677 pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
3678 self.inner.swap(val, order)
3679 }
3680
3681 /// Stores a value into the atomic float if the current value is the same as
3682 /// the `current` value.
3683 ///
3684 /// The return value is a result indicating whether the new value was written and
3685 /// containing the previous value. On success this value is guaranteed to be equal to
3686 /// `current`.
3687 ///
3688 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
3689 /// ordering of this operation. `success` describes the required ordering for the
3690 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3691 /// `failure` describes the required ordering for the load operation that takes place when
3692 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3693 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3694 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3695 ///
3696 /// # Panics
3697 ///
                /// Panics if `failure` is [`Release`] or [`AcqRel`].
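                ///
                /// # Examples
                ///
                /// A sketch mirroring the integer example (shown with `AtomicF32`):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let some_var = AtomicF32::new(5.0);
                ///
                /// assert_eq!(
                ///     some_var.compare_exchange(5.0, 10.0, Ordering::Acquire, Ordering::Relaxed),
                ///     Ok(5.0),
                /// );
                /// assert_eq!(some_var.load(Ordering::Relaxed), 10.0);
                ///
                /// assert_eq!(
                ///     some_var.compare_exchange(6.0, 12.0, Ordering::SeqCst, Ordering::Acquire),
                ///     Err(10.0),
                /// );
                /// assert_eq!(some_var.load(Ordering::Relaxed), 10.0);
                /// ```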
3699 #[inline]
3700 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3701 #[cfg_attr(
3702 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3703 track_caller
3704 )]
3705 pub fn compare_exchange(
3706 &self,
3707 current: $float_type,
3708 new: $float_type,
3709 success: Ordering,
3710 failure: Ordering,
3711 ) -> Result<$float_type, $float_type> {
3712 self.inner.compare_exchange(current, new, success, failure)
3713 }
3714
3715 /// Stores a value into the atomic float if the current value is the same as
3716 /// the `current` value.
3717 /// Unlike [`compare_exchange`](Self::compare_exchange)
3718 /// this function is allowed to spuriously fail even
3719 /// when the comparison succeeds, which can result in more efficient code on some
3720 /// platforms. The return value is a result indicating whether the new value was
3721 /// written and containing the previous value.
3722 ///
3723 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3724 /// ordering of this operation. `success` describes the required ordering for the
3725 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
3726 /// `failure` describes the required ordering for the load operation that takes place when
3727 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
3728 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
3729 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3730 ///
3731 /// # Panics
3732 ///
                /// Panics if `failure` is [`Release`] or [`AcqRel`].
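                ///
                /// # Examples
                ///
                /// A CAS-loop sketch mirroring the integer example (shown with `AtomicF32`):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let val = AtomicF32::new(4.0);
                ///
                /// let mut old = val.load(Ordering::Relaxed);
                /// loop {
                ///     let new = old * 2.0;
                ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
                ///         Ok(_) => break,
                ///         Err(x) => old = x,
                ///     }
                /// }
                /// assert_eq!(val.load(Ordering::Relaxed), 8.0);
                /// ```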
3734 #[inline]
3735 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3736 #[cfg_attr(
3737 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3738 track_caller
3739 )]
3740 pub fn compare_exchange_weak(
3741 &self,
3742 current: $float_type,
3743 new: $float_type,
3744 success: Ordering,
3745 failure: Ordering,
3746 ) -> Result<$float_type, $float_type> {
3747 self.inner.compare_exchange_weak(current, new, success, failure)
3748 }
3749
3750 /// Adds to the current value, returning the previous value.
                ///
3754 /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3755 /// of this operation. All ordering modes are possible. Note that using
3756 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3757 /// using [`Release`] makes the load part [`Relaxed`].
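                ///
                /// # Examples
                ///
                /// A short sketch (using `AtomicF32` and exactly representable values):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let foo = AtomicF32::new(7.25);
                /// assert_eq!(foo.fetch_add(1.5, Ordering::SeqCst), 7.25);
                /// assert_eq!(foo.load(Ordering::SeqCst), 8.75);
                /// ```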
3758 #[inline]
3759 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3760 pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
3761 self.inner.fetch_add(val, order)
3762 }
3763
3764 /// Subtracts from the current value, returning the previous value.
                ///
3768 /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3769 /// of this operation. All ordering modes are possible. Note that using
3770 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3771 /// using [`Release`] makes the load part [`Relaxed`].
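                ///
                /// # Examples
                ///
                /// A short sketch (using `AtomicF32` and exactly representable values):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let foo = AtomicF32::new(8.75);
                /// assert_eq!(foo.fetch_sub(1.5, Ordering::SeqCst), 8.75);
                /// assert_eq!(foo.load(Ordering::SeqCst), 7.25);
                /// ```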
3772 #[inline]
3773 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3774 pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
3775 self.inner.fetch_sub(val, order)
3776 }
3777
3778 /// Fetches the value, and applies a function to it that returns an optional
3779 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3780 /// `Err(previous_value)`.
3781 ///
3782 /// Note: This may call the function multiple times if the value has been changed from other threads in
3783 /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3784 /// only once to the stored value.
3785 ///
3786 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3787 /// The first describes the required ordering for when the operation finally succeeds while the second
3788 /// describes the required ordering for loads. These correspond to the success and failure orderings of
3789 /// [`compare_exchange`](Self::compare_exchange) respectively.
3790 ///
3791 /// Using [`Acquire`] as success ordering makes the store part
3792 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3793 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3794 ///
3795 /// # Panics
3796 ///
                /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3798 ///
3799 /// # Considerations
3800 ///
3801 /// This method is not magic; it is not provided by the hardware.
3802 /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3803 /// and suffers from the same drawbacks.
3804 /// In particular, this method will not circumvent the [ABA Problem].
3805 ///
3806 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
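                ///
                /// # Examples
                ///
                /// A sketch mirroring the integer example (shown with `AtomicF32`):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let x = AtomicF32::new(7.0);
                /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7.0));
                /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1.0)), Ok(7.0));
                /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1.0)), Ok(8.0));
                /// assert_eq!(x.load(Ordering::SeqCst), 9.0);
                /// ```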
3807 #[inline]
3808 #[cfg_attr(
3809 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3810 track_caller
3811 )]
3812 pub fn fetch_update<F>(
3813 &self,
3814 set_order: Ordering,
3815 fetch_order: Ordering,
3816 mut f: F,
3817 ) -> Result<$float_type, $float_type>
3818 where
3819 F: FnMut($float_type) -> Option<$float_type>,
3820 {
3821 let mut prev = self.load(fetch_order);
3822 while let Some(next) = f(prev) {
3823 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3824 x @ Ok(_) => return x,
3825 Err(next_prev) => prev = next_prev,
3826 }
3827 }
3828 Err(prev)
3829 }
3830
3831 /// Maximum with the current value.
3832 ///
3833 /// Finds the maximum of the current value and the argument `val`, and
3834 /// sets the new value to the result.
3835 ///
3836 /// Returns the previous value.
3837 ///
3838 /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3839 /// of this operation. All ordering modes are possible. Note that using
3840 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3841 /// using [`Release`] makes the load part [`Relaxed`].
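                ///
                /// # Examples
                ///
                /// A short sketch (using `AtomicF32`):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let foo = AtomicF32::new(23.5);
                /// assert_eq!(foo.fetch_max(42.0, Ordering::SeqCst), 23.5);
                /// assert_eq!(foo.load(Ordering::SeqCst), 42.0);
                /// ```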
3842 #[inline]
3843 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3844 pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
3845 self.inner.fetch_max(val, order)
3846 }
3847
3848 /// Minimum with the current value.
3849 ///
3850 /// Finds the minimum of the current value and the argument `val`, and
3851 /// sets the new value to the result.
3852 ///
3853 /// Returns the previous value.
3854 ///
3855 /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3856 /// of this operation. All ordering modes are possible. Note that using
3857 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3858 /// using [`Release`] makes the load part [`Relaxed`].
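                ///
                /// # Examples
                ///
                /// A short sketch (using `AtomicF32`):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let foo = AtomicF32::new(23.5);
                /// assert_eq!(foo.fetch_min(42.0, Ordering::Relaxed), 23.5);
                /// assert_eq!(foo.load(Ordering::Relaxed), 23.5);
                /// assert_eq!(foo.fetch_min(22.0, Ordering::Relaxed), 23.5);
                /// assert_eq!(foo.load(Ordering::Relaxed), 22.0);
                /// ```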
3859 #[inline]
3860 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3861 pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
3862 self.inner.fetch_min(val, order)
3863 }
3864
3865 /// Negates the current value, and sets the new value to the result.
3866 ///
3867 /// Returns the previous value.
3868 ///
3869 /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
3870 /// of this operation. All ordering modes are possible. Note that using
3871 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3872 /// using [`Release`] makes the load part [`Relaxed`].
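                ///
                /// # Examples
                ///
                /// A short sketch (using `AtomicF32`):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let foo = AtomicF32::new(2.5);
                /// assert_eq!(foo.fetch_neg(Ordering::Relaxed), 2.5);
                /// assert_eq!(foo.load(Ordering::Relaxed), -2.5);
                /// ```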
3873 #[inline]
3874 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3875 pub fn fetch_neg(&self, order: Ordering) -> $float_type {
3876 self.inner.fetch_neg(order)
3877 }
3878
3879 /// Computes the absolute value of the current value, and sets the
3880 /// new value to the result.
3881 ///
3882 /// Returns the previous value.
3883 ///
3884 /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
3885 /// of this operation. All ordering modes are possible. Note that using
3886 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
3887 /// using [`Release`] makes the load part [`Relaxed`].
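                ///
                /// # Examples
                ///
                /// A short sketch (using `AtomicF32`):
                ///
                /// ```
                /// use portable_atomic::{AtomicF32, Ordering};
                ///
                /// let foo = AtomicF32::new(-4.25);
                /// assert_eq!(foo.fetch_abs(Ordering::Relaxed), -4.25);
                /// assert_eq!(foo.load(Ordering::Relaxed), 4.25);
                /// ```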
3888 #[inline]
3889 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3890 pub fn fetch_abs(&self, order: Ordering) -> $float_type {
3891 self.inner.fetch_abs(order)
3892 }
3893 } // cfg_has_atomic_cas!
3894
3895 #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
3896 doc_comment! {
3897 concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
3898
3899See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
3900portability of this operation (there are almost no issues).
3901
3902This is `const fn` on Rust 1.58+."),
3903 #[inline]
3904 pub const fn as_bits(&self) -> &$atomic_int_type {
3905 self.inner.as_bits()
3906 }
3907 }
3908 #[cfg(portable_atomic_no_const_raw_ptr_deref)]
3909 doc_comment! {
3910 concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
3911
3912See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
3913portability of this operation (there are almost no issues).
3914
3915This is `const fn` on Rust 1.58+."),
3916 #[inline]
3917 pub fn as_bits(&self) -> &$atomic_int_type {
3918 self.inner.as_bits()
3919 }
3920 }
3921
3922 const_fn! {
3923 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
3924 /// Returns a mutable pointer to the underlying float.
3925 ///
3926 /// Returning an `*mut` pointer from a shared reference to this atomic is
3927 /// safe because the atomic types work with interior mutability. Any use of
3928 /// the returned raw pointer requires an `unsafe` block and has to uphold
3929 /// the safety requirements. If there is concurrent access, note the following
3930 /// additional safety requirements:
3931 ///
3932 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
3933 /// operations on it must be atomic.
3934 /// - Otherwise, any concurrent operations on it must be compatible with
3935 /// operations performed by this atomic type.
3936 ///
3937 /// This is `const fn` on Rust 1.58+.
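            ///
            /// # Examples
            ///
            /// A sketch of writing through the raw pointer while no other thread has access
            /// (shown with `AtomicF32` for illustration):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let a = AtomicF32::new(0.0);
            /// // SAFETY: there is no concurrent access to `a`, so the non-atomic write
            /// // cannot race with anything.
            /// unsafe { a.as_ptr().write(1.5) };
            /// assert_eq!(a.load(Ordering::Relaxed), 1.5);
            /// ```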
3938 #[inline]
3939 pub const fn as_ptr(&self) -> *mut $float_type {
3940 self.inner.as_ptr()
3941 }
3942 }
3943 }
3944 };
3945}
3946
3947cfg_has_atomic_ptr! {
3948 #[cfg(target_pointer_width = "16")]
3949 atomic_int!(AtomicIsize, isize, 2);
3950 #[cfg(target_pointer_width = "16")]
3951 atomic_int!(AtomicUsize, usize, 2);
3952 #[cfg(target_pointer_width = "32")]
3953 atomic_int!(AtomicIsize, isize, 4);
3954 #[cfg(target_pointer_width = "32")]
3955 atomic_int!(AtomicUsize, usize, 4);
3956 #[cfg(target_pointer_width = "64")]
3957 atomic_int!(AtomicIsize, isize, 8);
3958 #[cfg(target_pointer_width = "64")]
3959 atomic_int!(AtomicUsize, usize, 8);
3960 #[cfg(target_pointer_width = "128")]
3961 atomic_int!(AtomicIsize, isize, 16);
3962 #[cfg(target_pointer_width = "128")]
3963 atomic_int!(AtomicUsize, usize, 16);
3964}
3965
3966cfg_has_atomic_8! {
3967 atomic_int!(AtomicI8, i8, 1);
3968 atomic_int!(AtomicU8, u8, 1);
3969}
3970cfg_has_atomic_16! {
3971 atomic_int!(AtomicI16, i16, 2);
3972 atomic_int!(AtomicU16, u16, 2);
3973}
3974cfg_has_atomic_32! {
3975 atomic_int!(AtomicI32, i32, 4);
3976 atomic_int!(AtomicU32, u32, 4);
3977}
3978cfg_has_atomic_64! {
3979 atomic_int!(AtomicI64, i64, 8);
3980 atomic_int!(AtomicU64, u64, 8);
3981}
3982cfg_has_atomic_128! {
3983 atomic_int!(AtomicI128, i128, 16);
3984 atomic_int!(AtomicU128, u128, 16);
3985}
3986