1 | // SPDX-License-Identifier: Apache-2.0 OR MIT |
2 | |
3 | /*! |
4 | <!-- tidy:crate-doc:start --> |
5 | Portable atomic types including support for 128-bit atomics, atomic float, etc. |
6 | |
7 | - Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets) |
8 | - Provide `AtomicI128` and `AtomicU128`. |
9 | - Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float)) |
- Provide atomic load/store for targets where atomics are not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
11 | - Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 ARM, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise) |
12 | - Provide stable equivalents of the standard library's atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108), [`AtomicBool::fetch_not`](https://github.com/rust-lang/rust/issues/98485). |
13 | - Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr) and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+. |
- Provide workarounds for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 causing an LLVM error, etc.
15 | |
16 | <!-- TODO: |
17 | - mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc. |
18 | - mention portable-atomic-util crate |
19 | --> |
20 | |
21 | ## Usage |
22 | |
23 | Add this to your `Cargo.toml`: |
24 | |
25 | ```toml |
26 | [dependencies] |
27 | portable-atomic = "1" |
28 | ``` |
29 | |
30 | The default features are mainly for users who use atomics larger than the pointer width. |
31 | If you don't need them, disabling the default features may reduce code size and compile time slightly. |
32 | |
33 | ```toml |
34 | [dependencies] |
35 | portable-atomic = { version = "1", default-features = false } |
36 | ``` |
37 | |
If your crate supports no-std environments and requires atomic CAS, enabling the `require-cas` feature will allow `portable-atomic` to display a helpful error message to users on targets that require additional action on their side to provide atomic CAS.
39 | |
40 | ```toml |
41 | [dependencies] |
42 | portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] } |
43 | ``` |
44 | |
45 | *Compiler support: requires rustc 1.34+* |
46 | |
47 | ## 128-bit atomics support |
48 | |
49 | Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), aarch64 (Rust 1.59+), powerpc64 (nightly only), and s390x (nightly only), otherwise the fallback implementation is used. |
50 | |
On x86_64, even if `cmpxchg16b` is not available at compile-time (note: the `cmpxchg16b` target feature is enabled by default only on Apple targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is available at neither compile-time nor run-time, the fallback implementation is used. See also the [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.
52 | |
128-bit atomic operations are usually implemented using inline assembly. When using Miri or ThreadSanitizer, which do not support inline assembly, core intrinsics are used instead where possible.
54 | |
55 | See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details. |
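
For example, `AtomicU128` can be used in the same way as the other atomic integer types:

```rust
use portable_atomic::{AtomicU128, Ordering};

let x = AtomicU128::new(1u128 << 100);
assert_eq!(x.swap(u128::MAX, Ordering::AcqRel), 1u128 << 100);
assert_eq!(x.load(Ordering::Acquire), u128::MAX);
```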
56 | |
57 | ## Optional features |
58 | |
59 | - **`fallback`** *(enabled by default)*<br> |
60 | Enable fallback implementations. |
61 | |
Disabling this provides only the atomic types for which the platform natively supports atomic operations.
63 | |
64 | - <a name="optional-features-float"></a>**`float`**<br> |
65 | Provide `AtomicF{32,64}`. |
66 | |
Note that most `fetch_*` operations on atomic floats are implemented using CAS loops, which can be slower than the equivalent operations on atomic integers. ([GPU targets have atomic instructions for floats, so we plan to use these instructions for GPU targets in the future.](https://github.com/taiki-e/portable-atomic/issues/34))
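
For example, to enable it in `Cargo.toml`:

```toml
[dependencies]
portable-atomic = { version = "1", features = ["float"] }
```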
68 | |
69 | - **`std`**<br> |
70 | Use `std`. |
71 | |
72 | - <a name="optional-features-require-cas"></a>**`require-cas`**<br> |
73 | Emit compile error if atomic CAS is not available. See [Usage](#usage) section and [#100](https://github.com/taiki-e/portable-atomic/pull/100) for more. |
74 | |
75 | - <a name="optional-features-serde"></a>**`serde`**<br> |
76 | Implement `serde::{Serialize,Deserialize}` for atomic types. |
77 | |
78 | Note: |
79 | - The MSRV when this feature is enabled depends on the MSRV of [serde]. |
80 | |
81 | - <a name="optional-features-critical-section"></a>**`critical-section`**<br> |
When this feature is enabled, this crate uses [critical-section] to provide atomic CAS for targets where
it is not natively available. When enabling it, you should provide a suitable critical section implementation
for the current target; see the [critical-section] documentation for details on how to do so.
85 | |
86 | `critical-section` support is useful to get atomic CAS when the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) can't be used, |
87 | such as multi-core targets, unprivileged code running under some RTOS, or environments where disabling interrupts |
88 | needs extra care due to e.g. real-time requirements. |
89 | |
Note that with the `critical-section` feature, critical sections are taken for all atomic operations, while with
the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) some operations don't require disabling interrupts (loads and stores, and
additionally on MSP430 `add`, `sub`, `and`, `or`, `xor`, `not`). Therefore, for better performance, if
all the `critical-section` implementation for your target does is disable interrupts, prefer using the
`unsafe-assume-single-core` feature instead.
95 | |
96 | Note: |
97 | - The MSRV when this feature is enabled depends on the MSRV of [critical-section]. |
- It is usually *not* recommended to unconditionally enable this feature in libraries.
99 | |
Enabling this feature will prevent the end user from taking advantage of other (potentially more efficient) implementations ([implementations provided by the `unsafe-assume-single-core` feature, the default implementations on MSP430 and AVR](#optional-features-unsafe-assume-single-core), the implementation proposed in [#60], etc.; other systems may also be supported in the future).
101 | |
102 | The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.) |
103 | |
As an example, the `Cargo.toml` of an end user who uses a crate that provides a critical-section implementation and a crate that optionally depends on portable-atomic would be expected to look like this:
105 | |
106 | ```toml |
107 | [dependencies] |
108 | portable-atomic = { version = "1", default-features = false, features = ["critical-section"] } |
109 | crate-provides-critical-section-impl = "..." |
110 | crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] } |
111 | ``` |
112 | |
113 | - <a name="optional-features-unsafe-assume-single-core"></a>**`unsafe-assume-single-core`**<br> |
114 | Assume that the target is single-core. |
115 | When this feature is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts. |
116 | |
This feature is `unsafe`; note the following safety requirements:
- Enabling this feature for multi-core systems is always **unsound**.
- This uses privileged instructions to disable interrupts, so it usually doesn't work in unprivileged mode.
  Enabling this feature in an environment where privileged instructions are not available, or where the instructions used are not sufficient to disable interrupts in the system, is also usually considered **unsound**, although the details are system-dependent.
121 | |
122 | The following are known cases: |
- On pre-v6 ARM, this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system needs to disable both IRQs and FIQs, you also need to enable the `disable-fiq` feature.
- On RISC-V without A-extension, this generates code for machine-mode (M-mode) by default. If you also enable the `s-mode` feature, this generates code for supervisor-mode (S-mode) instead. This matters, for example, when running on top of firmware such as [OpenSBI](https://github.com/riscv-software-src/opensbi), which `qemu-system-riscv*` uses as the default firmware.
125 | |
126 | See also [the `interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md). |
127 | |
128 | Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature. |
129 | |
130 | It is **very strongly discouraged** to enable this feature in libraries that depend on `portable-atomic`. The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.) |
131 | |
132 | ARMv6-M (thumbv6m), pre-v6 ARM (e.g., thumbv4t, thumbv5te), RISC-V without A-extension, and Xtensa are currently supported. |
133 | |
Since all MSP430 and AVR chips are single-core, we always provide atomic CAS for them without this feature.
135 | |
136 | Enabling this feature for targets that have atomic CAS will result in a compile error. |
137 | |
138 | Feel free to submit an issue if your target is not supported yet. |
139 | |
140 | ## Optional cfg |
141 | |
142 | One of the ways to enable cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags): |
143 | |
144 | ```toml |
145 | # .cargo/config.toml |
146 | [target.<target>] |
147 | rustflags = ["--cfg", "portable_atomic_no_outline_atomics"] |
148 | ``` |
149 | |
150 | Or set environment variable: |
151 | |
152 | ```sh |
153 | RUSTFLAGS="--cfg portable_atomic_no_outline_atomics" cargo ... |
154 | ``` |
155 | |
156 | - <a name="optional-cfg-unsafe-assume-single-core"></a>**`--cfg portable_atomic_unsafe_assume_single_core`**<br> |
157 | Since 1.4.0, this cfg is an alias of [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core). |
158 | |
159 | Originally, we were providing these as cfgs instead of features, but based on a strong request from the embedded ecosystem, we have agreed to provide them as features as well. See [#94](https://github.com/taiki-e/portable-atomic/pull/94) for more. |
160 | |
161 | - <a name="optional-cfg-no-outline-atomics"></a>**`--cfg portable_atomic_no_outline_atomics`**<br> |
162 | Disable dynamic dispatching by run-time CPU feature detection. |
163 | |
Dynamic dispatching by run-time CPU feature detection allows maintaining support for older CPUs while still using instructions that are only available on newer CPUs, such as CMPXCHG16B (x86_64) and FEAT_LSE (aarch64).
165 | |
166 | Note: |
167 | - Dynamic detection is currently only enabled in Rust 1.59+ for aarch64, in Rust 1.59+ (AVX) or 1.69+ (CMPXCHG16B) for x86_64, nightly only for powerpc64 (disabled by default), otherwise it works the same as when this cfg is set. |
168 | - If the required target features are enabled at compile-time, the atomic operations are inlined. |
169 | - This is compatible with no-std (as with all features except `std`). |
- On some targets, run-time detection is disabled by default, mainly for compatibility with older versions of operating systems or incomplete build environments, and can be enabled with `--cfg portable_atomic_outline_atomics`. (When both cfgs are enabled, the `*_no_*` cfg takes precedence.)
171 | - Some aarch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.) |
172 | |
173 | See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md). |
174 | |
175 | ## Related Projects |
176 | |
177 | - [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers. |
178 | - [atomic-memcpy]: Byte-wise atomic memcpy. |
179 | |
180 | [#60]: https://github.com/taiki-e/portable-atomic/issues/60 |
181 | [atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit |
182 | [atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy |
183 | [critical-section]: https://github.com/rust-embedded/critical-section |
184 | [rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650 |
185 | [serde]: https://github.com/serde-rs/serde |
186 | |
187 | <!-- tidy:crate-doc:end --> |
188 | */ |
189 | |
#![no_std]
#![doc(test(
    no_crate_inject,
    attr(
        deny(warnings, rust_2018_idioms, single_use_lifetimes),
        allow(dead_code, unused_variables)
    )
))]
#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
#![warn(
    // Lints that may help when writing public library.
    missing_debug_implementations,
    // missing_docs,
    clippy::alloc_instead_of_core,
    clippy::exhaustive_enums,
    clippy::exhaustive_structs,
    clippy::impl_trait_in_params,
    clippy::missing_inline_in_public_items,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
)]
#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
#![allow(
    clippy::cast_lossless,
    clippy::inline_always,
    clippy::naive_bytecount,
    clippy::unreadable_literal
)]
219 | // asm_experimental_arch |
220 | // AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway. |
221 | // On tier 2 platforms (powerpc64 and s390x), we use cfg set by build script to |
222 | // determine whether this feature is available or not. |
#![cfg_attr(
    all(
        not(portable_atomic_no_asm),
        any(
            target_arch = "avr",
            target_arch = "msp430",
            all(target_arch = "xtensa", portable_atomic_unsafe_assume_single_core),
            all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
            all(target_arch = "s390x", portable_atomic_unstable_asm_experimental_arch),
        ),
    ),
    feature(asm_experimental_arch)
)]
236 | // Old nightly only |
237 | // These features are already stabilized or have already been removed from compilers, |
238 | // and can safely be enabled for old nightly as long as version detection works. |
239 | // - cfg(target_has_atomic) |
240 | // - #[target_feature(enable = "cmpxchg16b")] on x86_64 |
241 | // - asm! on ARM, AArch64, RISC-V, x86_64 |
242 | // - llvm_asm! on AVR (tier 3) and MSP430 (tier 3) |
243 | // - #[instruction_set] on non-Linux/Android pre-v6 ARM (tier 3) |
#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
#![cfg_attr(
    all(
        target_arch = "x86_64",
        portable_atomic_unstable_cmpxchg16b_target_feature,
        not(portable_atomic_no_outline_atomics),
        not(any(target_env = "sgx", miri)),
        feature = "fallback",
    ),
    feature(cmpxchg16b_target_feature)
)]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm,
        any(
            target_arch = "aarch64",
            target_arch = "arm",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "x86_64",
        ),
    ),
    feature(asm)
)]
#![cfg_attr(
    all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
    feature(llvm_asm)
)]
#![cfg_attr(
    all(
        target_arch = "arm",
        portable_atomic_unstable_isa_attribute,
        any(test, portable_atomic_unsafe_assume_single_core),
        not(any(target_feature = "v6", portable_atomic_target_feature = "v6")),
        not(target_has_atomic = "ptr"),
    ),
    feature(isa_attribute)
)]
282 | // Miri and/or ThreadSanitizer only |
283 | // They do not support inline assembly, so we need to use unstable features instead. |
284 | // Since they require nightly compilers anyway, we can use the unstable features. |
#![cfg_attr(
    all(
        any(target_arch = "aarch64", target_arch = "powerpc64", target_arch = "s390x"),
        any(miri, portable_atomic_sanitize_thread),
    ),
    feature(core_intrinsics)
)]
292 | // This feature is only enabled for old nightly because cmpxchg16b_intrinsic has been stabilized. |
#![cfg_attr(
    all(
        target_arch = "x86_64",
        portable_atomic_unstable_cmpxchg16b_intrinsic,
        any(miri, portable_atomic_sanitize_thread),
    ),
    feature(stdsimd)
)]
301 | // docs.rs only |
#![cfg_attr(portable_atomic_doc_cfg, feature(doc_cfg))]
#![cfg_attr(
    all(
        portable_atomic_no_atomic_load_store,
        not(any(
            target_arch = "avr",
            target_arch = "bpf",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            feature = "critical-section",
        )),
    ),
    allow(unused_imports, unused_macros)
)]
317 | |
318 | // There are currently no 8-bit, 128-bit, or higher builtin targets. |
319 | // (Although some of our generic code is written with the future |
320 | // addition of 128-bit targets in mind.) |
321 | // Note that Rust (and C99) pointers must be at least 16-bits: https://github.com/rust-lang/rust/pull/49305 |
#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64",
)))]
compile_error!(
    "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
     if you need support for others, \
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);
332 | |
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(any(
        not(portable_atomic_no_atomic_cas),
        not(any(
            target_arch = "arm",
            target_arch = "avr",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "xtensa",
        )),
    ))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(any(
        target_has_atomic = "ptr",
        not(any(
            target_arch = "arm",
            target_arch = "avr",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "xtensa",
        )),
    ))
)]
compile_error!(
    "cfg(portable_atomic_unsafe_assume_single_core) is not compatible with this target;\n\
     if you need cfg(portable_atomic_unsafe_assume_single_core) support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);
367 | |
#[cfg(portable_atomic_no_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "arm",
    target_arch = "powerpc64",
    target_arch = "x86_64",
)))]
compile_error!("cfg(portable_atomic_no_outline_atomics) is not compatible with this target");
#[cfg(portable_atomic_outline_atomics)]
#[cfg(not(any(target_arch = "aarch64", target_arch = "powerpc64")))]
compile_error!("cfg(portable_atomic_outline_atomics) is not compatible with this target");
#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(all(
    target_arch = "arm",
    not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
)))]
compile_error!("cfg(portable_atomic_disable_fiq) is not compatible with this target");
#[cfg(portable_atomic_s_mode)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("cfg(portable_atomic_s_mode) is not compatible with this target");
#[cfg(portable_atomic_force_amo)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("cfg(portable_atomic_force_amo) is not compatible with this target");
391 | |
#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "cfg(portable_atomic_disable_fiq) may only be used together with cfg(portable_atomic_unsafe_assume_single_core)"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "cfg(portable_atomic_s_mode) may only be used together with cfg(portable_atomic_unsafe_assume_single_core)"
);
#[cfg(portable_atomic_force_amo)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "cfg(portable_atomic_force_amo) may only be used together with cfg(portable_atomic_unsafe_assume_single_core)"
);

#[cfg(all(portable_atomic_unsafe_assume_single_core, feature = "critical-section"))]
compile_error!(
    "you may not enable feature `critical-section` and cfg(portable_atomic_unsafe_assume_single_core) at the same time"
);
412 | |
#[cfg(feature = "require-cas")]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(not(any(
        not(portable_atomic_no_atomic_cas),
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(not(any(
        target_has_atomic = "ptr",
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
compile_error!(
    "dependents require atomic CAS but it is not available on this target by default;\n\
     consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features.\n\
     see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
);
439 | |
#[cfg(any(test, feature = "std"))]
extern crate std;

#[macro_use]
mod cfgs;
#[cfg(target_pointer_width = "128")]
pub use {cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "16")]
pub use {cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "32")]
pub use {cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "64")]
pub use {cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};
453 | |
#[macro_use]
mod utils;

#[cfg(test)]
#[macro_use]
mod tests;

#[doc(no_inline)]
pub use core::sync::atomic::Ordering;

#[doc(no_inline)]
// LLVM doesn't support fence/compiler_fence for MSP430.
#[cfg(not(target_arch = "msp430"))]
pub use core::sync::atomic::{compiler_fence, fence};
#[cfg(target_arch = "msp430")]
pub use imp::msp430::{compiler_fence, fence};
470 | |
471 | mod imp; |
472 | |
473 | pub mod hint { |
474 | //! Re-export of the [`core::hint`] module. |
475 | //! |
476 | //! The only difference from the [`core::hint`] module is that [`spin_loop`] |
477 | //! is available in all rust versions that this crate supports. |
478 | //! |
479 | //! ``` |
480 | //! use portable_atomic::hint; |
481 | //! |
482 | //! hint::spin_loop(); |
483 | //! ``` |
484 | |
485 | #[doc (no_inline)] |
486 | pub use core::hint::*; |
487 | |
488 | /// Emits a machine instruction to signal the processor that it is running in |
489 | /// a busy-wait spin-loop ("spin lock"). |
490 | /// |
491 | /// Upon receiving the spin-loop signal the processor can optimize its behavior by, |
492 | /// for example, saving power or switching hyper-threads. |
493 | /// |
494 | /// This function is different from [`thread::yield_now`] which directly |
495 | /// yields to the system's scheduler, whereas `spin_loop` does not interact |
496 | /// with the operating system. |
497 | /// |
498 | /// A common use case for `spin_loop` is implementing bounded optimistic |
499 | /// spinning in a CAS loop in synchronization primitives. To avoid problems |
500 | /// like priority inversion, it is strongly recommended that the spin loop is |
501 | /// terminated after a finite amount of iterations and an appropriate blocking |
502 | /// syscall is made. |
503 | /// |
504 | /// **Note:** On platforms that do not support receiving spin-loop hints this |
505 | /// function does not do anything at all. |
506 | /// |
507 | /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html |
508 | #[inline ] |
509 | pub fn spin_loop() { |
510 | #[allow (deprecated)] |
511 | core::sync::atomic::spin_loop_hint(); |
512 | } |
513 | } |
514 | |
515 | #[cfg (doc)] |
516 | use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst}; |
517 | use core::{fmt, ptr}; |
518 | |
519 | #[cfg (miri)] |
520 | use crate::utils::strict; |
521 | |
522 | cfg_has_atomic_8! { |
523 | cfg_has_atomic_cas! { |
524 | // See https://github.com/rust-lang/rust/pull/114034 for details. |
525 | // https://github.com/rust-lang/rust/blob/9339f446a5302cd5041d3f3b5e59761f36699167/library/core/src/sync/atomic.rs#L134 |
526 | // https://godbolt.org/z/5W85abT58 |
527 | #[cfg (portable_atomic_no_cfg_target_has_atomic)] |
528 | const EMULATE_ATOMIC_BOOL: bool = cfg!(all( |
529 | not(portable_atomic_no_atomic_cas), |
530 | any(target_arch = "riscv32" , target_arch = "riscv64" , target_arch = "loongarch64" ), |
531 | )); |
532 | #[cfg (not(portable_atomic_no_cfg_target_has_atomic))] |
533 | const EMULATE_ATOMIC_BOOL: bool = cfg!(all( |
534 | target_has_atomic = "8" , |
535 | any(target_arch = "riscv32" , target_arch = "riscv64" , target_arch = "loongarch64" ), |
536 | )); |
537 | } // cfg_has_atomic_cas! |
538 | |
539 | /// A boolean type which can be safely shared between threads. |
540 | /// |
541 | /// This type has the same in-memory representation as a [`bool`]. |
542 | /// |
543 | /// If the compiler and the platform support atomic loads and stores of `u8`, |
544 | /// this type is a wrapper for the standard library's |
545 | /// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it |
546 | /// but the compiler does not, atomic operations are implemented using inline |
547 | /// assembly. |
548 | #[repr (C, align(1))] |
549 | pub struct AtomicBool { |
550 | v: core::cell::UnsafeCell<u8>, |
551 | } |
552 | |
553 | impl Default for AtomicBool { |
554 | /// Creates an `AtomicBool` initialized to `false`. |
555 | #[inline ] |
556 | fn default() -> Self { |
557 | Self::new(false) |
558 | } |
559 | } |
560 | |
561 | impl From<bool> for AtomicBool { |
562 | /// Converts a `bool` into an `AtomicBool`. |
563 | #[inline ] |
564 | fn from(b: bool) -> Self { |
565 | Self::new(b) |
566 | } |
567 | } |
568 | |
569 | // Send is implicitly implemented. |
570 | // SAFETY: any data races are prevented by disabling interrupts or |
571 | // atomic intrinsics (see module-level comments). |
572 | unsafe impl Sync for AtomicBool {} |
573 | |
574 | // UnwindSafe is implicitly implemented. |
575 | #[cfg (not(portable_atomic_no_core_unwind_safe))] |
576 | impl core::panic::RefUnwindSafe for AtomicBool {} |
577 | #[cfg (all(portable_atomic_no_core_unwind_safe, feature = "std" ))] |
578 | impl std::panic::RefUnwindSafe for AtomicBool {} |
579 | |
580 | impl_debug_and_serde!(AtomicBool); |
581 | |
582 | impl AtomicBool { |
583 | /// Creates a new `AtomicBool`. |
584 | /// |
585 | /// # Examples |
586 | /// |
587 | /// ``` |
588 | /// use portable_atomic::AtomicBool; |
589 | /// |
590 | /// let atomic_true = AtomicBool::new(true); |
591 | /// let atomic_false = AtomicBool::new(false); |
592 | /// ``` |
593 | #[inline ] |
594 | #[must_use ] |
595 | pub const fn new(v: bool) -> Self { |
596 | static_assert_layout!(AtomicBool, bool); |
597 | Self { v: core::cell::UnsafeCell::new(v as u8) } |
598 | } |
599 | |
600 | /// Creates a new `AtomicBool` from a pointer. |
601 | /// |
602 | /// # Safety |
603 | /// |
604 | /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can |
605 | /// be bigger than `align_of::<bool>()`). |
606 | /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. |
607 | /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value |
608 | /// behind `ptr` must have a happens-before relationship with atomic accesses via the returned |
609 | /// value (or vice-versa). |
610 | /// * In other words, time periods where the value is accessed atomically may not overlap |
611 | /// with periods where the value is accessed non-atomically. |
612 | /// * This requirement is trivially satisfied if `ptr` is never used non-atomically for the |
613 | /// duration of lifetime `'a`. Most use cases should be able to follow this guideline. |
614 | /// * This requirement is also trivially satisfied if all accesses (atomic or not) are done |
615 | /// from the same thread. |
616 | /// * If this atomic type is *not* lock-free: |
617 | /// * Any accesses to the value behind `ptr` must have a happens-before relationship |
618 | /// with accesses via the returned value (or vice-versa). |
619 | /// * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must |
620 | /// be compatible with operations performed by this atomic type. |
621 | /// * This method must not be used to create overlapping or mixed-size atomic accesses, as |
622 | /// these are not supported by the memory model. |
623 | /// |
624 | /// [valid]: core::ptr#safety |
625 | #[inline ] |
626 | #[must_use ] |
627 | pub unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self { |
628 | #[allow (clippy::cast_ptr_alignment)] |
629 | // SAFETY: guaranteed by the caller |
630 | unsafe { &*(ptr as *mut Self) } |
631 | } |
632 | |
633 | /// Returns `true` if operations on values of this type are lock-free. |
634 | /// |
635 | /// If the compiler or the platform doesn't support the necessary |
636 | /// atomic instructions, global locks for every potentially |
637 | /// concurrent atomic operation will be used. |
638 | /// |
639 | /// # Examples |
640 | /// |
641 | /// ``` |
642 | /// use portable_atomic::AtomicBool; |
643 | /// |
644 | /// let is_lock_free = AtomicBool::is_lock_free(); |
645 | /// ``` |
646 | #[inline ] |
647 | #[must_use ] |
648 | pub fn is_lock_free() -> bool { |
649 | imp::AtomicU8::is_lock_free() |
650 | } |
651 | |
/// Returns `true` if operations on values of this type are always lock-free.
653 | /// |
654 | /// If the compiler or the platform doesn't support the necessary |
655 | /// atomic instructions, global locks for every potentially |
656 | /// concurrent atomic operation will be used. |
657 | /// |
658 | /// **Note:** If the atomic operation relies on dynamic CPU feature detection, |
659 | /// this type may be lock-free even if the function returns false. |
660 | /// |
661 | /// # Examples |
662 | /// |
663 | /// ``` |
664 | /// use portable_atomic::AtomicBool; |
665 | /// |
666 | /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free(); |
667 | /// ``` |
668 | #[inline ] |
669 | #[must_use ] |
670 | pub const fn is_always_lock_free() -> bool { |
671 | imp::AtomicU8::is_always_lock_free() |
672 | } |
673 | |
674 | /// Returns a mutable reference to the underlying [`bool`]. |
675 | /// |
676 | /// This is safe because the mutable reference guarantees that no other threads are |
677 | /// concurrently accessing the atomic data. |
678 | /// |
679 | /// # Examples |
680 | /// |
681 | /// ``` |
682 | /// use portable_atomic::{AtomicBool, Ordering}; |
683 | /// |
684 | /// let mut some_bool = AtomicBool::new(true); |
685 | /// assert_eq!(*some_bool.get_mut(), true); |
686 | /// *some_bool.get_mut() = false; |
687 | /// assert_eq!(some_bool.load(Ordering::SeqCst), false); |
688 | /// ``` |
689 | #[inline ] |
690 | pub fn get_mut(&mut self) -> &mut bool { |
691 | // SAFETY: the mutable reference guarantees unique ownership. |
692 | unsafe { &mut *(self.v.get() as *mut bool) } |
693 | } |
694 | |
695 | // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types. |
696 | // https://github.com/rust-lang/rust/issues/76314 |
697 | |
698 | /// Consumes the atomic and returns the contained value. |
699 | /// |
700 | /// This is safe because passing `self` by value guarantees that no other threads are |
701 | /// concurrently accessing the atomic data. |
702 | /// |
703 | /// # Examples |
704 | /// |
705 | /// ``` |
706 | /// use portable_atomic::AtomicBool; |
707 | /// |
708 | /// let some_bool = AtomicBool::new(true); |
709 | /// assert_eq!(some_bool.into_inner(), true); |
710 | /// ``` |
711 | #[inline ] |
712 | pub fn into_inner(self) -> bool { |
713 | self.v.into_inner() != 0 |
714 | } |
715 | |
716 | /// Loads a value from the bool. |
717 | /// |
718 | /// `load` takes an [`Ordering`] argument which describes the memory ordering |
719 | /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`]. |
720 | /// |
721 | /// # Panics |
722 | /// |
723 | /// Panics if `order` is [`Release`] or [`AcqRel`]. |
724 | /// |
725 | /// # Examples |
726 | /// |
727 | /// ``` |
728 | /// use portable_atomic::{AtomicBool, Ordering}; |
729 | /// |
730 | /// let some_bool = AtomicBool::new(true); |
731 | /// |
732 | /// assert_eq!(some_bool.load(Ordering::Relaxed), true); |
733 | /// ``` |
734 | #[inline ] |
735 | #[cfg_attr ( |
736 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
737 | track_caller |
738 | )] |
739 | pub fn load(&self, order: Ordering) -> bool { |
740 | self.as_atomic_u8().load(order) != 0 |
741 | } |
742 | |
743 | /// Stores a value into the bool. |
744 | /// |
745 | /// `store` takes an [`Ordering`] argument which describes the memory ordering |
746 | /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`]. |
747 | /// |
748 | /// # Panics |
749 | /// |
750 | /// Panics if `order` is [`Acquire`] or [`AcqRel`]. |
751 | /// |
752 | /// # Examples |
753 | /// |
754 | /// ``` |
755 | /// use portable_atomic::{AtomicBool, Ordering}; |
756 | /// |
757 | /// let some_bool = AtomicBool::new(true); |
758 | /// |
759 | /// some_bool.store(false, Ordering::Relaxed); |
760 | /// assert_eq!(some_bool.load(Ordering::Relaxed), false); |
761 | /// ``` |
762 | #[inline ] |
763 | #[cfg_attr ( |
764 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
765 | track_caller |
766 | )] |
767 | pub fn store(&self, val: bool, order: Ordering) { |
768 | self.as_atomic_u8().store(val as u8, order); |
769 | } |
770 | |
771 | cfg_has_atomic_cas! { |
772 | /// Stores a value into the bool, returning the previous value. |
773 | /// |
774 | /// `swap` takes an [`Ordering`] argument which describes the memory ordering |
775 | /// of this operation. All ordering modes are possible. Note that using |
776 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
777 | /// using [`Release`] makes the load part [`Relaxed`]. |
778 | /// |
779 | /// # Examples |
780 | /// |
781 | /// ``` |
782 | /// use portable_atomic::{AtomicBool, Ordering}; |
783 | /// |
784 | /// let some_bool = AtomicBool::new(true); |
785 | /// |
786 | /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true); |
787 | /// assert_eq!(some_bool.load(Ordering::Relaxed), false); |
788 | /// ``` |
789 | #[inline ] |
790 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
791 | pub fn swap(&self, val: bool, order: Ordering) -> bool { |
792 | if EMULATE_ATOMIC_BOOL { |
793 | if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) } |
794 | } else { |
795 | self.as_atomic_u8().swap(val as u8, order) != 0 |
796 | } |
797 | } |
798 | |
799 | /// Stores a value into the [`bool`] if the current value is the same as the `current` value. |
800 | /// |
801 | /// The return value is a result indicating whether the new value was written and containing |
802 | /// the previous value. On success this value is guaranteed to be equal to `current`. |
803 | /// |
804 | /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory |
805 | /// ordering of this operation. `success` describes the required ordering for the |
806 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
807 | /// `failure` describes the required ordering for the load operation that takes place when |
808 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
809 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
810 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
811 | /// |
812 | /// # Panics |
813 | /// |
814 | /// Panics if `failure` is [`Release`], [`AcqRel`]. |
815 | /// |
816 | /// # Examples |
817 | /// |
818 | /// ``` |
819 | /// use portable_atomic::{AtomicBool, Ordering}; |
820 | /// |
821 | /// let some_bool = AtomicBool::new(true); |
822 | /// |
823 | /// assert_eq!( |
824 | /// some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed), |
825 | /// Ok(true) |
826 | /// ); |
827 | /// assert_eq!(some_bool.load(Ordering::Relaxed), false); |
828 | /// |
829 | /// assert_eq!( |
830 | /// some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire), |
831 | /// Err(false) |
832 | /// ); |
833 | /// assert_eq!(some_bool.load(Ordering::Relaxed), false); |
834 | /// ``` |
835 | #[inline ] |
836 | #[cfg_attr (portable_atomic_doc_cfg, doc(alias = "compare_and_swap" ))] |
837 | #[cfg_attr ( |
838 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
839 | track_caller |
840 | )] |
841 | pub fn compare_exchange( |
842 | &self, |
843 | current: bool, |
844 | new: bool, |
845 | success: Ordering, |
846 | failure: Ordering, |
847 | ) -> Result<bool, bool> { |
848 | if EMULATE_ATOMIC_BOOL { |
849 | crate::utils::assert_compare_exchange_ordering(success, failure); |
850 | let order = crate::utils::upgrade_success_ordering(success, failure); |
851 | let old = if current == new { |
852 | // This is a no-op, but we still need to perform the operation |
853 | // for memory ordering reasons. |
854 | self.fetch_or(false, order) |
855 | } else { |
856 | // This sets the value to the new one and returns the old one. |
857 | self.swap(new, order) |
858 | }; |
859 | if old == current { Ok(old) } else { Err(old) } |
860 | } else { |
861 | match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) { |
862 | Ok(x) => Ok(x != 0), |
863 | Err(x) => Err(x != 0), |
864 | } |
865 | } |
866 | } |
867 | |
868 | /// Stores a value into the [`bool`] if the current value is the same as the `current` value. |
869 | /// |
870 | /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the |
871 | /// comparison succeeds, which can result in more efficient code on some platforms. The |
872 | /// return value is a result indicating whether the new value was written and containing the |
873 | /// previous value. |
874 | /// |
875 | /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory |
876 | /// ordering of this operation. `success` describes the required ordering for the |
877 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
878 | /// `failure` describes the required ordering for the load operation that takes place when |
879 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
880 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
881 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
882 | /// |
883 | /// # Panics |
884 | /// |
885 | /// Panics if `failure` is [`Release`], [`AcqRel`]. |
886 | /// |
887 | /// # Examples |
888 | /// |
889 | /// ``` |
890 | /// use portable_atomic::{AtomicBool, Ordering}; |
891 | /// |
892 | /// let val = AtomicBool::new(false); |
893 | /// |
894 | /// let new = true; |
895 | /// let mut old = val.load(Ordering::Relaxed); |
896 | /// loop { |
897 | /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) { |
898 | /// Ok(_) => break, |
899 | /// Err(x) => old = x, |
900 | /// } |
901 | /// } |
902 | /// ``` |
903 | #[inline ] |
904 | #[cfg_attr (portable_atomic_doc_cfg, doc(alias = "compare_and_swap" ))] |
905 | #[cfg_attr ( |
906 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
907 | track_caller |
908 | )] |
909 | pub fn compare_exchange_weak( |
910 | &self, |
911 | current: bool, |
912 | new: bool, |
913 | success: Ordering, |
914 | failure: Ordering, |
915 | ) -> Result<bool, bool> { |
916 | if EMULATE_ATOMIC_BOOL { |
917 | return self.compare_exchange(current, new, success, failure); |
918 | } |
919 | |
920 | match self.as_atomic_u8().compare_exchange_weak(current as u8, new as u8, success, failure) |
921 | { |
922 | Ok(x) => Ok(x != 0), |
923 | Err(x) => Err(x != 0), |
924 | } |
925 | } |
926 | |
927 | /// Logical "and" with a boolean value. |
928 | /// |
929 | /// Performs a logical "and" operation on the current value and the argument `val`, and sets |
930 | /// the new value to the result. |
931 | /// |
932 | /// Returns the previous value. |
933 | /// |
934 | /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering |
935 | /// of this operation. All ordering modes are possible. Note that using |
936 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
937 | /// using [`Release`] makes the load part [`Relaxed`]. |
938 | /// |
939 | /// # Examples |
940 | /// |
941 | /// ``` |
942 | /// use portable_atomic::{AtomicBool, Ordering}; |
943 | /// |
944 | /// let foo = AtomicBool::new(true); |
945 | /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true); |
946 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
947 | /// |
948 | /// let foo = AtomicBool::new(true); |
949 | /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true); |
950 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
951 | /// |
952 | /// let foo = AtomicBool::new(false); |
953 | /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false); |
954 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
955 | /// ``` |
956 | #[inline ] |
957 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
958 | pub fn fetch_and(&self, val: bool, order: Ordering) -> bool { |
959 | self.as_atomic_u8().fetch_and(val as u8, order) != 0 |
960 | } |
961 | |
962 | /// Logical "and" with a boolean value. |
963 | /// |
964 | /// Performs a logical "and" operation on the current value and the argument `val`, and sets |
965 | /// the new value to the result. |
966 | /// |
967 | /// Unlike `fetch_and`, this does not return the previous value. |
968 | /// |
969 | /// `and` takes an [`Ordering`] argument which describes the memory ordering |
970 | /// of this operation. All ordering modes are possible. Note that using |
971 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
972 | /// using [`Release`] makes the load part [`Relaxed`]. |
973 | /// |
974 | /// This function may generate more efficient code than `fetch_and` on some platforms. |
975 | /// |
976 | /// - x86/x86_64: `lock and` instead of `cmpxchg` loop |
977 | /// - MSP430: `and` instead of disabling interrupts |
978 | /// |
979 | /// Note: On x86/x86_64, the use of either function should not usually |
980 | /// affect the generated code, because LLVM can properly optimize the case |
981 | /// where the result is unused. |
982 | /// |
983 | /// # Examples |
984 | /// |
985 | /// ``` |
986 | /// use portable_atomic::{AtomicBool, Ordering}; |
987 | /// |
988 | /// let foo = AtomicBool::new(true); |
989 | /// foo.and(false, Ordering::SeqCst); |
990 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
991 | /// |
992 | /// let foo = AtomicBool::new(true); |
993 | /// foo.and(true, Ordering::SeqCst); |
994 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
995 | /// |
996 | /// let foo = AtomicBool::new(false); |
997 | /// foo.and(false, Ordering::SeqCst); |
998 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
999 | /// ``` |
1000 | #[inline ] |
1001 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
1002 | pub fn and(&self, val: bool, order: Ordering) { |
1003 | self.as_atomic_u8().and(val as u8, order); |
1004 | } |
1005 | |
1006 | /// Logical "nand" with a boolean value. |
1007 | /// |
1008 | /// Performs a logical "nand" operation on the current value and the argument `val`, and sets |
1009 | /// the new value to the result. |
1010 | /// |
1011 | /// Returns the previous value. |
1012 | /// |
1013 | /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering |
1014 | /// of this operation. All ordering modes are possible. Note that using |
1015 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1016 | /// using [`Release`] makes the load part [`Relaxed`]. |
1017 | /// |
1018 | /// # Examples |
1019 | /// |
1020 | /// ``` |
1021 | /// use portable_atomic::{AtomicBool, Ordering}; |
1022 | /// |
1023 | /// let foo = AtomicBool::new(true); |
1024 | /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true); |
1025 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1026 | /// |
1027 | /// let foo = AtomicBool::new(true); |
1028 | /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true); |
1029 | /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0); |
1030 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1031 | /// |
1032 | /// let foo = AtomicBool::new(false); |
1033 | /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false); |
1034 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1035 | /// ``` |
1036 | #[inline ] |
1037 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
1038 | pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool { |
1039 | // https://github.com/rust-lang/rust/blob/1.70.0/library/core/src/sync/atomic.rs#L811-L825 |
1040 | if val { |
1041 | // !(x & true) == !x |
1042 | // We must invert the bool. |
1043 | self.fetch_xor(true, order) |
1044 | } else { |
1045 | // !(x & false) == true |
1046 | // We must set the bool to true. |
1047 | self.swap(true, order) |
1048 | } |
1049 | } |
1050 | |
1051 | /// Logical "or" with a boolean value. |
1052 | /// |
1053 | /// Performs a logical "or" operation on the current value and the argument `val`, and sets the |
1054 | /// new value to the result. |
1055 | /// |
1056 | /// Returns the previous value. |
1057 | /// |
1058 | /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering |
1059 | /// of this operation. All ordering modes are possible. Note that using |
1060 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1061 | /// using [`Release`] makes the load part [`Relaxed`]. |
1062 | /// |
1063 | /// # Examples |
1064 | /// |
1065 | /// ``` |
1066 | /// use portable_atomic::{AtomicBool, Ordering}; |
1067 | /// |
1068 | /// let foo = AtomicBool::new(true); |
1069 | /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true); |
1070 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1071 | /// |
1072 | /// let foo = AtomicBool::new(true); |
1073 | /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true); |
1074 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1075 | /// |
1076 | /// let foo = AtomicBool::new(false); |
1077 | /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false); |
1078 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1079 | /// ``` |
1080 | #[inline ] |
1081 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
1082 | pub fn fetch_or(&self, val: bool, order: Ordering) -> bool { |
1083 | self.as_atomic_u8().fetch_or(val as u8, order) != 0 |
1084 | } |
1085 | |
1086 | /// Logical "or" with a boolean value. |
1087 | /// |
1088 | /// Performs a logical "or" operation on the current value and the argument `val`, and sets the |
1089 | /// new value to the result. |
1090 | /// |
1091 | /// Unlike `fetch_or`, this does not return the previous value. |
1092 | /// |
1093 | /// `or` takes an [`Ordering`] argument which describes the memory ordering |
1094 | /// of this operation. All ordering modes are possible. Note that using |
1095 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1096 | /// using [`Release`] makes the load part [`Relaxed`]. |
1097 | /// |
1098 | /// This function may generate more efficient code than `fetch_or` on some platforms. |
1099 | /// |
1100 | /// - x86/x86_64: `lock or` instead of `cmpxchg` loop |
1101 | /// - MSP430: `bis` instead of disabling interrupts |
1102 | /// |
1103 | /// Note: On x86/x86_64, the use of either function should not usually |
1104 | /// affect the generated code, because LLVM can properly optimize the case |
1105 | /// where the result is unused. |
1106 | /// |
1107 | /// # Examples |
1108 | /// |
1109 | /// ``` |
1110 | /// use portable_atomic::{AtomicBool, Ordering}; |
1111 | /// |
1112 | /// let foo = AtomicBool::new(true); |
1113 | /// foo.or(false, Ordering::SeqCst); |
1114 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1115 | /// |
1116 | /// let foo = AtomicBool::new(true); |
1117 | /// foo.or(true, Ordering::SeqCst); |
1118 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1119 | /// |
1120 | /// let foo = AtomicBool::new(false); |
1121 | /// foo.or(false, Ordering::SeqCst); |
1122 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1123 | /// ``` |
1124 | #[inline ] |
1125 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
1126 | pub fn or(&self, val: bool, order: Ordering) { |
1127 | self.as_atomic_u8().or(val as u8, order); |
1128 | } |
1129 | |
1130 | /// Logical "xor" with a boolean value. |
1131 | /// |
1132 | /// Performs a logical "xor" operation on the current value and the argument `val`, and sets |
1133 | /// the new value to the result. |
1134 | /// |
1135 | /// Returns the previous value. |
1136 | /// |
1137 | /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering |
1138 | /// of this operation. All ordering modes are possible. Note that using |
1139 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1140 | /// using [`Release`] makes the load part [`Relaxed`]. |
1141 | /// |
1142 | /// # Examples |
1143 | /// |
1144 | /// ``` |
1145 | /// use portable_atomic::{AtomicBool, Ordering}; |
1146 | /// |
1147 | /// let foo = AtomicBool::new(true); |
1148 | /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true); |
1149 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1150 | /// |
1151 | /// let foo = AtomicBool::new(true); |
1152 | /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true); |
1153 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1154 | /// |
1155 | /// let foo = AtomicBool::new(false); |
1156 | /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false); |
1157 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1158 | /// ``` |
1159 | #[inline ] |
1160 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
1161 | pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool { |
1162 | self.as_atomic_u8().fetch_xor(val as u8, order) != 0 |
1163 | } |
1164 | |
1165 | /// Logical "xor" with a boolean value. |
1166 | /// |
1167 | /// Performs a logical "xor" operation on the current value and the argument `val`, and sets |
1168 | /// the new value to the result. |
1169 | /// |
1170 | /// Unlike `fetch_xor`, this does not return the previous value. |
1171 | /// |
1172 | /// `xor` takes an [`Ordering`] argument which describes the memory ordering |
1173 | /// of this operation. All ordering modes are possible. Note that using |
1174 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1175 | /// using [`Release`] makes the load part [`Relaxed`]. |
1176 | /// |
1177 | /// This function may generate more efficient code than `fetch_xor` on some platforms. |
1178 | /// |
1179 | /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop |
1180 | /// - MSP430: `xor` instead of disabling interrupts |
1181 | /// |
1182 | /// Note: On x86/x86_64, the use of either function should not usually |
1183 | /// affect the generated code, because LLVM can properly optimize the case |
1184 | /// where the result is unused. |
1185 | /// |
1186 | /// # Examples |
1187 | /// |
1188 | /// ``` |
1189 | /// use portable_atomic::{AtomicBool, Ordering}; |
1190 | /// |
1191 | /// let foo = AtomicBool::new(true); |
1192 | /// foo.xor(false, Ordering::SeqCst); |
1193 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1194 | /// |
1195 | /// let foo = AtomicBool::new(true); |
1196 | /// foo.xor(true, Ordering::SeqCst); |
1197 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1198 | /// |
1199 | /// let foo = AtomicBool::new(false); |
1200 | /// foo.xor(false, Ordering::SeqCst); |
1201 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1202 | /// ``` |
1203 | #[inline ] |
1204 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
1205 | pub fn xor(&self, val: bool, order: Ordering) { |
1206 | self.as_atomic_u8().xor(val as u8, order); |
1207 | } |
1208 | |
1209 | /// Logical "not" with a boolean value. |
1210 | /// |
1211 | /// Performs a logical "not" operation on the current value, and sets |
1212 | /// the new value to the result. |
1213 | /// |
1214 | /// Returns the previous value. |
1215 | /// |
1216 | /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering |
1217 | /// of this operation. All ordering modes are possible. Note that using |
1218 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1219 | /// using [`Release`] makes the load part [`Relaxed`]. |
1220 | /// |
1221 | /// # Examples |
1222 | /// |
1223 | /// ``` |
1224 | /// use portable_atomic::{AtomicBool, Ordering}; |
1225 | /// |
1226 | /// let foo = AtomicBool::new(true); |
1227 | /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true); |
1228 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1229 | /// |
1230 | /// let foo = AtomicBool::new(false); |
1231 | /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false); |
1232 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1233 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1236 | pub fn fetch_not(&self, order: Ordering) -> bool { |
1237 | self.fetch_xor(true, order) |
1238 | } |
1239 | |
1240 | /// Logical "not" with a boolean value. |
1241 | /// |
1242 | /// Performs a logical "not" operation on the current value, and sets |
1243 | /// the new value to the result. |
1244 | /// |
1245 | /// Unlike `fetch_not`, this does not return the previous value. |
1246 | /// |
1247 | /// `not` takes an [`Ordering`] argument which describes the memory ordering |
1248 | /// of this operation. All ordering modes are possible. Note that using |
1249 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1250 | /// using [`Release`] makes the load part [`Relaxed`]. |
1251 | /// |
1252 | /// This function may generate more efficient code than `fetch_not` on some platforms. |
1253 | /// |
1254 | /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop |
1255 | /// - MSP430: `xor` instead of disabling interrupts |
1256 | /// |
1257 | /// Note: On x86/x86_64, the use of either function should not usually |
1258 | /// affect the generated code, because LLVM can properly optimize the case |
1259 | /// where the result is unused. |
1260 | /// |
1261 | /// # Examples |
1262 | /// |
1263 | /// ``` |
1264 | /// use portable_atomic::{AtomicBool, Ordering}; |
1265 | /// |
1266 | /// let foo = AtomicBool::new(true); |
1267 | /// foo.not(Ordering::SeqCst); |
1268 | /// assert_eq!(foo.load(Ordering::SeqCst), false); |
1269 | /// |
1270 | /// let foo = AtomicBool::new(false); |
1271 | /// foo.not(Ordering::SeqCst); |
1272 | /// assert_eq!(foo.load(Ordering::SeqCst), true); |
1273 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1276 | pub fn not(&self, order: Ordering) { |
1277 | self.xor(true, order); |
1278 | } |
1279 | |
1280 | /// Fetches the value, and applies a function to it that returns an optional |
1281 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function |
1282 | /// returned `Some(_)`, else `Err(previous_value)`. |
1283 | /// |
1284 | /// Note: This may call the function multiple times if the value has been |
1285 | /// changed from other threads in the meantime, as long as the function |
1286 | /// returns `Some(_)`, but the function will have been applied only once to |
1287 | /// the stored value. |
1288 | /// |
1289 | /// `fetch_update` takes two [`Ordering`] arguments to describe the memory |
1290 | /// ordering of this operation. The first describes the required ordering for |
1291 | /// when the operation finally succeeds while the second describes the |
1292 | /// required ordering for loads. These correspond to the success and failure |
1293 | /// orderings of [`compare_exchange`](Self::compare_exchange) respectively. |
1294 | /// |
1295 | /// Using [`Acquire`] as success ordering makes the store part of this |
1296 | /// operation [`Relaxed`], and using [`Release`] makes the final successful |
1297 | /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], |
1298 | /// [`Acquire`] or [`Relaxed`]. |
1299 | /// |
1300 | /// # Considerations |
1301 | /// |
1302 | /// This method is not magic; it is not provided by the hardware. |
1303 | /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak), |
1304 | /// and suffers from the same drawbacks. |
1305 | /// In particular, this method will not circumvent the [ABA Problem]. |
1306 | /// |
1307 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
1308 | /// |
1309 | /// # Panics |
1310 | /// |
/// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1312 | /// |
1313 | /// # Examples |
1314 | /// |
1315 | /// ```rust |
1316 | /// use portable_atomic::{AtomicBool, Ordering}; |
1317 | /// |
1318 | /// let x = AtomicBool::new(false); |
1319 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false)); |
1320 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false)); |
1321 | /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true)); |
1322 | /// assert_eq!(x.load(Ordering::SeqCst), false); |
1323 | /// ``` |
#[inline]
#[cfg_attr(
1326 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
1327 | track_caller |
1328 | )] |
1329 | pub fn fetch_update<F>( |
1330 | &self, |
1331 | set_order: Ordering, |
1332 | fetch_order: Ordering, |
1333 | mut f: F, |
1334 | ) -> Result<bool, bool> |
1335 | where |
1336 | F: FnMut(bool) -> Option<bool>, |
1337 | { |
1338 | let mut prev = self.load(fetch_order); |
1339 | while let Some(next) = f(prev) { |
1340 | match self.compare_exchange_weak(prev, next, set_order, fetch_order) { |
1341 | x @ Ok(_) => return x, |
1342 | Err(next_prev) => prev = next_prev, |
1343 | } |
1344 | } |
1345 | Err(prev) |
1346 | } |
1347 | } // cfg_has_atomic_cas! |
1348 | |
1349 | const_fn! { |
// This function is actually `const fn`-compatible on Rust 1.32+,
// but it is only marked `const fn` on Rust 1.58+ to match the other atomic types.
1352 | const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]; |
1353 | /// Returns a mutable pointer to the underlying [`bool`]. |
1354 | /// |
1355 | /// Returning an `*mut` pointer from a shared reference to this atomic is |
1356 | /// safe because the atomic types work with interior mutability. Any use of |
1357 | /// the returned raw pointer requires an `unsafe` block and has to uphold |
1358 | /// the safety requirements. If there is concurrent access, note the following |
1359 | /// additional safety requirements: |
1360 | /// |
1361 | /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent |
1362 | /// operations on it must be atomic. |
1363 | /// - Otherwise, any concurrent operations on it must be compatible with |
1364 | /// operations performed by this atomic type. |
1365 | /// |
1366 | /// This is `const fn` on Rust 1.58+. |
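///
/// # Examples
///
/// A minimal, single-threaded sketch; the raw pointer is only used while
/// nothing else accesses the atomic:
///
/// ```
/// use portable_atomic::{AtomicBool, Ordering};
///
/// let a = AtomicBool::new(false);
/// // SAFETY: no other thread accesses `a` while we write through the raw pointer.
/// unsafe { a.as_ptr().write(true) };
/// assert_eq!(a.load(Ordering::Relaxed), true);
/// ```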
#[inline]
1368 | pub const fn as_ptr(&self) -> *mut bool { |
1369 | self.v.get() as *mut bool |
1370 | } |
1371 | } |
1372 | |
#[inline]
1374 | fn as_atomic_u8(&self) -> &imp::AtomicU8 { |
1375 | // SAFETY: AtomicBool and imp::AtomicU8 have the same layout, |
1376 | // and both access data in the same way. |
1377 | unsafe { &*(self as *const Self as *const imp::AtomicU8) } |
1378 | } |
1379 | } |
1380 | } // cfg_has_atomic_8! |
1381 | |
1382 | cfg_has_atomic_ptr! { |
1383 | /// A raw pointer type which can be safely shared between threads. |
1384 | /// |
1385 | /// This type has the same in-memory representation as a `*mut T`. |
1386 | /// |
1387 | /// If the compiler and the platform support atomic loads and stores of pointers, |
1388 | /// this type is a wrapper for the standard library's |
1389 | /// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it |
1390 | /// but the compiler does not, atomic operations are implemented using inline |
1391 | /// assembly. |
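///
/// # Examples
///
/// A brief, illustrative sketch:
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let mut data = 10;
/// let atomic_ptr = AtomicPtr::new(&mut data as *mut i32);
/// // Loading returns the raw pointer; dereferencing it is the caller's responsibility.
/// assert_eq!(unsafe { *atomic_ptr.load(Ordering::Relaxed) }, 10);
/// ```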
1392 | // We can use #[repr(transparent)] here, but #[repr(C, align(N))] |
1393 | // will show clearer docs. |
#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1398 | pub struct AtomicPtr<T> { |
1399 | inner: imp::AtomicPtr<T>, |
1400 | } |
1401 | |
1402 | impl<T> Default for AtomicPtr<T> { |
1403 | /// Creates a null `AtomicPtr<T>`. |
#[inline]
1405 | fn default() -> Self { |
1406 | Self::new(ptr::null_mut()) |
1407 | } |
1408 | } |
1409 | |
1410 | impl<T> From<*mut T> for AtomicPtr<T> { |
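/// Converts a `*mut T` into an `AtomicPtr<T>`.
///
/// # Examples
///
/// A short sketch:
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let mut value = 123;
/// let atomic: AtomicPtr<i32> = (&mut value as *mut i32).into();
/// assert_eq!(atomic.load(Ordering::Relaxed), &mut value as *mut i32);
/// ```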
#[inline]
1412 | fn from(p: *mut T) -> Self { |
1413 | Self::new(p) |
1414 | } |
1415 | } |
1416 | |
1417 | impl<T> fmt::Debug for AtomicPtr<T> { |
#[allow(clippy::missing_inline_in_public_items)] // fmt is not hot path
1419 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { |
1420 | // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.70.0/library/core/src/sync/atomic.rs#L2024 |
1421 | fmt::Debug::fmt(&self.load(Ordering::Relaxed), f) |
1422 | } |
1423 | } |
1424 | |
1425 | impl<T> fmt::Pointer for AtomicPtr<T> { |
#[allow(clippy::missing_inline_in_public_items)] // fmt is not hot path
1427 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { |
1428 | // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.70.0/library/core/src/sync/atomic.rs#L2024 |
1429 | fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f) |
1430 | } |
1431 | } |
1432 | |
1433 | // UnwindSafe is implicitly implemented. |
#[cfg(not(portable_atomic_no_core_unwind_safe))]
impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1437 | impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {} |
1438 | |
1439 | impl<T> AtomicPtr<T> { |
1440 | /// Creates a new `AtomicPtr`. |
1441 | /// |
1442 | /// # Examples |
1443 | /// |
1444 | /// ``` |
1445 | /// use portable_atomic::AtomicPtr; |
1446 | /// |
1447 | /// let ptr = &mut 5; |
1448 | /// let atomic_ptr = AtomicPtr::new(ptr); |
1449 | /// ``` |
#[inline]
#[must_use]
1452 | pub const fn new(p: *mut T) -> Self { |
1453 | static_assert_layout!(AtomicPtr<()>, *mut ()); |
1454 | Self { inner: imp::AtomicPtr::new(p) } |
1455 | } |
1456 | |
1457 | /// Creates a new `AtomicPtr` from a pointer. |
1458 | /// |
1459 | /// # Safety |
1460 | /// |
1461 | /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this |
1462 | /// can be bigger than `align_of::<*mut T>()`). |
1463 | /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. |
1464 | /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value |
1465 | /// behind `ptr` must have a happens-before relationship with atomic accesses via the returned |
1466 | /// value (or vice-versa). |
1467 | /// * In other words, time periods where the value is accessed atomically may not overlap |
1468 | /// with periods where the value is accessed non-atomically. |
1469 | /// * This requirement is trivially satisfied if `ptr` is never used non-atomically for the |
1470 | /// duration of lifetime `'a`. Most use cases should be able to follow this guideline. |
1471 | /// * This requirement is also trivially satisfied if all accesses (atomic or not) are done |
1472 | /// from the same thread. |
1473 | /// * If this atomic type is *not* lock-free: |
1474 | /// * Any accesses to the value behind `ptr` must have a happens-before relationship |
1475 | /// with accesses via the returned value (or vice-versa). |
1476 | /// * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must |
1477 | /// be compatible with operations performed by this atomic type. |
1478 | /// * This method must not be used to create overlapping or mixed-size atomic accesses, as |
1479 | /// these are not supported by the memory model. |
1480 | /// |
1481 | /// [valid]: core::ptr#safety |
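///
/// # Examples
///
/// A minimal, single-threaded sketch in which the safety requirements are
/// trivially satisfied because `slot` is only accessed through the returned
/// reference:
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let mut slot: *mut i32 = core::ptr::null_mut();
/// // SAFETY: `slot` is valid, properly aligned, and not accessed non-atomically
/// // while the returned reference is in use.
/// let atomic = unsafe { AtomicPtr::<i32>::from_ptr(&mut slot) };
/// assert!(atomic.load(Ordering::Relaxed).is_null());
/// ```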
#[inline]
#[must_use]
1484 | pub unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self { |
#[allow(clippy::cast_ptr_alignment)]
1486 | // SAFETY: guaranteed by the caller |
1487 | unsafe { &*(ptr as *mut Self) } |
1488 | } |
1489 | |
1490 | /// Returns `true` if operations on values of this type are lock-free. |
1491 | /// |
1492 | /// If the compiler or the platform doesn't support the necessary |
1493 | /// atomic instructions, global locks for every potentially |
1494 | /// concurrent atomic operation will be used. |
1495 | /// |
1496 | /// # Examples |
1497 | /// |
1498 | /// ``` |
1499 | /// use portable_atomic::AtomicPtr; |
1500 | /// |
1501 | /// let is_lock_free = AtomicPtr::<()>::is_lock_free(); |
1502 | /// ``` |
#[inline]
#[must_use]
1505 | pub fn is_lock_free() -> bool { |
1506 | <imp::AtomicPtr<T>>::is_lock_free() |
1507 | } |
1508 | |
1509 | /// Returns `true` if operations on values of this type are lock-free. |
1510 | /// |
1511 | /// If the compiler or the platform doesn't support the necessary |
1512 | /// atomic instructions, global locks for every potentially |
1513 | /// concurrent atomic operation will be used. |
1514 | /// |
1515 | /// **Note:** If the atomic operation relies on dynamic CPU feature detection, |
1516 | /// this type may be lock-free even if the function returns false. |
1517 | /// |
1518 | /// # Examples |
1519 | /// |
1520 | /// ``` |
1521 | /// use portable_atomic::AtomicPtr; |
1522 | /// |
1523 | /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free(); |
1524 | /// ``` |
#[inline]
#[must_use]
1527 | pub const fn is_always_lock_free() -> bool { |
1528 | <imp::AtomicPtr<T>>::is_always_lock_free() |
1529 | } |
1530 | |
1531 | /// Returns a mutable reference to the underlying pointer. |
1532 | /// |
1533 | /// This is safe because the mutable reference guarantees that no other threads are |
1534 | /// concurrently accessing the atomic data. |
1535 | /// |
1536 | /// # Examples |
1537 | /// |
1538 | /// ``` |
1539 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1540 | /// |
1541 | /// let mut data = 10; |
1542 | /// let mut atomic_ptr = AtomicPtr::new(&mut data); |
1543 | /// let mut other_data = 5; |
1544 | /// *atomic_ptr.get_mut() = &mut other_data; |
1545 | /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5); |
1546 | /// ``` |
#[inline]
1548 | pub fn get_mut(&mut self) -> &mut *mut T { |
1549 | self.inner.get_mut() |
1550 | } |
1551 | |
1552 | // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types. |
1553 | // https://github.com/rust-lang/rust/issues/76314 |
1554 | |
1555 | /// Consumes the atomic and returns the contained value. |
1556 | /// |
1557 | /// This is safe because passing `self` by value guarantees that no other threads are |
1558 | /// concurrently accessing the atomic data. |
1559 | /// |
1560 | /// # Examples |
1561 | /// |
1562 | /// ``` |
1563 | /// use portable_atomic::AtomicPtr; |
1564 | /// |
1565 | /// let mut data = 5; |
1566 | /// let atomic_ptr = AtomicPtr::new(&mut data); |
1567 | /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5); |
1568 | /// ``` |
#[inline]
1570 | pub fn into_inner(self) -> *mut T { |
1571 | self.inner.into_inner() |
1572 | } |
1573 | |
1574 | /// Loads a value from the pointer. |
1575 | /// |
1576 | /// `load` takes an [`Ordering`] argument which describes the memory ordering |
1577 | /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`]. |
1578 | /// |
1579 | /// # Panics |
1580 | /// |
1581 | /// Panics if `order` is [`Release`] or [`AcqRel`]. |
1582 | /// |
1583 | /// # Examples |
1584 | /// |
1585 | /// ``` |
1586 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1587 | /// |
1588 | /// let ptr = &mut 5; |
1589 | /// let some_ptr = AtomicPtr::new(ptr); |
1590 | /// |
1591 | /// let value = some_ptr.load(Ordering::Relaxed); |
1592 | /// ``` |
#[inline]
#[cfg_attr(
1595 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
1596 | track_caller |
1597 | )] |
1598 | pub fn load(&self, order: Ordering) -> *mut T { |
1599 | self.inner.load(order) |
1600 | } |
1601 | |
1602 | /// Stores a value into the pointer. |
1603 | /// |
1604 | /// `store` takes an [`Ordering`] argument which describes the memory ordering |
1605 | /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`]. |
1606 | /// |
1607 | /// # Panics |
1608 | /// |
1609 | /// Panics if `order` is [`Acquire`] or [`AcqRel`]. |
1610 | /// |
1611 | /// # Examples |
1612 | /// |
1613 | /// ``` |
1614 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1615 | /// |
1616 | /// let ptr = &mut 5; |
1617 | /// let some_ptr = AtomicPtr::new(ptr); |
1618 | /// |
1619 | /// let other_ptr = &mut 10; |
1620 | /// |
1621 | /// some_ptr.store(other_ptr, Ordering::Relaxed); |
1622 | /// ``` |
#[inline]
#[cfg_attr(
1625 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
1626 | track_caller |
1627 | )] |
1628 | pub fn store(&self, ptr: *mut T, order: Ordering) { |
1629 | self.inner.store(ptr, order); |
1630 | } |
1631 | |
1632 | cfg_has_atomic_cas! { |
1633 | /// Stores a value into the pointer, returning the previous value. |
1634 | /// |
1635 | /// `swap` takes an [`Ordering`] argument which describes the memory ordering |
1636 | /// of this operation. All ordering modes are possible. Note that using |
1637 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
1638 | /// using [`Release`] makes the load part [`Relaxed`]. |
1639 | /// |
1640 | /// # Examples |
1641 | /// |
1642 | /// ``` |
1643 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1644 | /// |
1645 | /// let ptr = &mut 5; |
1646 | /// let some_ptr = AtomicPtr::new(ptr); |
1647 | /// |
1648 | /// let other_ptr = &mut 10; |
1649 | /// |
1650 | /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed); |
1651 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1654 | pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T { |
1655 | self.inner.swap(ptr, order) |
1656 | } |
1657 | |
1658 | /// Stores a value into the pointer if the current value is the same as the `current` value. |
1659 | /// |
1660 | /// The return value is a result indicating whether the new value was written and containing |
1661 | /// the previous value. On success this value is guaranteed to be equal to `current`. |
1662 | /// |
1663 | /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory |
1664 | /// ordering of this operation. `success` describes the required ordering for the |
1665 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
1666 | /// `failure` describes the required ordering for the load operation that takes place when |
1667 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
1668 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
1669 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
1670 | /// |
1671 | /// # Panics |
1672 | /// |
/// Panics if `failure` is [`Release`] or [`AcqRel`].
1674 | /// |
1675 | /// # Examples |
1676 | /// |
1677 | /// ``` |
1678 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1679 | /// |
1680 | /// let ptr = &mut 5; |
1681 | /// let some_ptr = AtomicPtr::new(ptr); |
1682 | /// |
1683 | /// let other_ptr = &mut 10; |
1684 | /// |
1685 | /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed); |
1686 | /// ``` |
#[inline]
#[cfg_attr(portable_atomic_doc_cfg, doc(alias = "compare_and_swap"))]
#[cfg_attr(
1690 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
1691 | track_caller |
1692 | )] |
1693 | pub fn compare_exchange( |
1694 | &self, |
1695 | current: *mut T, |
1696 | new: *mut T, |
1697 | success: Ordering, |
1698 | failure: Ordering, |
1699 | ) -> Result<*mut T, *mut T> { |
1700 | self.inner.compare_exchange(current, new, success, failure) |
1701 | } |
1702 | |
1703 | /// Stores a value into the pointer if the current value is the same as the `current` value. |
1704 | /// |
1705 | /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the |
1706 | /// comparison succeeds, which can result in more efficient code on some platforms. The |
1707 | /// return value is a result indicating whether the new value was written and containing the |
1708 | /// previous value. |
1709 | /// |
1710 | /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory |
1711 | /// ordering of this operation. `success` describes the required ordering for the |
1712 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
1713 | /// `failure` describes the required ordering for the load operation that takes place when |
1714 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
1715 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
1716 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
1717 | /// |
1718 | /// # Panics |
1719 | /// |
/// Panics if `failure` is [`Release`] or [`AcqRel`].
1721 | /// |
1722 | /// # Examples |
1723 | /// |
1724 | /// ``` |
1725 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1726 | /// |
1727 | /// let some_ptr = AtomicPtr::new(&mut 5); |
1728 | /// |
1729 | /// let new = &mut 10; |
1730 | /// let mut old = some_ptr.load(Ordering::Relaxed); |
1731 | /// loop { |
1732 | /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) { |
1733 | /// Ok(_) => break, |
1734 | /// Err(x) => old = x, |
1735 | /// } |
1736 | /// } |
1737 | /// ``` |
#[inline]
#[cfg_attr(portable_atomic_doc_cfg, doc(alias = "compare_and_swap"))]
#[cfg_attr(
1741 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
1742 | track_caller |
1743 | )] |
1744 | pub fn compare_exchange_weak( |
1745 | &self, |
1746 | current: *mut T, |
1747 | new: *mut T, |
1748 | success: Ordering, |
1749 | failure: Ordering, |
1750 | ) -> Result<*mut T, *mut T> { |
1751 | self.inner.compare_exchange_weak(current, new, success, failure) |
1752 | } |
1753 | |
1754 | /// Fetches the value, and applies a function to it that returns an optional |
1755 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function |
1756 | /// returned `Some(_)`, else `Err(previous_value)`. |
1757 | /// |
1758 | /// Note: This may call the function multiple times if the value has been |
1759 | /// changed from other threads in the meantime, as long as the function |
1760 | /// returns `Some(_)`, but the function will have been applied only once to |
1761 | /// the stored value. |
1762 | /// |
1763 | /// `fetch_update` takes two [`Ordering`] arguments to describe the memory |
1764 | /// ordering of this operation. The first describes the required ordering for |
1765 | /// when the operation finally succeeds while the second describes the |
1766 | /// required ordering for loads. These correspond to the success and failure |
1767 | /// orderings of [`compare_exchange`](Self::compare_exchange) respectively. |
1768 | /// |
1769 | /// Using [`Acquire`] as success ordering makes the store part of this |
1770 | /// operation [`Relaxed`], and using [`Release`] makes the final successful |
1771 | /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], |
1772 | /// [`Acquire`] or [`Relaxed`]. |
1773 | /// |
1774 | /// # Panics |
1775 | /// |
/// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1777 | /// |
1778 | /// # Considerations |
1779 | /// |
1780 | /// This method is not magic; it is not provided by the hardware. |
1781 | /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak), |
1782 | /// and suffers from the same drawbacks. |
1783 | /// In particular, this method will not circumvent the [ABA Problem]. |
1784 | /// |
1785 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
1786 | /// |
1787 | /// # Examples |
1788 | /// |
1789 | /// ```rust |
1790 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1791 | /// |
1792 | /// let ptr: *mut _ = &mut 5; |
1793 | /// let some_ptr = AtomicPtr::new(ptr); |
1794 | /// |
1795 | /// let new: *mut _ = &mut 10; |
1796 | /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr)); |
1797 | /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| { |
1798 | /// if x == ptr { |
1799 | /// Some(new) |
1800 | /// } else { |
1801 | /// None |
1802 | /// } |
1803 | /// }); |
1804 | /// assert_eq!(result, Ok(ptr)); |
1805 | /// assert_eq!(some_ptr.load(Ordering::SeqCst), new); |
1806 | /// ``` |
#[inline]
#[cfg_attr(
1809 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
1810 | track_caller |
1811 | )] |
1812 | pub fn fetch_update<F>( |
1813 | &self, |
1814 | set_order: Ordering, |
1815 | fetch_order: Ordering, |
1816 | mut f: F, |
1817 | ) -> Result<*mut T, *mut T> |
1818 | where |
1819 | F: FnMut(*mut T) -> Option<*mut T>, |
1820 | { |
1821 | let mut prev = self.load(fetch_order); |
1822 | while let Some(next) = f(prev) { |
1823 | match self.compare_exchange_weak(prev, next, set_order, fetch_order) { |
1824 | x @ Ok(_) => return x, |
1825 | Err(next_prev) => prev = next_prev, |
1826 | } |
1827 | } |
1828 | Err(prev) |
1829 | } |
1830 | |
#[cfg(miri)]
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1834 | fn fetch_update_<F>(&self, order: Ordering, mut f: F) -> *mut T |
1835 | where |
1836 | F: FnMut(*mut T) -> *mut T, |
1837 | { |
1838 | // This is a private function and all instances of `f` only operate on the value |
1839 | // loaded, so there is no need to synchronize the first load/failed CAS. |
1840 | let mut prev = self.load(Ordering::Relaxed); |
1841 | loop { |
1842 | let next = f(prev); |
1843 | match self.compare_exchange_weak(prev, next, order, Ordering::Relaxed) { |
1844 | Ok(x) => return x, |
1845 | Err(next_prev) => prev = next_prev, |
1846 | } |
1847 | } |
1848 | } |
1849 | |
1850 | /// Offsets the pointer's address by adding `val` (in units of `T`), |
1851 | /// returning the previous pointer. |
1852 | /// |
1853 | /// This is equivalent to using [`wrapping_add`] to atomically perform the |
1854 | /// equivalent of `ptr = ptr.wrapping_add(val);`. |
1855 | /// |
1856 | /// This method operates in units of `T`, which means that it cannot be used |
1857 | /// to offset the pointer by an amount which is not a multiple of |
1858 | /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to |
1859 | /// work with a deliberately misaligned pointer. In such cases, you may use |
1860 | /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead. |
1861 | /// |
1862 | /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the |
1863 | /// memory ordering of this operation. All ordering modes are possible. Note |
1864 | /// that using [`Acquire`] makes the store part of this operation |
1865 | /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`]. |
1866 | /// |
1867 | /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add |
1868 | /// |
1869 | /// # Examples |
1870 | /// |
1871 | /// ``` |
1872 | /// # #![allow(unstable_name_collisions)] |
1873 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1874 | /// use sptr::Strict; // stable polyfill for strict provenance |
1875 | /// |
1876 | /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut()); |
1877 | /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0); |
1878 | /// // Note: units of `size_of::<i64>()`. |
1879 | /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8); |
1880 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1883 | pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T { |
1884 | self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order) |
1885 | } |
1886 | |
1887 | /// Offsets the pointer's address by subtracting `val` (in units of `T`), |
1888 | /// returning the previous pointer. |
1889 | /// |
1890 | /// This is equivalent to using [`wrapping_sub`] to atomically perform the |
1891 | /// equivalent of `ptr = ptr.wrapping_sub(val);`. |
1892 | /// |
1893 | /// This method operates in units of `T`, which means that it cannot be used |
1894 | /// to offset the pointer by an amount which is not a multiple of |
1895 | /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to |
1896 | /// work with a deliberately misaligned pointer. In such cases, you may use |
1897 | /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead. |
1898 | /// |
1899 | /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory |
1900 | /// ordering of this operation. All ordering modes are possible. Note that |
1901 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
1902 | /// and using [`Release`] makes the load part [`Relaxed`]. |
1903 | /// |
1904 | /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub |
1905 | /// |
1906 | /// # Examples |
1907 | /// |
1908 | /// ``` |
1909 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1910 | /// |
1911 | /// let array = [1i32, 2i32]; |
1912 | /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _); |
1913 | /// |
1914 | /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1],)); |
1915 | /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0])); |
1916 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1919 | pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T { |
1920 | self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order) |
1921 | } |
1922 | |
1923 | /// Offsets the pointer's address by adding `val` *bytes*, returning the |
1924 | /// previous pointer. |
1925 | /// |
1926 | /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically |
1927 | /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`. |
1928 | /// |
1929 | /// `fetch_byte_add` takes an [`Ordering`] argument which describes the |
1930 | /// memory ordering of this operation. All ordering modes are possible. Note |
1931 | /// that using [`Acquire`] makes the store part of this operation |
1932 | /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`]. |
1933 | /// |
1934 | /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add |
1935 | /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast |
1936 | /// |
1937 | /// # Examples |
1938 | /// |
1939 | /// ``` |
1940 | /// # #![allow(unstable_name_collisions)] |
1941 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1942 | /// use sptr::Strict; // stable polyfill for strict provenance |
1943 | /// |
1944 | /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut()); |
1945 | /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0); |
1946 | /// // Note: in units of bytes, not `size_of::<i64>()`. |
1947 | /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1); |
1948 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1951 | pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T { |
1952 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
1953 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
1954 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
1955 | // compatible and is sound. |
1956 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
1957 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
1959 | { |
1960 | self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_add(val))) |
1961 | } |
#[cfg(not(miri))]
1963 | { |
1964 | self.as_atomic_usize().fetch_add(val, order) as *mut T |
1965 | } |
1966 | } |
1967 | |
1968 | /// Offsets the pointer's address by subtracting `val` *bytes*, returning the |
1969 | /// previous pointer. |
1970 | /// |
1971 | /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically |
1972 | /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`. |
1973 | /// |
1974 | /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the |
1975 | /// memory ordering of this operation. All ordering modes are possible. Note |
1976 | /// that using [`Acquire`] makes the store part of this operation |
1977 | /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`]. |
1978 | /// |
1979 | /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub |
1980 | /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast |
1981 | /// |
1982 | /// # Examples |
1983 | /// |
1984 | /// ``` |
1985 | /// # #![allow(unstable_name_collisions)] |
1986 | /// use portable_atomic::{AtomicPtr, Ordering}; |
1987 | /// use sptr::Strict; // stable polyfill for strict provenance |
1988 | /// |
1989 | /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1)); |
1990 | /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1); |
1991 | /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0); |
1992 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1995 | pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T { |
1996 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
1997 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
1998 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
1999 | // compatible and is sound. |
2000 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
2001 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
2003 | { |
2004 | self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_sub(val))) |
2005 | } |
#[cfg(not(miri))]
2007 | { |
2008 | self.as_atomic_usize().fetch_sub(val, order) as *mut T |
2009 | } |
2010 | } |
2011 | |
2012 | /// Performs a bitwise "or" operation on the address of the current pointer, |
2013 | /// and the argument `val`, and stores a pointer with provenance of the |
2014 | /// current pointer and the resulting address. |
2015 | /// |
2016 | /// This is equivalent to using [`map_addr`] to atomically perform |
2017 | /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged |
2018 | /// pointer schemes to atomically set tag bits. |
2019 | /// |
2020 | /// **Caveat**: This operation returns the previous value. To compute the |
2021 | /// stored value without losing provenance, you may use [`map_addr`]. For |
2022 | /// example: `a.fetch_or(val).map_addr(|a| a | val)`. |
2023 | /// |
2024 | /// `fetch_or` takes an [`Ordering`] argument which describes the memory |
2025 | /// ordering of this operation. All ordering modes are possible. Note that |
2026 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
2027 | /// and using [`Release`] makes the load part [`Relaxed`]. |
2028 | /// |
2029 | /// This API and its claimed semantics are part of the Strict Provenance |
2030 | /// experiment, see the [module documentation for `ptr`][core::ptr] for |
2031 | /// details. |
2032 | /// |
2033 | /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr |
2034 | /// |
2035 | /// # Examples |
2036 | /// |
2037 | /// ``` |
2038 | /// # #![allow(unstable_name_collisions)] |
2039 | /// use portable_atomic::{AtomicPtr, Ordering}; |
2040 | /// use sptr::Strict; // stable polyfill for strict provenance |
2041 | /// |
2042 | /// let pointer = &mut 3i64 as *mut i64; |
2043 | /// |
2044 | /// let atom = AtomicPtr::<i64>::new(pointer); |
2045 | /// // Tag the bottom bit of the pointer. |
2046 | /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0); |
2047 | /// // Extract and untag. |
2048 | /// let tagged = atom.load(Ordering::Relaxed); |
2049 | /// assert_eq!(tagged.addr() & 1, 1); |
2050 | /// assert_eq!(tagged.map_addr(|p| p & !1), pointer); |
2051 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2054 | pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T { |
2055 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
2056 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
2057 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
2058 | // compatible and is sound. |
2059 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
2060 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
2062 | { |
2063 | self.fetch_update_(order, |x| strict::map_addr(x, |x| x | val)) |
2064 | } |
#[cfg(not(miri))]
2066 | { |
2067 | self.as_atomic_usize().fetch_or(val, order) as *mut T |
2068 | } |
2069 | } |
2070 | |
2071 | /// Performs a bitwise "and" operation on the address of the current |
2072 | /// pointer, and the argument `val`, and stores a pointer with provenance of |
2073 | /// the current pointer and the resulting address. |
2074 | /// |
2075 | /// This is equivalent to using [`map_addr`] to atomically perform |
2076 | /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged |
2077 | /// pointer schemes to atomically unset tag bits. |
2078 | /// |
2079 | /// **Caveat**: This operation returns the previous value. To compute the |
2080 | /// stored value without losing provenance, you may use [`map_addr`]. For |
2081 | /// example: `a.fetch_and(val).map_addr(|a| a & val)`. |
2082 | /// |
2083 | /// `fetch_and` takes an [`Ordering`] argument which describes the memory |
2084 | /// ordering of this operation. All ordering modes are possible. Note that |
2085 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
2086 | /// and using [`Release`] makes the load part [`Relaxed`]. |
2087 | /// |
2088 | /// This API and its claimed semantics are part of the Strict Provenance |
2089 | /// experiment, see the [module documentation for `ptr`][core::ptr] for |
2090 | /// details. |
2091 | /// |
2092 | /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr |
2093 | /// |
2094 | /// # Examples |
2095 | /// |
2096 | /// ``` |
2097 | /// # #![allow(unstable_name_collisions)] |
2098 | /// use portable_atomic::{AtomicPtr, Ordering}; |
2099 | /// use sptr::Strict; // stable polyfill for strict provenance |
2100 | /// |
2101 | /// let pointer = &mut 3i64 as *mut i64; |
2102 | /// // A tagged pointer |
2103 | /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1)); |
2104 | /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1); |
2105 | /// // Untag, and extract the previously tagged pointer. |
2106 | /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1); |
2107 | /// assert_eq!(untagged, pointer); |
2108 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2111 | pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T { |
2112 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
2113 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
2114 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
2115 | // compatible and is sound. |
2116 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
2117 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
2119 | { |
2120 | self.fetch_update_(order, |x| strict::map_addr(x, |x| x & val)) |
2121 | } |
#[cfg(not(miri))]
2123 | { |
2124 | self.as_atomic_usize().fetch_and(val, order) as *mut T |
2125 | } |
2126 | } |
2127 | |
2128 | /// Performs a bitwise "xor" operation on the address of the current |
2129 | /// pointer, and the argument `val`, and stores a pointer with provenance of |
2130 | /// the current pointer and the resulting address. |
2131 | /// |
2132 | /// This is equivalent to using [`map_addr`] to atomically perform |
2133 | /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged |
2134 | /// pointer schemes to atomically toggle tag bits. |
2135 | /// |
2136 | /// **Caveat**: This operation returns the previous value. To compute the |
2137 | /// stored value without losing provenance, you may use [`map_addr`]. For |
2138 | /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`. |
2139 | /// |
2140 | /// `fetch_xor` takes an [`Ordering`] argument which describes the memory |
2141 | /// ordering of this operation. All ordering modes are possible. Note that |
2142 | /// using [`Acquire`] makes the store part of this operation [`Relaxed`], |
2143 | /// and using [`Release`] makes the load part [`Relaxed`]. |
2144 | /// |
2145 | /// This API and its claimed semantics are part of the Strict Provenance |
2146 | /// experiment, see the [module documentation for `ptr`][core::ptr] for |
2147 | /// details. |
2148 | /// |
2149 | /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr |
2150 | /// |
2151 | /// # Examples |
2152 | /// |
2153 | /// ``` |
2154 | /// # #![allow(unstable_name_collisions)] |
2155 | /// use portable_atomic::{AtomicPtr, Ordering}; |
2156 | /// use sptr::Strict; // stable polyfill for strict provenance |
2157 | /// |
2158 | /// let pointer = &mut 3i64 as *mut i64; |
2159 | /// let atom = AtomicPtr::<i64>::new(pointer); |
2160 | /// |
2161 | /// // Toggle a tag bit on the pointer. |
2162 | /// atom.fetch_xor(1, Ordering::Relaxed); |
2163 | /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1); |
2164 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2167 | pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T { |
2168 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
2169 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
2170 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
2171 | // compatible and is sound. |
2172 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
2173 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
2175 | { |
2176 | self.fetch_update_(order, |x| strict::map_addr(x, |x| x ^ val)) |
2177 | } |
#[cfg(not(miri))]
2179 | { |
2180 | self.as_atomic_usize().fetch_xor(val, order) as *mut T |
2181 | } |
2182 | } |
2183 | |
2184 | /// Sets the bit at the specified bit-position to 1. |
2185 | /// |
2186 | /// Returns `true` if the specified bit was previously set to 1. |
2187 | /// |
2188 | /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering |
2189 | /// of this operation. All ordering modes are possible. Note that using |
2190 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2191 | /// using [`Release`] makes the load part [`Relaxed`]. |
2192 | /// |
/// This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
2194 | /// |
2195 | /// # Examples |
2196 | /// |
2197 | /// ``` |
2198 | /// # #![allow(unstable_name_collisions)] |
2199 | /// use portable_atomic::{AtomicPtr, Ordering}; |
2200 | /// use sptr::Strict; // stable polyfill for strict provenance |
2201 | /// |
2202 | /// let pointer = &mut 3i64 as *mut i64; |
2203 | /// |
2204 | /// let atom = AtomicPtr::<i64>::new(pointer); |
2205 | /// // Tag the bottom bit of the pointer. |
2206 | /// assert!(!atom.bit_set(0, Ordering::Relaxed)); |
2207 | /// // Extract and untag. |
2208 | /// let tagged = atom.load(Ordering::Relaxed); |
2209 | /// assert_eq!(tagged.addr() & 1, 1); |
2210 | /// assert_eq!(tagged.map_addr(|p| p & !1), pointer); |
2211 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2214 | pub fn bit_set(&self, bit: u32, order: Ordering) -> bool { |
2215 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
2216 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
2217 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
2218 | // compatible and is sound. |
2219 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
2220 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
2222 | { |
2223 | let mask = 1_usize.wrapping_shl(bit); |
2224 | self.fetch_or(mask, order) as usize & mask != 0 |
2225 | } |
#[cfg(not(miri))]
2227 | { |
2228 | self.as_atomic_usize().bit_set(bit, order) |
2229 | } |
2230 | } |
2231 | |
/// Clears the bit at the specified bit-position to 0.
2233 | /// |
2234 | /// Returns `true` if the specified bit was previously set to 1. |
2235 | /// |
2236 | /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering |
2237 | /// of this operation. All ordering modes are possible. Note that using |
2238 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2239 | /// using [`Release`] makes the load part [`Relaxed`]. |
2240 | /// |
/// This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
2242 | /// |
2243 | /// # Examples |
2244 | /// |
2245 | /// ``` |
2246 | /// # #![allow(unstable_name_collisions)] |
2247 | /// use portable_atomic::{AtomicPtr, Ordering}; |
2248 | /// use sptr::Strict; // stable polyfill for strict provenance |
2249 | /// |
2250 | /// let pointer = &mut 3i64 as *mut i64; |
2251 | /// // A tagged pointer |
2252 | /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1)); |
2253 | /// assert!(atom.bit_set(0, Ordering::Relaxed)); |
2254 | /// // Untag |
2255 | /// assert!(atom.bit_clear(0, Ordering::Relaxed)); |
2256 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2259 | pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool { |
2260 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
2261 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
2262 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
2263 | // compatible and is sound. |
2264 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
2265 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
2267 | { |
2268 | let mask = 1_usize.wrapping_shl(bit); |
2269 | self.fetch_and(!mask, order) as usize & mask != 0 |
2270 | } |
#[cfg(not(miri))]
2272 | { |
2273 | self.as_atomic_usize().bit_clear(bit, order) |
2274 | } |
2275 | } |
2276 | |
2277 | /// Toggles the bit at the specified bit-position. |
2278 | /// |
2279 | /// Returns `true` if the specified bit was previously set to 1. |
2280 | /// |
2281 | /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering |
2282 | /// of this operation. All ordering modes are possible. Note that using |
2283 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2284 | /// using [`Release`] makes the load part [`Relaxed`]. |
2285 | /// |
/// This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
2287 | /// |
2288 | /// # Examples |
2289 | /// |
2290 | /// ``` |
2291 | /// # #![allow(unstable_name_collisions)] |
2292 | /// use portable_atomic::{AtomicPtr, Ordering}; |
2293 | /// use sptr::Strict; // stable polyfill for strict provenance |
2294 | /// |
2295 | /// let pointer = &mut 3i64 as *mut i64; |
2296 | /// let atom = AtomicPtr::<i64>::new(pointer); |
2297 | /// |
2298 | /// // Toggle a tag bit on the pointer. |
2299 | /// atom.bit_toggle(0, Ordering::Relaxed); |
2300 | /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1); |
2301 | /// ``` |
#[inline]
#[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2304 | pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool { |
2305 | // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance |
2306 | // compatible, but it is unstable. So, for now emulate it only on cfg(miri). |
2307 | // Code using AtomicUsize::fetch_* via casts is still permissive-provenance |
2308 | // compatible and is sound. |
2309 | // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized, |
2310 | // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized. |
#[cfg(miri)]
2312 | { |
2313 | let mask = 1_usize.wrapping_shl(bit); |
2314 | self.fetch_xor(mask, order) as usize & mask != 0 |
2315 | } |
#[cfg(not(miri))]
2317 | { |
2318 | self.as_atomic_usize().bit_toggle(bit, order) |
2319 | } |
2320 | } |
2321 | |
#[cfg(not(miri))]
#[inline]
2324 | fn as_atomic_usize(&self) -> &AtomicUsize { |
2325 | static_assert!( |
2326 | core::mem::size_of::<AtomicPtr<()>>() == core::mem::size_of::<AtomicUsize>() |
2327 | ); |
2328 | static_assert!( |
2329 | core::mem::align_of::<AtomicPtr<()>>() == core::mem::align_of::<AtomicUsize>() |
2330 | ); |
2331 | // SAFETY: AtomicPtr and AtomicUsize have the same layout, |
2332 | // and both access data in the same way. |
2333 | unsafe { &*(self as *const Self as *const AtomicUsize) } |
2334 | } |
2335 | } // cfg_has_atomic_cas! |
2336 | |
2337 | const_fn! { |
2338 | const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]; |
2339 | /// Returns a mutable pointer to the underlying pointer. |
2340 | /// |
2341 | /// Returning an `*mut` pointer from a shared reference to this atomic is |
2342 | /// safe because the atomic types work with interior mutability. Any use of |
2343 | /// the returned raw pointer requires an `unsafe` block and has to uphold |
2344 | /// the safety requirements. If there is concurrent access, note the following |
2345 | /// additional safety requirements: |
2346 | /// |
2347 | /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent |
2348 | /// operations on it must be atomic. |
2349 | /// - Otherwise, any concurrent operations on it must be compatible with |
2350 | /// operations performed by this atomic type. |
2351 | /// |
2352 | /// This is `const fn` on Rust 1.58+. |
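///
/// # Examples
///
/// A minimal, single-threaded sketch; the raw pointer is only used while
/// nothing else accesses the atomic:
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let atomic = AtomicPtr::<u8>::new(core::ptr::null_mut());
/// let mut byte = 1u8;
/// // SAFETY: no other thread accesses `atomic` while we write through the raw pointer.
/// unsafe { atomic.as_ptr().write(&mut byte) };
/// assert_eq!(atomic.load(Ordering::Relaxed), &mut byte as *mut u8);
/// ```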
#[inline]
2354 | pub const fn as_ptr(&self) -> *mut *mut T { |
2355 | self.inner.as_ptr() |
2356 | } |
2357 | } |
2358 | } |
2359 | } // cfg_has_atomic_ptr! |
2360 | |
2361 | macro_rules! atomic_int { |
// TODO: support AtomicF{16,128} once https://github.com/rust-lang/rust/issues/116909 is stabilized.
2363 | (AtomicU32, $int_type:ident, $align:literal) => { |
2364 | atomic_int!(int, AtomicU32, $int_type, $align); |
#[cfg(feature = "float")]
2366 | atomic_int!(float, AtomicF32, f32, AtomicU32, $int_type, $align); |
2367 | }; |
2368 | (AtomicU64, $int_type:ident, $align:literal) => { |
2369 | atomic_int!(int, AtomicU64, $int_type, $align); |
#[cfg(feature = "float")]
2371 | atomic_int!(float, AtomicF64, f64, AtomicU64, $int_type, $align); |
2372 | }; |
2373 | ($atomic_type:ident, $int_type:ident, $align:literal) => { |
2374 | atomic_int!(int, $atomic_type, $int_type, $align); |
2375 | }; |
2376 | |
2377 | // Atomic{I,U}* impls |
2378 | (int, $atomic_type:ident, $int_type:ident, $align:literal) => { |
2379 | doc_comment! { |
2380 | concat!("An integer type which can be safely shared between threads. |
2381 | |
2382 | This type has the same in-memory representation as the underlying integer type, |
2383 | [`" , stringify!($int_type), "`]. |
2384 | |
If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2387 | "`. If the platform supports it but the compiler does not, atomic operations are implemented using |
2388 | inline assembly. Otherwise synchronizes using global locks. |
You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2390 | atomic instructions or locks will be used. |
2391 | " |
2392 | ), |
2393 | // We can use #[repr(transparent)] here, but #[repr(C, align(N))] |
2394 | // will show clearer docs. |
2395 | #[repr(C, align($align))] |
2396 | pub struct $atomic_type { |
2397 | inner: imp::$atomic_type, |
2398 | } |
2399 | } |
2400 | |
2401 | impl Default for $atomic_type { |
2402 | #[inline] |
2403 | fn default() -> Self { |
2404 | Self::new($int_type::default()) |
2405 | } |
2406 | } |
2407 | |
2408 | impl From<$int_type> for $atomic_type { |
2409 | #[inline] |
2410 | fn from(v: $int_type) -> Self { |
2411 | Self::new(v) |
2412 | } |
2413 | } |
2414 | |
2415 | // UnwindSafe is implicitly implemented. |
2416 | #[cfg(not(portable_atomic_no_core_unwind_safe))] |
2417 | impl core::panic::RefUnwindSafe for $atomic_type {} |
#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2419 | impl std::panic::RefUnwindSafe for $atomic_type {} |
2420 | |
2421 | impl_debug_and_serde!($atomic_type); |
2422 | |
2423 | impl $atomic_type { |
2424 | doc_comment! { |
2425 | concat!( |
2426 | "Creates a new atomic integer. |
2427 | |
2428 | # Examples |
2429 | |
2430 | ``` |
use portable_atomic::", stringify!($atomic_type), ";
2432 | |
let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2434 | ```" |
2435 | ), |
2436 | #[inline] |
2437 | #[must_use] |
2438 | pub const fn new(v: $int_type) -> Self { |
2439 | static_assert_layout!($atomic_type, $int_type); |
2440 | Self { inner: imp::$atomic_type::new(v) } |
2441 | } |
2442 | } |
2443 | |
2444 | doc_comment! { |
2445 | concat!("Creates a new reference to an atomic integer from a pointer. |
2446 | |
2447 | # Safety |
2448 | |
* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
can be bigger than `align_of::<", stringify!($int_type), ">()`).
2451 | * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. |
2452 | * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value |
2453 | behind `ptr` must have a happens-before relationship with atomic accesses via |
2454 | the returned value (or vice-versa). |
2455 | * In other words, time periods where the value is accessed atomically may not |
2456 | overlap with periods where the value is accessed non-atomically. |
2457 | * This requirement is trivially satisfied if `ptr` is never used non-atomically |
2458 | for the duration of lifetime `'a`. Most use cases should be able to follow |
2459 | this guideline. |
2460 | * This requirement is also trivially satisfied if all accesses (atomic or not) are |
2461 | done from the same thread. |
2462 | * If this atomic type is *not* lock-free: |
2463 | * Any accesses to the value behind `ptr` must have a happens-before relationship |
2464 | with accesses via the returned value (or vice-versa). |
2465 | * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must |
2466 | be compatible with operations performed by this atomic type. |
2467 | * This method must not be used to create overlapping or mixed-size atomic |
2468 | accesses, as these are not supported by the memory model. |
2469 | |
[valid]: core::ptr#safety"),
2471 | #[inline] |
2472 | #[must_use] |
2473 | pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self { |
2474 | #[allow(clippy::cast_ptr_alignment)] |
2475 | // SAFETY: guaranteed by the caller |
2476 | unsafe { &*(ptr as *mut Self) } |
2477 | } |
2478 | } |
2479 | |
2480 | doc_comment! { |
2481 | concat!("Returns `true` if operations on values of this type are lock-free. |
2482 | |
2483 | If the compiler or the platform doesn't support the necessary |
2484 | atomic instructions, global locks for every potentially |
2485 | concurrent atomic operation will be used. |
2486 | |
2487 | # Examples |
2488 | |
2489 | ``` |
2490 | use portable_atomic::" , stringify!($atomic_type), "; |
2491 | |
2492 | let is_lock_free = " , stringify!($atomic_type), "::is_lock_free(); |
2493 | ```" ), |
2494 | #[inline] |
2495 | #[must_use] |
2496 | pub fn is_lock_free() -> bool { |
2497 | <imp::$atomic_type>::is_lock_free() |
2498 | } |
2499 | } |
2500 | |
2501 | doc_comment! { |
2502 | concat!("Returns `true` if operations on values of this type are lock-free. |
2503 | |
2504 | If the compiler or the platform doesn't support the necessary |
2505 | atomic instructions, global locks for every potentially |
2506 | concurrent atomic operation will be used. |
2507 | |
2508 | **Note:** If the atomic operation relies on dynamic CPU feature detection, |
2509 | this type may be lock-free even if the function returns false. |
2510 | |
2511 | # Examples |
2512 | |
2513 | ``` |
2514 | use portable_atomic::" , stringify!($atomic_type), "; |
2515 | |
2516 | const IS_ALWAYS_LOCK_FREE: bool = " , stringify!($atomic_type), "::is_always_lock_free(); |
2517 | ```" ), |
2518 | #[inline] |
2519 | #[must_use] |
2520 | pub const fn is_always_lock_free() -> bool { |
2521 | <imp::$atomic_type>::is_always_lock_free() |
2522 | } |
2523 | } |
2524 | |
2525 | doc_comment! { |
2526 | concat!("Returns a mutable reference to the underlying integer. \n |
2527 | This is safe because the mutable reference guarantees that no other threads are |
2528 | concurrently accessing the atomic data. |
2529 | |
2530 | # Examples |
2531 | |
2532 | ``` |
2533 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2534 | |
2535 | let mut some_var = " , stringify!($atomic_type), "::new(10); |
2536 | assert_eq!(*some_var.get_mut(), 10); |
2537 | *some_var.get_mut() = 5; |
2538 | assert_eq!(some_var.load(Ordering::SeqCst), 5); |
2539 | ```" ), |
2540 | #[inline] |
2541 | pub fn get_mut(&mut self) -> &mut $int_type { |
2542 | self.inner.get_mut() |
2543 | } |
2544 | } |
2545 | |
2546 | // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types. |
2547 | // https://github.com/rust-lang/rust/issues/76314 |
2548 | |
2549 | doc_comment! { |
2550 | concat!("Consumes the atomic and returns the contained value. |
2551 | |
2552 | This is safe because passing `self` by value guarantees that no other threads are |
2553 | concurrently accessing the atomic data. |
2554 | |
2555 | # Examples |
2556 | |
2557 | ``` |
2558 | use portable_atomic::" , stringify!($atomic_type), "; |
2559 | |
2560 | let some_var = " , stringify!($atomic_type), "::new(5); |
2561 | assert_eq!(some_var.into_inner(), 5); |
2562 | ```" ), |
2563 | #[inline] |
2564 | pub fn into_inner(self) -> $int_type { |
2565 | self.inner.into_inner() |
2566 | } |
2567 | } |
2568 | |
2569 | doc_comment! { |
2570 | concat!("Loads a value from the atomic integer. |
2571 | |
2572 | `load` takes an [`Ordering`] argument which describes the memory ordering of this operation. |
2573 | Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`]. |
2574 | |
2575 | # Panics |
2576 | |
2577 | Panics if `order` is [`Release`] or [`AcqRel`]. |
2578 | |
2579 | # Examples |
2580 | |
2581 | ``` |
2582 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2583 | |
2584 | let some_var = " , stringify!($atomic_type), "::new(5); |
2585 | |
2586 | assert_eq!(some_var.load(Ordering::Relaxed), 5); |
2587 | ```" ), |
2588 | #[inline] |
2589 | #[cfg_attr( |
2590 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
2591 | track_caller |
2592 | )] |
2593 | pub fn load(&self, order: Ordering) -> $int_type { |
2594 | self.inner.load(order) |
2595 | } |
2596 | } |
2597 | |
2598 | doc_comment! { |
2599 | concat!("Stores a value into the atomic integer. |
2600 | |
2601 | `store` takes an [`Ordering`] argument which describes the memory ordering of this operation. |
2602 | Possible values are [`SeqCst`], [`Release`] and [`Relaxed`]. |
2603 | |
2604 | # Panics |
2605 | |
2606 | Panics if `order` is [`Acquire`] or [`AcqRel`]. |
2607 | |
2608 | # Examples |
2609 | |
2610 | ``` |
2611 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2612 | |
2613 | let some_var = " , stringify!($atomic_type), "::new(5); |
2614 | |
2615 | some_var.store(10, Ordering::Relaxed); |
2616 | assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2617 | ```" ), |
2618 | #[inline] |
2619 | #[cfg_attr( |
2620 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
2621 | track_caller |
2622 | )] |
2623 | pub fn store(&self, val: $int_type, order: Ordering) { |
2624 | self.inner.store(val, order) |
2625 | } |
2626 | } |
2627 | |
2628 | cfg_has_atomic_cas! { |
2629 | doc_comment! { |
2630 | concat!("Stores a value into the atomic integer, returning the previous value. |
2631 | |
2632 | `swap` takes an [`Ordering`] argument which describes the memory ordering |
2633 | of this operation. All ordering modes are possible. Note that using |
2634 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2635 | using [`Release`] makes the load part [`Relaxed`]. |
2636 | |
2637 | # Examples |
2638 | |
2639 | ``` |
2640 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2641 | |
2642 | let some_var = " , stringify!($atomic_type), "::new(5); |
2643 | |
2644 | assert_eq!(some_var.swap(10, Ordering::Relaxed), 5); |
2645 | ```" ), |
2646 | #[inline] |
2647 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2648 | pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type { |
2649 | self.inner.swap(val, order) |
2650 | } |
2651 | } |
2652 | |
2653 | doc_comment! { |
2654 | concat!("Stores a value into the atomic integer if the current value is the same as |
2655 | the `current` value. |
2656 | |
2657 | The return value is a result indicating whether the new value was written and |
2658 | containing the previous value. On success this value is guaranteed to be equal to |
2659 | `current`. |
2660 | |
2661 | `compare_exchange` takes two [`Ordering`] arguments to describe the memory |
2662 | ordering of this operation. `success` describes the required ordering for the |
2663 | read-modify-write operation that takes place if the comparison with `current` succeeds. |
2664 | `failure` describes the required ordering for the load operation that takes place when |
2665 | the comparison fails. Using [`Acquire`] as success ordering makes the store part |
2666 | of this operation [`Relaxed`], and using [`Release`] makes the successful load |
2667 | [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
2668 | |
2669 | # Panics |
2670 | |
Panics if `failure` is [`Release`] or [`AcqRel`].
2672 | |
2673 | # Examples |
2674 | |
2675 | ``` |
2676 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2677 | |
2678 | let some_var = " , stringify!($atomic_type), "::new(5); |
2679 | |
2680 | assert_eq!( |
2681 | some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed), |
2682 | Ok(5), |
2683 | ); |
2684 | assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2685 | |
2686 | assert_eq!( |
2687 | some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire), |
2688 | Err(10), |
2689 | ); |
2690 | assert_eq!(some_var.load(Ordering::Relaxed), 10); |
2691 | ```" ), |
2692 | #[inline] |
2693 | #[cfg_attr(portable_atomic_doc_cfg, doc(alias = "compare_and_swap" ))] |
2694 | #[cfg_attr( |
2695 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
2696 | track_caller |
2697 | )] |
2698 | pub fn compare_exchange( |
2699 | &self, |
2700 | current: $int_type, |
2701 | new: $int_type, |
2702 | success: Ordering, |
2703 | failure: Ordering, |
2704 | ) -> Result<$int_type, $int_type> { |
2705 | self.inner.compare_exchange(current, new, success, failure) |
2706 | } |
2707 | } |
2708 | |
2709 | doc_comment! { |
2710 | concat!("Stores a value into the atomic integer if the current value is the same as |
2711 | the `current` value. |
2712 | Unlike [`compare_exchange`](Self::compare_exchange) |
2713 | this function is allowed to spuriously fail even |
2714 | when the comparison succeeds, which can result in more efficient code on some |
2715 | platforms. The return value is a result indicating whether the new value was |
2716 | written and containing the previous value. |
2717 | |
2718 | `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory |
2719 | ordering of this operation. `success` describes the required ordering for the |
2720 | read-modify-write operation that takes place if the comparison with `current` succeeds. |
2721 | `failure` describes the required ordering for the load operation that takes place when |
2722 | the comparison fails. Using [`Acquire`] as success ordering makes the store part |
2723 | of this operation [`Relaxed`], and using [`Release`] makes the successful load |
2724 | [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
2725 | |
2726 | # Panics |
2727 | |
Panics if `failure` is [`Release`] or [`AcqRel`].
2729 | |
2730 | # Examples |
2731 | |
2732 | ``` |
2733 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2734 | |
2735 | let val = " , stringify!($atomic_type), "::new(4); |
2736 | |
2737 | let mut old = val.load(Ordering::Relaxed); |
2738 | loop { |
2739 | let new = old * 2; |
2740 | match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) { |
2741 | Ok(_) => break, |
2742 | Err(x) => old = x, |
2743 | } |
2744 | } |
2745 | ```" ), |
2746 | #[inline] |
2747 | #[cfg_attr(portable_atomic_doc_cfg, doc(alias = "compare_and_swap" ))] |
2748 | #[cfg_attr( |
2749 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
2750 | track_caller |
2751 | )] |
2752 | pub fn compare_exchange_weak( |
2753 | &self, |
2754 | current: $int_type, |
2755 | new: $int_type, |
2756 | success: Ordering, |
2757 | failure: Ordering, |
2758 | ) -> Result<$int_type, $int_type> { |
2759 | self.inner.compare_exchange_weak(current, new, success, failure) |
2760 | } |
2761 | } |
2762 | |
2763 | doc_comment! { |
2764 | concat!("Adds to the current value, returning the previous value. |
2765 | |
2766 | This operation wraps around on overflow. |
2767 | |
2768 | `fetch_add` takes an [`Ordering`] argument which describes the memory ordering |
2769 | of this operation. All ordering modes are possible. Note that using |
2770 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2771 | using [`Release`] makes the load part [`Relaxed`]. |
2772 | |
2773 | # Examples |
2774 | |
2775 | ``` |
2776 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2777 | |
2778 | let foo = " , stringify!($atomic_type), "::new(0); |
2779 | assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0); |
2780 | assert_eq!(foo.load(Ordering::SeqCst), 10); |
2781 | ```" ), |
2782 | #[inline] |
2783 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2784 | pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type { |
2785 | self.inner.fetch_add(val, order) |
2786 | } |
2787 | } |
2788 | |
2789 | doc_comment! { |
2790 | concat!("Adds to the current value. |
2791 | |
2792 | This operation wraps around on overflow. |
2793 | |
2794 | Unlike `fetch_add`, this does not return the previous value. |
2795 | |
2796 | `add` takes an [`Ordering`] argument which describes the memory ordering |
2797 | of this operation. All ordering modes are possible. Note that using |
2798 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2799 | using [`Release`] makes the load part [`Relaxed`]. |
2800 | |
2801 | This function may generate more efficient code than `fetch_add` on some platforms. |
2802 | |
2803 | - MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics) |
2804 | |
2805 | # Examples |
2806 | |
2807 | ``` |
2808 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2809 | |
2810 | let foo = " , stringify!($atomic_type), "::new(0); |
2811 | foo.add(10, Ordering::SeqCst); |
2812 | assert_eq!(foo.load(Ordering::SeqCst), 10); |
2813 | ```" ), |
2814 | #[inline] |
2815 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2816 | pub fn add(&self, val: $int_type, order: Ordering) { |
2817 | self.inner.add(val, order); |
2818 | } |
2819 | } |
2820 | |
2821 | doc_comment! { |
2822 | concat!("Subtracts from the current value, returning the previous value. |
2823 | |
2824 | This operation wraps around on overflow. |
2825 | |
2826 | `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering |
2827 | of this operation. All ordering modes are possible. Note that using |
2828 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2829 | using [`Release`] makes the load part [`Relaxed`]. |
2830 | |
2831 | # Examples |
2832 | |
2833 | ``` |
2834 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2835 | |
2836 | let foo = " , stringify!($atomic_type), "::new(20); |
2837 | assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20); |
2838 | assert_eq!(foo.load(Ordering::SeqCst), 10); |
2839 | ```" ), |
2840 | #[inline] |
2841 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2842 | pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type { |
2843 | self.inner.fetch_sub(val, order) |
2844 | } |
2845 | } |
2846 | |
2847 | doc_comment! { |
2848 | concat!("Subtracts from the current value. |
2849 | |
2850 | This operation wraps around on overflow. |
2851 | |
2852 | Unlike `fetch_sub`, this does not return the previous value. |
2853 | |
2854 | `sub` takes an [`Ordering`] argument which describes the memory ordering |
2855 | of this operation. All ordering modes are possible. Note that using |
2856 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2857 | using [`Release`] makes the load part [`Relaxed`]. |
2858 | |
2859 | This function may generate more efficient code than `fetch_sub` on some platforms. |
2860 | |
2861 | - MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics) |
2862 | |
2863 | # Examples |
2864 | |
2865 | ``` |
2866 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2867 | |
2868 | let foo = " , stringify!($atomic_type), "::new(20); |
2869 | foo.sub(10, Ordering::SeqCst); |
2870 | assert_eq!(foo.load(Ordering::SeqCst), 10); |
2871 | ```" ), |
2872 | #[inline] |
2873 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2874 | pub fn sub(&self, val: $int_type, order: Ordering) { |
2875 | self.inner.sub(val, order); |
2876 | } |
2877 | } |
2878 | |
2879 | doc_comment! { |
2880 | concat!("Bitwise \"and \" with the current value. |
2881 | |
2882 | Performs a bitwise \"and \" operation on the current value and the argument `val`, and |
2883 | sets the new value to the result. |
2884 | |
2885 | Returns the previous value. |
2886 | |
2887 | `fetch_and` takes an [`Ordering`] argument which describes the memory ordering |
2888 | of this operation. All ordering modes are possible. Note that using |
2889 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2890 | using [`Release`] makes the load part [`Relaxed`]. |
2891 | |
2892 | # Examples |
2893 | |
2894 | ``` |
2895 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2896 | |
2897 | let foo = " , stringify!($atomic_type), "::new(0b101101); |
2898 | assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101); |
2899 | assert_eq!(foo.load(Ordering::SeqCst), 0b100001); |
2900 | ```" ), |
2901 | #[inline] |
2902 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2903 | pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type { |
2904 | self.inner.fetch_and(val, order) |
2905 | } |
2906 | } |
2907 | |
2908 | doc_comment! { |
2909 | concat!("Bitwise \"and \" with the current value. |
2910 | |
2911 | Performs a bitwise \"and \" operation on the current value and the argument `val`, and |
2912 | sets the new value to the result. |
2913 | |
2914 | Unlike `fetch_and`, this does not return the previous value. |
2915 | |
2916 | `and` takes an [`Ordering`] argument which describes the memory ordering |
2917 | of this operation. All ordering modes are possible. Note that using |
2918 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2919 | using [`Release`] makes the load part [`Relaxed`]. |
2920 | |
2921 | This function may generate more efficient code than `fetch_and` on some platforms. |
2922 | |
2923 | - x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64) |
2924 | - MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics) |
2925 | |
2926 | Note: On x86/x86_64, the use of either function should not usually |
2927 | affect the generated code, because LLVM can properly optimize the case |
2928 | where the result is unused. |
2929 | |
2930 | # Examples |
2931 | |
2932 | ``` |
2933 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2934 | |
2935 | let foo = " , stringify!($atomic_type), "::new(0b101101); |
foo.and(0b110011, Ordering::SeqCst);
2937 | assert_eq!(foo.load(Ordering::SeqCst), 0b100001); |
2938 | ```" ), |
2939 | #[inline] |
2940 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2941 | pub fn and(&self, val: $int_type, order: Ordering) { |
2942 | self.inner.and(val, order); |
2943 | } |
2944 | } |
2945 | |
2946 | doc_comment! { |
2947 | concat!("Bitwise \"nand \" with the current value. |
2948 | |
2949 | Performs a bitwise \"nand \" operation on the current value and the argument `val`, and |
2950 | sets the new value to the result. |
2951 | |
2952 | Returns the previous value. |
2953 | |
2954 | `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering |
2955 | of this operation. All ordering modes are possible. Note that using |
2956 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2957 | using [`Release`] makes the load part [`Relaxed`]. |
2958 | |
2959 | # Examples |
2960 | |
2961 | ``` |
2962 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2963 | |
2964 | let foo = " , stringify!($atomic_type), "::new(0x13); |
2965 | assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13); |
2966 | assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31)); |
2967 | ```" ), |
2968 | #[inline] |
2969 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2970 | pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type { |
2971 | self.inner.fetch_nand(val, order) |
2972 | } |
2973 | } |
2974 | |
2975 | doc_comment! { |
2976 | concat!("Bitwise \"or \" with the current value. |
2977 | |
2978 | Performs a bitwise \"or \" operation on the current value and the argument `val`, and |
2979 | sets the new value to the result. |
2980 | |
2981 | Returns the previous value. |
2982 | |
2983 | `fetch_or` takes an [`Ordering`] argument which describes the memory ordering |
2984 | of this operation. All ordering modes are possible. Note that using |
2985 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
2986 | using [`Release`] makes the load part [`Relaxed`]. |
2987 | |
2988 | # Examples |
2989 | |
2990 | ``` |
2991 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
2992 | |
2993 | let foo = " , stringify!($atomic_type), "::new(0b101101); |
2994 | assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101); |
2995 | assert_eq!(foo.load(Ordering::SeqCst), 0b111111); |
2996 | ```" ), |
2997 | #[inline] |
2998 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
2999 | pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type { |
3000 | self.inner.fetch_or(val, order) |
3001 | } |
3002 | } |
3003 | |
3004 | doc_comment! { |
3005 | concat!("Bitwise \"or \" with the current value. |
3006 | |
3007 | Performs a bitwise \"or \" operation on the current value and the argument `val`, and |
3008 | sets the new value to the result. |
3009 | |
3010 | Unlike `fetch_or`, this does not return the previous value. |
3011 | |
3012 | `or` takes an [`Ordering`] argument which describes the memory ordering |
3013 | of this operation. All ordering modes are possible. Note that using |
3014 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3015 | using [`Release`] makes the load part [`Relaxed`]. |
3016 | |
3017 | This function may generate more efficient code than `fetch_or` on some platforms. |
3018 | |
3019 | - x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64) |
3020 | - MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics) |
3021 | |
3022 | Note: On x86/x86_64, the use of either function should not usually |
3023 | affect the generated code, because LLVM can properly optimize the case |
3024 | where the result is unused. |
3025 | |
3026 | # Examples |
3027 | |
3028 | ``` |
3029 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3030 | |
3031 | let foo = " , stringify!($atomic_type), "::new(0b101101); |
foo.or(0b110011, Ordering::SeqCst);
3033 | assert_eq!(foo.load(Ordering::SeqCst), 0b111111); |
3034 | ```" ), |
3035 | #[inline] |
3036 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3037 | pub fn or(&self, val: $int_type, order: Ordering) { |
3038 | self.inner.or(val, order); |
3039 | } |
3040 | } |
3041 | |
3042 | doc_comment! { |
3043 | concat!("Bitwise \"xor \" with the current value. |
3044 | |
3045 | Performs a bitwise \"xor \" operation on the current value and the argument `val`, and |
3046 | sets the new value to the result. |
3047 | |
3048 | Returns the previous value. |
3049 | |
3050 | `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering |
3051 | of this operation. All ordering modes are possible. Note that using |
3052 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3053 | using [`Release`] makes the load part [`Relaxed`]. |
3054 | |
3055 | # Examples |
3056 | |
3057 | ``` |
3058 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3059 | |
3060 | let foo = " , stringify!($atomic_type), "::new(0b101101); |
3061 | assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101); |
3062 | assert_eq!(foo.load(Ordering::SeqCst), 0b011110); |
3063 | ```" ), |
3064 | #[inline] |
3065 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3066 | pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type { |
3067 | self.inner.fetch_xor(val, order) |
3068 | } |
3069 | } |
3070 | |
3071 | doc_comment! { |
3072 | concat!("Bitwise \"xor \" with the current value. |
3073 | |
3074 | Performs a bitwise \"xor \" operation on the current value and the argument `val`, and |
3075 | sets the new value to the result. |
3076 | |
3077 | Unlike `fetch_xor`, this does not return the previous value. |
3078 | |
3079 | `xor` takes an [`Ordering`] argument which describes the memory ordering |
3080 | of this operation. All ordering modes are possible. Note that using |
3081 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3082 | using [`Release`] makes the load part [`Relaxed`]. |
3083 | |
3084 | This function may generate more efficient code than `fetch_xor` on some platforms. |
3085 | |
3086 | - x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64) |
3087 | - MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics) |
3088 | |
3089 | Note: On x86/x86_64, the use of either function should not usually |
3090 | affect the generated code, because LLVM can properly optimize the case |
3091 | where the result is unused. |
3092 | |
3093 | # Examples |
3094 | |
3095 | ``` |
3096 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3097 | |
3098 | let foo = " , stringify!($atomic_type), "::new(0b101101); |
3099 | foo.xor(0b110011, Ordering::SeqCst); |
3100 | assert_eq!(foo.load(Ordering::SeqCst), 0b011110); |
3101 | ```" ), |
3102 | #[inline] |
3103 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3104 | pub fn xor(&self, val: $int_type, order: Ordering) { |
3105 | self.inner.xor(val, order); |
3106 | } |
3107 | } |
3108 | |
3109 | doc_comment! { |
3110 | concat!("Fetches the value, and applies a function to it that returns an optional |
3111 | new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else |
3112 | `Err(previous_value)`. |
3113 | |
3114 | Note: This may call the function multiple times if the value has been changed from other threads in |
3115 | the meantime, as long as the function returns `Some(_)`, but the function will have been applied |
3116 | only once to the stored value. |
3117 | |
3118 | `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation. |
3119 | The first describes the required ordering for when the operation finally succeeds while the second |
3120 | describes the required ordering for loads. These correspond to the success and failure orderings of |
3121 | [`compare_exchange`](Self::compare_exchange) respectively. |
3122 | |
3123 | Using [`Acquire`] as success ordering makes the store part |
3124 | of this operation [`Relaxed`], and using [`Release`] makes the final successful load |
3125 | [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
3126 | |
3127 | # Panics |
3128 | |
Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3130 | |
3131 | # Considerations |
3132 | |
3133 | This method is not magic; it is not provided by the hardware. |
3134 | It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak), |
3135 | and suffers from the same drawbacks. |
3136 | In particular, this method will not circumvent the [ABA Problem]. |
3137 | |
3138 | [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
3139 | |
3140 | # Examples |
3141 | |
3142 | ```rust |
3143 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3144 | |
3145 | let x = " , stringify!($atomic_type), "::new(7); |
3146 | assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7)); |
3147 | assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7)); |
3148 | assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8)); |
3149 | assert_eq!(x.load(Ordering::SeqCst), 9); |
3150 | ```" ), |
3151 | #[inline] |
3152 | #[cfg_attr( |
3153 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
3154 | track_caller |
3155 | )] |
3156 | pub fn fetch_update<F>( |
3157 | &self, |
3158 | set_order: Ordering, |
3159 | fetch_order: Ordering, |
3160 | mut f: F, |
3161 | ) -> Result<$int_type, $int_type> |
3162 | where |
3163 | F: FnMut($int_type) -> Option<$int_type>, |
3164 | { |
3165 | let mut prev = self.load(fetch_order); |
3166 | while let Some(next) = f(prev) { |
3167 | match self.compare_exchange_weak(prev, next, set_order, fetch_order) { |
3168 | x @ Ok(_) => return x, |
3169 | Err(next_prev) => prev = next_prev, |
3170 | } |
3171 | } |
3172 | Err(prev) |
3173 | } |
3174 | } |
3175 | |
3176 | doc_comment! { |
3177 | concat!("Maximum with the current value. |
3178 | |
3179 | Finds the maximum of the current value and the argument `val`, and |
3180 | sets the new value to the result. |
3181 | |
3182 | Returns the previous value. |
3183 | |
3184 | `fetch_max` takes an [`Ordering`] argument which describes the memory ordering |
3185 | of this operation. All ordering modes are possible. Note that using |
3186 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3187 | using [`Release`] makes the load part [`Relaxed`]. |
3188 | |
3189 | # Examples |
3190 | |
3191 | ``` |
3192 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3193 | |
3194 | let foo = " , stringify!($atomic_type), "::new(23); |
3195 | assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23); |
3196 | assert_eq!(foo.load(Ordering::SeqCst), 42); |
3197 | ``` |
3198 | |
3199 | If you want to obtain the maximum value in one step, you can use the following: |
3200 | |
3201 | ``` |
3202 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3203 | |
3204 | let foo = " , stringify!($atomic_type), "::new(23); |
3205 | let bar = 42; |
3206 | let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar); |
assert_eq!(max_foo, 42);
3208 | ```" ), |
3209 | #[inline] |
3210 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3211 | pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type { |
3212 | self.inner.fetch_max(val, order) |
3213 | } |
3214 | } |
3215 | |
3216 | doc_comment! { |
3217 | concat!("Minimum with the current value. |
3218 | |
3219 | Finds the minimum of the current value and the argument `val`, and |
3220 | sets the new value to the result. |
3221 | |
3222 | Returns the previous value. |
3223 | |
3224 | `fetch_min` takes an [`Ordering`] argument which describes the memory ordering |
3225 | of this operation. All ordering modes are possible. Note that using |
3226 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3227 | using [`Release`] makes the load part [`Relaxed`]. |
3228 | |
3229 | # Examples |
3230 | |
3231 | ``` |
3232 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3233 | |
3234 | let foo = " , stringify!($atomic_type), "::new(23); |
3235 | assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23); |
3236 | assert_eq!(foo.load(Ordering::Relaxed), 23); |
3237 | assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23); |
3238 | assert_eq!(foo.load(Ordering::Relaxed), 22); |
3239 | ``` |
3240 | |
3241 | If you want to obtain the minimum value in one step, you can use the following: |
3242 | |
3243 | ``` |
3244 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3245 | |
3246 | let foo = " , stringify!($atomic_type), "::new(23); |
3247 | let bar = 12; |
3248 | let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar); |
3249 | assert_eq!(min_foo, 12); |
3250 | ```" ), |
3251 | #[inline] |
3252 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3253 | pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type { |
3254 | self.inner.fetch_min(val, order) |
3255 | } |
3256 | } |
3257 | |
3258 | doc_comment! { |
3259 | concat!("Sets the bit at the specified bit-position to 1. |
3260 | |
3261 | Returns `true` if the specified bit was previously set to 1. |
3262 | |
3263 | `bit_set` takes an [`Ordering`] argument which describes the memory ordering |
3264 | of this operation. All ordering modes are possible. Note that using |
3265 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3266 | using [`Release`] makes the load part [`Relaxed`]. |
3267 | |
This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
3269 | |
3270 | # Examples |
3271 | |
3272 | ``` |
3273 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3274 | |
3275 | let foo = " , stringify!($atomic_type), "::new(0b0000); |
3276 | assert!(!foo.bit_set(0, Ordering::Relaxed)); |
3277 | assert_eq!(foo.load(Ordering::Relaxed), 0b0001); |
3278 | assert!(foo.bit_set(0, Ordering::Relaxed)); |
3279 | assert_eq!(foo.load(Ordering::Relaxed), 0b0001); |
3280 | ```" ), |
3281 | #[inline] |
3282 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3283 | pub fn bit_set(&self, bit: u32, order: Ordering) -> bool { |
3284 | self.inner.bit_set(bit, order) |
3285 | } |
3286 | } |
3287 | |
3288 | doc_comment! { |
3289 | concat!("Clears the bit at the specified bit-position to 1. |
3290 | |
3291 | Returns `true` if the specified bit was previously set to 1. |
3292 | |
3293 | `bit_clear` takes an [`Ordering`] argument which describes the memory ordering |
3294 | of this operation. All ordering modes are possible. Note that using |
3295 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3296 | using [`Release`] makes the load part [`Relaxed`]. |
3297 | |
This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
3299 | |
3300 | # Examples |
3301 | |
3302 | ``` |
3303 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3304 | |
3305 | let foo = " , stringify!($atomic_type), "::new(0b0001); |
3306 | assert!(foo.bit_clear(0, Ordering::Relaxed)); |
3307 | assert_eq!(foo.load(Ordering::Relaxed), 0b0000); |
3308 | ```" ), |
3309 | #[inline] |
3310 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3311 | pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool { |
3312 | self.inner.bit_clear(bit, order) |
3313 | } |
3314 | } |
3315 | |
3316 | doc_comment! { |
3317 | concat!("Toggles the bit at the specified bit-position. |
3318 | |
3319 | Returns `true` if the specified bit was previously set to 1. |
3320 | |
3321 | `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering |
3322 | of this operation. All ordering modes are possible. Note that using |
3323 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3324 | using [`Release`] makes the load part [`Relaxed`]. |
3325 | |
This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
3327 | |
3328 | # Examples |
3329 | |
3330 | ``` |
3331 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3332 | |
3333 | let foo = " , stringify!($atomic_type), "::new(0b0000); |
3334 | assert!(!foo.bit_toggle(0, Ordering::Relaxed)); |
3335 | assert_eq!(foo.load(Ordering::Relaxed), 0b0001); |
3336 | assert!(foo.bit_toggle(0, Ordering::Relaxed)); |
3337 | assert_eq!(foo.load(Ordering::Relaxed), 0b0000); |
3338 | ```" ), |
3339 | #[inline] |
3340 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3341 | pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool { |
3342 | self.inner.bit_toggle(bit, order) |
3343 | } |
3344 | } |
3345 | |
3346 | doc_comment! { |
3347 | concat!("Logical negates the current value, and sets the new value to the result. |
3348 | |
3349 | Returns the previous value. |
3350 | |
3351 | `fetch_not` takes an [`Ordering`] argument which describes the memory ordering |
3352 | of this operation. All ordering modes are possible. Note that using |
3353 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3354 | using [`Release`] makes the load part [`Relaxed`]. |
3355 | |
3356 | # Examples |
3357 | |
3358 | ``` |
3359 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3360 | |
3361 | let foo = " , stringify!($atomic_type), "::new(0); |
3362 | assert_eq!(foo.fetch_not(Ordering::Relaxed), 0); |
3363 | assert_eq!(foo.load(Ordering::Relaxed), !0); |
3364 | ```" ), |
3365 | #[inline] |
3366 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3367 | pub fn fetch_not(&self, order: Ordering) -> $int_type { |
3368 | self.inner.fetch_not(order) |
3369 | } |
3370 | |
3371 | doc_comment! { |
3372 | concat!("Logical negates the current value, and sets the new value to the result. |
3373 | |
3374 | Unlike `fetch_not`, this does not return the previous value. |
3375 | |
3376 | `not` takes an [`Ordering`] argument which describes the memory ordering |
3377 | of this operation. All ordering modes are possible. Note that using |
3378 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3379 | using [`Release`] makes the load part [`Relaxed`]. |
3380 | |
3381 | This function may generate more efficient code than `fetch_not` on some platforms. |
3382 | |
3383 | - x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64) |
3384 | - MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics) |
3385 | |
3386 | # Examples |
3387 | |
3388 | ``` |
3389 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3390 | |
3391 | let foo = " , stringify!($atomic_type), "::new(0); |
3392 | foo.not(Ordering::Relaxed); |
3393 | assert_eq!(foo.load(Ordering::Relaxed), !0); |
3394 | ```" ), |
3395 | #[inline] |
3396 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3397 | pub fn not(&self, order: Ordering) { |
3398 | self.inner.not(order); |
3399 | } |
3400 | } |
3401 | } |
3402 | |
3403 | doc_comment! { |
3404 | concat!("Negates the current value, and sets the new value to the result. |
3405 | |
3406 | Returns the previous value. |
3407 | |
3408 | `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering |
3409 | of this operation. All ordering modes are possible. Note that using |
3410 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3411 | using [`Release`] makes the load part [`Relaxed`]. |
3412 | |
3413 | # Examples |
3414 | |
3415 | ``` |
3416 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3417 | |
3418 | let foo = " , stringify!($atomic_type), "::new(5); |
3419 | assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5); |
3420 | assert_eq!(foo.load(Ordering::Relaxed), 5_" , stringify!($int_type), ".wrapping_neg()); |
3421 | assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_" , stringify!($int_type), ".wrapping_neg()); |
3422 | assert_eq!(foo.load(Ordering::Relaxed), 5); |
3423 | ```" ), |
3424 | #[inline] |
3425 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3426 | pub fn fetch_neg(&self, order: Ordering) -> $int_type { |
3427 | self.inner.fetch_neg(order) |
3428 | } |
3429 | |
3430 | doc_comment! { |
3431 | concat!("Negates the current value, and sets the new value to the result. |
3432 | |
3433 | Unlike `fetch_neg`, this does not return the previous value. |
3434 | |
3435 | `neg` takes an [`Ordering`] argument which describes the memory ordering |
3436 | of this operation. All ordering modes are possible. Note that using |
3437 | [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3438 | using [`Release`] makes the load part [`Relaxed`]. |
3439 | |
3440 | This function may generate more efficient code than `fetch_neg` on some platforms. |
3441 | |
3442 | - x86/x86_64: `lock neg` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64) |
3443 | |
3444 | # Examples |
3445 | |
3446 | ``` |
3447 | use portable_atomic::{" , stringify!($atomic_type), ", Ordering}; |
3448 | |
3449 | let foo = " , stringify!($atomic_type), "::new(5); |
3450 | foo.neg(Ordering::Relaxed); |
3451 | assert_eq!(foo.load(Ordering::Relaxed), 5_" , stringify!($int_type), ".wrapping_neg()); |
3452 | foo.neg(Ordering::Relaxed); |
3453 | assert_eq!(foo.load(Ordering::Relaxed), 5); |
3454 | ```" ), |
3455 | #[inline] |
3456 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3457 | pub fn neg(&self, order: Ordering) { |
3458 | self.inner.neg(order); |
3459 | } |
3460 | } |
3461 | } |
3462 | } // cfg_has_atomic_cas! |
3463 | |
3464 | const_fn! { |
3465 | const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]; |
3466 | /// Returns a mutable pointer to the underlying integer. |
3467 | /// |
3468 | /// Returning an `*mut` pointer from a shared reference to this atomic is |
3469 | /// safe because the atomic types work with interior mutability. Any use of |
3470 | /// the returned raw pointer requires an `unsafe` block and has to uphold |
3471 | /// the safety requirements. If there is concurrent access, note the following |
3472 | /// additional safety requirements: |
3473 | /// |
3474 | /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent |
3475 | /// operations on it must be atomic. |
3476 | /// - Otherwise, any concurrent operations on it must be compatible with |
3477 | /// operations performed by this atomic type. |
3478 | /// |
3479 | /// This is `const fn` on Rust 1.58+. |
3480 | #[inline] |
3481 | pub const fn as_ptr(&self) -> *mut $int_type { |
3482 | self.inner.as_ptr() |
3483 | } |
3484 | } |
3485 | } |
3486 | }; |
3487 | |
3488 | // AtomicF* impls |
3489 | (float, |
3490 | $atomic_type:ident, |
3491 | $float_type:ident, |
3492 | $atomic_int_type:ident, |
3493 | $int_type:ident, |
3494 | $align:literal |
3495 | ) => { |
3496 | doc_comment! { |
3497 | concat!("A floating point type which can be safely shared between threads. |
3498 | |
3499 | This type has the same in-memory representation as the underlying floating point type, |
3500 | [`" , stringify!($float_type), "`]. |
3501 | " |
3502 | ), |
3503 | #[cfg_attr(portable_atomic_doc_cfg, doc(cfg(feature = "float" )))] |
3504 | // We can use #[repr(transparent)] here, but #[repr(C, align(N))] |
3505 | // will show clearer docs. |
3506 | #[repr(C, align($align))] |
3507 | pub struct $atomic_type { |
3508 | inner: imp::float::$atomic_type, |
3509 | } |
3510 | } |
3511 | |
3512 | impl Default for $atomic_type { |
3513 | #[inline] |
3514 | fn default() -> Self { |
3515 | Self::new($float_type::default()) |
3516 | } |
3517 | } |
3518 | |
3519 | impl From<$float_type> for $atomic_type { |
3520 | #[inline] |
3521 | fn from(v: $float_type) -> Self { |
3522 | Self::new(v) |
3523 | } |
3524 | } |
3525 | |
3526 | // UnwindSafe is implicitly implemented. |
3527 | #[cfg(not(portable_atomic_no_core_unwind_safe))] |
3528 | impl core::panic::RefUnwindSafe for $atomic_type {} |
3529 | #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std" ))] |
3530 | impl std::panic::RefUnwindSafe for $atomic_type {} |
3531 | |
3532 | impl_debug_and_serde!($atomic_type); |
3533 | |
3534 | impl $atomic_type { |
3535 | /// Creates a new atomic float. |
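// Example sketch: `AtomicF32` is shown for illustration (this macro also
// generates `AtomicF64`); the example assumes the `float` feature is enabled.
///
/// # Examples
///
/// ```
/// use portable_atomic::AtomicF32;
///
/// let atomic_forty_two = AtomicF32::new(42.0);
/// ```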
3536 | #[inline] |
3537 | #[must_use] |
3538 | pub const fn new(v: $float_type) -> Self { |
3539 | static_assert_layout!($atomic_type, $float_type); |
3540 | Self { inner: imp::float::$atomic_type::new(v) } |
3541 | } |
3542 | |
3543 | doc_comment! { |
3544 | concat!("Creates a new reference to an atomic float from a pointer. |
3545 | |
3546 | # Safety |
3547 | |
3548 | * `ptr` must be aligned to `align_of::<" , stringify!($atomic_type), ">()` (note that on some platforms this |
3549 | can be bigger than `align_of::<" , stringify!($float_type), ">()`). |
3550 | * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. |
3551 | * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value |
3552 | behind `ptr` must have a happens-before relationship with atomic accesses via |
3553 | the returned value (or vice-versa). |
3554 | * In other words, time periods where the value is accessed atomically may not |
3555 | overlap with periods where the value is accessed non-atomically. |
3556 | * This requirement is trivially satisfied if `ptr` is never used non-atomically |
3557 | for the duration of lifetime `'a`. Most use cases should be able to follow |
3558 | this guideline. |
3559 | * This requirement is also trivially satisfied if all accesses (atomic or not) are |
3560 | done from the same thread. |
3561 | * If this atomic type is *not* lock-free: |
3562 | * Any accesses to the value behind `ptr` must have a happens-before relationship |
3563 | with accesses via the returned value (or vice-versa). |
3564 | * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must |
3565 | be compatible with operations performed by this atomic type. |
3566 | * This method must not be used to create overlapping or mixed-size atomic |
3567 | accesses, as these are not supported by the memory model. |
3568 | |
3569 | [valid]: core::ptr#safety" ), |
3570 | #[inline] |
3571 | #[must_use] |
3572 | pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self { |
3573 | #[allow(clippy::cast_ptr_alignment)] |
3574 | // SAFETY: guaranteed by the caller |
3575 | unsafe { &*(ptr as *mut Self) } |
3576 | } |
3577 | } |
3578 | |
3579 | /// Returns `true` if operations on values of this type are lock-free. |
3580 | /// |
3581 | /// If the compiler or the platform doesn't support the necessary |
3582 | /// atomic instructions, global locks for every potentially |
3583 | /// concurrent atomic operation will be used. |
3584 | #[inline] |
3585 | #[must_use] |
3586 | pub fn is_lock_free() -> bool { |
3587 | <imp::float::$atomic_type>::is_lock_free() |
3588 | } |
3589 | |
3590 | /// Returns `true` if operations on values of this type are lock-free. |
3591 | /// |
3592 | /// If the compiler or the platform doesn't support the necessary |
3593 | /// atomic instructions, global locks for every potentially |
3594 | /// concurrent atomic operation will be used. |
3595 | /// |
3596 | /// **Note:** If the atomic operation relies on dynamic CPU feature detection, |
3597 | /// this type may be lock-free even if the function returns false. |
3598 | #[inline] |
3599 | #[must_use] |
3600 | pub const fn is_always_lock_free() -> bool { |
3601 | <imp::float::$atomic_type>::is_always_lock_free() |
3602 | } |
3603 | |
3604 | /// Returns a mutable reference to the underlying float. |
3605 | /// |
3606 | /// This is safe because the mutable reference guarantees that no other threads are |
3607 | /// concurrently accessing the atomic data. |
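// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let mut some_var = AtomicF32::new(10.0);
/// assert_eq!(*some_var.get_mut(), 10.0);
/// *some_var.get_mut() = 5.0;
/// assert_eq!(some_var.load(Ordering::SeqCst), 5.0);
/// ```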
3608 | #[inline] |
3609 | pub fn get_mut(&mut self) -> &mut $float_type { |
3610 | self.inner.get_mut() |
3611 | } |
3612 | |
3613 | // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types. |
3614 | // https://github.com/rust-lang/rust/issues/76314 |
3615 | |
3616 | /// Consumes the atomic and returns the contained value. |
3617 | /// |
3618 | /// This is safe because passing `self` by value guarantees that no other threads are |
3619 | /// concurrently accessing the atomic data. |
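// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::AtomicF32;
///
/// let some_var = AtomicF32::new(5.0);
/// assert_eq!(some_var.into_inner(), 5.0);
/// ```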
3620 | #[inline] |
3621 | pub fn into_inner(self) -> $float_type { |
3622 | self.inner.into_inner() |
3623 | } |
3624 | |
3625 | /// Loads a value from the atomic float. |
3626 | /// |
3627 | /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation. |
3628 | /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`]. |
3629 | /// |
3630 | /// # Panics |
3631 | /// |
3632 | /// Panics if `order` is [`Release`] or [`AcqRel`]. |
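// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let some_var = AtomicF32::new(5.0);
///
/// assert_eq!(some_var.load(Ordering::Relaxed), 5.0);
/// ```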
3633 | #[inline] |
3634 | #[cfg_attr( |
3635 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
3636 | track_caller |
3637 | )] |
3638 | pub fn load(&self, order: Ordering) -> $float_type { |
3639 | self.inner.load(order) |
3640 | } |
3641 | |
3642 | /// Stores a value into the atomic float. |
3643 | /// |
3644 | /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation. |
3645 | /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`]. |
3646 | /// |
3647 | /// # Panics |
3648 | /// |
3649 | /// Panics if `order` is [`Acquire`] or [`AcqRel`]. |
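// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let some_var = AtomicF32::new(5.0);
///
/// some_var.store(10.0, Ordering::Relaxed);
/// assert_eq!(some_var.load(Ordering::Relaxed), 10.0);
/// ```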
3650 | #[inline] |
3651 | #[cfg_attr( |
3652 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
3653 | track_caller |
3654 | )] |
3655 | pub fn store(&self, val: $float_type, order: Ordering) { |
3656 | self.inner.store(val, order) |
3657 | } |
3658 | |
3659 | cfg_has_atomic_cas! { |
3660 | /// Stores a value into the atomic float, returning the previous value. |
3661 | /// |
3662 | /// `swap` takes an [`Ordering`] argument which describes the memory ordering |
3663 | /// of this operation. All ordering modes are possible. Note that using |
3664 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3665 | /// using [`Release`] makes the load part [`Relaxed`]. |
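// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let some_var = AtomicF32::new(5.0);
///
/// assert_eq!(some_var.swap(10.0, Ordering::Relaxed), 5.0);
/// assert_eq!(some_var.load(Ordering::Relaxed), 10.0);
/// ```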
3666 | #[inline] |
3667 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3668 | pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type { |
3669 | self.inner.swap(val, order) |
3670 | } |
3671 | |
3672 | /// Stores a value into the atomic float if the current value is the same as |
3673 | /// the `current` value. |
3674 | /// |
3675 | /// The return value is a result indicating whether the new value was written and |
3676 | /// containing the previous value. On success this value is guaranteed to be equal to |
3677 | /// `current`. |
3678 | /// |
3679 | /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory |
3680 | /// ordering of this operation. `success` describes the required ordering for the |
3681 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
3682 | /// `failure` describes the required ordering for the load operation that takes place when |
3683 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
3684 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
3685 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
3686 | /// |
3687 | /// # Panics |
3688 | /// |
/// Panics if `failure` is [`Release`] or [`AcqRel`].
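// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let some_var = AtomicF32::new(5.0);
///
/// assert_eq!(
///     some_var.compare_exchange(5.0, 10.0, Ordering::Acquire, Ordering::Relaxed),
///     Ok(5.0),
/// );
/// assert_eq!(some_var.load(Ordering::Relaxed), 10.0);
/// ```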
3690 | #[inline] |
3691 | #[cfg_attr(portable_atomic_doc_cfg, doc(alias = "compare_and_swap" ))] |
3692 | #[cfg_attr( |
3693 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
3694 | track_caller |
3695 | )] |
3696 | pub fn compare_exchange( |
3697 | &self, |
3698 | current: $float_type, |
3699 | new: $float_type, |
3700 | success: Ordering, |
3701 | failure: Ordering, |
3702 | ) -> Result<$float_type, $float_type> { |
3703 | self.inner.compare_exchange(current, new, success, failure) |
3704 | } |
3705 | |
3706 | /// Stores a value into the atomic float if the current value is the same as |
3707 | /// the `current` value. |
3708 | /// Unlike [`compare_exchange`](Self::compare_exchange) |
3709 | /// this function is allowed to spuriously fail even |
3710 | /// when the comparison succeeds, which can result in more efficient code on some |
3711 | /// platforms. The return value is a result indicating whether the new value was |
3712 | /// written and containing the previous value. |
3713 | /// |
3714 | /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory |
3715 | /// ordering of this operation. `success` describes the required ordering for the |
3716 | /// read-modify-write operation that takes place if the comparison with `current` succeeds. |
3717 | /// `failure` describes the required ordering for the load operation that takes place when |
3718 | /// the comparison fails. Using [`Acquire`] as success ordering makes the store part |
3719 | /// of this operation [`Relaxed`], and using [`Release`] makes the successful load |
3720 | /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
3721 | /// |
3722 | /// # Panics |
3723 | /// |
/// Panics if `failure` is [`Release`] or [`AcqRel`].
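// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let val = AtomicF32::new(4.0);
///
/// let mut old = val.load(Ordering::Relaxed);
/// loop {
///     let new = old * 2.0;
///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
///         Ok(_) => break,
///         Err(x) => old = x,
///     }
/// }
/// assert_eq!(val.load(Ordering::Relaxed), 8.0);
/// ```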
3725 | #[inline] |
3726 | #[cfg_attr(portable_atomic_doc_cfg, doc(alias = "compare_and_swap" ))] |
3727 | #[cfg_attr( |
3728 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
3729 | track_caller |
3730 | )] |
3731 | pub fn compare_exchange_weak( |
3732 | &self, |
3733 | current: $float_type, |
3734 | new: $float_type, |
3735 | success: Ordering, |
3736 | failure: Ordering, |
3737 | ) -> Result<$float_type, $float_type> { |
3738 | self.inner.compare_exchange_weak(current, new, success, failure) |
3739 | } |
3740 | |
3741 | /// Adds to the current value, returning the previous value. |
3742 | /// |
/// This follows IEEE 754 arithmetic; on overflow the result is an infinity rather than wrapping around.
3744 | /// |
3745 | /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering |
3746 | /// of this operation. All ordering modes are possible. Note that using |
3747 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3748 | /// using [`Release`] makes the load part [`Relaxed`]. |
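// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let foo = AtomicF32::new(0.0);
/// assert_eq!(foo.fetch_add(10.0, Ordering::SeqCst), 0.0);
/// assert_eq!(foo.load(Ordering::SeqCst), 10.0);
/// ```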
3749 | #[inline] |
3750 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3751 | pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type { |
3752 | self.inner.fetch_add(val, order) |
3753 | } |
3754 | |
3755 | /// Subtracts from the current value, returning the previous value. |
3756 | /// |
/// This follows IEEE 754 arithmetic; on overflow the result is an infinity rather than wrapping around.
3758 | /// |
3759 | /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering |
3760 | /// of this operation. All ordering modes are possible. Note that using |
3761 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3762 | /// using [`Release`] makes the load part [`Relaxed`]. |
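// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let foo = AtomicF32::new(20.0);
/// assert_eq!(foo.fetch_sub(10.0, Ordering::SeqCst), 20.0);
/// assert_eq!(foo.load(Ordering::SeqCst), 10.0);
/// ```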
3763 | #[inline] |
3764 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3765 | pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type { |
3766 | self.inner.fetch_sub(val, order) |
3767 | } |
3768 | |
3769 | /// Fetches the value, and applies a function to it that returns an optional |
3770 | /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else |
3771 | /// `Err(previous_value)`. |
3772 | /// |
3773 | /// Note: This may call the function multiple times if the value has been changed from other threads in |
3774 | /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied |
3775 | /// only once to the stored value. |
3776 | /// |
3777 | /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation. |
3778 | /// The first describes the required ordering for when the operation finally succeeds while the second |
3779 | /// describes the required ordering for loads. These correspond to the success and failure orderings of |
3780 | /// [`compare_exchange`](Self::compare_exchange) respectively. |
3781 | /// |
3782 | /// Using [`Acquire`] as success ordering makes the store part |
3783 | /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load |
3784 | /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]. |
3785 | /// |
3786 | /// # Panics |
3787 | /// |
/// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3789 | /// |
3790 | /// # Considerations |
3791 | /// |
3792 | /// This method is not magic; it is not provided by the hardware. |
3793 | /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak), |
3794 | /// and suffers from the same drawbacks. |
3795 | /// In particular, this method will not circumvent the [ABA Problem]. |
3796 | /// |
3797 | /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem |
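// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let x = AtomicF32::new(7.0);
/// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7.0));
/// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1.0)), Ok(7.0));
/// assert_eq!(x.load(Ordering::SeqCst), 8.0);
/// ```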
3798 | #[inline] |
3799 | #[cfg_attr( |
3800 | any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri), |
3801 | track_caller |
3802 | )] |
3803 | pub fn fetch_update<F>( |
3804 | &self, |
3805 | set_order: Ordering, |
3806 | fetch_order: Ordering, |
3807 | mut f: F, |
3808 | ) -> Result<$float_type, $float_type> |
3809 | where |
3810 | F: FnMut($float_type) -> Option<$float_type>, |
3811 | { |
3812 | let mut prev = self.load(fetch_order); |
3813 | while let Some(next) = f(prev) { |
3814 | match self.compare_exchange_weak(prev, next, set_order, fetch_order) { |
3815 | x @ Ok(_) => return x, |
3816 | Err(next_prev) => prev = next_prev, |
3817 | } |
3818 | } |
3819 | Err(prev) |
3820 | } |
3821 | |
3822 | /// Maximum with the current value. |
3823 | /// |
3824 | /// Finds the maximum of the current value and the argument `val`, and |
3825 | /// sets the new value to the result. |
3826 | /// |
3827 | /// Returns the previous value. |
3828 | /// |
3829 | /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering |
3830 | /// of this operation. All ordering modes are possible. Note that using |
3831 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3832 | /// using [`Release`] makes the load part [`Relaxed`]. |
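// Example sketch; `AtomicF32` is shown for illustration, `AtomicF64` is analogous.
///
/// # Examples
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let foo = AtomicF32::new(23.0);
/// assert_eq!(foo.fetch_max(42.0, Ordering::SeqCst), 23.0);
/// assert_eq!(foo.load(Ordering::SeqCst), 42.0);
/// ```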
3833 | #[inline] |
3834 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3835 | pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type { |
3836 | self.inner.fetch_max(val, order) |
3837 | } |
3838 | |
3839 | /// Minimum with the current value. |
3840 | /// |
3841 | /// Finds the minimum of the current value and the argument `val`, and |
3842 | /// sets the new value to the result. |
3843 | /// |
3844 | /// Returns the previous value. |
3845 | /// |
3846 | /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering |
3847 | /// of this operation. All ordering modes are possible. Note that using |
3848 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3849 | /// using [`Release`] makes the load part [`Relaxed`]. |
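///
/// # Examples
///
/// A minimal usage sketch (shown with `AtomicF32`; the same pattern applies to
/// the other atomic float type):
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let x = AtomicF32::new(23.0);
/// // Returns the previous value and stores the minimum of old and new value.
/// assert_eq!(x.fetch_min(22.0, Ordering::SeqCst), 23.0);
/// assert_eq!(x.load(Ordering::SeqCst), 22.0);
/// ```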
3850 | #[inline] |
3851 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3852 | pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type { |
3853 | self.inner.fetch_min(val, order) |
3854 | } |
3855 | |
3856 | /// Negates the current value, and sets the new value to the result. |
3857 | /// |
3858 | /// Returns the previous value. |
3859 | /// |
3860 | /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering |
3861 | /// of this operation. All ordering modes are possible. Note that using |
3862 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3863 | /// using [`Release`] makes the load part [`Relaxed`]. |
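///
/// # Examples
///
/// A minimal usage sketch (shown with `AtomicF32`; the same pattern applies to
/// the other atomic float type):
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let x = AtomicF32::new(5.0);
/// // Returns the previous value and stores its negation.
/// assert_eq!(x.fetch_neg(Ordering::SeqCst), 5.0);
/// assert_eq!(x.load(Ordering::SeqCst), -5.0);
/// ```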
3864 | #[inline] |
3865 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3866 | pub fn fetch_neg(&self, order: Ordering) -> $float_type { |
3867 | self.inner.fetch_neg(order) |
3868 | } |
3869 | |
3870 | /// Computes the absolute value of the current value, and sets the |
3871 | /// new value to the result. |
3872 | /// |
3873 | /// Returns the previous value. |
3874 | /// |
3875 | /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering |
3876 | /// of this operation. All ordering modes are possible. Note that using |
3877 | /// [`Acquire`] makes the store part of this operation [`Relaxed`], and |
3878 | /// using [`Release`] makes the load part [`Relaxed`]. |
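///
/// # Examples
///
/// A minimal usage sketch (shown with `AtomicF32`; the same pattern applies to
/// the other atomic float type):
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let x = AtomicF32::new(-5.0);
/// // Returns the previous value and stores its absolute value.
/// assert_eq!(x.fetch_abs(Ordering::SeqCst), -5.0);
/// assert_eq!(x.load(Ordering::SeqCst), 5.0);
/// ```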
3879 | #[inline] |
3880 | #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces |
3881 | pub fn fetch_abs(&self, order: Ordering) -> $float_type { |
3882 | self.inner.fetch_abs(order) |
3883 | } |
3884 | } // cfg_has_atomic_cas! |
3885 | |
3886 | #[cfg(not(portable_atomic_no_const_raw_ptr_deref))] |
3887 | doc_comment! { |
3888 | concat!("Raw transmutation to `&" , stringify!($atomic_int_type), "`. |
3889 | |
3890 | See [`" , stringify!($float_type) ,"::from_bits`] for some discussion of the |
3891 | portability of this operation (there are almost no issues). |
3892 | |
3893 | This is `const fn` on Rust 1.58+." ), |
3894 | #[inline] |
3895 | pub const fn as_bits(&self) -> &$atomic_int_type { |
3896 | self.inner.as_bits() |
3897 | } |
3898 | } |
3899 | #[cfg(portable_atomic_no_const_raw_ptr_deref)] |
3900 | doc_comment! { |
3901 | concat!("Raw transmutation to `&" , stringify!($atomic_int_type), "`. |
3902 | |
3903 | See [`" , stringify!($float_type) ,"::from_bits`] for some discussion of the |
3904 | portability of this operation (there are almost no issues). |
3905 | |
3906 | This is `const fn` on Rust 1.58+." ), |
3907 | #[inline] |
3908 | pub fn as_bits(&self) -> &$atomic_int_type { |
3909 | self.inner.as_bits() |
3910 | } |
3911 | } |
3912 | |
3913 | const_fn! { |
3914 | const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]; |
3915 | /// Returns a mutable pointer to the underlying float. |
3916 | /// |
3917 | /// Returning an `*mut` pointer from a shared reference to this atomic is |
3918 | /// safe because the atomic types work with interior mutability. Any use of |
3919 | /// the returned raw pointer requires an `unsafe` block and has to uphold |
3920 | /// the safety requirements. If there is concurrent access, note the following |
3921 | /// additional safety requirements: |
3922 | /// |
3923 | /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent |
3924 | /// operations on it must be atomic. |
3925 | /// - Otherwise, any concurrent operations on it must be compatible with |
3926 | /// operations performed by this atomic type. |
3927 | /// |
3928 | /// This is `const fn` on Rust 1.58+. |
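///
/// # Examples
///
/// A minimal single-threaded sketch (shown with `AtomicF32`; there is no
/// concurrent access here, so the plain write through the returned pointer
/// is sound):
///
/// ```
/// use portable_atomic::{AtomicF32, Ordering};
///
/// let x = AtomicF32::new(1.0);
/// // SAFETY: there are no concurrent accesses to `x`.
/// unsafe { *x.as_ptr() = 2.0 };
/// assert_eq!(x.load(Ordering::Relaxed), 2.0);
/// ```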
3929 | #[inline] |
3930 | pub const fn as_ptr(&self) -> *mut $float_type { |
3931 | self.inner.as_ptr() |
3932 | } |
3933 | } |
3934 | } |
3935 | }; |
3936 | } |
3937 | |
3938 | cfg_has_atomic_ptr! { |
3939 | #[cfg (target_pointer_width = "16" )] |
3940 | atomic_int!(AtomicIsize, isize, 2); |
3941 | #[cfg (target_pointer_width = "16" )] |
3942 | atomic_int!(AtomicUsize, usize, 2); |
3943 | #[cfg (target_pointer_width = "32" )] |
3944 | atomic_int!(AtomicIsize, isize, 4); |
3945 | #[cfg (target_pointer_width = "32" )] |
3946 | atomic_int!(AtomicUsize, usize, 4); |
3947 | #[cfg (target_pointer_width = "64" )] |
3948 | atomic_int!(AtomicIsize, isize, 8); |
3949 | #[cfg (target_pointer_width = "64" )] |
3950 | atomic_int!(AtomicUsize, usize, 8); |
3951 | #[cfg (target_pointer_width = "128" )] |
3952 | atomic_int!(AtomicIsize, isize, 16); |
3953 | #[cfg (target_pointer_width = "128" )] |
3954 | atomic_int!(AtomicUsize, usize, 16); |
3955 | } |
3956 | |
3957 | cfg_has_atomic_8! { |
3958 | atomic_int!(AtomicI8, i8, 1); |
3959 | atomic_int!(AtomicU8, u8, 1); |
3960 | } |
3961 | cfg_has_atomic_16! { |
3962 | atomic_int!(AtomicI16, i16, 2); |
3963 | atomic_int!(AtomicU16, u16, 2); |
3964 | } |
3965 | cfg_has_atomic_32! { |
3966 | atomic_int!(AtomicI32, i32, 4); |
3967 | atomic_int!(AtomicU32, u32, 4); |
3968 | } |
3969 | cfg_has_atomic_64! { |
3970 | atomic_int!(AtomicI64, i64, 8); |
3971 | atomic_int!(AtomicU64, u64, 8); |
3972 | } |
3973 | cfg_has_atomic_128! { |
3974 | atomic_int!(AtomicI128, i128, 16); |
3975 | atomic_int!(AtomicU128, u128, 16); |
3976 | } |
3977 | |