//! Useful synchronization primitives.
//!
//! ## The need for synchronization
//!
//! Conceptually, a Rust program is a series of operations which will
//! be executed on a computer. The timeline of events happening in the
//! program is consistent with the order of the operations in the code.
//!
//! Consider the following code, operating on some global static variables:
//!
//! ```rust
//! static mut A: u32 = 0;
//! static mut B: u32 = 0;
//! static mut C: u32 = 0;
//!
//! fn main() {
//!     unsafe {
//!         A = 3;
//!         B = 4;
//!         A = A + B;
//!         C = B;
//!         println!("{A} {B} {C}");
//!         C = A;
//!     }
//! }
//! ```
//!
//! It appears as if some variables stored in memory are changed, an addition
//! is performed, the result is stored in `A`, and the variable `C` is
//! modified twice.
//!
//! When only a single thread is involved, the results are as expected:
//! the line `7 4 4` gets printed.
//!
//! As for what happens behind the scenes, when optimizations are enabled the
//! final generated machine code might look very different from the code:
//!
//! - The first store to `C` might be moved before the store to `A` or `B`,
//!   _as if_ we had written `C = 4; A = 3; B = 4`.
//!
//! - Assignment of `A + B` to `A` might be removed, since the sum can be stored
//!   in a temporary location until it gets printed, with the global variable
//!   never getting updated.
//!
//! - The final result could be determined just by looking at the code
//!   at compile time, so [constant folding] might turn the whole
//!   block into a simple `println!("7 4 4")`.
//!
//! The compiler is allowed to perform any combination of these
//! optimizations, as long as the final optimized code, when executed,
//! produces the same results as the one without optimizations.
//!
//! Due to the [concurrency] involved in modern computers, assumptions
//! about the program's execution order are often wrong. Access to
//! global variables can lead to nondeterministic results, **even if**
//! compiler optimizations are disabled, and it is **still possible**
//! to introduce synchronization bugs.
//!
//! Note that thanks to Rust's safety guarantees, accessing global (static)
//! variables requires `unsafe` code, assuming we don't use any of the
//! synchronization primitives in this module.
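//!
//! For example, here is a minimal sketch (single-threaded, and using
//! `Relaxed` ordering purely for brevity) of the same program written
//! without `unsafe`, with the globals declared as atomic types from this
//! module:
//!
//! ```rust
//! use std::sync::atomic::{AtomicU32, Ordering};
//!
//! // The same globals as atomics: no `unsafe` is needed to access them.
//! static A: AtomicU32 = AtomicU32::new(0);
//! static B: AtomicU32 = AtomicU32::new(0);
//! static C: AtomicU32 = AtomicU32::new(0);
//!
//! fn main() {
//!     A.store(3, Ordering::Relaxed);
//!     B.store(4, Ordering::Relaxed);
//!     let sum = A.load(Ordering::Relaxed) + B.load(Ordering::Relaxed);
//!     A.store(sum, Ordering::Relaxed);
//!     C.store(B.load(Ordering::Relaxed), Ordering::Relaxed);
//!     println!(
//!         "{} {} {}",
//!         A.load(Ordering::Relaxed),
//!         B.load(Ordering::Relaxed),
//!         C.load(Ordering::Relaxed)
//!     );
//!     C.store(A.load(Ordering::Relaxed), Ordering::Relaxed);
//! }
//! ```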
//!
//! [constant folding]: https://en.wikipedia.org/wiki/Constant_folding
//! [concurrency]: https://en.wikipedia.org/wiki/Concurrency_(computer_science)
//!
//! ## Out-of-order execution
//!
//! Instructions can execute in a different order from the one we define, for
//! various reasons:
//!
//! - The **compiler** reordering instructions: If the compiler can issue an
//!   instruction at an earlier point, it will try to do so. For example, it
//!   might hoist memory loads to the top of a code block, so that the CPU can
//!   start [prefetching] the values from memory.
//!
//!   In single-threaded scenarios, this can cause issues when writing
//!   signal handlers or certain kinds of low-level code.
//!   Use [compiler fences] to prevent this reordering; a short sketch
//!   follows this list.
//!
//! - A **single processor** executing instructions [out-of-order]:
//!   Modern CPUs are capable of [superscalar] execution,
//!   i.e., multiple instructions might be executing at the same time,
//!   even though the machine code describes a sequential process.
//!
//!   This kind of reordering is handled transparently by the CPU.
//!
//! - A **multiprocessor** system executing multiple hardware threads
//!   at the same time: In multi-threaded scenarios, you can use two
//!   kinds of primitives to deal with synchronization:
//!   - [memory fences] to ensure memory accesses are made visible to
//!     other CPUs in the right order.
//!   - [atomic operations] to ensure simultaneous access to the same
//!     memory location doesn't lead to undefined behavior; a
//!     release/acquire example also follows this list.
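//!
//! For the compiler-reordering case, here is a minimal sketch (the static
//! names and the signal-handler scenario are illustrative assumptions) of
//! where a compiler fence goes so that a handler running on the same
//! thread never observes the flag set while the data it guards is stale:
//!
//! ```rust
//! use std::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};
//!
//! static IMPORTANT_VARIABLE: AtomicUsize = AtomicUsize::new(0);
//! static IS_READY: AtomicBool = AtomicBool::new(false);
//!
//! fn main() {
//!     IMPORTANT_VARIABLE.store(42, Ordering::Relaxed);
//!     // Prevent the compiler from reordering the store to `IS_READY`
//!     // before the store to `IMPORTANT_VARIABLE`, so a handler running
//!     // on this same thread never sees the flag set while the data is
//!     // still stale.
//!     compiler_fence(Ordering::Release);
//!     IS_READY.store(true, Ordering::Relaxed);
//! }
//!
//! // Imagine this function is registered as a signal handler that may run
//! // on the same thread at any point during `main`.
//! fn signal_handler() {
//!     if IS_READY.load(Ordering::Relaxed) {
//!         assert_eq!(IMPORTANT_VARIABLE.load(Ordering::Relaxed), 42);
//!     }
//! }
//! ```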
//!
//! [prefetching]: https://en.wikipedia.org/wiki/Cache_prefetching
//! [compiler fences]: crate::sync::atomic::compiler_fence
//! [out-of-order]: https://en.wikipedia.org/wiki/Out-of-order_execution
//! [superscalar]: https://en.wikipedia.org/wiki/Superscalar_processor
//! [memory fences]: crate::sync::atomic::fence
//! [atomic operations]: crate::sync::atomic
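//!
//! For the multiprocessor case, the following sketch (the statics and the
//! value passed between threads are arbitrary) uses [atomic operations]
//! with release/acquire ordering to publish data from one thread to
//! another:
//!
//! ```rust
//! use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
//! use std::thread;
//!
//! static DATA: AtomicU32 = AtomicU32::new(0);
//! static READY: AtomicBool = AtomicBool::new(false);
//!
//! fn main() {
//!     let producer = thread::spawn(|| {
//!         DATA.store(42, Ordering::Relaxed);
//!         // The `Release` store pairs with the `Acquire` load below: once
//!         // the consumer observes `READY == true`, the earlier store to
//!         // `DATA` is guaranteed to be visible to it as well.
//!         READY.store(true, Ordering::Release);
//!     });
//!     let consumer = thread::spawn(|| {
//!         while !READY.load(Ordering::Acquire) {
//!             std::hint::spin_loop();
//!         }
//!         assert_eq!(DATA.load(Ordering::Relaxed), 42);
//!     });
//!     producer.join().unwrap();
//!     consumer.join().unwrap();
//! }
//! ```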
//!
//! ## Higher-level synchronization objects
//!
//! Most of the low-level synchronization primitives are quite error-prone and
//! inconvenient to use, which is why the standard library also exposes some
//! higher-level synchronization objects.
//!
//! These abstractions can be built out of lower-level primitives.
//! For efficiency, the sync objects in the standard library are usually
//! implemented with help from the operating system's kernel, which is
//! able to reschedule the threads while they are blocked on acquiring
//! a lock.
//!
//! The following is an overview of the available synchronization
//! objects:
//!
//! - [`Arc`]: Atomically Reference-Counted pointer, which can be used
//!   in multithreaded environments to prolong the lifetime of some
//!   data until all the threads have finished using it.
//!
//! - [`Barrier`]: Ensures multiple threads will wait for each other
//!   to reach a point in the program, before continuing execution all
//!   together.
//!
//! - [`Condvar`]: Condition Variable, providing the ability to block
//!   a thread while waiting for an event to occur.
//!
//! - [`mpsc`]: Multi-producer, single-consumer queues, used for
//!   message-based communication. Can provide a lightweight
//!   inter-thread synchronization mechanism, at the cost of some
//!   extra memory (a channel example follows this list).
//!
//! - [`Mutex`]: Mutual Exclusion mechanism, which ensures that at
//!   most one thread at a time is able to access some data (see the
//!   example after this list).
//!
//! - [`Once`]: Used for a thread-safe, one-time global initialization routine.
//!
//! - [`OnceLock`]: Used for thread-safe, one-time initialization of a
//!   global variable.
//!
//! - [`RwLock`]: Provides a mutual exclusion mechanism which allows
//!   multiple readers at the same time, while allowing only one
//!   writer at a time. In some cases, this can be more efficient than
//!   a mutex.
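//!
//! As a short sketch of how these objects combine (the counter and the
//! number of threads are arbitrary), [`Arc`] and [`Mutex`] are commonly
//! used together to share mutable state between threads:
//!
//! ```rust
//! use std::sync::{Arc, Mutex};
//! use std::thread;
//!
//! fn main() {
//!     // `Arc` keeps the counter alive across threads, while `Mutex`
//!     // ensures only one thread updates it at a time.
//!     let counter = Arc::new(Mutex::new(0u32));
//!
//!     let handles: Vec<_> = (0..4)
//!         .map(|_| {
//!             let counter = Arc::clone(&counter);
//!             thread::spawn(move || {
//!                 *counter.lock().unwrap() += 1;
//!             })
//!         })
//!         .collect();
//!
//!     for handle in handles {
//!         handle.join().unwrap();
//!     }
//!
//!     assert_eq!(*counter.lock().unwrap(), 4);
//! }
//! ```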
//!
//! [`Arc`]: crate::sync::Arc
//! [`Barrier`]: crate::sync::Barrier
//! [`Condvar`]: crate::sync::Condvar
//! [`mpsc`]: crate::sync::mpsc
//! [`Mutex`]: crate::sync::Mutex
//! [`Once`]: crate::sync::Once
//! [`OnceLock`]: crate::sync::OnceLock
//! [`RwLock`]: crate::sync::RwLock
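//!
//! And here is a minimal [`mpsc`] channel sketch (the message values are
//! arbitrary), with several producer threads and a single consumer:
//!
//! ```rust
//! use std::sync::mpsc;
//! use std::thread;
//!
//! fn main() {
//!     let (tx, rx) = mpsc::channel();
//!
//!     // Several producers send a message each; one consumer receives them.
//!     for id in 0..3 {
//!         let tx = tx.clone();
//!         thread::spawn(move || {
//!             tx.send(id).unwrap();
//!         });
//!     }
//!     // Drop the original sender so the channel closes once all clones
//!     // have been dropped by the producer threads.
//!     drop(tx);
//!
//!     let mut received: Vec<i32> = rx.iter().collect();
//!     received.sort();
//!     assert_eq!(received, vec![0, 1, 2]);
//! }
//! ```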

#![stable(feature = "rust1", since = "1.0.0")]

#[stable(feature = "rust1", since = "1.0.0")]
pub use alloc_crate::sync::{Arc, Weak};
#[stable(feature = "rust1", since = "1.0.0")]
pub use core::sync::atomic;
#[unstable(feature = "exclusive_wrapper", issue = "98407")]
pub use core::sync::Exclusive;

#[stable(feature = "rust1", since = "1.0.0")]
pub use self::barrier::{Barrier, BarrierWaitResult};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::condvar::{Condvar, WaitTimeoutResult};
#[unstable(feature = "mapped_lock_guards", issue = "117108")]
pub use self::mutex::MappedMutexGuard;
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::mutex::{Mutex, MutexGuard};
#[stable(feature = "rust1", since = "1.0.0")]
#[allow(deprecated)]
pub use self::once::{Once, OnceState, ONCE_INIT};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::poison::{LockResult, PoisonError, TryLockError, TryLockResult};
#[unstable(feature = "mapped_lock_guards", issue = "117108")]
pub use self::rwlock::{MappedRwLockReadGuard, MappedRwLockWriteGuard};
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::rwlock::{RwLock, RwLockReadGuard, RwLockWriteGuard};

#[unstable(feature = "lazy_cell", issue = "109736")]
pub use self::lazy_lock::LazyLock;
#[stable(feature = "once_cell", since = "1.70.0")]
pub use self::once_lock::OnceLock;

#[unstable(feature = "reentrant_lock", issue = "121440")]
pub use self::reentrant_lock::{ReentrantLock, ReentrantLockGuard};

pub mod mpsc;

mod barrier;
mod condvar;
mod lazy_lock;
mod mpmc;
mod mutex;
pub(crate) mod once;
mod once_lock;
mod poison;
mod reentrant_lock;
mod rwlock;
