1 | //! Native threads. |
2 | //! |
3 | //! ## The threading model |
4 | //! |
5 | //! An executing Rust program consists of a collection of native OS threads, |
6 | //! each with their own stack and local state. Threads can be named, and |
7 | //! provide some built-in support for low-level synchronization. |
8 | //! |
9 | //! Communication between threads can be done through |
10 | //! [channels], Rust's message-passing types, along with [other forms of thread |
11 | //! synchronization](../../std/sync/index.html) and shared-memory data |
12 | //! structures. In particular, types that are guaranteed to be |
13 | //! threadsafe are easily shared between threads using the |
14 | //! atomically-reference-counted container, [`Arc`]. |
15 | //! |
16 | //! Fatal logic errors in Rust cause *thread panic*, during which |
17 | //! a thread will unwind the stack, running destructors and freeing |
18 | //! owned resources. While not meant as a 'try/catch' mechanism, panics |
19 | //! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with |
20 | //! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered |
21 | //! from, or alternatively be resumed with |
22 | //! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic |
23 | //! is not caught the thread will exit, but the panic may optionally be |
24 | //! detected from a different thread with [`join`]. If the main thread panics |
25 | //! without the panic being caught, the application will exit with a |
26 | //! non-zero exit code. |
27 | //! |
28 | //! When the main thread of a Rust program terminates, the entire program shuts |
29 | //! down, even if other threads are still running. However, this module provides |
30 | //! convenient facilities for automatically waiting for the termination of a |
31 | //! thread (i.e., join). |
32 | //! |
33 | //! ## Spawning a thread |
34 | //! |
35 | //! A new thread can be spawned using the [`thread::spawn`][`spawn`] function: |
36 | //! |
37 | //! ```rust |
38 | //! use std::thread; |
39 | //! |
40 | //! thread::spawn(move || { |
41 | //! // some work here |
42 | //! }); |
43 | //! ``` |
44 | //! |
45 | //! In this example, the spawned thread is "detached," which means that there is |
46 | //! no way for the program to learn when the spawned thread completes or otherwise |
47 | //! terminates. |
48 | //! |
49 | //! To learn when a thread completes, it is necessary to capture the [`JoinHandle`] |
50 | //! object that is returned by the call to [`spawn`], which provides |
51 | //! a `join` method that allows the caller to wait for the completion of the |
52 | //! spawned thread: |
53 | //! |
54 | //! ```rust |
55 | //! use std::thread; |
56 | //! |
57 | //! let thread_join_handle = thread::spawn(move || { |
58 | //! // some work here |
59 | //! }); |
60 | //! // some work here |
61 | //! let res = thread_join_handle.join(); |
62 | //! ``` |
63 | //! |
64 | //! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final |
65 | //! value produced by the spawned thread, or [`Err`] of the value given to |
66 | //! a call to [`panic!`] if the thread panicked. |
67 | //! |
68 | //! Note that there is no parent/child relationship between a thread that spawns a |
69 | //! new thread and the thread being spawned. In particular, the spawned thread may or |
70 | //! may not outlive the spawning thread, unless the spawning thread is the main thread. |
71 | //! |
72 | //! ## Configuring threads |
73 | //! |
74 | //! A new thread can be configured before it is spawned via the [`Builder`] type, |
75 | //! which currently allows you to set the name and stack size for the thread: |
76 | //! |
77 | //! ```rust |
78 | //! # #![allow(unused_must_use)] |
79 | //! use std::thread; |
80 | //! |
81 | //! thread::Builder::new().name("thread1" .to_string()).spawn(move || { |
82 | //! println!("Hello, world!" ); |
83 | //! }); |
84 | //! ``` |
85 | //! |
86 | //! ## The `Thread` type |
87 | //! |
88 | //! Threads are represented via the [`Thread`] type, which you can get in one of |
89 | //! two ways: |
90 | //! |
91 | //! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`] |
92 | //! function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`]. |
93 | //! * By requesting the current thread, using the [`thread::current`] function. |
94 | //! |
95 | //! The [`thread::current`] function is available even for threads not spawned |
96 | //! by the APIs of this module. |
97 | //! |
98 | //! ## Thread-local storage |
99 | //! |
100 | //! This module also provides an implementation of thread-local storage for Rust |
101 | //! programs. Thread-local storage is a method of storing data into a global |
102 | //! variable that each thread in the program will have its own copy of. |
103 | //! Threads do not share this data, so accesses do not need to be synchronized. |
104 | //! |
105 | //! A thread-local key owns the value it contains and will destroy the value when the |
106 | //! thread exits. It is created with the [`thread_local!`] macro and can contain any |
107 | //! value that is `'static` (no borrowed pointers). It provides an accessor function, |
108 | //! [`with`], that yields a shared reference to the value to the specified |
109 | //! closure. Thread-local keys allow only shared access to values, as there would be no |
110 | //! way to guarantee uniqueness if mutable borrows were allowed. Most values |
111 | //! will want to make use of some form of **interior mutability** through the |
112 | //! [`Cell`] or [`RefCell`] types. |
113 | //! |
114 | //! ## Naming threads |
115 | //! |
116 | //! Threads are able to have associated names for identification purposes. By default, spawned |
117 | //! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass |
118 | //! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the |
119 | //! thread, use [`Thread::name`]. A couple of examples where the name of a thread gets used: |
120 | //! |
121 | //! * If a panic occurs in a named thread, the thread name will be printed in the panic message. |
122 | //! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` in |
123 | //! unix-like platforms). |
124 | //! |
125 | //! ## Stack size |
126 | //! |
127 | //! The default stack size is platform-dependent and subject to change. |
128 | //! Currently, it is 2 MiB on all Tier-1 platforms. |
129 | //! |
130 | //! There are two ways to manually specify the stack size for spawned threads: |
131 | //! |
132 | //! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`]. |
133 | //! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack |
134 | //! size (in bytes). Note that setting [`Builder::stack_size`] will override this. Be aware that |
135 | //! changes to `RUST_MIN_STACK` may be ignored after program start. |
136 | //! |
137 | //! Note that the stack size of the main thread is *not* determined by Rust. |
138 | //! |
139 | //! [channels]: crate::sync::mpsc |
140 | //! [`join`]: JoinHandle::join |
141 | //! [`Result`]: crate::result::Result |
142 | //! [`Ok`]: crate::result::Result::Ok |
143 | //! [`Err`]: crate::result::Result::Err |
144 | //! [`thread::current`]: current |
145 | //! [`thread::Result`]: Result |
146 | //! [`unpark`]: Thread::unpark |
147 | //! [`thread::park_timeout`]: park_timeout |
148 | //! [`Cell`]: crate::cell::Cell |
149 | //! [`RefCell`]: crate::cell::RefCell |
150 | //! [`with`]: LocalKey::with |
151 | //! [`thread_local!`]: crate::thread_local |
152 | |
153 | #![stable (feature = "rust1" , since = "1.0.0" )] |
154 | #![deny (unsafe_op_in_unsafe_fn)] |
155 | // Under `test`, `__FastLocalKeyInner` seems unused. |
156 | #![cfg_attr (test, allow(dead_code))] |
157 | |
158 | #[cfg (all(test, not(target_os = "emscripten" )))] |
159 | mod tests; |
160 | |
161 | use crate::any::Any; |
162 | use crate::cell::{OnceCell, UnsafeCell}; |
163 | use crate::env; |
164 | use crate::ffi::{CStr, CString}; |
165 | use crate::fmt; |
166 | use crate::io; |
167 | use crate::marker::PhantomData; |
168 | use crate::mem::{self, forget}; |
169 | use crate::num::NonZero; |
170 | use crate::panic; |
171 | use crate::panicking; |
172 | use crate::pin::Pin; |
173 | use crate::ptr::addr_of_mut; |
174 | use crate::str; |
175 | use crate::sync::atomic::{AtomicUsize, Ordering}; |
176 | use crate::sync::Arc; |
177 | use crate::sys::thread as imp; |
178 | use crate::sys_common::thread_parking::Parker; |
179 | use crate::sys_common::{AsInner, IntoInner}; |
180 | use crate::time::{Duration, Instant}; |
181 | |
182 | #[stable (feature = "scoped_threads" , since = "1.63.0" )] |
183 | mod scoped; |
184 | |
185 | #[stable (feature = "scoped_threads" , since = "1.63.0" )] |
186 | pub use scoped::{scope, Scope, ScopedJoinHandle}; |
187 | |
188 | //////////////////////////////////////////////////////////////////////////////// |
189 | // Thread-local storage |
190 | //////////////////////////////////////////////////////////////////////////////// |
191 | |
192 | #[macro_use ] |
193 | mod local; |
194 | |
195 | cfg_if::cfg_if! { |
196 | if #[cfg(test)] { |
197 | // Avoid duplicating the global state associated with thread-locals between this crate and |
198 | // realstd. Miri relies on this. |
199 | pub use realstd::thread::{local_impl, AccessError, LocalKey}; |
200 | } else { |
201 | #[stable (feature = "rust1" , since = "1.0.0" )] |
202 | pub use self::local::{AccessError, LocalKey}; |
203 | |
204 | // Implementation details used by the thread_local!{} macro. |
205 | #[doc (hidden)] |
206 | #[unstable (feature = "thread_local_internals" , issue = "none" )] |
207 | pub mod local_impl { |
208 | pub use crate::sys::thread_local::{thread_local_inner, Key, abort_on_dtor_unwind}; |
209 | } |
210 | } |
211 | } |
212 | |
213 | //////////////////////////////////////////////////////////////////////////////// |
214 | // Builder |
215 | //////////////////////////////////////////////////////////////////////////////// |
216 | |
217 | /// Thread factory, which can be used in order to configure the properties of |
218 | /// a new thread. |
219 | /// |
220 | /// Methods can be chained on it in order to configure it. |
221 | /// |
222 | /// The two configurations available are: |
223 | /// |
224 | /// - [`name`]: specifies an [associated name for the thread][naming-threads] |
225 | /// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size] |
226 | /// |
227 | /// The [`spawn`] method will take ownership of the builder and create an |
228 | /// [`io::Result`] to the thread handle with the given configuration. |
229 | /// |
230 | /// The [`thread::spawn`] free function uses a `Builder` with default |
231 | /// configuration and [`unwrap`]s its return value. |
232 | /// |
233 | /// You may want to use [`spawn`] instead of [`thread::spawn`], when you want |
234 | /// to recover from a failure to launch a thread, indeed the free function will |
235 | /// panic where the `Builder` method will return a [`io::Result`]. |
236 | /// |
237 | /// # Examples |
238 | /// |
239 | /// ``` |
240 | /// use std::thread; |
241 | /// |
242 | /// let builder = thread::Builder::new(); |
243 | /// |
244 | /// let handler = builder.spawn(|| { |
245 | /// // thread code |
246 | /// }).unwrap(); |
247 | /// |
248 | /// handler.join().unwrap(); |
249 | /// ``` |
250 | /// |
251 | /// [`stack_size`]: Builder::stack_size |
252 | /// [`name`]: Builder::name |
253 | /// [`spawn`]: Builder::spawn |
254 | /// [`thread::spawn`]: spawn |
255 | /// [`io::Result`]: crate::io::Result |
256 | /// [`unwrap`]: crate::result::Result::unwrap |
257 | /// [naming-threads]: ./index.html#naming-threads |
258 | /// [stack-size]: ./index.html#stack-size |
259 | #[must_use = "must eventually spawn the thread" ] |
260 | #[stable (feature = "rust1" , since = "1.0.0" )] |
261 | #[derive (Debug)] |
262 | pub struct Builder { |
263 | // A name for the thread-to-be, for identification in panic messages |
264 | name: Option<String>, |
265 | // The size of the stack for the spawned thread in bytes |
266 | stack_size: Option<usize>, |
267 | } |
268 | |
269 | impl Builder { |
270 | /// Generates the base configuration for spawning a thread, from which |
271 | /// configuration methods can be chained. |
272 | /// |
273 | /// # Examples |
274 | /// |
275 | /// ``` |
276 | /// use std::thread; |
277 | /// |
278 | /// let builder = thread::Builder::new() |
279 | /// .name("foo" .into()) |
280 | /// .stack_size(32 * 1024); |
281 | /// |
282 | /// let handler = builder.spawn(|| { |
283 | /// // thread code |
284 | /// }).unwrap(); |
285 | /// |
286 | /// handler.join().unwrap(); |
287 | /// ``` |
288 | #[stable (feature = "rust1" , since = "1.0.0" )] |
289 | pub fn new() -> Builder { |
290 | Builder { name: None, stack_size: None } |
291 | } |
292 | |
293 | /// Names the thread-to-be. Currently the name is used for identification |
294 | /// only in panic messages. |
295 | /// |
296 | /// The name must not contain null bytes (`\0`). |
297 | /// |
298 | /// For more information about named threads, see |
299 | /// [this module-level documentation][naming-threads]. |
300 | /// |
301 | /// # Examples |
302 | /// |
303 | /// ``` |
304 | /// use std::thread; |
305 | /// |
306 | /// let builder = thread::Builder::new() |
307 | /// .name("foo" .into()); |
308 | /// |
309 | /// let handler = builder.spawn(|| { |
310 | /// assert_eq!(thread::current().name(), Some("foo" )) |
311 | /// }).unwrap(); |
312 | /// |
313 | /// handler.join().unwrap(); |
314 | /// ``` |
315 | /// |
316 | /// [naming-threads]: ./index.html#naming-threads |
317 | #[stable (feature = "rust1" , since = "1.0.0" )] |
318 | pub fn name(mut self, name: String) -> Builder { |
319 | self.name = Some(name); |
320 | self |
321 | } |
322 | |
323 | /// Sets the size of the stack (in bytes) for the new thread. |
324 | /// |
325 | /// The actual stack size may be greater than this value if |
326 | /// the platform specifies a minimal stack size. |
327 | /// |
328 | /// For more information about the stack size for threads, see |
329 | /// [this module-level documentation][stack-size]. |
330 | /// |
331 | /// # Examples |
332 | /// |
333 | /// ``` |
334 | /// use std::thread; |
335 | /// |
336 | /// let builder = thread::Builder::new().stack_size(32 * 1024); |
337 | /// ``` |
338 | /// |
339 | /// [stack-size]: ./index.html#stack-size |
340 | #[stable (feature = "rust1" , since = "1.0.0" )] |
341 | pub fn stack_size(mut self, size: usize) -> Builder { |
342 | self.stack_size = Some(size); |
343 | self |
344 | } |
345 | |
346 | /// Spawns a new thread by taking ownership of the `Builder`, and returns an |
347 | /// [`io::Result`] to its [`JoinHandle`]. |
348 | /// |
349 | /// The spawned thread may outlive the caller (unless the caller thread |
350 | /// is the main thread; the whole process is terminated when the main |
351 | /// thread finishes). The join handle can be used to block on |
352 | /// termination of the spawned thread, including recovering its panics. |
353 | /// |
354 | /// For a more complete documentation see [`thread::spawn`][`spawn`]. |
355 | /// |
356 | /// # Errors |
357 | /// |
358 | /// Unlike the [`spawn`] free function, this method yields an |
359 | /// [`io::Result`] to capture any failure to create the thread at |
360 | /// the OS level. |
361 | /// |
362 | /// [`io::Result`]: crate::io::Result |
363 | /// |
364 | /// # Panics |
365 | /// |
366 | /// Panics if a thread name was set and it contained null bytes. |
367 | /// |
368 | /// # Examples |
369 | /// |
370 | /// ``` |
371 | /// use std::thread; |
372 | /// |
373 | /// let builder = thread::Builder::new(); |
374 | /// |
375 | /// let handler = builder.spawn(|| { |
376 | /// // thread code |
377 | /// }).unwrap(); |
378 | /// |
379 | /// handler.join().unwrap(); |
380 | /// ``` |
381 | #[stable (feature = "rust1" , since = "1.0.0" )] |
382 | pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>> |
383 | where |
384 | F: FnOnce() -> T, |
385 | F: Send + 'static, |
386 | T: Send + 'static, |
387 | { |
388 | unsafe { self.spawn_unchecked(f) } |
389 | } |
390 | |
391 | /// Spawns a new thread without any lifetime restrictions by taking ownership |
392 | /// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`]. |
393 | /// |
394 | /// The spawned thread may outlive the caller (unless the caller thread |
395 | /// is the main thread; the whole process is terminated when the main |
396 | /// thread finishes). The join handle can be used to block on |
397 | /// termination of the spawned thread, including recovering its panics. |
398 | /// |
399 | /// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`], |
400 | /// except for the relaxed lifetime bounds, which render it unsafe. |
401 | /// For a more complete documentation see [`thread::spawn`][`spawn`]. |
402 | /// |
403 | /// # Errors |
404 | /// |
405 | /// Unlike the [`spawn`] free function, this method yields an |
406 | /// [`io::Result`] to capture any failure to create the thread at |
407 | /// the OS level. |
408 | /// |
409 | /// # Panics |
410 | /// |
411 | /// Panics if a thread name was set and it contained null bytes. |
412 | /// |
413 | /// # Safety |
414 | /// |
415 | /// The caller has to ensure that the spawned thread does not outlive any |
416 | /// references in the supplied thread closure and its return type. |
417 | /// This can be guaranteed in two ways: |
418 | /// |
419 | /// - ensure that [`join`][`JoinHandle::join`] is called before any referenced |
420 | /// data is dropped |
421 | /// - use only types with `'static` lifetime bounds, i.e., those with no or only |
422 | /// `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`] |
423 | /// and [`thread::spawn`][`spawn`] enforce this property statically) |
424 | /// |
425 | /// # Examples |
426 | /// |
427 | /// ``` |
428 | /// #![feature(thread_spawn_unchecked)] |
429 | /// use std::thread; |
430 | /// |
431 | /// let builder = thread::Builder::new(); |
432 | /// |
433 | /// let x = 1; |
434 | /// let thread_x = &x; |
435 | /// |
436 | /// let handler = unsafe { |
437 | /// builder.spawn_unchecked(move || { |
438 | /// println!("x = {}" , *thread_x); |
439 | /// }).unwrap() |
440 | /// }; |
441 | /// |
442 | /// // caller has to ensure `join()` is called, otherwise |
443 | /// // it is possible to access freed memory if `x` gets |
444 | /// // dropped before the thread closure is executed! |
445 | /// handler.join().unwrap(); |
446 | /// ``` |
447 | /// |
448 | /// [`io::Result`]: crate::io::Result |
449 | #[unstable (feature = "thread_spawn_unchecked" , issue = "55132" )] |
450 | pub unsafe fn spawn_unchecked<'a, F, T>(self, f: F) -> io::Result<JoinHandle<T>> |
451 | where |
452 | F: FnOnce() -> T, |
453 | F: Send + 'a, |
454 | T: Send + 'a, |
455 | { |
456 | Ok(JoinHandle(unsafe { self.spawn_unchecked_(f, None) }?)) |
457 | } |
458 | |
459 | unsafe fn spawn_unchecked_<'a, 'scope, F, T>( |
460 | self, |
461 | f: F, |
462 | scope_data: Option<Arc<scoped::ScopeData>>, |
463 | ) -> io::Result<JoinInner<'scope, T>> |
464 | where |
465 | F: FnOnce() -> T, |
466 | F: Send + 'a, |
467 | T: Send + 'a, |
468 | 'scope: 'a, |
469 | { |
470 | let Builder { name, stack_size } = self; |
471 | |
472 | let stack_size = stack_size.unwrap_or_else(|| { |
473 | static MIN: AtomicUsize = AtomicUsize::new(0); |
474 | |
475 | match MIN.load(Ordering::Relaxed) { |
476 | 0 => {} |
477 | n => return n - 1, |
478 | } |
479 | |
480 | let amt = env::var_os("RUST_MIN_STACK" ) |
481 | .and_then(|s| s.to_str().and_then(|s| s.parse().ok())) |
482 | .unwrap_or(imp::DEFAULT_MIN_STACK_SIZE); |
483 | |
484 | // 0 is our sentinel value, so ensure that we'll never see 0 after |
485 | // initialization has run |
486 | MIN.store(amt + 1, Ordering::Relaxed); |
487 | amt |
488 | }); |
489 | |
490 | let my_thread = name.map_or_else(Thread::new_unnamed, |name| unsafe { |
491 | Thread::new( |
492 | CString::new(name).expect("thread name may not contain interior null bytes" ), |
493 | ) |
494 | }); |
495 | let their_thread = my_thread.clone(); |
496 | |
497 | let my_packet: Arc<Packet<'scope, T>> = Arc::new(Packet { |
498 | scope: scope_data, |
499 | result: UnsafeCell::new(None), |
500 | _marker: PhantomData, |
501 | }); |
502 | let their_packet = my_packet.clone(); |
503 | |
504 | let output_capture = crate::io::set_output_capture(None); |
505 | crate::io::set_output_capture(output_capture.clone()); |
506 | |
507 | // Pass `f` in `MaybeUninit` because actually that closure might *run longer than the lifetime of `F`*. |
508 | // See <https://github.com/rust-lang/rust/issues/101983> for more details. |
509 | // To prevent leaks we use a wrapper that drops its contents. |
510 | #[repr (transparent)] |
511 | struct MaybeDangling<T>(mem::MaybeUninit<T>); |
512 | impl<T> MaybeDangling<T> { |
513 | fn new(x: T) -> Self { |
514 | MaybeDangling(mem::MaybeUninit::new(x)) |
515 | } |
516 | fn into_inner(self) -> T { |
517 | // SAFETY: we are always initialized. |
518 | let ret = unsafe { self.0.assume_init_read() }; |
519 | // Make sure we don't drop. |
520 | mem::forget(self); |
521 | ret |
522 | } |
523 | } |
524 | impl<T> Drop for MaybeDangling<T> { |
525 | fn drop(&mut self) { |
526 | // SAFETY: we are always initialized. |
527 | unsafe { self.0.assume_init_drop() }; |
528 | } |
529 | } |
530 | |
531 | let f = MaybeDangling::new(f); |
532 | let main = move || { |
533 | if let Some(name) = their_thread.cname() { |
534 | imp::Thread::set_name(name); |
535 | } |
536 | |
537 | crate::io::set_output_capture(output_capture); |
538 | |
539 | let f = f.into_inner(); |
540 | set_current(their_thread); |
541 | let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| { |
542 | crate::sys_common::backtrace::__rust_begin_short_backtrace(f) |
543 | })); |
544 | // SAFETY: `their_packet` as been built just above and moved by the |
545 | // closure (it is an Arc<...>) and `my_packet` will be stored in the |
546 | // same `JoinInner` as this closure meaning the mutation will be |
547 | // safe (not modify it and affect a value far away). |
548 | unsafe { *their_packet.result.get() = Some(try_result) }; |
549 | // Here `their_packet` gets dropped, and if this is the last `Arc` for that packet that |
550 | // will call `decrement_num_running_threads` and therefore signal that this thread is |
551 | // done. |
552 | drop(their_packet); |
553 | // Here, the lifetime `'a` and even `'scope` can end. `main` keeps running for a bit |
554 | // after that before returning itself. |
555 | }; |
556 | |
557 | if let Some(scope_data) = &my_packet.scope { |
558 | scope_data.increment_num_running_threads(); |
559 | } |
560 | |
561 | let main = Box::new(main); |
562 | // SAFETY: dynamic size and alignment of the Box remain the same. See below for why the |
563 | // lifetime change is justified. |
564 | let main = unsafe { Box::from_raw(Box::into_raw(main) as *mut (dyn FnOnce() + 'static)) }; |
565 | |
566 | Ok(JoinInner { |
567 | // SAFETY: |
568 | // |
569 | // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed |
570 | // through FFI or otherwise used with low-level threading primitives that have no |
571 | // notion of or way to enforce lifetimes. |
572 | // |
573 | // As mentioned in the `Safety` section of this function's documentation, the caller of |
574 | // this function needs to guarantee that the passed-in lifetime is sufficiently long |
575 | // for the lifetime of the thread. |
576 | // |
577 | // Similarly, the `sys` implementation must guarantee that no references to the closure |
578 | // exist after the thread has terminated, which is signaled by `Thread::join` |
579 | // returning. |
580 | native: unsafe { imp::Thread::new(stack_size, main)? }, |
581 | thread: my_thread, |
582 | packet: my_packet, |
583 | }) |
584 | } |
585 | } |
586 | |
587 | //////////////////////////////////////////////////////////////////////////////// |
588 | // Free functions |
589 | //////////////////////////////////////////////////////////////////////////////// |
590 | |
591 | /// Spawns a new thread, returning a [`JoinHandle`] for it. |
592 | /// |
593 | /// The join handle provides a [`join`] method that can be used to join the spawned |
594 | /// thread. If the spawned thread panics, [`join`] will return an [`Err`] containing |
595 | /// the argument given to [`panic!`]. |
596 | /// |
597 | /// If the join handle is dropped, the spawned thread will implicitly be *detached*. |
598 | /// In this case, the spawned thread may no longer be joined. |
599 | /// (It is the responsibility of the program to either eventually join threads it |
600 | /// creates or detach them; otherwise, a resource leak will result.) |
601 | /// |
602 | /// This call will create a thread using default parameters of [`Builder`], if you |
603 | /// want to specify the stack size or the name of the thread, use this API |
604 | /// instead. |
605 | /// |
606 | /// As you can see in the signature of `spawn` there are two constraints on |
607 | /// both the closure given to `spawn` and its return value, let's explain them: |
608 | /// |
609 | /// - The `'static` constraint means that the closure and its return value |
610 | /// must have a lifetime of the whole program execution. The reason for this |
611 | /// is that threads can outlive the lifetime they have been created in. |
612 | /// |
613 | /// Indeed if the thread, and by extension its return value, can outlive their |
614 | /// caller, we need to make sure that they will be valid afterwards, and since |
615 | /// we *can't* know when it will return we need to have them valid as long as |
616 | /// possible, that is until the end of the program, hence the `'static` |
617 | /// lifetime. |
618 | /// - The [`Send`] constraint is because the closure will need to be passed |
619 | /// *by value* from the thread where it is spawned to the new thread. Its |
620 | /// return value will need to be passed from the new thread to the thread |
621 | /// where it is `join`ed. |
622 | /// As a reminder, the [`Send`] marker trait expresses that it is safe to be |
623 | /// passed from thread to thread. [`Sync`] expresses that it is safe to have a |
624 | /// reference be passed from thread to thread. |
625 | /// |
626 | /// # Panics |
627 | /// |
628 | /// Panics if the OS fails to create a thread; use [`Builder::spawn`] |
629 | /// to recover from such errors. |
630 | /// |
631 | /// # Examples |
632 | /// |
633 | /// Creating a thread. |
634 | /// |
635 | /// ``` |
636 | /// use std::thread; |
637 | /// |
638 | /// let handler = thread::spawn(|| { |
639 | /// // thread code |
640 | /// }); |
641 | /// |
642 | /// handler.join().unwrap(); |
643 | /// ``` |
644 | /// |
645 | /// As mentioned in the module documentation, threads are usually made to |
646 | /// communicate using [`channels`], here is how it usually looks. |
647 | /// |
648 | /// This example also shows how to use `move`, in order to give ownership |
649 | /// of values to a thread. |
650 | /// |
651 | /// ``` |
652 | /// use std::thread; |
653 | /// use std::sync::mpsc::channel; |
654 | /// |
655 | /// let (tx, rx) = channel(); |
656 | /// |
657 | /// let sender = thread::spawn(move || { |
658 | /// tx.send("Hello, thread" .to_owned()) |
659 | /// .expect("Unable to send on channel" ); |
660 | /// }); |
661 | /// |
662 | /// let receiver = thread::spawn(move || { |
663 | /// let value = rx.recv().expect("Unable to receive from channel" ); |
664 | /// println!("{value}" ); |
665 | /// }); |
666 | /// |
667 | /// sender.join().expect("The sender thread has panicked" ); |
668 | /// receiver.join().expect("The receiver thread has panicked" ); |
669 | /// ``` |
670 | /// |
671 | /// A thread can also return a value through its [`JoinHandle`], you can use |
672 | /// this to make asynchronous computations (futures might be more appropriate |
673 | /// though). |
674 | /// |
675 | /// ``` |
676 | /// use std::thread; |
677 | /// |
678 | /// let computation = thread::spawn(|| { |
679 | /// // Some expensive computation. |
680 | /// 42 |
681 | /// }); |
682 | /// |
683 | /// let result = computation.join().unwrap(); |
684 | /// println!("{result}" ); |
685 | /// ``` |
686 | /// |
687 | /// [`channels`]: crate::sync::mpsc |
688 | /// [`join`]: JoinHandle::join |
689 | /// [`Err`]: crate::result::Result::Err |
690 | #[stable (feature = "rust1" , since = "1.0.0" )] |
691 | pub fn spawn<F, T>(f: F) -> JoinHandle<T> |
692 | where |
693 | F: FnOnce() -> T, |
694 | F: Send + 'static, |
695 | T: Send + 'static, |
696 | { |
697 | Builder::new().spawn(f).expect(msg:"failed to spawn thread" ) |
698 | } |
699 | |
700 | thread_local! { |
701 | static CURRENT: OnceCell<Thread> = const { OnceCell::new() }; |
702 | } |
703 | |
704 | /// Sets the thread handle for the current thread. |
705 | /// |
706 | /// Panics if the handle has been set already or when called from a TLS destructor. |
707 | pub(crate) fn set_current(thread: Thread) { |
708 | CURRENT.with(|current: &OnceCell| current.set(thread).unwrap()); |
709 | } |
710 | |
711 | /// Gets a handle to the thread that invokes it. |
712 | /// |
713 | /// In contrast to the public `current` function, this will not panic if called |
714 | /// from inside a TLS destructor. |
715 | pub(crate) fn try_current() -> Option<Thread> { |
716 | CURRENT.try_with(|current: &OnceCell| current.get_or_init(|| Thread::new_unnamed()).clone()).ok() |
717 | } |
718 | |
719 | /// Gets a handle to the thread that invokes it. |
720 | /// |
721 | /// # Examples |
722 | /// |
723 | /// Getting a handle to the current thread with `thread::current()`: |
724 | /// |
725 | /// ``` |
726 | /// use std::thread; |
727 | /// |
728 | /// let handler = thread::Builder::new() |
729 | /// .name("named thread" .into()) |
730 | /// .spawn(|| { |
731 | /// let handle = thread::current(); |
732 | /// assert_eq!(handle.name(), Some("named thread" )); |
733 | /// }) |
734 | /// .unwrap(); |
735 | /// |
736 | /// handler.join().unwrap(); |
737 | /// ``` |
738 | #[must_use ] |
739 | #[stable (feature = "rust1" , since = "1.0.0" )] |
740 | pub fn current() -> Thread { |
741 | try_current().expect( |
742 | msg:"use of std::thread::current() is not possible \ |
743 | msg: after the thread's local data has been destroyed" , |
744 | ) |
745 | } |
746 | |
747 | /// Cooperatively gives up a timeslice to the OS scheduler. |
748 | /// |
749 | /// This calls the underlying OS scheduler's yield primitive, signaling |
750 | /// that the calling thread is willing to give up its remaining timeslice |
751 | /// so that the OS may schedule other threads on the CPU. |
752 | /// |
753 | /// A drawback of yielding in a loop is that if the OS does not have any |
754 | /// other ready threads to run on the current CPU, the thread will effectively |
755 | /// busy-wait, which wastes CPU time and energy. |
756 | /// |
757 | /// Therefore, when waiting for events of interest, a programmer's first |
758 | /// choice should be to use synchronization devices such as [`channel`]s, |
759 | /// [`Condvar`]s, [`Mutex`]es or [`join`] since these primitives are |
760 | /// implemented in a blocking manner, giving up the CPU until the event |
761 | /// of interest has occurred which avoids repeated yielding. |
762 | /// |
763 | /// `yield_now` should thus be used only rarely, mostly in situations where |
764 | /// repeated polling is required because there is no other suitable way to |
765 | /// learn when an event of interest has occurred. |
766 | /// |
767 | /// # Examples |
768 | /// |
769 | /// ``` |
770 | /// use std::thread; |
771 | /// |
772 | /// thread::yield_now(); |
773 | /// ``` |
774 | /// |
775 | /// [`channel`]: crate::sync::mpsc |
776 | /// [`join`]: JoinHandle::join |
777 | /// [`Condvar`]: crate::sync::Condvar |
778 | /// [`Mutex`]: crate::sync::Mutex |
779 | #[stable (feature = "rust1" , since = "1.0.0" )] |
780 | pub fn yield_now() { |
781 | imp::Thread::yield_now() |
782 | } |
783 | |
784 | /// Determines whether the current thread is unwinding because of panic. |
785 | /// |
786 | /// A common use of this feature is to poison shared resources when writing |
787 | /// unsafe code, by checking `panicking` when the `drop` is called. |
788 | /// |
789 | /// This is usually not needed when writing safe code, as [`Mutex`es][Mutex] |
790 | /// already poison themselves when a thread panics while holding the lock. |
791 | /// |
792 | /// This can also be used in multithreaded applications, in order to send a |
793 | /// message to other threads warning that a thread has panicked (e.g., for |
794 | /// monitoring purposes). |
795 | /// |
796 | /// # Examples |
797 | /// |
798 | /// ```should_panic |
799 | /// use std::thread; |
800 | /// |
801 | /// struct SomeStruct; |
802 | /// |
803 | /// impl Drop for SomeStruct { |
804 | /// fn drop(&mut self) { |
805 | /// if thread::panicking() { |
806 | /// println!("dropped while unwinding" ); |
807 | /// } else { |
808 | /// println!("dropped while not unwinding" ); |
809 | /// } |
810 | /// } |
811 | /// } |
812 | /// |
813 | /// { |
814 | /// print!("a: " ); |
815 | /// let a = SomeStruct; |
816 | /// } |
817 | /// |
818 | /// { |
819 | /// print!("b: " ); |
820 | /// let b = SomeStruct; |
821 | /// panic!() |
822 | /// } |
823 | /// ``` |
824 | /// |
825 | /// [Mutex]: crate::sync::Mutex |
826 | #[inline ] |
827 | #[must_use ] |
828 | #[stable (feature = "rust1" , since = "1.0.0" )] |
829 | pub fn panicking() -> bool { |
830 | panicking::panicking() |
831 | } |
832 | |
833 | /// Use [`sleep`]. |
834 | /// |
835 | /// Puts the current thread to sleep for at least the specified amount of time. |
836 | /// |
837 | /// The thread may sleep longer than the duration specified due to scheduling |
838 | /// specifics or platform-dependent functionality. It will never sleep less. |
839 | /// |
840 | /// This function is blocking, and should not be used in `async` functions. |
841 | /// |
842 | /// # Platform-specific behavior |
843 | /// |
844 | /// On Unix platforms, the underlying syscall may be interrupted by a |
845 | /// spurious wakeup or signal handler. To ensure the sleep occurs for at least |
846 | /// the specified duration, this function may invoke that system call multiple |
847 | /// times. |
848 | /// |
849 | /// # Examples |
850 | /// |
851 | /// ```no_run |
852 | /// use std::thread; |
853 | /// |
854 | /// // Let's sleep for 2 seconds: |
855 | /// thread::sleep_ms(2000); |
856 | /// ``` |
857 | #[stable (feature = "rust1" , since = "1.0.0" )] |
858 | #[deprecated (since = "1.6.0" , note = "replaced by `std::thread::sleep`" )] |
859 | pub fn sleep_ms(ms: u32) { |
860 | sleep(dur:Duration::from_millis(ms as u64)) |
861 | } |
862 | |
863 | /// Puts the current thread to sleep for at least the specified amount of time. |
864 | /// |
865 | /// The thread may sleep longer than the duration specified due to scheduling |
866 | /// specifics or platform-dependent functionality. It will never sleep less. |
867 | /// |
868 | /// This function is blocking, and should not be used in `async` functions. |
869 | /// |
870 | /// # Platform-specific behavior |
871 | /// |
872 | /// On Unix platforms, the underlying syscall may be interrupted by a |
873 | /// spurious wakeup or signal handler. To ensure the sleep occurs for at least |
874 | /// the specified duration, this function may invoke that system call multiple |
875 | /// times. |
876 | /// Platforms which do not support nanosecond precision for sleeping will |
877 | /// have `dur` rounded up to the nearest granularity of time they can sleep for. |
878 | /// |
879 | /// Currently, specifying a zero duration on Unix platforms returns immediately |
880 | /// without invoking the underlying [`nanosleep`] syscall, whereas on Windows |
881 | /// platforms the underlying [`Sleep`] syscall is always invoked. |
882 | /// If the intention is to yield the current time-slice you may want to use |
883 | /// [`yield_now`] instead. |
884 | /// |
885 | /// [`nanosleep`]: https://linux.die.net/man/2/nanosleep |
886 | /// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep |
887 | /// |
888 | /// # Examples |
889 | /// |
890 | /// ```no_run |
891 | /// use std::{thread, time}; |
892 | /// |
893 | /// let ten_millis = time::Duration::from_millis(10); |
894 | /// let now = time::Instant::now(); |
895 | /// |
896 | /// thread::sleep(ten_millis); |
897 | /// |
898 | /// assert!(now.elapsed() >= ten_millis); |
899 | /// ``` |
900 | #[stable (feature = "thread_sleep" , since = "1.4.0" )] |
901 | pub fn sleep(dur: Duration) { |
902 | imp::Thread::sleep(dur) |
903 | } |
904 | |
905 | /// Puts the current thread to sleep until the specified deadline has passed. |
906 | /// |
907 | /// The thread may still be asleep after the deadline specified due to |
908 | /// scheduling specifics or platform-dependent functionality. It will never |
909 | /// wake before. |
910 | /// |
911 | /// This function is blocking, and should not be used in `async` functions. |
912 | /// |
913 | /// # Platform-specific behavior |
914 | /// |
915 | /// This function uses [`sleep`] internally, see its platform-specific behaviour. |
916 | /// |
917 | /// |
918 | /// # Examples |
919 | /// |
920 | /// A simple game loop that limits the game to 60 frames per second. |
921 | /// |
922 | /// ```no_run |
923 | /// #![feature(thread_sleep_until)] |
924 | /// # use std::time::{Duration, Instant}; |
925 | /// # use std::thread; |
926 | /// # |
927 | /// # fn update() {} |
928 | /// # fn render() {} |
929 | /// # |
930 | /// let max_fps = 60.0; |
931 | /// let frame_time = Duration::from_secs_f32(1.0/max_fps); |
932 | /// let mut next_frame = Instant::now(); |
933 | /// loop { |
934 | /// thread::sleep_until(next_frame); |
935 | /// next_frame += frame_time; |
936 | /// update(); |
937 | /// render(); |
938 | /// } |
939 | /// ``` |
940 | /// |
941 | /// A slow api we must not call too fast and which takes a few |
942 | /// tries before succeeding. By using `sleep_until` the time the |
943 | /// api call takes does not influence when we retry or when we give up |
944 | /// |
945 | /// ```no_run |
946 | /// #![feature(thread_sleep_until)] |
947 | /// # use std::time::{Duration, Instant}; |
948 | /// # use std::thread; |
949 | /// # |
950 | /// # enum Status { |
951 | /// # Ready(usize), |
952 | /// # Waiting, |
953 | /// # } |
954 | /// # fn slow_web_api_call() -> Status { Status::Ready(42) } |
955 | /// # |
956 | /// # const MAX_DURATION: Duration = Duration::from_secs(10); |
957 | /// # |
958 | /// # fn try_api_call() -> Result<usize, ()> { |
959 | /// let deadline = Instant::now() + MAX_DURATION; |
960 | /// let delay = Duration::from_millis(250); |
961 | /// let mut next_attempt = Instant::now(); |
962 | /// loop { |
963 | /// if Instant::now() > deadline { |
964 | /// break Err(()); |
965 | /// } |
966 | /// if let Status::Ready(data) = slow_web_api_call() { |
967 | /// break Ok(data); |
968 | /// } |
969 | /// |
970 | /// next_attempt = deadline.min(next_attempt + delay); |
971 | /// thread::sleep_until(next_attempt); |
972 | /// } |
973 | /// # } |
974 | /// # let _data = try_api_call(); |
975 | /// ``` |
976 | #[unstable (feature = "thread_sleep_until" , issue = "113752" )] |
977 | pub fn sleep_until(deadline: Instant) { |
978 | let now: Instant = Instant::now(); |
979 | |
980 | if let Some(delay: Duration) = deadline.checked_duration_since(earlier:now) { |
981 | sleep(dur:delay); |
982 | } |
983 | } |
984 | |
985 | /// Used to ensure that `park` and `park_timeout` do not unwind, as that can |
986 | /// cause undefined behaviour if not handled correctly (see #102398 for context). |
987 | struct PanicGuard; |
988 | |
989 | impl Drop for PanicGuard { |
990 | fn drop(&mut self) { |
991 | rtabort!("an irrecoverable error occurred while synchronizing threads" ) |
992 | } |
993 | } |
994 | |
995 | /// Blocks unless or until the current thread's token is made available. |
996 | /// |
997 | /// A call to `park` does not guarantee that the thread will remain parked |
998 | /// forever, and callers should be prepared for this possibility. However, |
999 | /// it is guaranteed that this function will not panic (it may abort the |
1000 | /// process if the implementation encounters some rare errors). |
1001 | /// |
1002 | /// # `park` and `unpark` |
1003 | /// |
1004 | /// Every thread is equipped with some basic low-level blocking support, via the |
1005 | /// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`] |
1006 | /// method. [`park`] blocks the current thread, which can then be resumed from |
1007 | /// another thread by calling the [`unpark`] method on the blocked thread's |
1008 | /// handle. |
1009 | /// |
1010 | /// Conceptually, each [`Thread`] handle has an associated token, which is |
1011 | /// initially not present: |
1012 | /// |
1013 | /// * The [`thread::park`][`park`] function blocks the current thread unless or |
1014 | /// until the token is available for its thread handle, at which point it |
1015 | /// atomically consumes the token. It may also return *spuriously*, without |
1016 | /// consuming the token. [`thread::park_timeout`] does the same, but allows |
1017 | /// specifying a maximum time to block the thread for. |
1018 | /// |
1019 | /// * The [`unpark`] method on a [`Thread`] atomically makes the token available |
1020 | /// if it wasn't already. Because the token is initially absent, [`unpark`] |
1021 | /// followed by [`park`] will result in the second call returning immediately. |
1022 | /// |
1023 | /// The API is typically used by acquiring a handle to the current thread, |
1024 | /// placing that handle in a shared data structure so that other threads can |
1025 | /// find it, and then `park`ing in a loop. When some desired condition is met, another |
1026 | /// thread calls [`unpark`] on the handle. |
1027 | /// |
1028 | /// The motivation for this design is twofold: |
1029 | /// |
1030 | /// * It avoids the need to allocate mutexes and condvars when building new |
1031 | /// synchronization primitives; the threads already provide basic |
1032 | /// blocking/signaling. |
1033 | /// |
1034 | /// * It can be implemented very efficiently on many platforms. |
1035 | /// |
1036 | /// # Memory Ordering |
1037 | /// |
1038 | /// Calls to `park` _synchronize-with_ calls to `unpark`, meaning that memory |
1039 | /// operations performed before a call to `unpark` are made visible to the thread that |
1040 | /// consumes the token and returns from `park`. Note that all `park` and `unpark` |
1041 | /// operations for a given thread form a total order and `park` synchronizes-with |
1042 | /// _all_ prior `unpark` operations. |
1043 | /// |
1044 | /// In atomic ordering terms, `unpark` performs a `Release` operation and `park` |
1045 | /// performs the corresponding `Acquire` operation. Calls to `unpark` for the same |
1046 | /// thread form a [release sequence]. |
1047 | /// |
1048 | /// Note that being unblocked does not imply a call was made to `unpark`, because |
1049 | /// wakeups can also be spurious. For example, a valid, but inefficient, |
1050 | /// implementation could have `park` and `unpark` return immediately without doing anything, |
1051 | /// making *all* wakeups spurious. |
1052 | /// |
1053 | /// # Examples |
1054 | /// |
1055 | /// ``` |
1056 | /// use std::thread; |
1057 | /// use std::sync::{Arc, atomic::{Ordering, AtomicBool}}; |
1058 | /// use std::time::Duration; |
1059 | /// |
1060 | /// let flag = Arc::new(AtomicBool::new(false)); |
1061 | /// let flag2 = Arc::clone(&flag); |
1062 | /// |
1063 | /// let parked_thread = thread::spawn(move || { |
1064 | /// // We want to wait until the flag is set. We *could* just spin, but using |
1065 | /// // park/unpark is more efficient. |
1066 | /// while !flag2.load(Ordering::Relaxed) { |
1067 | /// println!("Parking thread" ); |
1068 | /// thread::park(); |
1069 | /// // We *could* get here spuriously, i.e., way before the 10ms below are over! |
1070 | /// // But that is no problem, we are in a loop until the flag is set anyway. |
1071 | /// println!("Thread unparked" ); |
1072 | /// } |
1073 | /// println!("Flag received" ); |
1074 | /// }); |
1075 | /// |
1076 | /// // Let some time pass for the thread to be spawned. |
1077 | /// thread::sleep(Duration::from_millis(10)); |
1078 | /// |
1079 | /// // Set the flag, and let the thread wake up. |
1080 | /// // There is no race condition here, if `unpark` |
1081 | /// // happens first, `park` will return immediately. |
1082 | /// // Hence there is no risk of a deadlock. |
1083 | /// flag.store(true, Ordering::Relaxed); |
1084 | /// println!("Unpark the thread" ); |
1085 | /// parked_thread.thread().unpark(); |
1086 | /// |
1087 | /// parked_thread.join().unwrap(); |
1088 | /// ``` |
1089 | /// |
1090 | /// [`unpark`]: Thread::unpark |
1091 | /// [`thread::park_timeout`]: park_timeout |
1092 | /// [release sequence]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release_sequence |
1093 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1094 | pub fn park() { |
1095 | let guard: PanicGuard = PanicGuard; |
1096 | // SAFETY: park_timeout is called on the parker owned by this thread. |
1097 | unsafe { |
1098 | current().park(); |
1099 | } |
1100 | // No panic occurred, do not abort. |
1101 | forget(guard); |
1102 | } |
1103 | |
1104 | /// Use [`park_timeout`]. |
1105 | /// |
1106 | /// Blocks unless or until the current thread's token is made available or |
1107 | /// the specified duration has been reached (may wake spuriously). |
1108 | /// |
1109 | /// The semantics of this function are equivalent to [`park`] except |
1110 | /// that the thread will be blocked for roughly no longer than `dur`. This |
1111 | /// method should not be used for precise timing due to anomalies such as |
1112 | /// preemption or platform differences that might not cause the maximum |
1113 | /// amount of time waited to be precisely `ms` long. |
1114 | /// |
1115 | /// See the [park documentation][`park`] for more detail. |
1116 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1117 | #[deprecated (since = "1.6.0" , note = "replaced by `std::thread::park_timeout`" )] |
1118 | pub fn park_timeout_ms(ms: u32) { |
1119 | park_timeout(dur:Duration::from_millis(ms as u64)) |
1120 | } |
1121 | |
1122 | /// Blocks unless or until the current thread's token is made available or |
1123 | /// the specified duration has been reached (may wake spuriously). |
1124 | /// |
1125 | /// The semantics of this function are equivalent to [`park`][park] except |
1126 | /// that the thread will be blocked for roughly no longer than `dur`. This |
1127 | /// method should not be used for precise timing due to anomalies such as |
1128 | /// preemption or platform differences that might not cause the maximum |
1129 | /// amount of time waited to be precisely `dur` long. |
1130 | /// |
1131 | /// See the [park documentation][park] for more details. |
1132 | /// |
1133 | /// # Platform-specific behavior |
1134 | /// |
1135 | /// Platforms which do not support nanosecond precision for sleeping will have |
1136 | /// `dur` rounded up to the nearest granularity of time they can sleep for. |
1137 | /// |
1138 | /// # Examples |
1139 | /// |
1140 | /// Waiting for the complete expiration of the timeout: |
1141 | /// |
1142 | /// ```rust,no_run |
1143 | /// use std::thread::park_timeout; |
1144 | /// use std::time::{Instant, Duration}; |
1145 | /// |
1146 | /// let timeout = Duration::from_secs(2); |
1147 | /// let beginning_park = Instant::now(); |
1148 | /// |
1149 | /// let mut timeout_remaining = timeout; |
1150 | /// loop { |
1151 | /// park_timeout(timeout_remaining); |
1152 | /// let elapsed = beginning_park.elapsed(); |
1153 | /// if elapsed >= timeout { |
1154 | /// break; |
1155 | /// } |
1156 | /// println!("restarting park_timeout after {elapsed:?}" ); |
1157 | /// timeout_remaining = timeout - elapsed; |
1158 | /// } |
1159 | /// ``` |
1160 | #[stable (feature = "park_timeout" , since = "1.4.0" )] |
1161 | pub fn park_timeout(dur: Duration) { |
1162 | let guard: PanicGuard = PanicGuard; |
1163 | // SAFETY: park_timeout is called on the parker owned by this thread. |
1164 | unsafe { |
1165 | current().inner.as_ref().parker().park_timeout(dur); |
1166 | } |
1167 | // No panic occurred, do not abort. |
1168 | forget(guard); |
1169 | } |
1170 | |
1171 | //////////////////////////////////////////////////////////////////////////////// |
1172 | // ThreadId |
1173 | //////////////////////////////////////////////////////////////////////////////// |
1174 | |
1175 | /// A unique identifier for a running thread. |
1176 | /// |
1177 | /// A `ThreadId` is an opaque object that uniquely identifies each thread |
1178 | /// created during the lifetime of a process. `ThreadId`s are guaranteed not to |
1179 | /// be reused, even when a thread terminates. `ThreadId`s are under the control |
1180 | /// of Rust's standard library and there may not be any relationship between |
1181 | /// `ThreadId` and the underlying platform's notion of a thread identifier -- |
1182 | /// the two concepts cannot, therefore, be used interchangeably. A `ThreadId` |
1183 | /// can be retrieved from the [`id`] method on a [`Thread`]. |
1184 | /// |
1185 | /// # Examples |
1186 | /// |
1187 | /// ``` |
1188 | /// use std::thread; |
1189 | /// |
1190 | /// let other_thread = thread::spawn(|| { |
1191 | /// thread::current().id() |
1192 | /// }); |
1193 | /// |
1194 | /// let other_thread_id = other_thread.join().unwrap(); |
1195 | /// assert!(thread::current().id() != other_thread_id); |
1196 | /// ``` |
1197 | /// |
1198 | /// [`id`]: Thread::id |
1199 | #[stable (feature = "thread_id" , since = "1.19.0" )] |
1200 | #[derive (Eq, PartialEq, Clone, Copy, Hash, Debug)] |
1201 | pub struct ThreadId(NonZero<u64>); |
1202 | |
1203 | impl ThreadId { |
1204 | // Generate a new unique thread ID. |
1205 | fn new() -> ThreadId { |
1206 | #[cold ] |
1207 | fn exhausted() -> ! { |
1208 | panic!("failed to generate unique thread ID: bitspace exhausted" ) |
1209 | } |
1210 | |
1211 | cfg_if::cfg_if! { |
1212 | if #[cfg(target_has_atomic = "64" )] { |
1213 | use crate::sync::atomic::AtomicU64; |
1214 | |
1215 | static COUNTER: AtomicU64 = AtomicU64::new(0); |
1216 | |
1217 | let mut last = COUNTER.load(Ordering::Relaxed); |
1218 | loop { |
1219 | let Some(id) = last.checked_add(1) else { |
1220 | exhausted(); |
1221 | }; |
1222 | |
1223 | match COUNTER.compare_exchange_weak(last, id, Ordering::Relaxed, Ordering::Relaxed) { |
1224 | Ok(_) => return ThreadId(NonZero::new(id).unwrap()), |
1225 | Err(id) => last = id, |
1226 | } |
1227 | } |
1228 | } else { |
1229 | use crate::sync::{Mutex, PoisonError}; |
1230 | |
1231 | static COUNTER: Mutex<u64> = Mutex::new(0); |
1232 | |
1233 | let mut counter = COUNTER.lock().unwrap_or_else(PoisonError::into_inner); |
1234 | let Some(id) = counter.checked_add(1) else { |
1235 | // in case the panic handler ends up calling `ThreadId::new()`, |
1236 | // avoid reentrant lock acquire. |
1237 | drop(counter); |
1238 | exhausted(); |
1239 | }; |
1240 | |
1241 | *counter = id; |
1242 | drop(counter); |
1243 | ThreadId(NonZero::new(id).unwrap()) |
1244 | } |
1245 | } |
1246 | } |
1247 | |
1248 | /// This returns a numeric identifier for the thread identified by this |
1249 | /// `ThreadId`. |
1250 | /// |
1251 | /// As noted in the documentation for the type itself, it is essentially an |
1252 | /// opaque ID, but is guaranteed to be unique for each thread. The returned |
1253 | /// value is entirely opaque -- only equality testing is stable. Note that |
1254 | /// it is not guaranteed which values new threads will return, and this may |
1255 | /// change across Rust versions. |
1256 | #[must_use ] |
1257 | #[unstable (feature = "thread_id_value" , issue = "67939" )] |
1258 | pub fn as_u64(&self) -> NonZero<u64> { |
1259 | self.0 |
1260 | } |
1261 | } |
1262 | |
1263 | //////////////////////////////////////////////////////////////////////////////// |
1264 | // Thread |
1265 | //////////////////////////////////////////////////////////////////////////////// |
1266 | |
1267 | /// The internal representation of a `Thread`'s name. |
1268 | enum ThreadName { |
1269 | Main, |
1270 | Other(CString), |
1271 | Unnamed, |
1272 | } |
1273 | |
1274 | /// The internal representation of a `Thread` handle |
1275 | struct Inner { |
1276 | name: ThreadName, // Guaranteed to be UTF-8 |
1277 | id: ThreadId, |
1278 | parker: Parker, |
1279 | } |
1280 | |
1281 | impl Inner { |
1282 | fn parker(self: Pin<&Self>) -> Pin<&Parker> { |
1283 | unsafe { Pin::map_unchecked(self, |inner: &Inner| &inner.parker) } |
1284 | } |
1285 | } |
1286 | |
1287 | #[derive (Clone)] |
1288 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1289 | /// A handle to a thread. |
1290 | /// |
1291 | /// Threads are represented via the `Thread` type, which you can get in one of |
1292 | /// two ways: |
1293 | /// |
1294 | /// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`] |
1295 | /// function, and calling [`thread`][`JoinHandle::thread`] on the |
1296 | /// [`JoinHandle`]. |
1297 | /// * By requesting the current thread, using the [`thread::current`] function. |
1298 | /// |
1299 | /// The [`thread::current`] function is available even for threads not spawned |
1300 | /// by the APIs of this module. |
1301 | /// |
1302 | /// There is usually no need to create a `Thread` struct yourself, one |
1303 | /// should instead use a function like `spawn` to create new threads, see the |
1304 | /// docs of [`Builder`] and [`spawn`] for more details. |
1305 | /// |
1306 | /// [`thread::current`]: current |
1307 | pub struct Thread { |
1308 | inner: Pin<Arc<Inner>>, |
1309 | } |
1310 | |
1311 | impl Thread { |
1312 | /// Used only internally to construct a thread object without spawning. |
1313 | /// |
1314 | /// # Safety |
1315 | /// `name` must be valid UTF-8. |
1316 | pub(crate) unsafe fn new(name: CString) -> Thread { |
1317 | unsafe { Self::new_inner(ThreadName::Other(name)) } |
1318 | } |
1319 | |
1320 | pub(crate) fn new_unnamed() -> Thread { |
1321 | unsafe { Self::new_inner(ThreadName::Unnamed) } |
1322 | } |
1323 | |
1324 | // Used in runtime to construct main thread |
1325 | pub(crate) fn new_main() -> Thread { |
1326 | unsafe { Self::new_inner(ThreadName::Main) } |
1327 | } |
1328 | |
1329 | /// # Safety |
1330 | /// If `name` is `ThreadName::Other(_)`, the contained string must be valid UTF-8. |
1331 | unsafe fn new_inner(name: ThreadName) -> Thread { |
1332 | // We have to use `unsafe` here to construct the `Parker` in-place, |
1333 | // which is required for the UNIX implementation. |
1334 | // |
1335 | // SAFETY: We pin the Arc immediately after creation, so its address never |
1336 | // changes. |
1337 | let inner = unsafe { |
1338 | let mut arc = Arc::<Inner>::new_uninit(); |
1339 | let ptr = Arc::get_mut_unchecked(&mut arc).as_mut_ptr(); |
1340 | addr_of_mut!((*ptr).name).write(name); |
1341 | addr_of_mut!((*ptr).id).write(ThreadId::new()); |
1342 | Parker::new_in_place(addr_of_mut!((*ptr).parker)); |
1343 | Pin::new_unchecked(arc.assume_init()) |
1344 | }; |
1345 | |
1346 | Thread { inner } |
1347 | } |
1348 | |
1349 | /// Like the public [`park`], but callable on any handle. This is used to |
1350 | /// allow parking in TLS destructors. |
1351 | /// |
1352 | /// # Safety |
1353 | /// May only be called from the thread to which this handle belongs. |
1354 | pub(crate) unsafe fn park(&self) { |
1355 | unsafe { self.inner.as_ref().parker().park() } |
1356 | } |
1357 | |
1358 | /// Atomically makes the handle's token available if it is not already. |
1359 | /// |
1360 | /// Every thread is equipped with some basic low-level blocking support, via |
1361 | /// the [`park`][park] function and the `unpark()` method. These can be |
1362 | /// used as a more CPU-efficient implementation of a spinlock. |
1363 | /// |
1364 | /// See the [park documentation][park] for more details. |
1365 | /// |
1366 | /// # Examples |
1367 | /// |
1368 | /// ``` |
1369 | /// use std::thread; |
1370 | /// use std::time::Duration; |
1371 | /// |
1372 | /// let parked_thread = thread::Builder::new() |
1373 | /// .spawn(|| { |
1374 | /// println!("Parking thread" ); |
1375 | /// thread::park(); |
1376 | /// println!("Thread unparked" ); |
1377 | /// }) |
1378 | /// .unwrap(); |
1379 | /// |
1380 | /// // Let some time pass for the thread to be spawned. |
1381 | /// thread::sleep(Duration::from_millis(10)); |
1382 | /// |
1383 | /// println!("Unpark the thread" ); |
1384 | /// parked_thread.thread().unpark(); |
1385 | /// |
1386 | /// parked_thread.join().unwrap(); |
1387 | /// ``` |
1388 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1389 | #[inline ] |
1390 | pub fn unpark(&self) { |
1391 | self.inner.as_ref().parker().unpark(); |
1392 | } |
1393 | |
1394 | /// Gets the thread's unique identifier. |
1395 | /// |
1396 | /// # Examples |
1397 | /// |
1398 | /// ``` |
1399 | /// use std::thread; |
1400 | /// |
1401 | /// let other_thread = thread::spawn(|| { |
1402 | /// thread::current().id() |
1403 | /// }); |
1404 | /// |
1405 | /// let other_thread_id = other_thread.join().unwrap(); |
1406 | /// assert!(thread::current().id() != other_thread_id); |
1407 | /// ``` |
1408 | #[stable (feature = "thread_id" , since = "1.19.0" )] |
1409 | #[must_use ] |
1410 | pub fn id(&self) -> ThreadId { |
1411 | self.inner.id |
1412 | } |
1413 | |
1414 | /// Gets the thread's name. |
1415 | /// |
1416 | /// For more information about named threads, see |
1417 | /// [this module-level documentation][naming-threads]. |
1418 | /// |
1419 | /// # Examples |
1420 | /// |
1421 | /// Threads by default have no name specified: |
1422 | /// |
1423 | /// ``` |
1424 | /// use std::thread; |
1425 | /// |
1426 | /// let builder = thread::Builder::new(); |
1427 | /// |
1428 | /// let handler = builder.spawn(|| { |
1429 | /// assert!(thread::current().name().is_none()); |
1430 | /// }).unwrap(); |
1431 | /// |
1432 | /// handler.join().unwrap(); |
1433 | /// ``` |
1434 | /// |
1435 | /// Thread with a specified name: |
1436 | /// |
1437 | /// ``` |
1438 | /// use std::thread; |
1439 | /// |
1440 | /// let builder = thread::Builder::new() |
1441 | /// .name("foo" .into()); |
1442 | /// |
1443 | /// let handler = builder.spawn(|| { |
1444 | /// assert_eq!(thread::current().name(), Some("foo" )) |
1445 | /// }).unwrap(); |
1446 | /// |
1447 | /// handler.join().unwrap(); |
1448 | /// ``` |
1449 | /// |
1450 | /// [naming-threads]: ./index.html#naming-threads |
1451 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1452 | #[must_use ] |
1453 | pub fn name(&self) -> Option<&str> { |
1454 | self.cname().map(|s| unsafe { str::from_utf8_unchecked(s.to_bytes()) }) |
1455 | } |
1456 | |
1457 | fn cname(&self) -> Option<&CStr> { |
1458 | match &self.inner.name { |
1459 | ThreadName::Main => Some(c"main" ), |
1460 | ThreadName::Other(other) => Some(&other), |
1461 | ThreadName::Unnamed => None, |
1462 | } |
1463 | } |
1464 | } |
1465 | |
1466 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1467 | impl fmt::Debug for Thread { |
1468 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { |
1469 | f&mut DebugStruct<'_, '_>.debug_struct("Thread" ) |
1470 | .field("id" , &self.id()) |
1471 | .field(name:"name" , &self.name()) |
1472 | .finish_non_exhaustive() |
1473 | } |
1474 | } |
1475 | |
1476 | //////////////////////////////////////////////////////////////////////////////// |
1477 | // JoinHandle |
1478 | //////////////////////////////////////////////////////////////////////////////// |
1479 | |
1480 | /// A specialized [`Result`] type for threads. |
1481 | /// |
1482 | /// Indicates the manner in which a thread exited. |
1483 | /// |
1484 | /// The value contained in the `Result::Err` variant |
1485 | /// is the value the thread panicked with; |
1486 | /// that is, the argument the `panic!` macro was called with. |
1487 | /// Unlike with normal errors, this value doesn't implement |
1488 | /// the [`Error`](crate::error::Error) trait. |
1489 | /// |
1490 | /// Thus, a sensible way to handle a thread panic is to either: |
1491 | /// |
1492 | /// 1. propagate the panic with [`std::panic::resume_unwind`] |
1493 | /// 2. or in case the thread is intended to be a subsystem boundary |
1494 | /// that is supposed to isolate system-level failures, |
1495 | /// match on the `Err` variant and handle the panic in an appropriate way |
1496 | /// |
1497 | /// A thread that completes without panicking is considered to exit successfully. |
1498 | /// |
1499 | /// # Examples |
1500 | /// |
1501 | /// Matching on the result of a joined thread: |
1502 | /// |
1503 | /// ```no_run |
1504 | /// use std::{fs, thread, panic}; |
1505 | /// |
1506 | /// fn copy_in_thread() -> thread::Result<()> { |
1507 | /// thread::spawn(|| { |
1508 | /// fs::copy("foo.txt" , "bar.txt" ).unwrap(); |
1509 | /// }).join() |
1510 | /// } |
1511 | /// |
1512 | /// fn main() { |
1513 | /// match copy_in_thread() { |
1514 | /// Ok(_) => println!("copy succeeded" ), |
1515 | /// Err(e) => panic::resume_unwind(e), |
1516 | /// } |
1517 | /// } |
1518 | /// ``` |
1519 | /// |
1520 | /// [`Result`]: crate::result::Result |
1521 | /// [`std::panic::resume_unwind`]: crate::panic::resume_unwind |
1522 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1523 | pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>; |
1524 | |
1525 | // This packet is used to communicate the return value between the spawned |
1526 | // thread and the rest of the program. It is shared through an `Arc` and |
1527 | // there's no need for a mutex here because synchronization happens with `join()` |
1528 | // (the caller will never read this packet until the thread has exited). |
1529 | // |
1530 | // An Arc to the packet is stored into a `JoinInner` which in turns is placed |
1531 | // in `JoinHandle`. |
1532 | struct Packet<'scope, T> { |
1533 | scope: Option<Arc<scoped::ScopeData>>, |
1534 | result: UnsafeCell<Option<Result<T>>>, |
1535 | _marker: PhantomData<Option<&'scope scoped::ScopeData>>, |
1536 | } |
1537 | |
1538 | // Due to the usage of `UnsafeCell` we need to manually implement Sync. |
1539 | // The type `T` should already always be Send (otherwise the thread could not |
1540 | // have been created) and the Packet is Sync because all access to the |
1541 | // `UnsafeCell` synchronized (by the `join()` boundary), and `ScopeData` is Sync. |
1542 | unsafe impl<'scope, T: Sync> Sync for Packet<'scope, T> {} |
1543 | |
1544 | impl<'scope, T> Drop for Packet<'scope, T> { |
1545 | fn drop(&mut self) { |
1546 | // If this packet was for a thread that ran in a scope, the thread |
1547 | // panicked, and nobody consumed the panic payload, we make sure |
1548 | // the scope function will panic. |
1549 | let unhandled_panic = matches!(self.result.get_mut(), Some(Err(_))); |
1550 | // Drop the result without causing unwinding. |
1551 | // This is only relevant for threads that aren't join()ed, as |
1552 | // join() will take the `result` and set it to None, such that |
1553 | // there is nothing left to drop here. |
1554 | // If this panics, we should handle that, because we're outside the |
1555 | // outermost `catch_unwind` of our thread. |
1556 | // We just abort in that case, since there's nothing else we can do. |
1557 | // (And even if we tried to handle it somehow, we'd also need to handle |
1558 | // the case where the panic payload we get out of it also panics on |
1559 | // drop, and so on. See issue #86027.) |
1560 | if let Err(_) = panic::catch_unwind(panic::AssertUnwindSafe(|| { |
1561 | *self.result.get_mut() = None; |
1562 | })) { |
1563 | rtabort!("thread result panicked on drop" ); |
1564 | } |
1565 | // Book-keeping so the scope knows when it's done. |
1566 | if let Some(scope) = &self.scope { |
1567 | // Now that there will be no more user code running on this thread |
1568 | // that can use 'scope, mark the thread as 'finished'. |
1569 | // It's important we only do this after the `result` has been dropped, |
1570 | // since dropping it might still use things it borrowed from 'scope. |
1571 | scope.decrement_num_running_threads(unhandled_panic); |
1572 | } |
1573 | } |
1574 | } |
1575 | |
1576 | /// Inner representation for JoinHandle |
1577 | struct JoinInner<'scope, T> { |
1578 | native: imp::Thread, |
1579 | thread: Thread, |
1580 | packet: Arc<Packet<'scope, T>>, |
1581 | } |
1582 | |
1583 | impl<'scope, T> JoinInner<'scope, T> { |
1584 | fn join(mut self) -> Result<T> { |
1585 | self.native.join(); |
1586 | Arc::get_mut(&mut self.packet).unwrap().result.get_mut().take().unwrap() |
1587 | } |
1588 | } |
1589 | |
1590 | /// An owned permission to join on a thread (block on its termination). |
1591 | /// |
1592 | /// A `JoinHandle` *detaches* the associated thread when it is dropped, which |
1593 | /// means that there is no longer any handle to the thread and no way to `join` |
1594 | /// on it. |
1595 | /// |
1596 | /// Due to platform restrictions, it is not possible to [`Clone`] this |
1597 | /// handle: the ability to join a thread is a uniquely-owned permission. |
1598 | /// |
1599 | /// This `struct` is created by the [`thread::spawn`] function and the |
1600 | /// [`thread::Builder::spawn`] method. |
1601 | /// |
1602 | /// # Examples |
1603 | /// |
1604 | /// Creation from [`thread::spawn`]: |
1605 | /// |
1606 | /// ``` |
1607 | /// use std::thread; |
1608 | /// |
1609 | /// let join_handle: thread::JoinHandle<_> = thread::spawn(|| { |
1610 | /// // some work here |
1611 | /// }); |
1612 | /// ``` |
1613 | /// |
1614 | /// Creation from [`thread::Builder::spawn`]: |
1615 | /// |
1616 | /// ``` |
1617 | /// use std::thread; |
1618 | /// |
1619 | /// let builder = thread::Builder::new(); |
1620 | /// |
1621 | /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| { |
1622 | /// // some work here |
1623 | /// }).unwrap(); |
1624 | /// ``` |
1625 | /// |
1626 | /// A thread being detached and outliving the thread that spawned it: |
1627 | /// |
1628 | /// ```no_run |
1629 | /// use std::thread; |
1630 | /// use std::time::Duration; |
1631 | /// |
1632 | /// let original_thread = thread::spawn(|| { |
1633 | /// let _detached_thread = thread::spawn(|| { |
1634 | /// // Here we sleep to make sure that the first thread returns before. |
1635 | /// thread::sleep(Duration::from_millis(10)); |
1636 | /// // This will be called, even though the JoinHandle is dropped. |
1637 | /// println!("♫ Still alive ♫" ); |
1638 | /// }); |
1639 | /// }); |
1640 | /// |
1641 | /// original_thread.join().expect("The thread being joined has panicked" ); |
1642 | /// println!("Original thread is joined." ); |
1643 | /// |
1644 | /// // We make sure that the new thread has time to run, before the main |
1645 | /// // thread returns. |
1646 | /// |
1647 | /// thread::sleep(Duration::from_millis(1000)); |
1648 | /// ``` |
1649 | /// |
1650 | /// [`thread::Builder::spawn`]: Builder::spawn |
1651 | /// [`thread::spawn`]: spawn |
1652 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1653 | #[cfg_attr (target_os = "teeos" , must_use)] |
1654 | pub struct JoinHandle<T>(JoinInner<'static, T>); |
1655 | |
1656 | #[stable (feature = "joinhandle_impl_send_sync" , since = "1.29.0" )] |
1657 | unsafe impl<T> Send for JoinHandle<T> {} |
1658 | #[stable (feature = "joinhandle_impl_send_sync" , since = "1.29.0" )] |
1659 | unsafe impl<T> Sync for JoinHandle<T> {} |
1660 | |
1661 | impl<T> JoinHandle<T> { |
1662 | /// Extracts a handle to the underlying thread. |
1663 | /// |
1664 | /// # Examples |
1665 | /// |
1666 | /// ``` |
1667 | /// use std::thread; |
1668 | /// |
1669 | /// let builder = thread::Builder::new(); |
1670 | /// |
1671 | /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| { |
1672 | /// // some work here |
1673 | /// }).unwrap(); |
1674 | /// |
1675 | /// let thread = join_handle.thread(); |
1676 | /// println!("thread id: {:?}" , thread.id()); |
1677 | /// ``` |
1678 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1679 | #[must_use ] |
1680 | pub fn thread(&self) -> &Thread { |
1681 | &self.0.thread |
1682 | } |
1683 | |
1684 | /// Waits for the associated thread to finish. |
1685 | /// |
1686 | /// This function will return immediately if the associated thread has already finished. |
1687 | /// |
1688 | /// In terms of [atomic memory orderings], the completion of the associated |
1689 | /// thread synchronizes with this function returning. In other words, all |
1690 | /// operations performed by that thread [happen |
1691 | /// before](https://doc.rust-lang.org/nomicon/atomics.html#data-accesses) all |
1692 | /// operations that happen after `join` returns. |
1693 | /// |
1694 | /// If the associated thread panics, [`Err`] is returned with the parameter given |
1695 | /// to [`panic!`]. |
1696 | /// |
1697 | /// [`Err`]: crate::result::Result::Err |
1698 | /// [atomic memory orderings]: crate::sync::atomic |
1699 | /// |
1700 | /// # Panics |
1701 | /// |
1702 | /// This function may panic on some platforms if a thread attempts to join |
1703 | /// itself or otherwise may create a deadlock with joining threads. |
1704 | /// |
1705 | /// # Examples |
1706 | /// |
1707 | /// ``` |
1708 | /// use std::thread; |
1709 | /// |
1710 | /// let builder = thread::Builder::new(); |
1711 | /// |
1712 | /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| { |
1713 | /// // some work here |
1714 | /// }).unwrap(); |
1715 | /// join_handle.join().expect("Couldn't join on the associated thread" ); |
1716 | /// ``` |
1717 | #[stable (feature = "rust1" , since = "1.0.0" )] |
1718 | pub fn join(self) -> Result<T> { |
1719 | self.0.join() |
1720 | } |
1721 | |
1722 | /// Checks if the associated thread has finished running its main function. |
1723 | /// |
1724 | /// `is_finished` supports implementing a non-blocking join operation, by checking |
1725 | /// `is_finished`, and calling `join` if it returns `true`. This function does not block. To |
1726 | /// block while waiting on the thread to finish, use [`join`][Self::join]. |
1727 | /// |
1728 | /// This might return `true` for a brief moment after the thread's main |
1729 | /// function has returned, but before the thread itself has stopped running. |
1730 | /// However, once this returns `true`, [`join`][Self::join] can be expected |
1731 | /// to return quickly, without blocking for any significant amount of time. |
1732 | #[stable (feature = "thread_is_running" , since = "1.61.0" )] |
1733 | pub fn is_finished(&self) -> bool { |
1734 | Arc::strong_count(&self.0.packet) == 1 |
1735 | } |
1736 | } |
1737 | |
1738 | impl<T> AsInner<imp::Thread> for JoinHandle<T> { |
1739 | fn as_inner(&self) -> &imp::Thread { |
1740 | &self.0.native |
1741 | } |
1742 | } |
1743 | |
1744 | impl<T> IntoInner<imp::Thread> for JoinHandle<T> { |
1745 | fn into_inner(self) -> imp::Thread { |
1746 | self.0.native |
1747 | } |
1748 | } |
1749 | |
1750 | #[stable (feature = "std_debug" , since = "1.16.0" )] |
1751 | impl<T> fmt::Debug for JoinHandle<T> { |
1752 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { |
1753 | f.debug_struct(name:"JoinHandle" ).finish_non_exhaustive() |
1754 | } |
1755 | } |
1756 | |
1757 | fn _assert_sync_and_send() { |
1758 | fn _assert_both<T: Send + Sync>() {} |
1759 | _assert_both::<JoinHandle<()>>(); |
1760 | _assert_both::<Thread>(); |
1761 | } |
1762 | |
1763 | /// Returns an estimate of the default amount of parallelism a program should use. |
1764 | /// |
1765 | /// Parallelism is a resource. A given machine provides a certain capacity for |
1766 | /// parallelism, i.e., a bound on the number of computations it can perform |
1767 | /// simultaneously. This number often corresponds to the amount of CPUs a |
1768 | /// computer has, but it may diverge in various cases. |
1769 | /// |
1770 | /// Host environments such as VMs or container orchestrators may want to |
1771 | /// restrict the amount of parallelism made available to programs in them. This |
1772 | /// is often done to limit the potential impact of (unintentionally) |
1773 | /// resource-intensive programs on other programs running on the same machine. |
1774 | /// |
1775 | /// # Limitations |
1776 | /// |
1777 | /// The purpose of this API is to provide an easy and portable way to query |
1778 | /// the default amount of parallelism the program should use. Among other things it |
1779 | /// does not expose information on NUMA regions, does not account for |
1780 | /// differences in (co)processor capabilities or current system load, |
1781 | /// and will not modify the program's global state in order to more accurately |
1782 | /// query the amount of available parallelism. |
1783 | /// |
1784 | /// Where both fixed steady-state and burst limits are available the steady-state |
1785 | /// capacity will be used to ensure more predictable latencies. |
1786 | /// |
1787 | /// Resource limits can be changed during the runtime of a program, therefore the value is |
1788 | /// not cached and instead recomputed every time this function is called. It should not be |
1789 | /// called from hot code. |
1790 | /// |
1791 | /// The value returned by this function should be considered a simplified |
1792 | /// approximation of the actual amount of parallelism available at any given |
1793 | /// time. To get a more detailed or precise overview of the amount of |
1794 | /// parallelism available to the program, you may wish to use |
1795 | /// platform-specific APIs as well. The following platform limitations currently |
1796 | /// apply to `available_parallelism`: |
1797 | /// |
1798 | /// On Windows: |
1799 | /// - It may undercount the amount of parallelism available on systems with more |
1800 | /// than 64 logical CPUs. However, programs typically need specific support to |
1801 | /// take advantage of more than 64 logical CPUs, and in the absence of such |
1802 | /// support, the number returned by this function accurately reflects the |
1803 | /// number of logical CPUs the program can use by default. |
1804 | /// - It may overcount the amount of parallelism available on systems limited by |
1805 | /// process-wide affinity masks, or job object limitations. |
1806 | /// |
1807 | /// On Linux: |
1808 | /// - It may overcount the amount of parallelism available when limited by a |
1809 | /// process-wide affinity mask or cgroup quotas and `sched_getaffinity()` or cgroup fs can't be |
1810 | /// queried, e.g. due to sandboxing. |
1811 | /// - It may undercount the amount of parallelism if the current thread's affinity mask |
1812 | /// does not reflect the process' cpuset, e.g. due to pinned threads. |
1813 | /// - If the process is in a cgroup v1 cpu controller, this may need to |
1814 | /// scan mountpoints to find the corresponding cgroup v1 controller, |
1815 | /// which may take time on systems with large numbers of mountpoints. |
1816 | /// (This does not apply to cgroup v2, or to processes not in a |
1817 | /// cgroup.) |
1818 | /// |
1819 | /// On all targets: |
1820 | /// - It may overcount the amount of parallelism available when running in a VM |
1821 | /// with CPU usage limits (e.g. an overcommitted host). |
1822 | /// |
1823 | /// # Errors |
1824 | /// |
1825 | /// This function will, but is not limited to, return errors in the following |
1826 | /// cases: |
1827 | /// |
1828 | /// - If the amount of parallelism is not known for the target platform. |
1829 | /// - If the program lacks permission to query the amount of parallelism made |
1830 | /// available to it. |
1831 | /// |
1832 | /// # Examples |
1833 | /// |
1834 | /// ``` |
1835 | /// # #![allow (dead_code)] |
1836 | /// use std::{io, thread}; |
1837 | /// |
1838 | /// fn main() -> io::Result<()> { |
1839 | /// let count = thread::available_parallelism()?.get(); |
1840 | /// assert!(count >= 1_usize); |
1841 | /// Ok(()) |
1842 | /// } |
1843 | /// ``` |
1844 | #[doc (alias = "available_concurrency" )] // Alias for a previous name we gave this API on unstable. |
1845 | #[doc (alias = "hardware_concurrency" )] // Alias for C++ `std::thread::hardware_concurrency`. |
1846 | #[doc (alias = "num_cpus" )] // Alias for a popular ecosystem crate which provides similar functionality. |
1847 | #[stable (feature = "available_parallelism" , since = "1.59.0" )] |
1848 | pub fn available_parallelism() -> io::Result<NonZero<usize>> { |
1849 | imp::available_parallelism() |
1850 | } |
1851 | |