| 1 | //! Native threads. |
| 2 | //! |
| 3 | //! ## The threading model |
| 4 | //! |
| 5 | //! An executing Rust program consists of a collection of native OS threads, |
| 6 | //! each with their own stack and local state. Threads can be named, and |
| 7 | //! provide some built-in support for low-level synchronization. |
| 8 | //! |
| 9 | //! Communication between threads can be done through |
| 10 | //! [channels], Rust's message-passing types, along with [other forms of thread |
| 11 | //! synchronization](../../std/sync/index.html) and shared-memory data |
| 12 | //! structures. In particular, types that are guaranteed to be |
//! thread-safe are easily shared between threads using the
| 14 | //! atomically-reference-counted container, [`Arc`]. |
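//!
//! For instance, here is a minimal sketch of sharing mutable state between
//! threads by pairing [`Arc`] with a `Mutex` from
//! [`std::sync`](../../std/sync/index.html):
//!
//! ```rust
//! use std::sync::{Arc, Mutex};
//! use std::thread;
//!
//! let counter = Arc::new(Mutex::new(0));
//!
//! let handles: Vec<_> = (0..4).map(|_| {
//!     let counter = Arc::clone(&counter);
//!     thread::spawn(move || {
//!         // Each thread takes the lock and increments the shared counter.
//!         *counter.lock().unwrap() += 1;
//!     })
//! }).collect();
//!
//! for handle in handles {
//!     handle.join().unwrap();
//! }
//!
//! assert_eq!(*counter.lock().unwrap(), 4);
//! ```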
| 15 | //! |
| 16 | //! Fatal logic errors in Rust cause *thread panic*, during which |
| 17 | //! a thread will unwind the stack, running destructors and freeing |
| 18 | //! owned resources. While not meant as a 'try/catch' mechanism, panics |
| 19 | //! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with |
| 20 | //! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered |
| 21 | //! from, or alternatively be resumed with |
| 22 | //! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic |
| 23 | //! is not caught the thread will exit, but the panic may optionally be |
| 24 | //! detected from a different thread with [`join`]. If the main thread panics |
| 25 | //! without the panic being caught, the application will exit with a |
| 26 | //! non-zero exit code. |
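//!
//! As a brief illustrative sketch, a panic in a spawned thread can be observed
//! from another thread through [`join`]:
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::spawn(|| {
//!     panic!("oops");
//! });
//!
//! // `join` returns `Err` because the spawned thread panicked.
//! assert!(handle.join().is_err());
//! ```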
| 27 | //! |
| 28 | //! When the main thread of a Rust program terminates, the entire program shuts |
| 29 | //! down, even if other threads are still running. However, this module provides |
| 30 | //! convenient facilities for automatically waiting for the termination of a |
| 31 | //! thread (i.e., join). |
| 32 | //! |
| 33 | //! ## Spawning a thread |
| 34 | //! |
| 35 | //! A new thread can be spawned using the [`thread::spawn`][`spawn`] function: |
| 36 | //! |
| 37 | //! ```rust |
| 38 | //! use std::thread; |
| 39 | //! |
| 40 | //! thread::spawn(move || { |
| 41 | //! // some work here |
| 42 | //! }); |
| 43 | //! ``` |
| 44 | //! |
| 45 | //! In this example, the spawned thread is "detached," which means that there is |
| 46 | //! no way for the program to learn when the spawned thread completes or otherwise |
| 47 | //! terminates. |
| 48 | //! |
| 49 | //! To learn when a thread completes, it is necessary to capture the [`JoinHandle`] |
| 50 | //! object that is returned by the call to [`spawn`], which provides |
| 51 | //! a `join` method that allows the caller to wait for the completion of the |
| 52 | //! spawned thread: |
| 53 | //! |
| 54 | //! ```rust |
| 55 | //! use std::thread; |
| 56 | //! |
| 57 | //! let thread_join_handle = thread::spawn(move || { |
| 58 | //! // some work here |
| 59 | //! }); |
| 60 | //! // some work here |
| 61 | //! let res = thread_join_handle.join(); |
| 62 | //! ``` |
| 63 | //! |
| 64 | //! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final |
| 65 | //! value produced by the spawned thread, or [`Err`] of the value given to |
| 66 | //! a call to [`panic!`] if the thread panicked. |
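//!
//! For example, a minimal sketch of inspecting that result and recovering the
//! panic payload from the [`Err`] variant:
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::spawn(|| {
//!     panic!("boom");
//! });
//!
//! match handle.join() {
//!     Ok(value) => println!("thread returned {value:?}"),
//!     Err(payload) => {
//!         // A string literal passed to `panic!` arrives as a `&str` payload.
//!         assert_eq!(payload.downcast_ref::<&str>(), Some(&"boom"));
//!     }
//! }
//! ```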
| 67 | //! |
| 68 | //! Note that there is no parent/child relationship between a thread that spawns a |
| 69 | //! new thread and the thread being spawned. In particular, the spawned thread may or |
| 70 | //! may not outlive the spawning thread, unless the spawning thread is the main thread. |
| 71 | //! |
| 72 | //! ## Configuring threads |
| 73 | //! |
| 74 | //! A new thread can be configured before it is spawned via the [`Builder`] type, |
| 75 | //! which currently allows you to set the name and stack size for the thread: |
| 76 | //! |
| 77 | //! ```rust |
| 78 | //! # #![allow(unused_must_use)] |
| 79 | //! use std::thread; |
| 80 | //! |
//! thread::Builder::new().name("thread1".to_string()).spawn(move || {
//! println!("Hello, world!");
| 83 | //! }); |
| 84 | //! ``` |
| 85 | //! |
| 86 | //! ## The `Thread` type |
| 87 | //! |
| 88 | //! Threads are represented via the [`Thread`] type, which you can get in one of |
| 89 | //! two ways: |
| 90 | //! |
| 91 | //! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`] |
| 92 | //! function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`]. |
| 93 | //! * By requesting the current thread, using the [`thread::current`] function. |
| 94 | //! |
| 95 | //! The [`thread::current`] function is available even for threads not spawned |
| 96 | //! by the APIs of this module. |
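//!
//! A small sketch of both ways of obtaining a [`Thread`] handle:
//!
//! ```rust
//! use std::thread;
//!
//! // A handle to the current thread.
//! let me = thread::current();
//!
//! // A handle to a spawned thread, via its `JoinHandle`.
//! let join_handle = thread::spawn(|| {});
//! assert_ne!(me.id(), join_handle.thread().id());
//!
//! join_handle.join().unwrap();
//! ```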
| 97 | //! |
| 98 | //! ## Thread-local storage |
| 99 | //! |
| 100 | //! This module also provides an implementation of thread-local storage for Rust |
| 101 | //! programs. Thread-local storage is a method of storing data into a global |
| 102 | //! variable that each thread in the program will have its own copy of. |
| 103 | //! Threads do not share this data, so accesses do not need to be synchronized. |
| 104 | //! |
| 105 | //! A thread-local key owns the value it contains and will destroy the value when the |
| 106 | //! thread exits. It is created with the [`thread_local!`] macro and can contain any |
| 107 | //! value that is `'static` (no borrowed pointers). It provides an accessor function, |
| 108 | //! [`with`], that yields a shared reference to the value to the specified |
| 109 | //! closure. Thread-local keys allow only shared access to values, as there would be no |
| 110 | //! way to guarantee uniqueness if mutable borrows were allowed. Most values |
| 111 | //! will want to make use of some form of **interior mutability** through the |
| 112 | //! [`Cell`] or [`RefCell`] types. |
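//!
//! A minimal sketch of a thread-local counter, using [`thread_local!`] together
//! with a [`Cell`] for interior mutability:
//!
//! ```rust
//! use std::cell::Cell;
//! use std::thread;
//!
//! thread_local! {
//!     static COUNTER: Cell<u32> = Cell::new(0);
//! }
//!
//! // The current thread starts from its own copy of the initial value.
//! COUNTER.with(|c| c.set(c.get() + 1));
//! assert_eq!(COUNTER.with(|c| c.get()), 1);
//!
//! thread::spawn(|| {
//!     // The spawned thread sees a fresh value, not the one set above.
//!     assert_eq!(COUNTER.with(|c| c.get()), 0);
//! }).join().unwrap();
//! ```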
| 113 | //! |
| 114 | //! ## Naming threads |
| 115 | //! |
| 116 | //! Threads are able to have associated names for identification purposes. By default, spawned |
| 117 | //! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass |
| 118 | //! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the |
| 119 | //! thread, use [`Thread::name`]. A couple of examples where the name of a thread gets used: |
| 120 | //! |
| 121 | //! * If a panic occurs in a named thread, the thread name will be printed in the panic message. |
| 122 | //! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` in |
| 123 | //! unix-like platforms). |
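//!
//! For instance, a minimal sketch of setting a name through [`Builder::name`]
//! and reading it back inside the thread with [`Thread::name`]:
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::Builder::new()
//!     .name("worker".to_string())
//!     .spawn(|| {
//!         assert_eq!(thread::current().name(), Some("worker"));
//!     })
//!     .unwrap();
//!
//! handle.join().unwrap();
//! ```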
| 124 | //! |
| 125 | //! ## Stack size |
| 126 | //! |
| 127 | //! The default stack size is platform-dependent and subject to change. |
| 128 | //! Currently, it is 2 MiB on all Tier-1 platforms. |
| 129 | //! |
| 130 | //! There are two ways to manually specify the stack size for spawned threads: |
| 131 | //! |
| 132 | //! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`]. |
| 133 | //! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack |
| 134 | //! size (in bytes). Note that setting [`Builder::stack_size`] will override this. Be aware that |
| 135 | //! changes to `RUST_MIN_STACK` may be ignored after program start. |
| 136 | //! |
| 137 | //! Note that the stack size of the main thread is *not* determined by Rust. |
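//!
//! For example, a sketch of requesting a larger stack through
//! [`Builder::stack_size`] (the 4 MiB value here is purely illustrative):
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::Builder::new()
//!     // Request at least 4 MiB of stack for this thread.
//!     .stack_size(4 * 1024 * 1024)
//!     .spawn(|| {
//!         // deep recursion or large stack allocations could go here
//!     })
//!     .unwrap();
//!
//! handle.join().unwrap();
//! ```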
| 138 | //! |
| 139 | //! [channels]: crate::sync::mpsc |
| 140 | //! [`join`]: JoinHandle::join |
| 141 | //! [`Result`]: crate::result::Result |
| 142 | //! [`Ok`]: crate::result::Result::Ok |
| 143 | //! [`Err`]: crate::result::Result::Err |
| 144 | //! [`thread::current`]: current::current |
| 145 | //! [`thread::Result`]: Result |
| 146 | //! [`unpark`]: Thread::unpark |
| 147 | //! [`thread::park_timeout`]: park_timeout |
| 148 | //! [`Cell`]: crate::cell::Cell |
| 149 | //! [`RefCell`]: crate::cell::RefCell |
| 150 | //! [`with`]: LocalKey::with |
| 151 | //! [`thread_local!`]: crate::thread_local |
| 152 | |
| 153 | #![stable (feature = "rust1" , since = "1.0.0" )] |
| 154 | #![deny (unsafe_op_in_unsafe_fn)] |
| 155 | // Under `test`, `__FastLocalKeyInner` seems unused. |
| 156 | #![cfg_attr (test, allow(dead_code))] |
| 157 | |
| 158 | #[cfg (all(test, not(any(target_os = "emscripten" , target_os = "wasi" ))))] |
| 159 | mod tests; |
| 160 | |
| 161 | use crate::any::Any; |
| 162 | use crate::cell::UnsafeCell; |
| 163 | use crate::ffi::CStr; |
| 164 | use crate::marker::PhantomData; |
| 165 | use crate::mem::{self, ManuallyDrop, forget}; |
| 166 | use crate::num::NonZero; |
| 167 | use crate::pin::Pin; |
| 168 | use crate::sync::Arc; |
| 169 | use crate::sync::atomic::{Atomic, AtomicUsize, Ordering}; |
| 170 | use crate::sys::sync::Parker; |
| 171 | use crate::sys::thread as imp; |
| 172 | use crate::sys_common::{AsInner, IntoInner}; |
| 173 | use crate::time::{Duration, Instant}; |
| 174 | use crate::{env, fmt, io, panic, panicking, str}; |
| 175 | |
| 176 | #[stable (feature = "scoped_threads" , since = "1.63.0" )] |
| 177 | mod scoped; |
| 178 | |
| 179 | #[stable (feature = "scoped_threads" , since = "1.63.0" )] |
| 180 | pub use scoped::{Scope, ScopedJoinHandle, scope}; |
| 181 | |
| 182 | mod current; |
| 183 | |
| 184 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 185 | pub use current::current; |
| 186 | pub(crate) use current::{current_id, current_or_unnamed, drop_current}; |
| 187 | use current::{set_current, try_with_current}; |
| 188 | |
| 189 | mod spawnhook; |
| 190 | |
| 191 | #[unstable (feature = "thread_spawn_hook" , issue = "132951" )] |
| 192 | pub use spawnhook::add_spawn_hook; |
| 193 | |
| 194 | //////////////////////////////////////////////////////////////////////////////// |
| 195 | // Thread-local storage |
| 196 | //////////////////////////////////////////////////////////////////////////////// |
| 197 | |
| 198 | #[macro_use ] |
| 199 | mod local; |
| 200 | |
| 201 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 202 | pub use self::local::{AccessError, LocalKey}; |
| 203 | |
| 204 | // Implementation details used by the thread_local!{} macro. |
| 205 | #[doc (hidden)] |
| 206 | #[unstable (feature = "thread_local_internals" , issue = "none" )] |
| 207 | pub mod local_impl { |
| 208 | pub use crate::sys::thread_local::*; |
| 209 | } |
| 210 | |
| 211 | //////////////////////////////////////////////////////////////////////////////// |
| 212 | // Builder |
| 213 | //////////////////////////////////////////////////////////////////////////////// |
| 214 | |
| 215 | /// Thread factory, which can be used in order to configure the properties of |
| 216 | /// a new thread. |
| 217 | /// |
| 218 | /// Methods can be chained on it in order to configure it. |
| 219 | /// |
| 220 | /// The two configurations available are: |
| 221 | /// |
| 222 | /// - [`name`]: specifies an [associated name for the thread][naming-threads] |
| 223 | /// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size] |
| 224 | /// |
| 225 | /// The [`spawn`] method will take ownership of the builder and create an |
| 226 | /// [`io::Result`] to the thread handle with the given configuration. |
| 227 | /// |
| 228 | /// The [`thread::spawn`] free function uses a `Builder` with default |
| 229 | /// configuration and [`unwrap`]s its return value. |
| 230 | /// |
/// You may want to use [`spawn`] instead of [`thread::spawn`] when you want
/// to recover from a failure to launch a thread: the free function panics,
/// whereas the `Builder` method returns an [`io::Result`].
| 234 | /// |
| 235 | /// # Examples |
| 236 | /// |
| 237 | /// ``` |
| 238 | /// use std::thread; |
| 239 | /// |
| 240 | /// let builder = thread::Builder::new(); |
| 241 | /// |
| 242 | /// let handler = builder.spawn(|| { |
| 243 | /// // thread code |
| 244 | /// }).unwrap(); |
| 245 | /// |
| 246 | /// handler.join().unwrap(); |
| 247 | /// ``` |
| 248 | /// |
| 249 | /// [`stack_size`]: Builder::stack_size |
| 250 | /// [`name`]: Builder::name |
| 251 | /// [`spawn`]: Builder::spawn |
| 252 | /// [`thread::spawn`]: spawn |
| 253 | /// [`io::Result`]: crate::io::Result |
| 254 | /// [`unwrap`]: crate::result::Result::unwrap |
| 255 | /// [naming-threads]: ./index.html#naming-threads |
| 256 | /// [stack-size]: ./index.html#stack-size |
| 257 | #[must_use = "must eventually spawn the thread" ] |
| 258 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 259 | #[derive (Debug)] |
| 260 | pub struct Builder { |
| 261 | // A name for the thread-to-be, for identification in panic messages |
| 262 | name: Option<String>, |
| 263 | // The size of the stack for the spawned thread in bytes |
| 264 | stack_size: Option<usize>, |
| 265 | // Skip running and inheriting the thread spawn hooks |
| 266 | no_hooks: bool, |
| 267 | } |
| 268 | |
| 269 | impl Builder { |
| 270 | /// Generates the base configuration for spawning a thread, from which |
| 271 | /// configuration methods can be chained. |
| 272 | /// |
| 273 | /// # Examples |
| 274 | /// |
| 275 | /// ``` |
| 276 | /// use std::thread; |
| 277 | /// |
| 278 | /// let builder = thread::Builder::new() |
/// .name("foo".into())
| 280 | /// .stack_size(32 * 1024); |
| 281 | /// |
| 282 | /// let handler = builder.spawn(|| { |
| 283 | /// // thread code |
| 284 | /// }).unwrap(); |
| 285 | /// |
| 286 | /// handler.join().unwrap(); |
| 287 | /// ``` |
| 288 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 289 | pub fn new() -> Builder { |
| 290 | Builder { name: None, stack_size: None, no_hooks: false } |
| 291 | } |
| 292 | |
| 293 | /// Names the thread-to-be. Currently the name is used for identification |
| 294 | /// only in panic messages. |
| 295 | /// |
| 296 | /// The name must not contain null bytes (`\0`). |
| 297 | /// |
| 298 | /// For more information about named threads, see |
| 299 | /// [this module-level documentation][naming-threads]. |
| 300 | /// |
| 301 | /// # Examples |
| 302 | /// |
| 303 | /// ``` |
| 304 | /// use std::thread; |
| 305 | /// |
| 306 | /// let builder = thread::Builder::new() |
/// .name("foo".into());
| 308 | /// |
| 309 | /// let handler = builder.spawn(|| { |
/// assert_eq!(thread::current().name(), Some("foo"))
| 311 | /// }).unwrap(); |
| 312 | /// |
| 313 | /// handler.join().unwrap(); |
| 314 | /// ``` |
| 315 | /// |
| 316 | /// [naming-threads]: ./index.html#naming-threads |
| 317 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 318 | pub fn name(mut self, name: String) -> Builder { |
| 319 | self.name = Some(name); |
| 320 | self |
| 321 | } |
| 322 | |
| 323 | /// Sets the size of the stack (in bytes) for the new thread. |
| 324 | /// |
| 325 | /// The actual stack size may be greater than this value if |
| 326 | /// the platform specifies a minimal stack size. |
| 327 | /// |
| 328 | /// For more information about the stack size for threads, see |
| 329 | /// [this module-level documentation][stack-size]. |
| 330 | /// |
| 331 | /// # Examples |
| 332 | /// |
| 333 | /// ``` |
| 334 | /// use std::thread; |
| 335 | /// |
| 336 | /// let builder = thread::Builder::new().stack_size(32 * 1024); |
| 337 | /// ``` |
| 338 | /// |
| 339 | /// [stack-size]: ./index.html#stack-size |
| 340 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 341 | pub fn stack_size(mut self, size: usize) -> Builder { |
| 342 | self.stack_size = Some(size); |
| 343 | self |
| 344 | } |
| 345 | |
| 346 | /// Disables running and inheriting [spawn hooks](add_spawn_hook). |
| 347 | /// |
| 348 | /// Use this if the parent thread is in no way relevant for the child thread. |
| 349 | /// For example, when lazily spawning threads for a thread pool. |
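///
/// # Examples
///
/// A sketch of spawning a pool worker that skips any registered spawn hooks
/// (requires the unstable `thread_spawn_hook` feature):
///
/// ```
/// #![feature(thread_spawn_hook)]
/// use std::thread;
///
/// let handle = thread::Builder::new()
///     // This thread is unrelated to its parent, so don't run spawn hooks.
///     .no_hooks()
///     .spawn(|| {
///         // pool worker code
///     })
///     .unwrap();
///
/// handle.join().unwrap();
/// ```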
| 350 | #[unstable (feature = "thread_spawn_hook" , issue = "132951" )] |
| 351 | pub fn no_hooks(mut self) -> Builder { |
| 352 | self.no_hooks = true; |
| 353 | self |
| 354 | } |
| 355 | |
| 356 | /// Spawns a new thread by taking ownership of the `Builder`, and returns an |
| 357 | /// [`io::Result`] to its [`JoinHandle`]. |
| 358 | /// |
| 359 | /// The spawned thread may outlive the caller (unless the caller thread |
| 360 | /// is the main thread; the whole process is terminated when the main |
| 361 | /// thread finishes). The join handle can be used to block on |
| 362 | /// termination of the spawned thread, including recovering its panics. |
| 363 | /// |
| 364 | /// For a more complete documentation see [`thread::spawn`][`spawn`]. |
| 365 | /// |
| 366 | /// # Errors |
| 367 | /// |
| 368 | /// Unlike the [`spawn`] free function, this method yields an |
| 369 | /// [`io::Result`] to capture any failure to create the thread at |
| 370 | /// the OS level. |
| 371 | /// |
| 372 | /// [`io::Result`]: crate::io::Result |
| 373 | /// |
| 374 | /// # Panics |
| 375 | /// |
| 376 | /// Panics if a thread name was set and it contained null bytes. |
| 377 | /// |
| 378 | /// # Examples |
| 379 | /// |
| 380 | /// ``` |
| 381 | /// use std::thread; |
| 382 | /// |
| 383 | /// let builder = thread::Builder::new(); |
| 384 | /// |
| 385 | /// let handler = builder.spawn(|| { |
| 386 | /// // thread code |
| 387 | /// }).unwrap(); |
| 388 | /// |
| 389 | /// handler.join().unwrap(); |
| 390 | /// ``` |
| 391 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 392 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
| 393 | pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>> |
| 394 | where |
| 395 | F: FnOnce() -> T, |
| 396 | F: Send + 'static, |
| 397 | T: Send + 'static, |
| 398 | { |
| 399 | unsafe { self.spawn_unchecked(f) } |
| 400 | } |
| 401 | |
| 402 | /// Spawns a new thread without any lifetime restrictions by taking ownership |
| 403 | /// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`]. |
| 404 | /// |
| 405 | /// The spawned thread may outlive the caller (unless the caller thread |
| 406 | /// is the main thread; the whole process is terminated when the main |
| 407 | /// thread finishes). The join handle can be used to block on |
| 408 | /// termination of the spawned thread, including recovering its panics. |
| 409 | /// |
| 410 | /// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`], |
| 411 | /// except for the relaxed lifetime bounds, which render it unsafe. |
| 412 | /// For a more complete documentation see [`thread::spawn`][`spawn`]. |
| 413 | /// |
| 414 | /// # Errors |
| 415 | /// |
| 416 | /// Unlike the [`spawn`] free function, this method yields an |
| 417 | /// [`io::Result`] to capture any failure to create the thread at |
| 418 | /// the OS level. |
| 419 | /// |
| 420 | /// # Panics |
| 421 | /// |
| 422 | /// Panics if a thread name was set and it contained null bytes. |
| 423 | /// |
| 424 | /// # Safety |
| 425 | /// |
| 426 | /// The caller has to ensure that the spawned thread does not outlive any |
| 427 | /// references in the supplied thread closure and its return type. |
| 428 | /// This can be guaranteed in two ways: |
| 429 | /// |
| 430 | /// - ensure that [`join`][`JoinHandle::join`] is called before any referenced |
| 431 | /// data is dropped |
| 432 | /// - use only types with `'static` lifetime bounds, i.e., those with no or only |
| 433 | /// `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`] |
| 434 | /// and [`thread::spawn`][`spawn`] enforce this property statically) |
| 435 | /// |
| 436 | /// # Examples |
| 437 | /// |
| 438 | /// ``` |
| 439 | /// use std::thread; |
| 440 | /// |
| 441 | /// let builder = thread::Builder::new(); |
| 442 | /// |
| 443 | /// let x = 1; |
| 444 | /// let thread_x = &x; |
| 445 | /// |
| 446 | /// let handler = unsafe { |
| 447 | /// builder.spawn_unchecked(move || { |
/// println!("x = {}", *thread_x);
| 449 | /// }).unwrap() |
| 450 | /// }; |
| 451 | /// |
| 452 | /// // caller has to ensure `join()` is called, otherwise |
| 453 | /// // it is possible to access freed memory if `x` gets |
| 454 | /// // dropped before the thread closure is executed! |
| 455 | /// handler.join().unwrap(); |
| 456 | /// ``` |
| 457 | /// |
| 458 | /// [`io::Result`]: crate::io::Result |
| 459 | #[stable (feature = "thread_spawn_unchecked" , since = "1.82.0" )] |
| 460 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
| 461 | pub unsafe fn spawn_unchecked<F, T>(self, f: F) -> io::Result<JoinHandle<T>> |
| 462 | where |
| 463 | F: FnOnce() -> T, |
| 464 | F: Send, |
| 465 | T: Send, |
| 466 | { |
| 467 | Ok(JoinHandle(unsafe { self.spawn_unchecked_(f, None) }?)) |
| 468 | } |
| 469 | |
| 470 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
| 471 | unsafe fn spawn_unchecked_<'scope, F, T>( |
| 472 | self, |
| 473 | f: F, |
| 474 | scope_data: Option<Arc<scoped::ScopeData>>, |
| 475 | ) -> io::Result<JoinInner<'scope, T>> |
| 476 | where |
| 477 | F: FnOnce() -> T, |
| 478 | F: Send, |
| 479 | T: Send, |
| 480 | { |
| 481 | let Builder { name, stack_size, no_hooks } = self; |
| 482 | |
| 483 | let stack_size = stack_size.unwrap_or_else(|| { |
| 484 | static MIN: Atomic<usize> = AtomicUsize::new(0); |
| 485 | |
| 486 | match MIN.load(Ordering::Relaxed) { |
| 487 | 0 => {} |
| 488 | n => return n - 1, |
| 489 | } |
| 490 | |
let amt = env::var_os("RUST_MIN_STACK")
| 492 | .and_then(|s| s.to_str().and_then(|s| s.parse().ok())) |
| 493 | .unwrap_or(imp::DEFAULT_MIN_STACK_SIZE); |
| 494 | |
| 495 | // 0 is our sentinel value, so ensure that we'll never see 0 after |
| 496 | // initialization has run |
| 497 | MIN.store(amt + 1, Ordering::Relaxed); |
| 498 | amt |
| 499 | }); |
| 500 | |
| 501 | let id = ThreadId::new(); |
| 502 | let my_thread = Thread::new(id, name); |
| 503 | |
| 504 | let hooks = if no_hooks { |
| 505 | spawnhook::ChildSpawnHooks::default() |
| 506 | } else { |
| 507 | spawnhook::run_spawn_hooks(&my_thread) |
| 508 | }; |
| 509 | |
| 510 | let their_thread = my_thread.clone(); |
| 511 | |
| 512 | let my_packet: Arc<Packet<'scope, T>> = Arc::new(Packet { |
| 513 | scope: scope_data, |
| 514 | result: UnsafeCell::new(None), |
| 515 | _marker: PhantomData, |
| 516 | }); |
| 517 | let their_packet = my_packet.clone(); |
| 518 | |
// Pass `f` in `MaybeUninit` because that closure might actually *run longer than the lifetime of `F`*.
| 520 | // See <https://github.com/rust-lang/rust/issues/101983> for more details. |
| 521 | // To prevent leaks we use a wrapper that drops its contents. |
| 522 | #[repr (transparent)] |
| 523 | struct MaybeDangling<T>(mem::MaybeUninit<T>); |
| 524 | impl<T> MaybeDangling<T> { |
| 525 | fn new(x: T) -> Self { |
| 526 | MaybeDangling(mem::MaybeUninit::new(x)) |
| 527 | } |
| 528 | fn into_inner(self) -> T { |
| 529 | // Make sure we don't drop. |
| 530 | let this = ManuallyDrop::new(self); |
| 531 | // SAFETY: we are always initialized. |
| 532 | unsafe { this.0.assume_init_read() } |
| 533 | } |
| 534 | } |
| 535 | impl<T> Drop for MaybeDangling<T> { |
| 536 | fn drop(&mut self) { |
| 537 | // SAFETY: we are always initialized. |
| 538 | unsafe { self.0.assume_init_drop() }; |
| 539 | } |
| 540 | } |
| 541 | |
| 542 | let f = MaybeDangling::new(f); |
| 543 | let main = move || { |
| 544 | if let Err(_thread) = set_current(their_thread.clone()) { |
| 545 | // Both the current thread handle and the ID should not be |
| 546 | // initialized yet. Since only the C runtime and some of our |
| 547 | // platform code run before this, this point shouldn't be |
| 548 | // reachable. Use an abort to save binary size (see #123356). |
rtabort!("something here is badly broken!");
| 550 | } |
| 551 | |
| 552 | if let Some(name) = their_thread.cname() { |
| 553 | imp::Thread::set_name(name); |
| 554 | } |
| 555 | |
| 556 | let f = f.into_inner(); |
| 557 | let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| { |
| 558 | crate::sys::backtrace::__rust_begin_short_backtrace(|| hooks.run()); |
| 559 | crate::sys::backtrace::__rust_begin_short_backtrace(f) |
| 560 | })); |
// SAFETY: `their_packet` has been built just above and moved by the
| 562 | // closure (it is an Arc<...>) and `my_packet` will be stored in the |
| 563 | // same `JoinInner` as this closure meaning the mutation will be |
| 564 | // safe (not modify it and affect a value far away). |
| 565 | unsafe { *their_packet.result.get() = Some(try_result) }; |
| 566 | // Here `their_packet` gets dropped, and if this is the last `Arc` for that packet that |
| 567 | // will call `decrement_num_running_threads` and therefore signal that this thread is |
| 568 | // done. |
| 569 | drop(their_packet); |
| 570 | // Here, the lifetime `'scope` can end. `main` keeps running for a bit |
| 571 | // after that before returning itself. |
| 572 | }; |
| 573 | |
| 574 | if let Some(scope_data) = &my_packet.scope { |
| 575 | scope_data.increment_num_running_threads(); |
| 576 | } |
| 577 | |
| 578 | let main = Box::new(main); |
| 579 | // SAFETY: dynamic size and alignment of the Box remain the same. See below for why the |
| 580 | // lifetime change is justified. |
| 581 | let main = |
| 582 | unsafe { Box::from_raw(Box::into_raw(main) as *mut (dyn FnOnce() + Send + 'static)) }; |
| 583 | |
| 584 | Ok(JoinInner { |
| 585 | // SAFETY: |
| 586 | // |
| 587 | // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed |
| 588 | // through FFI or otherwise used with low-level threading primitives that have no |
| 589 | // notion of or way to enforce lifetimes. |
| 590 | // |
| 591 | // As mentioned in the `Safety` section of this function's documentation, the caller of |
| 592 | // this function needs to guarantee that the passed-in lifetime is sufficiently long |
| 593 | // for the lifetime of the thread. |
| 594 | // |
| 595 | // Similarly, the `sys` implementation must guarantee that no references to the closure |
| 596 | // exist after the thread has terminated, which is signaled by `Thread::join` |
| 597 | // returning. |
| 598 | native: unsafe { imp::Thread::new(stack_size, main)? }, |
| 599 | thread: my_thread, |
| 600 | packet: my_packet, |
| 601 | }) |
| 602 | } |
| 603 | } |
| 604 | |
| 605 | //////////////////////////////////////////////////////////////////////////////// |
| 606 | // Free functions |
| 607 | //////////////////////////////////////////////////////////////////////////////// |
| 608 | |
| 609 | /// Spawns a new thread, returning a [`JoinHandle`] for it. |
| 610 | /// |
| 611 | /// The join handle provides a [`join`] method that can be used to join the spawned |
| 612 | /// thread. If the spawned thread panics, [`join`] will return an [`Err`] containing |
| 613 | /// the argument given to [`panic!`]. |
| 614 | /// |
| 615 | /// If the join handle is dropped, the spawned thread will implicitly be *detached*. |
| 616 | /// In this case, the spawned thread may no longer be joined. |
| 617 | /// (It is the responsibility of the program to either eventually join threads it |
| 618 | /// creates or detach them; otherwise, a resource leak will result.) |
| 619 | /// |
/// This call will create a thread using default parameters of [`Builder`]; if you
/// want to specify the stack size or the name of the thread, use the [`Builder`]
/// API instead.
| 623 | /// |
/// As you can see in the signature of `spawn`, there are two constraints on
/// both the closure given to `spawn` and its return value; let's explain them:
| 626 | /// |
| 627 | /// - The `'static` constraint means that the closure and its return value |
| 628 | /// must have a lifetime of the whole program execution. The reason for this |
| 629 | /// is that threads can outlive the lifetime they have been created in. |
| 630 | /// |
/// Indeed, if the thread, and by extension its return value, can outlive their
/// caller, we need to make sure that they remain valid afterwards. Since we
/// *can't* know when the thread will return, they need to stay valid for as
/// long as possible, that is, until the end of the program, hence the
/// `'static` lifetime.
| 636 | /// - The [`Send`] constraint is because the closure will need to be passed |
| 637 | /// *by value* from the thread where it is spawned to the new thread. Its |
| 638 | /// return value will need to be passed from the new thread to the thread |
| 639 | /// where it is `join`ed. |
| 640 | /// As a reminder, the [`Send`] marker trait expresses that it is safe to be |
| 641 | /// passed from thread to thread. [`Sync`] expresses that it is safe to have a |
| 642 | /// reference be passed from thread to thread. |
| 643 | /// |
| 644 | /// # Panics |
| 645 | /// |
| 646 | /// Panics if the OS fails to create a thread; use [`Builder::spawn`] |
| 647 | /// to recover from such errors. |
| 648 | /// |
| 649 | /// # Examples |
| 650 | /// |
| 651 | /// Creating a thread. |
| 652 | /// |
| 653 | /// ``` |
| 654 | /// use std::thread; |
| 655 | /// |
| 656 | /// let handler = thread::spawn(|| { |
| 657 | /// // thread code |
| 658 | /// }); |
| 659 | /// |
| 660 | /// handler.join().unwrap(); |
| 661 | /// ``` |
| 662 | /// |
| 663 | /// As mentioned in the module documentation, threads are usually made to |
| 664 | /// communicate using [`channels`], here is how it usually looks. |
| 665 | /// |
| 666 | /// This example also shows how to use `move`, in order to give ownership |
| 667 | /// of values to a thread. |
| 668 | /// |
| 669 | /// ``` |
| 670 | /// use std::thread; |
| 671 | /// use std::sync::mpsc::channel; |
| 672 | /// |
| 673 | /// let (tx, rx) = channel(); |
| 674 | /// |
| 675 | /// let sender = thread::spawn(move || { |
/// tx.send("Hello, thread".to_owned())
/// .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
/// let value = rx.recv().expect("Unable to receive from channel");
/// println!("{value}");
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
| 687 | /// ``` |
| 688 | /// |
| 689 | /// A thread can also return a value through its [`JoinHandle`], you can use |
| 690 | /// this to make asynchronous computations (futures might be more appropriate |
| 691 | /// though). |
| 692 | /// |
| 693 | /// ``` |
| 694 | /// use std::thread; |
| 695 | /// |
| 696 | /// let computation = thread::spawn(|| { |
| 697 | /// // Some expensive computation. |
| 698 | /// 42 |
| 699 | /// }); |
| 700 | /// |
| 701 | /// let result = computation.join().unwrap(); |
| 702 | /// println!("{result}" ); |
| 703 | /// ``` |
| 704 | /// |
| 705 | /// # Notes |
| 706 | /// |
| 707 | /// This function has the same minimal guarantee regarding "foreign" unwinding operations (e.g. |
| 708 | /// an exception thrown from C++ code, or a `panic!` in Rust code compiled or linked with a |
| 709 | /// different runtime) as [`catch_unwind`]; namely, if the thread created with `thread::spawn` |
| 710 | /// unwinds all the way to the root with such an exception, one of two behaviors are possible, |
| 711 | /// and it is unspecified which will occur: |
| 712 | /// |
| 713 | /// * The process aborts. |
| 714 | /// * The process does not abort, and [`join`] will return a `Result::Err` |
| 715 | /// containing an opaque type. |
| 716 | /// |
| 717 | /// [`catch_unwind`]: ../../std/panic/fn.catch_unwind.html |
| 718 | /// [`channels`]: crate::sync::mpsc |
| 719 | /// [`join`]: JoinHandle::join |
| 720 | /// [`Err`]: crate::result::Result::Err |
| 721 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 722 | #[cfg_attr (miri, track_caller)] // even without panics, this helps for Miri backtraces |
| 723 | pub fn spawn<F, T>(f: F) -> JoinHandle<T> |
| 724 | where |
| 725 | F: FnOnce() -> T, |
| 726 | F: Send + 'static, |
| 727 | T: Send + 'static, |
| 728 | { |
Builder::new().spawn(f).expect("failed to spawn thread")
| 730 | } |
| 731 | |
| 732 | /// Cooperatively gives up a timeslice to the OS scheduler. |
| 733 | /// |
| 734 | /// This calls the underlying OS scheduler's yield primitive, signaling |
| 735 | /// that the calling thread is willing to give up its remaining timeslice |
| 736 | /// so that the OS may schedule other threads on the CPU. |
| 737 | /// |
| 738 | /// A drawback of yielding in a loop is that if the OS does not have any |
| 739 | /// other ready threads to run on the current CPU, the thread will effectively |
| 740 | /// busy-wait, which wastes CPU time and energy. |
| 741 | /// |
| 742 | /// Therefore, when waiting for events of interest, a programmer's first |
| 743 | /// choice should be to use synchronization devices such as [`channel`]s, |
| 744 | /// [`Condvar`]s, [`Mutex`]es or [`join`] since these primitives are |
| 745 | /// implemented in a blocking manner, giving up the CPU until the event |
/// of interest has occurred, which avoids repeated yielding.
| 747 | /// |
| 748 | /// `yield_now` should thus be used only rarely, mostly in situations where |
| 749 | /// repeated polling is required because there is no other suitable way to |
| 750 | /// learn when an event of interest has occurred. |
| 751 | /// |
| 752 | /// # Examples |
| 753 | /// |
| 754 | /// ``` |
| 755 | /// use std::thread; |
| 756 | /// |
| 757 | /// thread::yield_now(); |
| 758 | /// ``` |
| 759 | /// |
| 760 | /// [`channel`]: crate::sync::mpsc |
| 761 | /// [`join`]: JoinHandle::join |
| 762 | /// [`Condvar`]: crate::sync::Condvar |
| 763 | /// [`Mutex`]: crate::sync::Mutex |
| 764 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 765 | pub fn yield_now() { |
| 766 | imp::Thread::yield_now() |
| 767 | } |
| 768 | |
/// Determines whether the current thread is unwinding because of a panic.
| 770 | /// |
| 771 | /// A common use of this feature is to poison shared resources when writing |
| 772 | /// unsafe code, by checking `panicking` when the `drop` is called. |
| 773 | /// |
| 774 | /// This is usually not needed when writing safe code, as [`Mutex`es][Mutex] |
| 775 | /// already poison themselves when a thread panics while holding the lock. |
| 776 | /// |
| 777 | /// This can also be used in multithreaded applications, in order to send a |
| 778 | /// message to other threads warning that a thread has panicked (e.g., for |
| 779 | /// monitoring purposes). |
| 780 | /// |
| 781 | /// # Examples |
| 782 | /// |
| 783 | /// ```should_panic |
| 784 | /// use std::thread; |
| 785 | /// |
| 786 | /// struct SomeStruct; |
| 787 | /// |
| 788 | /// impl Drop for SomeStruct { |
| 789 | /// fn drop(&mut self) { |
| 790 | /// if thread::panicking() { |
/// println!("dropped while unwinding");
/// } else {
/// println!("dropped while not unwinding");
| 794 | /// } |
| 795 | /// } |
| 796 | /// } |
| 797 | /// |
| 798 | /// { |
/// print!("a: ");
| 800 | /// let a = SomeStruct; |
| 801 | /// } |
| 802 | /// |
| 803 | /// { |
/// print!("b: ");
| 805 | /// let b = SomeStruct; |
| 806 | /// panic!() |
| 807 | /// } |
| 808 | /// ``` |
| 809 | /// |
| 810 | /// [Mutex]: crate::sync::Mutex |
| 811 | #[inline ] |
| 812 | #[must_use ] |
| 813 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 814 | pub fn panicking() -> bool { |
| 815 | panicking::panicking() |
| 816 | } |
| 817 | |
/// Use [`sleep`] instead.
| 819 | /// |
| 820 | /// Puts the current thread to sleep for at least the specified amount of time. |
| 821 | /// |
| 822 | /// The thread may sleep longer than the duration specified due to scheduling |
| 823 | /// specifics or platform-dependent functionality. It will never sleep less. |
| 824 | /// |
| 825 | /// This function is blocking, and should not be used in `async` functions. |
| 826 | /// |
| 827 | /// # Platform-specific behavior |
| 828 | /// |
| 829 | /// On Unix platforms, the underlying syscall may be interrupted by a |
| 830 | /// spurious wakeup or signal handler. To ensure the sleep occurs for at least |
| 831 | /// the specified duration, this function may invoke that system call multiple |
| 832 | /// times. |
| 833 | /// |
| 834 | /// # Examples |
| 835 | /// |
| 836 | /// ```no_run |
| 837 | /// use std::thread; |
| 838 | /// |
| 839 | /// // Let's sleep for 2 seconds: |
| 840 | /// thread::sleep_ms(2000); |
| 841 | /// ``` |
| 842 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 843 | #[deprecated (since = "1.6.0" , note = "replaced by `std::thread::sleep`" )] |
| 844 | pub fn sleep_ms(ms: u32) { |
sleep(Duration::from_millis(ms as u64))
| 846 | } |
| 847 | |
| 848 | /// Puts the current thread to sleep for at least the specified amount of time. |
| 849 | /// |
| 850 | /// The thread may sleep longer than the duration specified due to scheduling |
| 851 | /// specifics or platform-dependent functionality. It will never sleep less. |
| 852 | /// |
| 853 | /// This function is blocking, and should not be used in `async` functions. |
| 854 | /// |
| 855 | /// # Platform-specific behavior |
| 856 | /// |
| 857 | /// On Unix platforms, the underlying syscall may be interrupted by a |
| 858 | /// spurious wakeup or signal handler. To ensure the sleep occurs for at least |
| 859 | /// the specified duration, this function may invoke that system call multiple |
| 860 | /// times. |
| 861 | /// Platforms which do not support nanosecond precision for sleeping will |
| 862 | /// have `dur` rounded up to the nearest granularity of time they can sleep for. |
| 863 | /// |
| 864 | /// Currently, specifying a zero duration on Unix platforms returns immediately |
| 865 | /// without invoking the underlying [`nanosleep`] syscall, whereas on Windows |
| 866 | /// platforms the underlying [`Sleep`] syscall is always invoked. |
| 867 | /// If the intention is to yield the current time-slice you may want to use |
| 868 | /// [`yield_now`] instead. |
| 869 | /// |
| 870 | /// [`nanosleep`]: https://linux.die.net/man/2/nanosleep |
| 871 | /// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep |
| 872 | /// |
| 873 | /// # Examples |
| 874 | /// |
| 875 | /// ```no_run |
| 876 | /// use std::{thread, time}; |
| 877 | /// |
| 878 | /// let ten_millis = time::Duration::from_millis(10); |
| 879 | /// let now = time::Instant::now(); |
| 880 | /// |
| 881 | /// thread::sleep(ten_millis); |
| 882 | /// |
| 883 | /// assert!(now.elapsed() >= ten_millis); |
| 884 | /// ``` |
| 885 | #[stable (feature = "thread_sleep" , since = "1.4.0" )] |
| 886 | pub fn sleep(dur: Duration) { |
| 887 | imp::Thread::sleep(dur) |
| 888 | } |
| 889 | |
| 890 | /// Puts the current thread to sleep until the specified deadline has passed. |
| 891 | /// |
/// The thread may still be asleep after the specified deadline due to
/// scheduling specifics or platform-dependent functionality. It will never
/// wake up before the deadline.
| 895 | /// |
| 896 | /// This function is blocking, and should not be used in `async` functions. |
| 897 | /// |
| 898 | /// # Platform-specific behavior |
| 899 | /// |
| 900 | /// This function uses [`sleep`] internally, see its platform-specific behavior. |
| 901 | /// |
| 902 | /// |
| 903 | /// # Examples |
| 904 | /// |
| 905 | /// A simple game loop that limits the game to 60 frames per second. |
| 906 | /// |
| 907 | /// ```no_run |
| 908 | /// #![feature(thread_sleep_until)] |
| 909 | /// # use std::time::{Duration, Instant}; |
| 910 | /// # use std::thread; |
| 911 | /// # |
| 912 | /// # fn update() {} |
| 913 | /// # fn render() {} |
| 914 | /// # |
| 915 | /// let max_fps = 60.0; |
| 916 | /// let frame_time = Duration::from_secs_f32(1.0/max_fps); |
| 917 | /// let mut next_frame = Instant::now(); |
| 918 | /// loop { |
| 919 | /// thread::sleep_until(next_frame); |
| 920 | /// next_frame += frame_time; |
| 921 | /// update(); |
| 922 | /// render(); |
| 923 | /// } |
| 924 | /// ``` |
| 925 | /// |
/// A slow API that we must not call too quickly, and which takes a few
/// tries before succeeding. By using `sleep_until`, the time the
/// API call takes does not influence when we retry or when we give up.
| 929 | /// |
| 930 | /// ```no_run |
| 931 | /// #![feature(thread_sleep_until)] |
| 932 | /// # use std::time::{Duration, Instant}; |
| 933 | /// # use std::thread; |
| 934 | /// # |
| 935 | /// # enum Status { |
| 936 | /// # Ready(usize), |
| 937 | /// # Waiting, |
| 938 | /// # } |
| 939 | /// # fn slow_web_api_call() -> Status { Status::Ready(42) } |
| 940 | /// # |
| 941 | /// # const MAX_DURATION: Duration = Duration::from_secs(10); |
| 942 | /// # |
| 943 | /// # fn try_api_call() -> Result<usize, ()> { |
| 944 | /// let deadline = Instant::now() + MAX_DURATION; |
| 945 | /// let delay = Duration::from_millis(250); |
| 946 | /// let mut next_attempt = Instant::now(); |
| 947 | /// loop { |
| 948 | /// if Instant::now() > deadline { |
| 949 | /// break Err(()); |
| 950 | /// } |
| 951 | /// if let Status::Ready(data) = slow_web_api_call() { |
| 952 | /// break Ok(data); |
| 953 | /// } |
| 954 | /// |
| 955 | /// next_attempt = deadline.min(next_attempt + delay); |
| 956 | /// thread::sleep_until(next_attempt); |
| 957 | /// } |
| 958 | /// # } |
| 959 | /// # let _data = try_api_call(); |
| 960 | /// ``` |
| 961 | #[unstable (feature = "thread_sleep_until" , issue = "113752" )] |
| 962 | pub fn sleep_until(deadline: Instant) { |
let now = Instant::now();

if let Some(delay) = deadline.checked_duration_since(now) {
sleep(delay);
}
| 968 | } |
| 969 | |
| 970 | /// Used to ensure that `park` and `park_timeout` do not unwind, as that can |
| 971 | /// cause undefined behavior if not handled correctly (see #102398 for context). |
| 972 | struct PanicGuard; |
| 973 | |
| 974 | impl Drop for PanicGuard { |
| 975 | fn drop(&mut self) { |
rtabort!("an irrecoverable error occurred while synchronizing threads")
| 977 | } |
| 978 | } |
| 979 | |
| 980 | /// Blocks unless or until the current thread's token is made available. |
| 981 | /// |
| 982 | /// A call to `park` does not guarantee that the thread will remain parked |
| 983 | /// forever, and callers should be prepared for this possibility. However, |
| 984 | /// it is guaranteed that this function will not panic (it may abort the |
| 985 | /// process if the implementation encounters some rare errors). |
| 986 | /// |
| 987 | /// # `park` and `unpark` |
| 988 | /// |
| 989 | /// Every thread is equipped with some basic low-level blocking support, via the |
| 990 | /// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`] |
| 991 | /// method. [`park`] blocks the current thread, which can then be resumed from |
| 992 | /// another thread by calling the [`unpark`] method on the blocked thread's |
| 993 | /// handle. |
| 994 | /// |
| 995 | /// Conceptually, each [`Thread`] handle has an associated token, which is |
| 996 | /// initially not present: |
| 997 | /// |
| 998 | /// * The [`thread::park`][`park`] function blocks the current thread unless or |
| 999 | /// until the token is available for its thread handle, at which point it |
| 1000 | /// atomically consumes the token. It may also return *spuriously*, without |
| 1001 | /// consuming the token. [`thread::park_timeout`] does the same, but allows |
| 1002 | /// specifying a maximum time to block the thread for. |
| 1003 | /// |
| 1004 | /// * The [`unpark`] method on a [`Thread`] atomically makes the token available |
| 1005 | /// if it wasn't already. Because the token is initially absent, [`unpark`] |
| 1006 | /// followed by [`park`] will result in the second call returning immediately. |
| 1007 | /// |
| 1008 | /// The API is typically used by acquiring a handle to the current thread, |
| 1009 | /// placing that handle in a shared data structure so that other threads can |
| 1010 | /// find it, and then `park`ing in a loop. When some desired condition is met, another |
| 1011 | /// thread calls [`unpark`] on the handle. |
| 1012 | /// |
| 1013 | /// The motivation for this design is twofold: |
| 1014 | /// |
| 1015 | /// * It avoids the need to allocate mutexes and condvars when building new |
| 1016 | /// synchronization primitives; the threads already provide basic |
| 1017 | /// blocking/signaling. |
| 1018 | /// |
| 1019 | /// * It can be implemented very efficiently on many platforms. |
| 1020 | /// |
| 1021 | /// # Memory Ordering |
| 1022 | /// |
| 1023 | /// Calls to `unpark` _synchronize-with_ calls to `park`, meaning that memory |
| 1024 | /// operations performed before a call to `unpark` are made visible to the thread that |
| 1025 | /// consumes the token and returns from `park`. Note that all `park` and `unpark` |
| 1026 | /// operations for a given thread form a total order and _all_ prior `unpark` operations |
| 1027 | /// synchronize-with `park`. |
| 1028 | /// |
| 1029 | /// In atomic ordering terms, `unpark` performs a `Release` operation and `park` |
| 1030 | /// performs the corresponding `Acquire` operation. Calls to `unpark` for the same |
| 1031 | /// thread form a [release sequence]. |
| 1032 | /// |
| 1033 | /// Note that being unblocked does not imply a call was made to `unpark`, because |
| 1034 | /// wakeups can also be spurious. For example, a valid, but inefficient, |
| 1035 | /// implementation could have `park` and `unpark` return immediately without doing anything, |
| 1036 | /// making *all* wakeups spurious. |
| 1037 | /// |
| 1038 | /// # Examples |
| 1039 | /// |
| 1040 | /// ``` |
| 1041 | /// use std::thread; |
| 1042 | /// use std::sync::{Arc, atomic::{Ordering, AtomicBool}}; |
| 1043 | /// use std::time::Duration; |
| 1044 | /// |
| 1045 | /// let flag = Arc::new(AtomicBool::new(false)); |
| 1046 | /// let flag2 = Arc::clone(&flag); |
| 1047 | /// |
| 1048 | /// let parked_thread = thread::spawn(move || { |
| 1049 | /// // We want to wait until the flag is set. We *could* just spin, but using |
| 1050 | /// // park/unpark is more efficient. |
| 1051 | /// while !flag2.load(Ordering::Relaxed) { |
/// println!("Parking thread");
/// thread::park();
/// // We *could* get here spuriously, i.e., way before the 10ms below are over!
/// // But that is no problem, we are in a loop until the flag is set anyway.
/// println!("Thread unparked");
/// }
/// println!("Flag received");
| 1059 | /// }); |
| 1060 | /// |
| 1061 | /// // Let some time pass for the thread to be spawned. |
| 1062 | /// thread::sleep(Duration::from_millis(10)); |
| 1063 | /// |
| 1064 | /// // Set the flag, and let the thread wake up. |
| 1065 | /// // There is no race condition here, if `unpark` |
| 1066 | /// // happens first, `park` will return immediately. |
| 1067 | /// // Hence there is no risk of a deadlock. |
| 1068 | /// flag.store(true, Ordering::Relaxed); |
/// println!("Unpark the thread");
| 1070 | /// parked_thread.thread().unpark(); |
| 1071 | /// |
| 1072 | /// parked_thread.join().unwrap(); |
| 1073 | /// ``` |
| 1074 | /// |
| 1075 | /// [`unpark`]: Thread::unpark |
| 1076 | /// [`thread::park_timeout`]: park_timeout |
| 1077 | /// [release sequence]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release_sequence |
| 1078 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 1079 | pub fn park() { |
let guard = PanicGuard;
| 1081 | // SAFETY: park_timeout is called on the parker owned by this thread. |
| 1082 | unsafe { |
| 1083 | current().park(); |
| 1084 | } |
| 1085 | // No panic occurred, do not abort. |
| 1086 | forget(guard); |
| 1087 | } |
| 1088 | |
/// Use [`park_timeout`] instead.
| 1090 | /// |
| 1091 | /// Blocks unless or until the current thread's token is made available or |
| 1092 | /// the specified duration has been reached (may wake spuriously). |
| 1093 | /// |
| 1094 | /// The semantics of this function are equivalent to [`park`] except |
| 1095 | /// that the thread will be blocked for roughly no longer than `dur`. This |
| 1096 | /// method should not be used for precise timing due to anomalies such as |
| 1097 | /// preemption or platform differences that might not cause the maximum |
| 1098 | /// amount of time waited to be precisely `ms` long. |
| 1099 | /// |
| 1100 | /// See the [park documentation][`park`] for more detail. |
| 1101 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 1102 | #[deprecated (since = "1.6.0" , note = "replaced by `std::thread::park_timeout`" )] |
| 1103 | pub fn park_timeout_ms(ms: u32) { |
park_timeout(Duration::from_millis(ms as u64))
| 1105 | } |
| 1106 | |
| 1107 | /// Blocks unless or until the current thread's token is made available or |
| 1108 | /// the specified duration has been reached (may wake spuriously). |
| 1109 | /// |
| 1110 | /// The semantics of this function are equivalent to [`park`][park] except |
| 1111 | /// that the thread will be blocked for roughly no longer than `dur`. This |
| 1112 | /// method should not be used for precise timing due to anomalies such as |
| 1113 | /// preemption or platform differences that might not cause the maximum |
| 1114 | /// amount of time waited to be precisely `dur` long. |
| 1115 | /// |
| 1116 | /// See the [park documentation][park] for more details. |
| 1117 | /// |
| 1118 | /// # Platform-specific behavior |
| 1119 | /// |
| 1120 | /// Platforms which do not support nanosecond precision for sleeping will have |
| 1121 | /// `dur` rounded up to the nearest granularity of time they can sleep for. |
| 1122 | /// |
| 1123 | /// # Examples |
| 1124 | /// |
| 1125 | /// Waiting for the complete expiration of the timeout: |
| 1126 | /// |
| 1127 | /// ```rust,no_run |
| 1128 | /// use std::thread::park_timeout; |
| 1129 | /// use std::time::{Instant, Duration}; |
| 1130 | /// |
| 1131 | /// let timeout = Duration::from_secs(2); |
| 1132 | /// let beginning_park = Instant::now(); |
| 1133 | /// |
| 1134 | /// let mut timeout_remaining = timeout; |
| 1135 | /// loop { |
| 1136 | /// park_timeout(timeout_remaining); |
| 1137 | /// let elapsed = beginning_park.elapsed(); |
| 1138 | /// if elapsed >= timeout { |
| 1139 | /// break; |
| 1140 | /// } |
/// println!("restarting park_timeout after {elapsed:?}");
| 1142 | /// timeout_remaining = timeout - elapsed; |
| 1143 | /// } |
| 1144 | /// ``` |
| 1145 | #[stable (feature = "park_timeout" , since = "1.4.0" )] |
| 1146 | pub fn park_timeout(dur: Duration) { |
let guard = PanicGuard;
| 1148 | // SAFETY: park_timeout is called on a handle owned by this thread. |
| 1149 | unsafe { |
| 1150 | current().park_timeout(dur); |
| 1151 | } |
| 1152 | // No panic occurred, do not abort. |
| 1153 | forget(guard); |
| 1154 | } |
| 1155 | |
| 1156 | //////////////////////////////////////////////////////////////////////////////// |
| 1157 | // ThreadId |
| 1158 | //////////////////////////////////////////////////////////////////////////////// |
| 1159 | |
| 1160 | /// A unique identifier for a running thread. |
| 1161 | /// |
| 1162 | /// A `ThreadId` is an opaque object that uniquely identifies each thread |
| 1163 | /// created during the lifetime of a process. `ThreadId`s are guaranteed not to |
| 1164 | /// be reused, even when a thread terminates. `ThreadId`s are under the control |
| 1165 | /// of Rust's standard library and there may not be any relationship between |
| 1166 | /// `ThreadId` and the underlying platform's notion of a thread identifier -- |
| 1167 | /// the two concepts cannot, therefore, be used interchangeably. A `ThreadId` |
| 1168 | /// can be retrieved from the [`id`] method on a [`Thread`]. |
| 1169 | /// |
| 1170 | /// # Examples |
| 1171 | /// |
| 1172 | /// ``` |
| 1173 | /// use std::thread; |
| 1174 | /// |
| 1175 | /// let other_thread = thread::spawn(|| { |
| 1176 | /// thread::current().id() |
| 1177 | /// }); |
| 1178 | /// |
| 1179 | /// let other_thread_id = other_thread.join().unwrap(); |
| 1180 | /// assert!(thread::current().id() != other_thread_id); |
| 1181 | /// ``` |
| 1182 | /// |
| 1183 | /// [`id`]: Thread::id |
| 1184 | #[stable (feature = "thread_id" , since = "1.19.0" )] |
| 1185 | #[derive (Eq, PartialEq, Clone, Copy, Hash, Debug)] |
| 1186 | pub struct ThreadId(NonZero<u64>); |
| 1187 | |
| 1188 | impl ThreadId { |
| 1189 | // Generate a new unique thread ID. |
| 1190 | pub(crate) fn new() -> ThreadId { |
| 1191 | #[cold ] |
| 1192 | fn exhausted() -> ! { |
panic!("failed to generate unique thread ID: bitspace exhausted")
| 1194 | } |
| 1195 | |
| 1196 | cfg_if::cfg_if! { |
| 1197 | if #[cfg(target_has_atomic = "64" )] { |
| 1198 | use crate::sync::atomic::{Atomic, AtomicU64}; |
| 1199 | |
| 1200 | static COUNTER: Atomic<u64> = AtomicU64::new(0); |
| 1201 | |
| 1202 | let mut last = COUNTER.load(Ordering::Relaxed); |
| 1203 | loop { |
| 1204 | let Some(id) = last.checked_add(1) else { |
| 1205 | exhausted(); |
| 1206 | }; |
| 1207 | |
| 1208 | match COUNTER.compare_exchange_weak(last, id, Ordering::Relaxed, Ordering::Relaxed) { |
| 1209 | Ok(_) => return ThreadId(NonZero::new(id).unwrap()), |
| 1210 | Err(id) => last = id, |
| 1211 | } |
| 1212 | } |
| 1213 | } else { |
| 1214 | use crate::sync::{Mutex, PoisonError}; |
| 1215 | |
| 1216 | static COUNTER: Mutex<u64> = Mutex::new(0); |
| 1217 | |
| 1218 | let mut counter = COUNTER.lock().unwrap_or_else(PoisonError::into_inner); |
| 1219 | let Some(id) = counter.checked_add(1) else { |
| 1220 | // in case the panic handler ends up calling `ThreadId::new()`, |
| 1221 | // avoid reentrant lock acquire. |
| 1222 | drop(counter); |
| 1223 | exhausted(); |
| 1224 | }; |
| 1225 | |
| 1226 | *counter = id; |
| 1227 | drop(counter); |
| 1228 | ThreadId(NonZero::new(id).unwrap()) |
| 1229 | } |
| 1230 | } |
| 1231 | } |
| 1232 | |
| 1233 | #[cfg (any(not(target_thread_local), target_has_atomic = "64" ))] |
| 1234 | fn from_u64(v: u64) -> Option<ThreadId> { |
| 1235 | NonZero::new(v).map(ThreadId) |
| 1236 | } |
| 1237 | |
| 1238 | /// This returns a numeric identifier for the thread identified by this |
| 1239 | /// `ThreadId`. |
| 1240 | /// |
| 1241 | /// As noted in the documentation for the type itself, it is essentially an |
| 1242 | /// opaque ID, but is guaranteed to be unique for each thread. The returned |
| 1243 | /// value is entirely opaque -- only equality testing is stable. Note that |
| 1244 | /// it is not guaranteed which values new threads will return, and this may |
| 1245 | /// change across Rust versions. |
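///
/// # Examples
///
/// A sketch of reading the numeric value (requires the unstable
/// `thread_id_value` feature); only equality of the returned values is
/// meaningful:
///
/// ```
/// #![feature(thread_id_value)]
/// use std::thread;
///
/// let id = thread::current().id();
/// // The same `ThreadId` always maps to the same numeric value.
/// assert_eq!(id.as_u64(), thread::current().id().as_u64());
/// ```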
| 1246 | #[must_use ] |
| 1247 | #[unstable (feature = "thread_id_value" , issue = "67939" )] |
| 1248 | pub fn as_u64(&self) -> NonZero<u64> { |
| 1249 | self.0 |
| 1250 | } |
| 1251 | } |
| 1252 | |
| 1253 | //////////////////////////////////////////////////////////////////////////////// |
| 1254 | // Thread |
| 1255 | //////////////////////////////////////////////////////////////////////////////// |
| 1256 | |
| 1257 | // This module ensures private fields are kept private, which is necessary to enforce the safety requirements. |
| 1258 | mod thread_name_string { |
| 1259 | use crate::ffi::{CStr, CString}; |
| 1260 | use crate::str; |
| 1261 | |
| 1262 | /// Like a `String` it's guaranteed UTF-8 and like a `CString` it's null terminated. |
| 1263 | pub(crate) struct ThreadNameString { |
| 1264 | inner: CString, |
| 1265 | } |
| 1266 | |
| 1267 | impl From<String> for ThreadNameString { |
| 1268 | fn from(s: String) -> Self { |
| 1269 | Self { |
inner: CString::new(s).expect("thread name may not contain interior null bytes"),
| 1271 | } |
| 1272 | } |
| 1273 | } |
| 1274 | |
| 1275 | impl ThreadNameString { |
| 1276 | pub fn as_cstr(&self) -> &CStr { |
| 1277 | &self.inner |
| 1278 | } |
| 1279 | |
| 1280 | pub fn as_str(&self) -> &str { |
| 1281 | // SAFETY: `ThreadNameString` is guaranteed to be UTF-8. |
| 1282 | unsafe { str::from_utf8_unchecked(self.inner.to_bytes()) } |
| 1283 | } |
| 1284 | } |
| 1285 | } |
| 1286 | |
| 1287 | use thread_name_string::ThreadNameString; |
| 1288 | |
| 1289 | /// Store the ID of the main thread. |
| 1290 | /// |
| 1291 | /// The thread handle for the main thread is created lazily, and this might even |
| 1292 | /// happen pre-main. Since not every platform has a way to identify the main |
| 1293 | /// thread when that happens – macOS's `pthread_main_np` function being a notable |
| 1294 | /// exception – we cannot assign it the right name right then. Instead, in our |
| 1295 | /// runtime startup code, we remember the thread ID of the main thread (through |
/// this module's `set` function) and use it to identify the main thread from then
| 1297 | /// on. This works reliably and has the additional advantage that we can report |
| 1298 | /// the right thread name on main even after the thread handle has been destroyed. |
| 1299 | /// Note however that this also means that the name reported in pre-main functions |
| 1300 | /// will be incorrect, but that's just something we have to live with. |
| 1301 | pub(crate) mod main_thread { |
| 1302 | cfg_if::cfg_if! { |
if #[cfg(target_has_atomic = "64")] {
| 1304 | use super::ThreadId; |
| 1305 | use crate::sync::atomic::{Atomic, AtomicU64}; |
| 1306 | use crate::sync::atomic::Ordering::Relaxed; |
| 1307 | |
| 1308 | static MAIN: Atomic<u64> = AtomicU64::new(0); |
| 1309 | |
| 1310 | pub(super) fn get() -> Option<ThreadId> { |
| 1311 | ThreadId::from_u64(MAIN.load(Relaxed)) |
| 1312 | } |
| 1313 | |
| 1314 | /// # Safety |
| 1315 | /// May only be called once. |
| 1316 | pub(crate) unsafe fn set(id: ThreadId) { |
| 1317 | MAIN.store(id.as_u64().get(), Relaxed) |
| 1318 | } |
| 1319 | } else { |
| 1320 | use super::ThreadId; |
| 1321 | use crate::mem::MaybeUninit; |
| 1322 | use crate::sync::atomic::{Atomic, AtomicBool}; |
| 1323 | use crate::sync::atomic::Ordering::{Acquire, Release}; |
| 1324 | |
| 1325 | static INIT: Atomic<bool> = AtomicBool::new(false); |
| 1326 | static mut MAIN: MaybeUninit<ThreadId> = MaybeUninit::uninit(); |
| 1327 | |
| 1328 | pub(super) fn get() -> Option<ThreadId> { |
| 1329 | if INIT.load(Acquire) { |
| 1330 | Some(unsafe { MAIN.assume_init() }) |
| 1331 | } else { |
| 1332 | None |
| 1333 | } |
| 1334 | } |
| 1335 | |
| 1336 | /// # Safety |
| 1337 | /// May only be called once. |
| 1338 | pub(crate) unsafe fn set(id: ThreadId) { |
| 1339 | unsafe { MAIN = MaybeUninit::new(id) }; |
| 1340 | INIT.store(true, Release); |
| 1341 | } |
| 1342 | } |
| 1343 | } |
| 1344 | } |
| 1345 | |
| 1346 | /// Run a function with the current thread's name. |
| 1347 | /// |
| 1348 | /// Modulo thread local accesses, this function is safe to call from signal |
| 1349 | /// handlers and in similar circumstances where allocations are not possible. |
| 1350 | pub(crate) fn with_current_name<F, R>(f: F) -> R |
| 1351 | where |
| 1352 | F: FnOnce(Option<&str>) -> R, |
| 1353 | { |
| 1354 | try_with_current(|thread| { |
| 1355 | if let Some(thread) = thread { |
| 1356 | // If there is a current thread handle, try to use the name stored |
| 1357 | // there. |
| 1358 | if let Some(name) = &thread.inner.name { |
| 1359 | return f(Some(name.as_str())); |
| 1360 | } else if Some(thread.inner.id) == main_thread::get() { |
| 1361 | // The main thread doesn't store its name in the handle, we must |
| 1362 | // identify it through its ID. Since we already have the `Thread`, |
| 1363 | // we can retrieve the ID from it instead of going through another |
| 1364 | // thread local. |
return f(Some("main"));
| 1366 | } |
| 1367 | } else if let Some(main) = main_thread::get() |
| 1368 | && let Some(id) = current::id::get() |
| 1369 | && id == main |
| 1370 | { |
| 1371 | // The main thread doesn't always have a thread handle, we must |
| 1372 | // identify it through its ID instead. The checks are ordered so |
| 1373 | // that the current ID is only loaded if it is actually needed, |
| 1374 | // since loading it from TLS might need multiple expensive accesses. |
return f(Some("main"));
| 1376 | } |
| 1377 | |
| 1378 | f(None) |
| 1379 | }) |
| 1380 | } |
| 1381 | |
| 1382 | /// The internal representation of a `Thread` handle |
| 1383 | struct Inner { |
| 1384 | name: Option<ThreadNameString>, |
| 1385 | id: ThreadId, |
| 1386 | parker: Parker, |
| 1387 | } |
| 1388 | |
| 1389 | impl Inner { |
| 1390 | fn parker(self: Pin<&Self>) -> Pin<&Parker> { |
| 1391 | unsafe { Pin::map_unchecked(self, |inner: &Inner| &inner.parker) } |
| 1392 | } |
| 1393 | } |
| 1394 | |
#[derive(Clone)]
#[stable(feature = "rust1", since = "1.0.0")]
| 1397 | /// A handle to a thread. |
| 1398 | /// |
| 1399 | /// Threads are represented via the `Thread` type, which you can get in one of |
| 1400 | /// two ways: |
| 1401 | /// |
| 1402 | /// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`] |
| 1403 | /// function, and calling [`thread`][`JoinHandle::thread`] on the |
| 1404 | /// [`JoinHandle`]. |
| 1405 | /// * By requesting the current thread, using the [`thread::current`] function. |
| 1406 | /// |
| 1407 | /// The [`thread::current`] function is available even for threads not spawned |
| 1408 | /// by the APIs of this module. |
| 1409 | /// |
/// There is usually no need to create a `Thread` struct yourself; one
/// should instead use a function like `spawn` to create new threads. See the
/// docs of [`Builder`] and [`spawn`] for more details.
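///
/// For example, both ways of obtaining a handle (a minimal sketch):
///
/// ```
/// use std::thread;
///
/// // Through the `JoinHandle` of a spawned thread.
/// let join_handle = thread::spawn(|| {});
/// let spawned_id = join_handle.thread().id();
///
/// // Through the current thread.
/// let current_id = thread::current().id();
///
/// assert_ne!(spawned_id, current_id);
/// # join_handle.join().unwrap();
/// ```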
| 1413 | /// |
| 1414 | /// [`thread::current`]: current::current |
| 1415 | pub struct Thread { |
| 1416 | inner: Pin<Arc<Inner>>, |
| 1417 | } |
| 1418 | |
| 1419 | impl Thread { |
| 1420 | pub(crate) fn new(id: ThreadId, name: Option<String>) -> Thread { |
| 1421 | let name = name.map(ThreadNameString::from); |
| 1422 | |
| 1423 | // We have to use `unsafe` here to construct the `Parker` in-place, |
| 1424 | // which is required for the UNIX implementation. |
| 1425 | // |
| 1426 | // SAFETY: We pin the Arc immediately after creation, so its address never |
| 1427 | // changes. |
| 1428 | let inner = unsafe { |
| 1429 | let mut arc = Arc::<Inner>::new_uninit(); |
| 1430 | let ptr = Arc::get_mut_unchecked(&mut arc).as_mut_ptr(); |
| 1431 | (&raw mut (*ptr).name).write(name); |
| 1432 | (&raw mut (*ptr).id).write(id); |
| 1433 | Parker::new_in_place(&raw mut (*ptr).parker); |
| 1434 | Pin::new_unchecked(arc.assume_init()) |
| 1435 | }; |
| 1436 | |
| 1437 | Thread { inner } |
| 1438 | } |
| 1439 | |
| 1440 | /// Like the public [`park`], but callable on any handle. This is used to |
| 1441 | /// allow parking in TLS destructors. |
| 1442 | /// |
| 1443 | /// # Safety |
| 1444 | /// May only be called from the thread to which this handle belongs. |
| 1445 | pub(crate) unsafe fn park(&self) { |
| 1446 | unsafe { self.inner.as_ref().parker().park() } |
| 1447 | } |
| 1448 | |
| 1449 | /// Like the public [`park_timeout`], but callable on any handle. This is |
| 1450 | /// used to allow parking in TLS destructors. |
| 1451 | /// |
| 1452 | /// # Safety |
| 1453 | /// May only be called from the thread to which this handle belongs. |
| 1454 | pub(crate) unsafe fn park_timeout(&self, dur: Duration) { |
| 1455 | unsafe { self.inner.as_ref().parker().park_timeout(dur) } |
| 1456 | } |
| 1457 | |
| 1458 | /// Atomically makes the handle's token available if it is not already. |
| 1459 | /// |
| 1460 | /// Every thread is equipped with some basic low-level blocking support, via |
| 1461 | /// the [`park`][park] function and the `unpark()` method. These can be |
| 1462 | /// used as a more CPU-efficient implementation of a spinlock. |
| 1463 | /// |
| 1464 | /// See the [park documentation][park] for more details. |
| 1465 | /// |
| 1466 | /// # Examples |
| 1467 | /// |
| 1468 | /// ``` |
| 1469 | /// use std::thread; |
| 1470 | /// use std::time::Duration; |
| 1471 | /// |
| 1472 | /// let parked_thread = thread::Builder::new() |
| 1473 | /// .spawn(|| { |
| 1474 | /// println!("Parking thread" ); |
| 1475 | /// thread::park(); |
| 1476 | /// println!("Thread unparked" ); |
| 1477 | /// }) |
| 1478 | /// .unwrap(); |
| 1479 | /// |
| 1480 | /// // Let some time pass for the thread to be spawned. |
| 1481 | /// thread::sleep(Duration::from_millis(10)); |
| 1482 | /// |
| 1483 | /// println!("Unpark the thread" ); |
| 1484 | /// parked_thread.thread().unpark(); |
| 1485 | /// |
| 1486 | /// parked_thread.join().unwrap(); |
| 1487 | /// ``` |
| 1488 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 1489 | #[inline ] |
| 1490 | pub fn unpark(&self) { |
| 1491 | self.inner.as_ref().parker().unpark(); |
| 1492 | } |
| 1493 | |
| 1494 | /// Gets the thread's unique identifier. |
| 1495 | /// |
| 1496 | /// # Examples |
| 1497 | /// |
| 1498 | /// ``` |
| 1499 | /// use std::thread; |
| 1500 | /// |
| 1501 | /// let other_thread = thread::spawn(|| { |
| 1502 | /// thread::current().id() |
| 1503 | /// }); |
| 1504 | /// |
| 1505 | /// let other_thread_id = other_thread.join().unwrap(); |
| 1506 | /// assert!(thread::current().id() != other_thread_id); |
| 1507 | /// ``` |
| 1508 | #[stable (feature = "thread_id" , since = "1.19.0" )] |
| 1509 | #[must_use ] |
| 1510 | pub fn id(&self) -> ThreadId { |
| 1511 | self.inner.id |
| 1512 | } |
| 1513 | |
| 1514 | /// Gets the thread's name. |
| 1515 | /// |
| 1516 | /// For more information about named threads, see |
| 1517 | /// [this module-level documentation][naming-threads]. |
| 1518 | /// |
| 1519 | /// # Examples |
| 1520 | /// |
| 1521 | /// Threads by default have no name specified: |
| 1522 | /// |
| 1523 | /// ``` |
| 1524 | /// use std::thread; |
| 1525 | /// |
| 1526 | /// let builder = thread::Builder::new(); |
| 1527 | /// |
| 1528 | /// let handler = builder.spawn(|| { |
| 1529 | /// assert!(thread::current().name().is_none()); |
| 1530 | /// }).unwrap(); |
| 1531 | /// |
| 1532 | /// handler.join().unwrap(); |
| 1533 | /// ``` |
| 1534 | /// |
| 1535 | /// Thread with a specified name: |
| 1536 | /// |
| 1537 | /// ``` |
| 1538 | /// use std::thread; |
| 1539 | /// |
| 1540 | /// let builder = thread::Builder::new() |
| 1541 | /// .name("foo" .into()); |
| 1542 | /// |
| 1543 | /// let handler = builder.spawn(|| { |
| 1544 | /// assert_eq!(thread::current().name(), Some("foo" )) |
| 1545 | /// }).unwrap(); |
| 1546 | /// |
| 1547 | /// handler.join().unwrap(); |
| 1548 | /// ``` |
| 1549 | /// |
| 1550 | /// [naming-threads]: ./index.html#naming-threads |
| 1551 | #[stable (feature = "rust1" , since = "1.0.0" )] |
| 1552 | #[must_use ] |
| 1553 | pub fn name(&self) -> Option<&str> { |
| 1554 | if let Some(name) = &self.inner.name { |
| 1555 | Some(name.as_str()) |
| 1556 | } else if main_thread::get() == Some(self.inner.id) { |
| 1557 | Some("main" ) |
| 1558 | } else { |
| 1559 | None |
| 1560 | } |
| 1561 | } |
| 1562 | |
| 1563 | /// Consumes the `Thread`, returning a raw pointer. |
| 1564 | /// |
| 1565 | /// To avoid a memory leak the pointer must be converted |
| 1566 | /// back into a `Thread` using [`Thread::from_raw`]. |
| 1567 | /// |
| 1568 | /// # Examples |
| 1569 | /// |
| 1570 | /// ``` |
| 1571 | /// #![feature(thread_raw)] |
| 1572 | /// |
| 1573 | /// use std::thread::{self, Thread}; |
| 1574 | /// |
| 1575 | /// let thread = thread::current(); |
| 1576 | /// let id = thread.id(); |
| 1577 | /// let ptr = Thread::into_raw(thread); |
| 1578 | /// unsafe { |
| 1579 | /// assert_eq!(Thread::from_raw(ptr).id(), id); |
| 1580 | /// } |
| 1581 | /// ``` |
| 1582 | #[unstable (feature = "thread_raw" , issue = "97523" )] |
| 1583 | pub fn into_raw(self) -> *const () { |
| 1584 | // Safety: We only expose an opaque pointer, which maintains the `Pin` invariant. |
| 1585 | let inner = unsafe { Pin::into_inner_unchecked(self.inner) }; |
| 1586 | Arc::into_raw(inner) as *const () |
| 1587 | } |
| 1588 | |
| 1589 | /// Constructs a `Thread` from a raw pointer. |
| 1590 | /// |
| 1591 | /// The raw pointer must have been previously returned |
| 1592 | /// by a call to [`Thread::into_raw`]. |
| 1593 | /// |
| 1594 | /// # Safety |
| 1595 | /// |
| 1596 | /// This function is unsafe because improper use may lead |
| 1597 | /// to memory unsafety, even if the returned `Thread` is never |
| 1598 | /// accessed. |
| 1599 | /// |
| 1600 | /// Creating a `Thread` from a pointer other than one returned |
| 1601 | /// from [`Thread::into_raw`] is **undefined behavior**. |
| 1602 | /// |
| 1603 | /// Calling this function twice on the same raw pointer can lead |
| 1604 | /// to a double-free if both `Thread` instances are dropped. |
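///
/// # Examples
///
/// A round trip through [`Thread::into_raw`] (a minimal sketch):
///
/// ```
/// #![feature(thread_raw)]
///
/// use std::thread::{self, Thread};
///
/// let ptr = Thread::into_raw(thread::current());
/// // SAFETY: `ptr` was just returned by `Thread::into_raw` and is converted
/// // back exactly once.
/// let thread = unsafe { Thread::from_raw(ptr) };
/// assert_eq!(thread.id(), thread::current().id());
/// ```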
#[unstable(feature = "thread_raw", issue = "97523")]
| 1606 | pub unsafe fn from_raw(ptr: *const ()) -> Thread { |
| 1607 | // Safety: Upheld by caller. |
| 1608 | unsafe { Thread { inner: Pin::new_unchecked(Arc::from_raw(ptr as *const Inner)) } } |
| 1609 | } |
| 1610 | |
| 1611 | fn cname(&self) -> Option<&CStr> { |
| 1612 | if let Some(name) = &self.inner.name { |
| 1613 | Some(name.as_cstr()) |
| 1614 | } else if main_thread::get() == Some(self.inner.id) { |
| 1615 | Some(c"main" ) |
| 1616 | } else { |
| 1617 | None |
| 1618 | } |
| 1619 | } |
| 1620 | } |
| 1621 | |
#[stable(feature = "rust1", since = "1.0.0")]
| 1623 | impl fmt::Debug for Thread { |
| 1624 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { |
| 1625 | f&mut DebugStruct<'_, '_>.debug_struct("Thread" ) |
| 1626 | .field("id" , &self.id()) |
| 1627 | .field(name:"name" , &self.name()) |
| 1628 | .finish_non_exhaustive() |
| 1629 | } |
| 1630 | } |
| 1631 | |
| 1632 | //////////////////////////////////////////////////////////////////////////////// |
| 1633 | // JoinHandle |
| 1634 | //////////////////////////////////////////////////////////////////////////////// |
| 1635 | |
| 1636 | /// A specialized [`Result`] type for threads. |
| 1637 | /// |
| 1638 | /// Indicates the manner in which a thread exited. |
| 1639 | /// |
| 1640 | /// The value contained in the `Result::Err` variant |
| 1641 | /// is the value the thread panicked with; |
| 1642 | /// that is, the argument the `panic!` macro was called with. |
| 1643 | /// Unlike with normal errors, this value doesn't implement |
| 1644 | /// the [`Error`](crate::error::Error) trait. |
| 1645 | /// |
| 1646 | /// Thus, a sensible way to handle a thread panic is to either: |
| 1647 | /// |
| 1648 | /// 1. propagate the panic with [`std::panic::resume_unwind`] |
| 1649 | /// 2. or in case the thread is intended to be a subsystem boundary |
| 1650 | /// that is supposed to isolate system-level failures, |
| 1651 | /// match on the `Err` variant and handle the panic in an appropriate way |
| 1652 | /// |
| 1653 | /// A thread that completes without panicking is considered to exit successfully. |
| 1654 | /// |
| 1655 | /// # Examples |
| 1656 | /// |
| 1657 | /// Matching on the result of a joined thread: |
| 1658 | /// |
| 1659 | /// ```no_run |
| 1660 | /// use std::{fs, thread, panic}; |
| 1661 | /// |
| 1662 | /// fn copy_in_thread() -> thread::Result<()> { |
| 1663 | /// thread::spawn(|| { |
| 1664 | /// fs::copy("foo.txt" , "bar.txt" ).unwrap(); |
| 1665 | /// }).join() |
| 1666 | /// } |
| 1667 | /// |
| 1668 | /// fn main() { |
| 1669 | /// match copy_in_thread() { |
| 1670 | /// Ok(_) => println!("copy succeeded" ), |
| 1671 | /// Err(e) => panic::resume_unwind(e), |
| 1672 | /// } |
| 1673 | /// } |
| 1674 | /// ``` |
| 1675 | /// |
| 1676 | /// [`Result`]: crate::result::Result |
| 1677 | /// [`std::panic::resume_unwind`]: crate::panic::resume_unwind |
#[stable(feature = "rust1", since = "1.0.0")]
#[doc(search_unbox)]
| 1680 | pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>; |
| 1681 | |
| 1682 | // This packet is used to communicate the return value between the spawned |
| 1683 | // thread and the rest of the program. It is shared through an `Arc` and |
| 1684 | // there's no need for a mutex here because synchronization happens with `join()` |
| 1685 | // (the caller will never read this packet until the thread has exited). |
| 1686 | // |
// An Arc to the packet is stored into a `JoinInner`, which in turn is placed
| 1688 | // in `JoinHandle`. |
| 1689 | struct Packet<'scope, T> { |
| 1690 | scope: Option<Arc<scoped::ScopeData>>, |
| 1691 | result: UnsafeCell<Option<Result<T>>>, |
| 1692 | _marker: PhantomData<Option<&'scope scoped::ScopeData>>, |
| 1693 | } |
| 1694 | |
| 1695 | // Due to the usage of `UnsafeCell` we need to manually implement Sync. |
| 1696 | // The type `T` should already always be Send (otherwise the thread could not |
| 1697 | // have been created) and the Packet is Sync because all access to the |
// `UnsafeCell` is synchronized (by the `join()` boundary), and `ScopeData` is Sync.
| 1699 | unsafe impl<'scope, T: Send> Sync for Packet<'scope, T> {} |
| 1700 | |
| 1701 | impl<'scope, T> Drop for Packet<'scope, T> { |
| 1702 | fn drop(&mut self) { |
| 1703 | // If this packet was for a thread that ran in a scope, the thread |
| 1704 | // panicked, and nobody consumed the panic payload, we make sure |
| 1705 | // the scope function will panic. |
| 1706 | let unhandled_panic = matches!(self.result.get_mut(), Some(Err(_))); |
| 1707 | // Drop the result without causing unwinding. |
| 1708 | // This is only relevant for threads that aren't join()ed, as |
| 1709 | // join() will take the `result` and set it to None, such that |
| 1710 | // there is nothing left to drop here. |
| 1711 | // If this panics, we should handle that, because we're outside the |
| 1712 | // outermost `catch_unwind` of our thread. |
| 1713 | // We just abort in that case, since there's nothing else we can do. |
| 1714 | // (And even if we tried to handle it somehow, we'd also need to handle |
| 1715 | // the case where the panic payload we get out of it also panics on |
| 1716 | // drop, and so on. See issue #86027.) |
| 1717 | if let Err(_) = panic::catch_unwind(panic::AssertUnwindSafe(|| { |
| 1718 | *self.result.get_mut() = None; |
| 1719 | })) { |
| 1720 | rtabort!("thread result panicked on drop" ); |
| 1721 | } |
| 1722 | // Book-keeping so the scope knows when it's done. |
| 1723 | if let Some(scope) = &self.scope { |
| 1724 | // Now that there will be no more user code running on this thread |
| 1725 | // that can use 'scope, mark the thread as 'finished'. |
| 1726 | // It's important we only do this after the `result` has been dropped, |
| 1727 | // since dropping it might still use things it borrowed from 'scope. |
| 1728 | scope.decrement_num_running_threads(unhandled_panic); |
| 1729 | } |
| 1730 | } |
| 1731 | } |
| 1732 | |
| 1733 | /// Inner representation for JoinHandle |
| 1734 | struct JoinInner<'scope, T> { |
| 1735 | native: imp::Thread, |
| 1736 | thread: Thread, |
| 1737 | packet: Arc<Packet<'scope, T>>, |
| 1738 | } |
| 1739 | |
| 1740 | impl<'scope, T> JoinInner<'scope, T> { |
| 1741 | fn join(mut self) -> Result<T> { |
| 1742 | self.native.join(); |
| 1743 | ArcOption>>::get_mut(&mut self.packet) |
| 1744 | // FIXME(fuzzypixelz): returning an error instead of panicking here |
| 1745 | // would require updating the documentation of |
| 1746 | // `std::thread::Result`; currently we can return `Err` if and only |
| 1747 | // if the thread had panicked. |
| 1748 | .expect(msg:"threads should not terminate unexpectedly" ) |
| 1749 | .result |
| 1750 | .get_mut() |
| 1751 | .take() |
| 1752 | .unwrap() |
| 1753 | } |
| 1754 | } |
| 1755 | |
| 1756 | /// An owned permission to join on a thread (block on its termination). |
| 1757 | /// |
| 1758 | /// A `JoinHandle` *detaches* the associated thread when it is dropped, which |
| 1759 | /// means that there is no longer any handle to the thread and no way to `join` |
| 1760 | /// on it. |
| 1761 | /// |
| 1762 | /// Due to platform restrictions, it is not possible to [`Clone`] this |
| 1763 | /// handle: the ability to join a thread is a uniquely-owned permission. |
| 1764 | /// |
| 1765 | /// This `struct` is created by the [`thread::spawn`] function and the |
| 1766 | /// [`thread::Builder::spawn`] method. |
| 1767 | /// |
| 1768 | /// # Examples |
| 1769 | /// |
| 1770 | /// Creation from [`thread::spawn`]: |
| 1771 | /// |
| 1772 | /// ``` |
| 1773 | /// use std::thread; |
| 1774 | /// |
| 1775 | /// let join_handle: thread::JoinHandle<_> = thread::spawn(|| { |
| 1776 | /// // some work here |
| 1777 | /// }); |
| 1778 | /// ``` |
| 1779 | /// |
| 1780 | /// Creation from [`thread::Builder::spawn`]: |
| 1781 | /// |
| 1782 | /// ``` |
| 1783 | /// use std::thread; |
| 1784 | /// |
| 1785 | /// let builder = thread::Builder::new(); |
| 1786 | /// |
| 1787 | /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| { |
| 1788 | /// // some work here |
| 1789 | /// }).unwrap(); |
| 1790 | /// ``` |
| 1791 | /// |
| 1792 | /// A thread being detached and outliving the thread that spawned it: |
| 1793 | /// |
| 1794 | /// ```no_run |
| 1795 | /// use std::thread; |
| 1796 | /// use std::time::Duration; |
| 1797 | /// |
| 1798 | /// let original_thread = thread::spawn(|| { |
| 1799 | /// let _detached_thread = thread::spawn(|| { |
| 1800 | /// // Here we sleep to make sure that the first thread returns before. |
| 1801 | /// thread::sleep(Duration::from_millis(10)); |
| 1802 | /// // This will be called, even though the JoinHandle is dropped. |
| 1803 | /// println!("♫ Still alive ♫" ); |
| 1804 | /// }); |
| 1805 | /// }); |
| 1806 | /// |
/// original_thread.join().expect("The thread being joined has panicked");
/// println!("Original thread is joined.");
| 1809 | /// |
| 1810 | /// // We make sure that the new thread has time to run, before the main |
| 1811 | /// // thread returns. |
| 1812 | /// |
| 1813 | /// thread::sleep(Duration::from_millis(1000)); |
| 1814 | /// ``` |
| 1815 | /// |
| 1816 | /// [`thread::Builder::spawn`]: Builder::spawn |
| 1817 | /// [`thread::spawn`]: spawn |
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(target_os = "teeos", must_use)]
| 1820 | pub struct JoinHandle<T>(JoinInner<'static, T>); |
| 1821 | |
| 1822 | #[stable (feature = "joinhandle_impl_send_sync" , since = "1.29.0" )] |
| 1823 | unsafe impl<T> Send for JoinHandle<T> {} |
| 1824 | #[stable (feature = "joinhandle_impl_send_sync" , since = "1.29.0" )] |
| 1825 | unsafe impl<T> Sync for JoinHandle<T> {} |
| 1826 | |
| 1827 | impl<T> JoinHandle<T> { |
| 1828 | /// Extracts a handle to the underlying thread. |
| 1829 | /// |
| 1830 | /// # Examples |
| 1831 | /// |
| 1832 | /// ``` |
| 1833 | /// use std::thread; |
| 1834 | /// |
| 1835 | /// let builder = thread::Builder::new(); |
| 1836 | /// |
| 1837 | /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| { |
| 1838 | /// // some work here |
| 1839 | /// }).unwrap(); |
| 1840 | /// |
| 1841 | /// let thread = join_handle.thread(); |
| 1842 | /// println!("thread id: {:?}" , thread.id()); |
| 1843 | /// ``` |
#[stable(feature = "rust1", since = "1.0.0")]
#[must_use]
| 1846 | pub fn thread(&self) -> &Thread { |
| 1847 | &self.0.thread |
| 1848 | } |
| 1849 | |
| 1850 | /// Waits for the associated thread to finish. |
| 1851 | /// |
| 1852 | /// This function will return immediately if the associated thread has already finished. |
| 1853 | /// |
| 1854 | /// In terms of [atomic memory orderings], the completion of the associated |
| 1855 | /// thread synchronizes with this function returning. In other words, all |
| 1856 | /// operations performed by that thread [happen |
| 1857 | /// before](https://doc.rust-lang.org/nomicon/atomics.html#data-accesses) all |
| 1858 | /// operations that happen after `join` returns. |
| 1859 | /// |
| 1860 | /// If the associated thread panics, [`Err`] is returned with the parameter given |
| 1861 | /// to [`panic!`] (though see the Notes below). |
| 1862 | /// |
| 1863 | /// [`Err`]: crate::result::Result::Err |
| 1864 | /// [atomic memory orderings]: crate::sync::atomic |
| 1865 | /// |
| 1866 | /// # Panics |
| 1867 | /// |
| 1868 | /// This function may panic on some platforms if a thread attempts to join |
| 1869 | /// itself or otherwise may create a deadlock with joining threads. |
| 1870 | /// |
| 1871 | /// # Examples |
| 1872 | /// |
| 1873 | /// ``` |
| 1874 | /// use std::thread; |
| 1875 | /// |
| 1876 | /// let builder = thread::Builder::new(); |
| 1877 | /// |
| 1878 | /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| { |
| 1879 | /// // some work here |
| 1880 | /// }).unwrap(); |
/// join_handle.join().expect("Couldn't join on the associated thread");
| 1882 | /// ``` |
| 1883 | /// |
| 1884 | /// # Notes |
| 1885 | /// |
| 1886 | /// If a "foreign" unwinding operation (e.g. an exception thrown from C++ |
| 1887 | /// code, or a `panic!` in Rust code compiled or linked with a different |
| 1888 | /// runtime) unwinds all the way to the thread root, the process may be |
| 1889 | /// aborted; see the Notes on [`thread::spawn`]. If the process is not |
| 1890 | /// aborted, this function will return a `Result::Err` containing an opaque |
| 1891 | /// type. |
| 1892 | /// |
| 1893 | /// [`catch_unwind`]: ../../std/panic/fn.catch_unwind.html |
| 1894 | /// [`thread::spawn`]: spawn |
#[stable(feature = "rust1", since = "1.0.0")]
| 1896 | pub fn join(self) -> Result<T> { |
| 1897 | self.0.join() |
| 1898 | } |
| 1899 | |
| 1900 | /// Checks if the associated thread has finished running its main function. |
| 1901 | /// |
| 1902 | /// `is_finished` supports implementing a non-blocking join operation, by checking |
| 1903 | /// `is_finished`, and calling `join` if it returns `true`. This function does not block. To |
| 1904 | /// block while waiting on the thread to finish, use [`join`][Self::join]. |
| 1905 | /// |
| 1906 | /// This might return `true` for a brief moment after the thread's main |
| 1907 | /// function has returned, but before the thread itself has stopped running. |
| 1908 | /// However, once this returns `true`, [`join`][Self::join] can be expected |
| 1909 | /// to return quickly, without blocking for any significant amount of time. |
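///
/// # Examples
///
/// A non-blocking wait loop (a minimal sketch):
///
/// ```no_run
/// use std::thread;
/// use std::time::Duration;
///
/// let handle = thread::spawn(|| {
///     // some work here
///     thread::sleep(Duration::from_millis(100));
/// });
///
/// // Poll until the spawned thread's main function has returned, then join.
/// while !handle.is_finished() {
///     thread::sleep(Duration::from_millis(10));
/// }
/// handle.join().unwrap();
/// ```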
| 1910 | #[stable (feature = "thread_is_running" , since = "1.61.0" )] |
| 1911 | pub fn is_finished(&self) -> bool { |
| 1912 | Arc::strong_count(&self.0.packet) == 1 |
| 1913 | } |
| 1914 | } |
| 1915 | |
| 1916 | impl<T> AsInner<imp::Thread> for JoinHandle<T> { |
| 1917 | fn as_inner(&self) -> &imp::Thread { |
| 1918 | &self.0.native |
| 1919 | } |
| 1920 | } |
| 1921 | |
| 1922 | impl<T> IntoInner<imp::Thread> for JoinHandle<T> { |
| 1923 | fn into_inner(self) -> imp::Thread { |
| 1924 | self.0.native |
| 1925 | } |
| 1926 | } |
| 1927 | |
| 1928 | #[stable (feature = "std_debug" , since = "1.16.0" )] |
| 1929 | impl<T> fmt::Debug for JoinHandle<T> { |
| 1930 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { |
f.debug_struct("JoinHandle").finish_non_exhaustive()
| 1932 | } |
| 1933 | } |
| 1934 | |
| 1935 | fn _assert_sync_and_send() { |
| 1936 | fn _assert_both<T: Send + Sync>() {} |
| 1937 | _assert_both::<JoinHandle<()>>(); |
| 1938 | _assert_both::<Thread>(); |
| 1939 | } |
| 1940 | |
| 1941 | /// Returns an estimate of the default amount of parallelism a program should use. |
| 1942 | /// |
| 1943 | /// Parallelism is a resource. A given machine provides a certain capacity for |
| 1944 | /// parallelism, i.e., a bound on the number of computations it can perform |
| 1945 | /// simultaneously. This number often corresponds to the amount of CPUs a |
| 1946 | /// computer has, but it may diverge in various cases. |
| 1947 | /// |
| 1948 | /// Host environments such as VMs or container orchestrators may want to |
| 1949 | /// restrict the amount of parallelism made available to programs in them. This |
| 1950 | /// is often done to limit the potential impact of (unintentionally) |
| 1951 | /// resource-intensive programs on other programs running on the same machine. |
| 1952 | /// |
| 1953 | /// # Limitations |
| 1954 | /// |
| 1955 | /// The purpose of this API is to provide an easy and portable way to query |
| 1956 | /// the default amount of parallelism the program should use. Among other things it |
| 1957 | /// does not expose information on NUMA regions, does not account for |
| 1958 | /// differences in (co)processor capabilities or current system load, |
| 1959 | /// and will not modify the program's global state in order to more accurately |
| 1960 | /// query the amount of available parallelism. |
| 1961 | /// |
/// Where both fixed steady-state and burst limits are available, the steady-state
| 1963 | /// capacity will be used to ensure more predictable latencies. |
| 1964 | /// |
| 1965 | /// Resource limits can be changed during the runtime of a program, therefore the value is |
| 1966 | /// not cached and instead recomputed every time this function is called. It should not be |
| 1967 | /// called from hot code. |
| 1968 | /// |
| 1969 | /// The value returned by this function should be considered a simplified |
| 1970 | /// approximation of the actual amount of parallelism available at any given |
| 1971 | /// time. To get a more detailed or precise overview of the amount of |
| 1972 | /// parallelism available to the program, you may wish to use |
| 1973 | /// platform-specific APIs as well. The following platform limitations currently |
| 1974 | /// apply to `available_parallelism`: |
| 1975 | /// |
| 1976 | /// On Windows: |
| 1977 | /// - It may undercount the amount of parallelism available on systems with more |
| 1978 | /// than 64 logical CPUs. However, programs typically need specific support to |
| 1979 | /// take advantage of more than 64 logical CPUs, and in the absence of such |
| 1980 | /// support, the number returned by this function accurately reflects the |
| 1981 | /// number of logical CPUs the program can use by default. |
| 1982 | /// - It may overcount the amount of parallelism available on systems limited by |
| 1983 | /// process-wide affinity masks, or job object limitations. |
| 1984 | /// |
| 1985 | /// On Linux: |
| 1986 | /// - It may overcount the amount of parallelism available when limited by a |
| 1987 | /// process-wide affinity mask or cgroup quotas and `sched_getaffinity()` or cgroup fs can't be |
| 1988 | /// queried, e.g. due to sandboxing. |
| 1989 | /// - It may undercount the amount of parallelism if the current thread's affinity mask |
| 1990 | /// does not reflect the process' cpuset, e.g. due to pinned threads. |
| 1991 | /// - If the process is in a cgroup v1 cpu controller, this may need to |
| 1992 | /// scan mountpoints to find the corresponding cgroup v1 controller, |
| 1993 | /// which may take time on systems with large numbers of mountpoints. |
| 1994 | /// (This does not apply to cgroup v2, or to processes not in a |
| 1995 | /// cgroup.) |
| 1996 | /// |
| 1997 | /// On all targets: |
| 1998 | /// - It may overcount the amount of parallelism available when running in a VM |
| 1999 | /// with CPU usage limits (e.g. an overcommitted host). |
| 2000 | /// |
| 2001 | /// # Errors |
| 2002 | /// |
| 2003 | /// This function will, but is not limited to, return errors in the following |
| 2004 | /// cases: |
| 2005 | /// |
| 2006 | /// - If the amount of parallelism is not known for the target platform. |
| 2007 | /// - If the program lacks permission to query the amount of parallelism made |
| 2008 | /// available to it. |
| 2009 | /// |
| 2010 | /// # Examples |
| 2011 | /// |
| 2012 | /// ``` |
/// # #![allow(dead_code)]
| 2014 | /// use std::{io, thread}; |
| 2015 | /// |
| 2016 | /// fn main() -> io::Result<()> { |
| 2017 | /// let count = thread::available_parallelism()?.get(); |
| 2018 | /// assert!(count >= 1_usize); |
| 2019 | /// Ok(()) |
| 2020 | /// } |
| 2021 | /// ``` |
| 2022 | #[doc (alias = "available_concurrency" )] // Alias for a previous name we gave this API on unstable. |
| 2023 | #[doc (alias = "hardware_concurrency" )] // Alias for C++ `std::thread::hardware_concurrency`. |
| 2024 | #[doc (alias = "num_cpus" )] // Alias for a popular ecosystem crate which provides similar functionality. |
| 2025 | #[stable (feature = "available_parallelism" , since = "1.59.0" )] |
| 2026 | pub fn available_parallelism() -> io::Result<NonZero<usize>> { |
| 2027 | imp::available_parallelism() |
| 2028 | } |
| 2029 | |