//! Native threads.
//!
//! ## The threading model
//!
//! An executing Rust program consists of a collection of native OS threads,
//! each with their own stack and local state. Threads can be named, and
//! provide some built-in support for low-level synchronization.
//!
//! Communication between threads can be done through
//! [channels], Rust's message-passing types, along with [other forms of thread
//! synchronization](../../std/sync/index.html) and shared-memory data
//! structures. In particular, types that are guaranteed to be
//! thread-safe are easily shared between threads using the
//! atomically-reference-counted container, [`Arc`].
//!
//! Fatal logic errors in Rust cause *thread panic*, during which
//! a thread will unwind the stack, running destructors and freeing
//! owned resources. While not meant as a 'try/catch' mechanism, panics
//! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with
//! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered
//! from, or alternatively be resumed with
//! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic
//! is not caught the thread will exit, but the panic may optionally be
//! detected from a different thread with [`join`]. If the main thread panics
//! without the panic being caught, the application will exit with a
//! non-zero exit code.
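//!
//! A minimal sketch of catching a panic with `catch_unwind`:
//!
//! ```rust
//! use std::panic;
//!
//! let result = panic::catch_unwind(|| {
//!     panic!("oops");
//! });
//! // The panic was caught; `result` is `Err` holding the panic payload.
//! assert!(result.is_err());
//! ```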
//!
//! When the main thread of a Rust program terminates, the entire program shuts
//! down, even if other threads are still running. However, this module provides
//! convenient facilities for automatically waiting for the termination of a
//! thread (i.e., join).
//!
//! ## Spawning a thread
//!
//! A new thread can be spawned using the [`thread::spawn`][`spawn`] function:
//!
//! ```rust
//! use std::thread;
//!
//! thread::spawn(move || {
//!     // some work here
//! });
//! ```
//!
//! In this example, the spawned thread is "detached," which means that there is
//! no way for the program to learn when the spawned thread completes or otherwise
//! terminates.
//!
//! To learn when a thread completes, it is necessary to capture the [`JoinHandle`]
//! object that is returned by the call to [`spawn`], which provides
//! a `join` method that allows the caller to wait for the completion of the
//! spawned thread:
//!
//! ```rust
//! use std::thread;
//!
//! let thread_join_handle = thread::spawn(move || {
//!     // some work here
//! });
//! // some work here
//! let res = thread_join_handle.join();
//! ```
//!
//! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final
//! value produced by the spawned thread, or [`Err`] of the value given to
//! a call to [`panic!`] if the thread panicked.
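//!
//! A minimal sketch of both outcomes of `join`:
//!
//! ```rust
//! use std::thread;
//!
//! // The thread ran to completion: `join` yields `Ok` with its value.
//! let ok = thread::spawn(|| 1 + 1).join();
//! assert_eq!(ok.unwrap(), 2);
//!
//! // The thread panicked: `join` yields `Err` with the panic payload.
//! let err = thread::spawn(|| -> () { panic!("boom") }).join();
//! assert!(err.is_err());
//! ```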
//!
//! Note that there is no parent/child relationship between a thread that spawns a
//! new thread and the thread being spawned. In particular, the spawned thread may or
//! may not outlive the spawning thread, unless the spawning thread is the main thread.
//!
//! ## Configuring threads
//!
//! A new thread can be configured before it is spawned via the [`Builder`] type,
//! which currently allows you to set the name and stack size for the thread:
//!
//! ```rust
//! # #![allow(unused_must_use)]
//! use std::thread;
//!
//! thread::Builder::new().name("thread1".to_string()).spawn(move || {
//!     println!("Hello, world!");
//! });
//! ```
//!
//! ## The `Thread` type
//!
//! Threads are represented via the [`Thread`] type, which you can get in one of
//! two ways:
//!
//! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
//!   function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`].
//! * By requesting the current thread, using the [`thread::current`] function.
//!
//! The [`thread::current`] function is available even for threads not spawned
//! by the APIs of this module.
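//!
//! For example, handles obtained from [`thread::current`] refer to the same
//! underlying thread:
//!
//! ```rust
//! use std::thread;
//!
//! let a = thread::current();
//! let b = thread::current();
//! // Both handles identify the calling thread.
//! assert_eq!(a.id(), b.id());
//! ```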
//!
//! ## Thread-local storage
//!
//! This module also provides an implementation of thread-local storage for Rust
//! programs. Thread-local storage is a method of storing data into a global
//! variable that each thread in the program will have its own copy of.
//! Threads do not share this data, so accesses do not need to be synchronized.
//!
//! A thread-local key owns the value it contains and will destroy the value when the
//! thread exits. It is created with the [`thread_local!`] macro and can contain any
//! value that is `'static` (no borrowed pointers). It provides an accessor function,
//! [`with`], that yields a shared reference to the value to the specified
//! closure. Thread-local keys allow only shared access to values, as there would be no
//! way to guarantee uniqueness if mutable borrows were allowed. Most values
//! will want to make use of some form of **interior mutability** through the
//! [`Cell`] or [`RefCell`] types.
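//!
//! A minimal sketch of a thread-local counter built on [`Cell`]:
//!
//! ```rust
//! use std::cell::Cell;
//! use std::thread;
//!
//! thread_local! {
//!     static COUNTER: Cell<u32> = Cell::new(0);
//! }
//!
//! COUNTER.with(|c| c.set(c.get() + 1));
//!
//! // A newly spawned thread sees its own copy, still at the initial value.
//! thread::spawn(|| {
//!     COUNTER.with(|c| assert_eq!(c.get(), 0));
//! }).join().unwrap();
//!
//! // The calling thread's copy is unaffected by the other thread.
//! COUNTER.with(|c| assert_eq!(c.get(), 1));
//! ```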
//!
//! ## Naming threads
//!
//! Threads are able to have associated names for identification purposes. By default, spawned
//! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass
//! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the
//! thread, use [`Thread::name`]. A couple of examples where the name of a thread gets used:
//!
//! * If a panic occurs in a named thread, the thread name will be printed in the panic message.
//! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` on
//!   Unix-like platforms).
//!
//! ## Stack size
//!
//! The default stack size is platform-dependent and subject to change.
//! Currently, it is 2 MiB on all Tier-1 platforms.
//!
//! There are two ways to manually specify the stack size for spawned threads:
//!
//! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`].
//! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack
//!   size (in bytes). Note that setting [`Builder::stack_size`] will override this. Be aware that
//!   changes to `RUST_MIN_STACK` may be ignored after program start.
//!
//! Note that the stack size of the main thread is *not* determined by Rust.
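//!
//! A sketch of the first option, using [`Builder::stack_size`]:
//!
//! ```rust
//! use std::thread;
//!
//! let handle = thread::Builder::new()
//!     .stack_size(4 * 1024 * 1024) // request a 4 MiB stack
//!     .spawn(|| {
//!         // code that may need a deeper stack
//!     })
//!     .unwrap();
//! handle.join().unwrap();
//! ```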
//!
//! [channels]: crate::sync::mpsc
//! [`join`]: JoinHandle::join
//! [`Result`]: crate::result::Result
//! [`Ok`]: crate::result::Result::Ok
//! [`Err`]: crate::result::Result::Err
//! [`thread::current`]: current
//! [`thread::Result`]: Result
//! [`unpark`]: Thread::unpark
//! [`thread::park_timeout`]: park_timeout
//! [`Cell`]: crate::cell::Cell
//! [`RefCell`]: crate::cell::RefCell
//! [`with`]: LocalKey::with
//! [`thread_local!`]: crate::thread_local

#![stable(feature = "rust1", since = "1.0.0")]
#![deny(unsafe_op_in_unsafe_fn)]
// Under `test`, `__FastLocalKeyInner` seems unused.
#![cfg_attr(test, allow(dead_code))]

#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;

use crate::any::Any;
use crate::cell::UnsafeCell;
use crate::ffi::{CStr, CString};
use crate::fmt;
use crate::io;
use crate::marker::PhantomData;
use crate::mem::{self, forget};
use crate::num::NonZeroU64;
use crate::num::NonZeroUsize;
use crate::panic;
use crate::panicking;
use crate::pin::Pin;
use crate::ptr::addr_of_mut;
use crate::str;
use crate::sync::Arc;
use crate::sys::thread as imp;
use crate::sys_common::thread;
use crate::sys_common::thread_info;
use crate::sys_common::thread_parking::Parker;
use crate::sys_common::{AsInner, IntoInner};
use crate::time::{Duration, Instant};

#[stable(feature = "scoped_threads", since = "1.63.0")]
mod scoped;

#[stable(feature = "scoped_threads", since = "1.63.0")]
pub use scoped::{scope, Scope, ScopedJoinHandle};

////////////////////////////////////////////////////////////////////////////////
// Thread-local storage
////////////////////////////////////////////////////////////////////////////////

#[macro_use]
mod local;

cfg_if::cfg_if! {
    if #[cfg(test)] {
        // Avoid duplicating the global state associated with thread-locals between this crate and
        // realstd. Miri relies on this.
        pub use realstd::thread::{local_impl, AccessError, LocalKey};
    } else {
        #[stable(feature = "rust1", since = "1.0.0")]
        pub use self::local::{AccessError, LocalKey};

        // Implementation details used by the thread_local!{} macro.
        #[doc(hidden)]
        #[unstable(feature = "thread_local_internals", issue = "none")]
        pub mod local_impl {
            pub use crate::sys::common::thread_local::{thread_local_inner, Key, abort_on_dtor_unwind};
        }
    }
}
213
214////////////////////////////////////////////////////////////////////////////////
215// Builder
216////////////////////////////////////////////////////////////////////////////////
217
218/// Thread factory, which can be used in order to configure the properties of
219/// a new thread.
220///
221/// Methods can be chained on it in order to configure it.
222///
223/// The two configurations available are:
224///
225/// - [`name`]: specifies an [associated name for the thread][naming-threads]
226/// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size]
227///
228/// The [`spawn`] method will take ownership of the builder and create an
229/// [`io::Result`] to the thread handle with the given configuration.
230///
231/// The [`thread::spawn`] free function uses a `Builder` with default
232/// configuration and [`unwrap`]s its return value.
233///
234/// You may want to use [`spawn`] instead of [`thread::spawn`], when you want
235/// to recover from a failure to launch a thread, indeed the free function will
236/// panic where the `Builder` method will return a [`io::Result`].
237///
238/// # Examples
239///
240/// ```
241/// use std::thread;
242///
243/// let builder = thread::Builder::new();
244///
245/// let handler = builder.spawn(|| {
246/// // thread code
247/// }).unwrap();
248///
249/// handler.join().unwrap();
250/// ```
251///
252/// [`stack_size`]: Builder::stack_size
253/// [`name`]: Builder::name
254/// [`spawn`]: Builder::spawn
255/// [`thread::spawn`]: spawn
256/// [`io::Result`]: crate::io::Result
257/// [`unwrap`]: crate::result::Result::unwrap
258/// [naming-threads]: ./index.html#naming-threads
259/// [stack-size]: ./index.html#stack-size
#[must_use = "must eventually spawn the thread"]
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Debug)]
pub struct Builder {
    // A name for the thread-to-be, for identification in panic messages
    name: Option<String>,
    // The size of the stack for the spawned thread in bytes
    stack_size: Option<usize>,
}

impl Builder {
    /// Generates the base configuration for spawning a thread, from which
    /// configuration methods can be chained.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into())
    ///     .stack_size(32 * 1024);
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn new() -> Builder {
        Builder { name: None, stack_size: None }
    }

    /// Names the thread-to-be. Currently the name is used for identification
    /// only in panic messages.
    ///
    /// The name must not contain null bytes (`\0`).
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn name(mut self, name: String) -> Builder {
        self.name = Some(name);
        self
    }

    /// Sets the size of the stack (in bytes) for the new thread.
    ///
    /// The actual stack size may be greater than this value if
    /// the platform specifies a minimal stack size.
    ///
    /// For more information about the stack size for threads, see
    /// [this module-level documentation][stack-size].
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new().stack_size(32 * 1024);
    /// ```
    ///
    /// [stack-size]: ./index.html#stack-size
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn stack_size(mut self, size: usize) -> Builder {
        self.stack_size = Some(size);
        self
    }

    /// Spawns a new thread by taking ownership of the `Builder`, and returns an
    /// [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// For a more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// [`io::Result`]: crate::io::Result
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     // thread code
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'static,
        T: Send + 'static,
    {
        unsafe { self.spawn_unchecked(f) }
    }

    /// Spawns a new thread without any lifetime restrictions by taking ownership
    /// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`].
    ///
    /// The spawned thread may outlive the caller (unless the caller thread
    /// is the main thread; the whole process is terminated when the main
    /// thread finishes). The join handle can be used to block on
    /// termination of the spawned thread, including recovering its panics.
    ///
    /// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`],
    /// except for the relaxed lifetime bounds, which render it unsafe.
    /// For a more complete documentation see [`thread::spawn`][`spawn`].
    ///
    /// # Errors
    ///
    /// Unlike the [`spawn`] free function, this method yields an
    /// [`io::Result`] to capture any failure to create the thread at
    /// the OS level.
    ///
    /// # Panics
    ///
    /// Panics if a thread name was set and it contained null bytes.
    ///
    /// # Safety
    ///
    /// The caller has to ensure that the spawned thread does not outlive any
    /// references in the supplied thread closure and its return type.
    /// This can be guaranteed in two ways:
    ///
    /// - ensure that [`join`][`JoinHandle::join`] is called before any referenced
    ///   data is dropped
    /// - use only types with `'static` lifetime bounds, i.e., those with no or only
    ///   `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`]
    ///   and [`thread::spawn`][`spawn`] enforce this property statically)
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(thread_spawn_unchecked)]
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let x = 1;
    /// let thread_x = &x;
    ///
    /// let handler = unsafe {
    ///     builder.spawn_unchecked(move || {
    ///         println!("x = {}", *thread_x);
    ///     }).unwrap()
    /// };
    ///
    /// // caller has to ensure `join()` is called, otherwise
    /// // it is possible to access freed memory if `x` gets
    /// // dropped before the thread closure is executed!
    /// handler.join().unwrap();
    /// ```
    ///
    /// [`io::Result`]: crate::io::Result
    #[unstable(feature = "thread_spawn_unchecked", issue = "55132")]
    pub unsafe fn spawn_unchecked<'a, F, T>(self, f: F) -> io::Result<JoinHandle<T>>
    where
        F: FnOnce() -> T,
        F: Send + 'a,
        T: Send + 'a,
    {
        Ok(JoinHandle(unsafe { self.spawn_unchecked_(f, None) }?))
    }

    unsafe fn spawn_unchecked_<'a, 'scope, F, T>(
        self,
        f: F,
        scope_data: Option<Arc<scoped::ScopeData>>,
    ) -> io::Result<JoinInner<'scope, T>>
    where
        F: FnOnce() -> T,
        F: Send + 'a,
        T: Send + 'a,
        'scope: 'a,
    {
        let Builder { name, stack_size } = self;

        let stack_size = stack_size.unwrap_or_else(thread::min_stack);

        let my_thread = Thread::new(name.map(|name| {
            CString::new(name).expect("thread name may not contain interior null bytes")
        }));
        let their_thread = my_thread.clone();

        let my_packet: Arc<Packet<'scope, T>> = Arc::new(Packet {
            scope: scope_data,
            result: UnsafeCell::new(None),
            _marker: PhantomData,
        });
        let their_packet = my_packet.clone();

        let output_capture = crate::io::set_output_capture(None);
        crate::io::set_output_capture(output_capture.clone());

        // Pass `f` in `MaybeUninit` because that closure might *run longer than the lifetime of `F`*.
        // See <https://github.com/rust-lang/rust/issues/101983> for more details.
        // To prevent leaks we use a wrapper that drops its contents.
        #[repr(transparent)]
        struct MaybeDangling<T>(mem::MaybeUninit<T>);
        impl<T> MaybeDangling<T> {
            fn new(x: T) -> Self {
                MaybeDangling(mem::MaybeUninit::new(x))
            }
            fn into_inner(self) -> T {
                // SAFETY: we are always initialized.
                let ret = unsafe { self.0.assume_init_read() };
                // Make sure we don't drop.
                mem::forget(self);
                ret
            }
        }
        impl<T> Drop for MaybeDangling<T> {
            fn drop(&mut self) {
                // SAFETY: we are always initialized.
                unsafe { self.0.assume_init_drop() };
            }
        }

        let f = MaybeDangling::new(f);
        let main = move || {
            if let Some(name) = their_thread.cname() {
                imp::Thread::set_name(name);
            }

            crate::io::set_output_capture(output_capture);

            // SAFETY: we constructed `f` initialized.
            let f = f.into_inner();
            // SAFETY: the stack guard passed is the one for the current thread.
            // This means the current thread's stack and the new thread's stack
            // are properly set and protected from each other.
            thread_info::set(unsafe { imp::guard::current() }, their_thread);
            let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
                crate::sys_common::backtrace::__rust_begin_short_backtrace(f)
            }));
            // SAFETY: `their_packet` has been built just above and moved by the
            // closure (it is an Arc<...>) and `my_packet` will be stored in the
            // same `JoinInner` as this closure, meaning the mutation will be
            // safe (not modify it and affect a value far away).
            unsafe { *their_packet.result.get() = Some(try_result) };
            // Here `their_packet` gets dropped, and if this is the last `Arc` for that packet that
            // will call `decrement_num_running_threads` and therefore signal that this thread is
            // done.
            drop(their_packet);
            // Here, the lifetime `'a` and even `'scope` can end. `main` keeps running for a bit
            // after that before returning itself.
        };

        if let Some(scope_data) = &my_packet.scope {
            scope_data.increment_num_running_threads();
        }

        let main = Box::new(main);
        // SAFETY: dynamic size and alignment of the Box remain the same. See below for why the
        // lifetime change is justified.
        let main = unsafe { Box::from_raw(Box::into_raw(main) as *mut (dyn FnOnce() + 'static)) };

        Ok(JoinInner {
            // SAFETY:
            //
            // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed
            // through FFI or otherwise used with low-level threading primitives that have no
            // notion of or way to enforce lifetimes.
            //
            // As mentioned in the `Safety` section of this function's documentation, the caller of
            // this function needs to guarantee that the passed-in lifetime is sufficiently long
            // for the lifetime of the thread.
            //
            // Similarly, the `sys` implementation must guarantee that no references to the closure
            // exist after the thread has terminated, which is signaled by `Thread::join`
            // returning.
            native: unsafe { imp::Thread::new(stack_size, main)? },
            thread: my_thread,
            packet: my_packet,
        })
    }
}

////////////////////////////////////////////////////////////////////////////////
// Free functions
////////////////////////////////////////////////////////////////////////////////

/// Spawns a new thread, returning a [`JoinHandle`] for it.
///
/// The join handle provides a [`join`] method that can be used to join the spawned
/// thread. If the spawned thread panics, [`join`] will return an [`Err`] containing
/// the argument given to [`panic!`].
///
/// If the join handle is dropped, the spawned thread will implicitly be *detached*.
/// In this case, the spawned thread may no longer be joined.
/// (It is the responsibility of the program to either eventually join threads it
/// creates or detach them; otherwise, a resource leak will result.)
///
/// This call will create a thread using default parameters of [`Builder`]; if you
/// want to specify the stack size or the name of the thread, use [`Builder::spawn`]
/// instead.
///
/// As you can see in the signature of `spawn`, there are two constraints on
/// both the closure given to `spawn` and its return value; let's explain them:
///
/// - The `'static` constraint means that the closure and its return value
///   must have a lifetime of the whole program execution. The reason for this
///   is that threads can outlive the lifetime they have been created in.
///
///   Indeed if the thread, and by extension its return value, can outlive their
///   caller, we need to make sure that they will be valid afterwards, and since
///   we *can't* know when it will return we need to have them valid as long as
///   possible, that is until the end of the program, hence the `'static`
///   lifetime.
/// - The [`Send`] constraint is because the closure will need to be passed
///   *by value* from the thread where it is spawned to the new thread. Its
///   return value will need to be passed from the new thread to the thread
///   where it is `join`ed.
///   As a reminder, the [`Send`] marker trait expresses that it is safe to be
///   passed from thread to thread. [`Sync`] expresses that it is safe to have a
///   reference be passed from thread to thread.
///
/// # Panics
///
/// Panics if the OS fails to create a thread; use [`Builder::spawn`]
/// to recover from such errors.
///
/// # Examples
///
/// Creating a thread.
///
/// ```
/// use std::thread;
///
/// let handler = thread::spawn(|| {
///     // thread code
/// });
///
/// handler.join().unwrap();
/// ```
///
/// As mentioned in the module documentation, threads are usually made to
/// communicate using [`channels`]; here is how it usually looks.
///
/// This example also shows how to use `move`, in order to give ownership
/// of values to a thread.
///
/// ```
/// use std::thread;
/// use std::sync::mpsc::channel;
///
/// let (tx, rx) = channel();
///
/// let sender = thread::spawn(move || {
///     tx.send("Hello, thread".to_owned())
///         .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
///     let value = rx.recv().expect("Unable to receive from channel");
///     println!("{value}");
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
/// ```
///
/// A thread can also return a value through its [`JoinHandle`]; you can use
/// this to make asynchronous computations (futures might be more appropriate
/// though).
///
/// ```
/// use std::thread;
///
/// let computation = thread::spawn(|| {
///     // Some expensive computation.
///     42
/// });
///
/// let result = computation.join().unwrap();
/// println!("{result}");
/// ```
///
/// [`channels`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Err`]: crate::result::Result::Err
#[stable(feature = "rust1", since = "1.0.0")]
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    Builder::new().spawn(f).expect("failed to spawn thread")
}

/// Gets a handle to the thread that invokes it.
///
/// # Examples
///
/// Getting a handle to the current thread with `thread::current()`:
///
/// ```
/// use std::thread;
///
/// let handler = thread::Builder::new()
///     .name("named thread".into())
///     .spawn(|| {
///         let handle = thread::current();
///         assert_eq!(handle.name(), Some("named thread"));
///     })
///     .unwrap();
///
/// handler.join().unwrap();
/// ```
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn current() -> Thread {
    thread_info::current_thread().expect(
        "use of std::thread::current() is not possible \
         after the thread's local data has been destroyed",
    )
}

/// Cooperatively gives up a timeslice to the OS scheduler.
///
/// This calls the underlying OS scheduler's yield primitive, signaling
/// that the calling thread is willing to give up its remaining timeslice
/// so that the OS may schedule other threads on the CPU.
///
/// A drawback of yielding in a loop is that if the OS does not have any
/// other ready threads to run on the current CPU, the thread will effectively
/// busy-wait, which wastes CPU time and energy.
///
/// Therefore, when waiting for events of interest, a programmer's first
/// choice should be to use synchronization devices such as [`channel`]s,
/// [`Condvar`]s, [`Mutex`]es or [`join`] since these primitives are
/// implemented in a blocking manner, giving up the CPU until the event
/// of interest has occurred, which avoids repeated yielding.
///
/// `yield_now` should thus be used only rarely, mostly in situations where
/// repeated polling is required because there is no other suitable way to
/// learn when an event of interest has occurred.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
///
/// [`channel`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Condvar`]: crate::sync::Condvar
/// [`Mutex`]: crate::sync::Mutex
#[stable(feature = "rust1", since = "1.0.0")]
pub fn yield_now() {
    imp::Thread::yield_now()
}

/// Determines whether the current thread is unwinding because of a panic.
///
/// A common use of this feature is to poison shared resources when writing
/// unsafe code, by checking `panicking` when the `drop` is called.
///
/// This is usually not needed when writing safe code, as [`Mutex`es][Mutex]
/// already poison themselves when a thread panics while holding the lock.
///
/// This can also be used in multithreaded applications, in order to send a
/// message to other threads warning that a thread has panicked (e.g., for
/// monitoring purposes).
///
/// # Examples
///
/// ```should_panic
/// use std::thread;
///
/// struct SomeStruct;
///
/// impl Drop for SomeStruct {
///     fn drop(&mut self) {
///         if thread::panicking() {
///             println!("dropped while unwinding");
///         } else {
///             println!("dropped while not unwinding");
///         }
///     }
/// }
///
/// {
///     print!("a: ");
///     let a = SomeStruct;
/// }
///
/// {
///     print!("b: ");
///     let b = SomeStruct;
///     panic!()
/// }
/// ```
///
/// [Mutex]: crate::sync::Mutex
#[inline]
#[must_use]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn panicking() -> bool {
    panicking::panicking()
}
/// Use [`sleep`].
///
/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
///
/// # Examples
///
/// ```no_run
/// use std::thread;
///
/// // Let's sleep for 2 seconds:
/// thread::sleep_ms(2000);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::sleep`")]
pub fn sleep_ms(ms: u32) {
    sleep(Duration::from_millis(ms as u64))
}

/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
/// Platforms which do not support nanosecond precision for sleeping will
/// have `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// Currently, specifying a zero duration on Unix platforms returns immediately
/// without invoking the underlying [`nanosleep`] syscall, whereas on Windows
/// platforms the underlying [`Sleep`] syscall is always invoked.
/// If the intention is to yield the current time-slice you may want to use
/// [`yield_now`] instead.
///
/// [`nanosleep`]: https://linux.die.net/man/2/nanosleep
/// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
///
/// # Examples
///
/// ```no_run
/// use std::{thread, time};
///
/// let ten_millis = time::Duration::from_millis(10);
/// let now = time::Instant::now();
///
/// thread::sleep(ten_millis);
///
/// assert!(now.elapsed() >= ten_millis);
/// ```
#[stable(feature = "thread_sleep", since = "1.4.0")]
pub fn sleep(dur: Duration) {
    imp::Thread::sleep(dur)
}
872
/// Puts the current thread to sleep until the specified deadline has passed.
///
/// The thread may still be asleep after the deadline specified due to
/// scheduling specifics or platform-dependent functionality. It will never
/// wake before the deadline.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// This function uses [`sleep`] internally; see its platform-specific behavior.
///
/// # Examples
///
/// A simple game loop that limits the game to 60 frames per second.
///
/// ```no_run
/// #![feature(thread_sleep_until)]
/// # use std::time::{Duration, Instant};
/// # use std::thread;
/// #
/// # fn update() {}
/// # fn render() {}
/// #
/// let max_fps = 60.0;
/// let frame_time = Duration::from_secs_f32(1.0 / max_fps);
/// let mut next_frame = Instant::now();
/// loop {
///     thread::sleep_until(next_frame);
///     next_frame += frame_time;
///     update();
///     render();
/// }
/// ```
///
/// A slow API that we must not call too fast, and which takes a few
/// tries before succeeding. By using `sleep_until`, the time the
/// API call takes does not influence when we retry or when we give up.
///
/// ```no_run
/// #![feature(thread_sleep_until)]
/// # use std::time::{Duration, Instant};
/// # use std::thread;
/// #
/// # enum Status {
/// #     Ready(usize),
/// #     Waiting,
/// # }
/// # fn slow_web_api_call() -> Status { Status::Ready(42) }
/// #
/// # const MAX_DURATION: Duration = Duration::from_secs(10);
/// #
/// # fn try_api_call() -> Result<usize, ()> {
/// let deadline = Instant::now() + MAX_DURATION;
/// let delay = Duration::from_millis(250);
/// let mut next_attempt = Instant::now();
/// loop {
///     if Instant::now() > deadline {
///         break Err(());
///     }
///     if let Status::Ready(data) = slow_web_api_call() {
///         break Ok(data);
///     }
///
///     next_attempt = deadline.min(next_attempt + delay);
///     thread::sleep_until(next_attempt);
/// }
/// # }
/// # let _data = try_api_call();
/// ```
#[unstable(feature = "thread_sleep_until", issue = "113752")]
pub fn sleep_until(deadline: Instant) {
    let now = Instant::now();

    if let Some(delay) = deadline.checked_duration_since(now) {
        sleep(delay);
    }
}

/// Used to ensure that `park` and `park_timeout` do not unwind, as that can
/// cause undefined behavior if not handled correctly (see #102398 for context).
struct PanicGuard;

impl Drop for PanicGuard {
    fn drop(&mut self) {
        rtabort!("an irrecoverable error occurred while synchronizing threads")
    }
}

/// Blocks unless or until the current thread's token is made available.
///
/// A call to `park` does not guarantee that the thread will remain parked
/// forever, and callers should be prepared for this possibility. However,
/// it is guaranteed that this function will not panic (it may abort the
/// process if the implementation encounters some rare errors).
///
/// # `park` and `unpark`
///
/// Every thread is equipped with some basic low-level blocking support, via the
/// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`]
/// method. [`park`] blocks the current thread, which can then be resumed from
/// another thread by calling the [`unpark`] method on the blocked thread's
/// handle.
///
/// Conceptually, each [`Thread`] handle has an associated token, which is
/// initially not present:
///
/// * The [`thread::park`][`park`] function blocks the current thread unless or
///   until the token is available for its thread handle, at which point it
///   atomically consumes the token. It may also return *spuriously*, without
///   consuming the token. [`thread::park_timeout`] does the same, but allows
///   specifying a maximum time to block the thread for.
///
/// * The [`unpark`] method on a [`Thread`] atomically makes the token available
///   if it wasn't already. Because the token is initially absent, [`unpark`]
///   followed by [`park`] will result in the second call returning immediately.
///
/// The API is typically used by acquiring a handle to the current thread,
/// placing that handle in a shared data structure so that other threads can
/// find it, and then `park`ing in a loop. When some desired condition is met,
/// another thread calls [`unpark`] on the handle.
///
/// The motivation for this design is twofold:
///
/// * It avoids the need to allocate mutexes and condvars when building new
///   synchronization primitives; the threads already provide basic
///   blocking/signaling.
///
/// * It can be implemented very efficiently on many platforms.
///
/// # Memory Ordering
///
/// Calls to `park` _synchronize-with_ calls to `unpark`, meaning that memory
/// operations performed before a call to `unpark` are made visible to the thread that
/// consumes the token and returns from `park`. Note that all `park` and `unpark`
/// operations for a given thread form a total order and `park` synchronizes-with
/// _all_ prior `unpark` operations.
///
/// In atomic ordering terms, `unpark` performs a `Release` operation and `park`
/// performs the corresponding `Acquire` operation. Calls to `unpark` for the same
/// thread form a [release sequence].
///
/// Note that being unblocked does not imply a call was made to `unpark`, because
/// wakeups can also be spurious. For example, a valid, but inefficient,
/// implementation could have `park` and `unpark` return immediately without doing anything,
/// making *all* wakeups spurious.
///
/// # Examples
///
/// ```
/// use std::thread;
/// use std::sync::{Arc, atomic::{Ordering, AtomicBool}};
/// use std::time::Duration;
///
/// let flag = Arc::new(AtomicBool::new(false));
/// let flag2 = Arc::clone(&flag);
///
/// let parked_thread = thread::spawn(move || {
///     // We want to wait until the flag is set. We *could* just spin, but using
///     // park/unpark is more efficient.
///     while !flag2.load(Ordering::Relaxed) {
///         println!("Parking thread");
///         thread::park();
///         // We *could* get here spuriously, i.e., way before the 10ms below are over!
///         // But that is no problem, we are in a loop until the flag is set anyway.
///         println!("Thread unparked");
///     }
///     println!("Flag received");
/// });
///
/// // Let some time pass for the thread to be spawned.
/// thread::sleep(Duration::from_millis(10));
///
/// // Set the flag, and let the thread wake up.
/// // There is no race condition here: if `unpark`
/// // happens first, `park` will return immediately.
/// // Hence there is no risk of a deadlock.
/// flag.store(true, Ordering::Relaxed);
/// println!("Unpark the thread");
/// parked_thread.thread().unpark();
///
/// parked_thread.join().unwrap();
/// ```
///
/// [`unpark`]: Thread::unpark
/// [`thread::park_timeout`]: park_timeout
/// [release sequence]: https://en.cppreference.com/w/cpp/atomic/memory_order#Release_sequence
#[stable(feature = "rust1", since = "1.0.0")]
pub fn park() {
    let guard = PanicGuard;
    // SAFETY: `park` is called on the parker owned by this thread.
    unsafe {
        current().inner.as_ref().parker().park();
    }
    // No panic occurred, do not abort.
    forget(guard);
}

/// Use [`park_timeout`].
///
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `ms` long.
///
/// See the [park documentation][`park`] for more detail.
#[stable(feature = "rust1", since = "1.0.0")]
#[deprecated(since = "1.6.0", note = "replaced by `std::thread::park_timeout`")]
pub fn park_timeout_ms(ms: u32) {
    park_timeout(Duration::from_millis(ms as u64))
}

/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`][park] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that might not cause the maximum
/// amount of time waited to be precisely `dur` long.
///
/// See the [park documentation][park] for more details.
///
/// # Platform-specific behavior
///
/// Platforms which do not support nanosecond precision for sleeping will have
/// `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// # Examples
///
/// Waiting for the complete expiration of the timeout:
///
/// ```rust,no_run
/// use std::thread::park_timeout;
/// use std::time::{Instant, Duration};
///
/// let timeout = Duration::from_secs(2);
/// let beginning_park = Instant::now();
///
/// let mut timeout_remaining = timeout;
/// loop {
///     park_timeout(timeout_remaining);
///     let elapsed = beginning_park.elapsed();
///     if elapsed >= timeout {
///         break;
///     }
///     println!("restarting park_timeout after {elapsed:?}");
///     timeout_remaining = timeout - elapsed;
/// }
/// ```
#[stable(feature = "park_timeout", since = "1.4.0")]
pub fn park_timeout(dur: Duration) {
    let guard = PanicGuard;
    // SAFETY: park_timeout is called on the parker owned by this thread.
    unsafe {
        current().inner.as_ref().parker().park_timeout(dur);
    }
    // No panic occurred, do not abort.
    forget(guard);
}

////////////////////////////////////////////////////////////////////////////////
// ThreadId
////////////////////////////////////////////////////////////////////////////////

/// A unique identifier for a running thread.
///
/// A `ThreadId` is an opaque object that uniquely identifies each thread
/// created during the lifetime of a process. `ThreadId`s are guaranteed not to
/// be reused, even when a thread terminates. `ThreadId`s are under the control
/// of Rust's standard library and there may not be any relationship between
/// `ThreadId` and the underlying platform's notion of a thread identifier --
/// the two concepts cannot, therefore, be used interchangeably. A `ThreadId`
/// can be retrieved from the [`id`] method on a [`Thread`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let other_thread = thread::spawn(|| {
///     thread::current().id()
/// });
///
/// let other_thread_id = other_thread.join().unwrap();
/// assert!(thread::current().id() != other_thread_id);
/// ```
///
/// [`id`]: Thread::id
#[stable(feature = "thread_id", since = "1.19.0")]
#[derive(Eq, PartialEq, Clone, Copy, Hash, Debug)]
pub struct ThreadId(NonZeroU64);

impl ThreadId {
    // Generate a new unique thread ID.
    fn new() -> ThreadId {
        #[cold]
        fn exhausted() -> ! {
            panic!("failed to generate unique thread ID: bitspace exhausted")
        }

        cfg_if::cfg_if! {
            if #[cfg(target_has_atomic = "64")] {
                use crate::sync::atomic::{AtomicU64, Ordering::Relaxed};

                static COUNTER: AtomicU64 = AtomicU64::new(0);

                let mut last = COUNTER.load(Relaxed);
                loop {
                    let Some(id) = last.checked_add(1) else {
                        exhausted();
                    };

                    match COUNTER.compare_exchange_weak(last, id, Relaxed, Relaxed) {
                        Ok(_) => return ThreadId(NonZeroU64::new(id).unwrap()),
                        Err(id) => last = id,
                    }
                }
            } else {
                use crate::sync::{Mutex, PoisonError};

                static COUNTER: Mutex<u64> = Mutex::new(0);

                let mut counter = COUNTER.lock().unwrap_or_else(PoisonError::into_inner);
                let Some(id) = counter.checked_add(1) else {
                    // In case the panic handler ends up calling `ThreadId::new()`,
                    // avoid reentrant lock acquire.
                    drop(counter);
                    exhausted();
                };

                *counter = id;
                drop(counter);
                ThreadId(NonZeroU64::new(id).unwrap())
            }
        }
    }

    /// This returns a numeric identifier for the thread identified by this
    /// `ThreadId`.
    ///
    /// As noted in the documentation for the type itself, it is essentially an
    /// opaque ID, but is guaranteed to be unique for each thread. The returned
    /// value is entirely opaque -- only equality testing is stable. Note that
    /// it is not guaranteed which values new threads will return, and this may
    /// change across Rust versions.
    #[must_use]
    #[unstable(feature = "thread_id_value", issue = "67939")]
    pub fn as_u64(&self) -> NonZeroU64 {
        self.0
    }
}

////////////////////////////////////////////////////////////////////////////////
// Thread
////////////////////////////////////////////////////////////////////////////////

/// The internal representation of a `Thread` handle.
struct Inner {
    name: Option<CString>, // Guaranteed to be UTF-8
    id: ThreadId,
    parker: Parker,
}

impl Inner {
    fn parker(self: Pin<&Self>) -> Pin<&Parker> {
        unsafe { Pin::map_unchecked(self, |inner| &inner.parker) }
    }
}

#[derive(Clone)]
#[stable(feature = "rust1", since = "1.0.0")]
/// A handle to a thread.
///
/// Threads are represented via the `Thread` type, which you can get in one of
/// two ways:
///
/// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
///   function, and calling [`thread`][`JoinHandle::thread`] on the
///   [`JoinHandle`].
/// * By requesting the current thread, using the [`thread::current`] function.
///
/// The [`thread::current`] function is available even for threads not spawned
/// by the APIs of this module.
///
/// There is usually no need to create a `Thread` struct yourself; one
/// should instead use a function like `spawn` to create new threads. See the
/// docs of [`Builder`] and [`spawn`] for more details.
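///
/// # Examples
///
/// A minimal sketch of obtaining a `Thread` handle both ways:
///
/// ```
/// use std::thread;
///
/// // Handle to the current thread.
/// let current: thread::Thread = thread::current();
///
/// // Handle to a spawned thread, via its `JoinHandle`.
/// let join_handle = thread::spawn(|| {});
/// let spawned: &thread::Thread = join_handle.thread();
///
/// // Every thread has a distinct `ThreadId`.
/// assert_ne!(current.id(), spawned.id());
/// # join_handle.join().unwrap();
/// ```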
///
/// [`thread::current`]: current
pub struct Thread {
    inner: Pin<Arc<Inner>>,
}

impl Thread {
    // Used only internally to construct a thread object without spawning.
    // Panics if the name contains nuls.
    pub(crate) fn new(name: Option<CString>) -> Thread {
        // We have to use `unsafe` here to construct the `Parker` in-place,
        // which is required for the UNIX implementation.
        //
        // SAFETY: We pin the Arc immediately after creation, so its address never
        // changes.
        let inner = unsafe {
            let mut arc = Arc::<Inner>::new_uninit();
            let ptr = Arc::get_mut_unchecked(&mut arc).as_mut_ptr();
            addr_of_mut!((*ptr).name).write(name);
            addr_of_mut!((*ptr).id).write(ThreadId::new());
            Parker::new_in_place(addr_of_mut!((*ptr).parker));
            Pin::new_unchecked(arc.assume_init())
        };

        Thread { inner }
    }

    /// Atomically makes the handle's token available if it is not already.
    ///
    /// Every thread is equipped with some basic low-level blocking support, via
    /// the [`park`][park] function and the `unpark()` method. These can be
    /// used as a more CPU-efficient implementation of a spinlock.
    ///
    /// See the [park documentation][park] for more details.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let parked_thread = thread::Builder::new()
    ///     .spawn(|| {
    ///         println!("Parking thread");
    ///         thread::park();
    ///         println!("Thread unparked");
    ///     })
    ///     .unwrap();
    ///
    /// // Let some time pass for the thread to be spawned.
    /// thread::sleep(Duration::from_millis(10));
    ///
    /// println!("Unpark the thread");
    /// parked_thread.thread().unpark();
    ///
    /// parked_thread.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[inline]
    pub fn unpark(&self) {
        self.inner.as_ref().parker().unpark();
    }

    /// Gets the thread's unique identifier.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let other_thread = thread::spawn(|| {
    ///     thread::current().id()
    /// });
    ///
    /// let other_thread_id = other_thread.join().unwrap();
    /// assert!(thread::current().id() != other_thread_id);
    /// ```
    #[stable(feature = "thread_id", since = "1.19.0")]
    #[must_use]
    pub fn id(&self) -> ThreadId {
        self.inner.id
    }

    /// Gets the thread's name.
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// Threads by default have no name specified:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     assert!(thread::current().name().is_none());
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// Thread with a specified name:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    #[must_use]
    pub fn name(&self) -> Option<&str> {
        self.cname().map(|s| unsafe { str::from_utf8_unchecked(s.to_bytes()) })
    }

    fn cname(&self) -> Option<&CStr> {
        self.inner.name.as_deref()
    }
}

#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Debug for Thread {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Thread")
            .field("id", &self.id())
            .field("name", &self.name())
            .finish_non_exhaustive()
    }
}

////////////////////////////////////////////////////////////////////////////////
// JoinHandle
////////////////////////////////////////////////////////////////////////////////

/// A specialized [`Result`] type for threads.
///
/// Indicates the manner in which a thread exited.
///
/// The value contained in the `Result::Err` variant
/// is the value the thread panicked with;
/// that is, the argument the `panic!` macro was called with.
/// Unlike with normal errors, this value doesn't implement
/// the [`Error`](crate::error::Error) trait.
///
/// Thus, a sensible way to handle a thread panic is to either:
///
/// 1. propagate the panic with [`std::panic::resume_unwind`]
/// 2. or in case the thread is intended to be a subsystem boundary
///    that is supposed to isolate system-level failures,
///    match on the `Err` variant and handle the panic in an appropriate way
///
/// A thread that completes without panicking is considered to exit successfully.
///
/// # Examples
///
/// Matching on the result of a joined thread:
///
/// ```no_run
/// use std::{fs, thread, panic};
///
/// fn copy_in_thread() -> thread::Result<()> {
///     thread::spawn(|| {
///         fs::copy("foo.txt", "bar.txt").unwrap();
///     }).join()
/// }
///
/// fn main() {
///     match copy_in_thread() {
///         Ok(_) => println!("copy succeeded"),
///         Err(e) => panic::resume_unwind(e),
///     }
/// }
/// ```
///
/// [`Result`]: crate::result::Result
/// [`std::panic::resume_unwind`]: crate::panic::resume_unwind
#[stable(feature = "rust1", since = "1.0.0")]
pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>;

// This packet is used to communicate the return value between the spawned
// thread and the rest of the program. It is shared through an `Arc` and
// there's no need for a mutex here because synchronization happens with `join()`
// (the caller will never read this packet until the thread has exited).
//
// An Arc to the packet is stored into a `JoinInner` which in turn is placed
// in `JoinHandle`.
struct Packet<'scope, T> {
    scope: Option<Arc<scoped::ScopeData>>,
    result: UnsafeCell<Option<Result<T>>>,
    _marker: PhantomData<Option<&'scope scoped::ScopeData>>,
}

// Due to the usage of `UnsafeCell` we need to manually implement Sync.
// The type `T` should already always be Send (otherwise the thread could not
// have been created) and the Packet is Sync because all access to the
// `UnsafeCell` is synchronized (by the `join()` boundary), and `ScopeData` is Sync.
unsafe impl<'scope, T: Sync> Sync for Packet<'scope, T> {}

impl<'scope, T> Drop for Packet<'scope, T> {
    fn drop(&mut self) {
        // If this packet was for a thread that ran in a scope, the thread
        // panicked, and nobody consumed the panic payload, we make sure
        // the scope function will panic.
        let unhandled_panic = matches!(self.result.get_mut(), Some(Err(_)));
        // Drop the result without causing unwinding.
        // This is only relevant for threads that aren't join()ed, as
        // join() will take the `result` and set it to None, such that
        // there is nothing left to drop here.
        // If this panics, we should handle that, because we're outside the
        // outermost `catch_unwind` of our thread.
        // We just abort in that case, since there's nothing else we can do.
        // (And even if we tried to handle it somehow, we'd also need to handle
        // the case where the panic payload we get out of it also panics on
        // drop, and so on. See issue #86027.)
        if let Err(_) = panic::catch_unwind(panic::AssertUnwindSafe(|| {
            *self.result.get_mut() = None;
        })) {
            rtabort!("thread result panicked on drop");
        }
        // Book-keeping so the scope knows when it's done.
        if let Some(scope) = &self.scope {
            // Now that there will be no more user code running on this thread
            // that can use 'scope, mark the thread as 'finished'.
            // It's important we only do this after the `result` has been dropped,
            // since dropping it might still use things it borrowed from 'scope.
            scope.decrement_num_running_threads(unhandled_panic);
        }
    }
}

/// Inner representation for `JoinHandle`.
struct JoinInner<'scope, T> {
    native: imp::Thread,
    thread: Thread,
    packet: Arc<Packet<'scope, T>>,
}

impl<'scope, T> JoinInner<'scope, T> {
    fn join(mut self) -> Result<T> {
        self.native.join();
        Arc::get_mut(&mut self.packet).unwrap().result.get_mut().take().unwrap()
    }
}

/// An owned permission to join on a thread (block on its termination).
///
/// A `JoinHandle` *detaches* the associated thread when it is dropped, which
/// means that there is no longer any handle to the thread and no way to `join`
/// on it.
///
/// Due to platform restrictions, it is not possible to [`Clone`] this
/// handle: the ability to join a thread is a uniquely-owned permission.
///
/// This `struct` is created by the [`thread::spawn`] function and the
/// [`thread::Builder::spawn`] method.
///
/// # Examples
///
/// Creation from [`thread::spawn`]:
///
/// ```
/// use std::thread;
///
/// let join_handle: thread::JoinHandle<_> = thread::spawn(|| {
///     // some work here
/// });
/// ```
///
/// Creation from [`thread::Builder::spawn`]:
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
///     // some work here
/// }).unwrap();
/// ```
///
/// A thread being detached and outliving the thread that spawned it:
///
/// ```no_run
/// use std::thread;
/// use std::time::Duration;
///
/// let original_thread = thread::spawn(|| {
///     let _detached_thread = thread::spawn(|| {
///         // Here we sleep to make sure that the first thread returns before.
///         thread::sleep(Duration::from_millis(10));
///         // This will be called, even though the JoinHandle is dropped.
///         println!("♫ Still alive ♫");
///     });
/// });
///
/// original_thread.join().expect("The thread being joined has panicked");
/// println!("Original thread is joined.");
///
/// // We make sure that the new thread has time to run, before the main
/// // thread returns.
///
/// thread::sleep(Duration::from_millis(1000));
/// ```
///
/// [`thread::Builder::spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(target_os = "teeos", must_use)]
pub struct JoinHandle<T>(JoinInner<'static, T>);

#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Send for JoinHandle<T> {}
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Sync for JoinHandle<T> {}

impl<T> JoinHandle<T> {
    /// Extracts a handle to the underlying thread.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    ///
    /// let thread = join_handle.thread();
    /// println!("thread id: {:?}", thread.id());
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[must_use]
    pub fn thread(&self) -> &Thread {
        &self.0.thread
    }

    /// Waits for the associated thread to finish.
    ///
    /// This function will return immediately if the associated thread has already finished.
    ///
    /// In terms of [atomic memory orderings], the completion of the associated
    /// thread synchronizes with this function returning. In other words, all
    /// operations performed by that thread [happen
    /// before](https://doc.rust-lang.org/nomicon/atomics.html#data-accesses) all
    /// operations that happen after `join` returns.
    ///
    /// If the associated thread panics, [`Err`] is returned with the parameter given
    /// to [`panic!`].
    ///
    /// [`Err`]: crate::result::Result::Err
    /// [atomic memory orderings]: crate::sync::atomic
    ///
    /// # Panics
    ///
    /// This function may panic on some platforms if a thread attempts to join
    /// itself or otherwise may create a deadlock with joining threads.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    /// join_handle.join().expect("Couldn't join on the associated thread");
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn join(self) -> Result<T> {
        self.0.join()
    }

    /// Checks if the associated thread has finished running its main function.
    ///
    /// `is_finished` supports implementing a non-blocking join operation, by checking
    /// `is_finished`, and calling `join` if it returns `true`. This function does not block. To
    /// block while waiting on the thread to finish, use [`join`][Self::join].
    ///
    /// This might return `true` for a brief moment after the thread's main
    /// function has returned, but before the thread itself has stopped running.
    /// However, once this returns `true`, [`join`][Self::join] can be expected
    /// to return quickly, without blocking for any significant amount of time.
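    ///
    /// # Examples
    ///
    /// A minimal sketch of a non-blocking wait loop (the polling interval is
    /// arbitrary):
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let handle = thread::spawn(|| {
    ///     thread::sleep(Duration::from_millis(10));
    /// });
    ///
    /// // Poll without blocking; other work could be done between checks.
    /// while !handle.is_finished() {
    ///     thread::sleep(Duration::from_millis(1));
    /// }
    ///
    /// // Now `join` is expected to return quickly.
    /// handle.join().unwrap();
    /// ```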
    #[stable(feature = "thread_is_running", since = "1.61.0")]
    pub fn is_finished(&self) -> bool {
        Arc::strong_count(&self.0.packet) == 1
    }
}

impl<T> AsInner<imp::Thread> for JoinHandle<T> {
    fn as_inner(&self) -> &imp::Thread {
        &self.0.native
    }
}

impl<T> IntoInner<imp::Thread> for JoinHandle<T> {
    fn into_inner(self) -> imp::Thread {
        self.0.native
    }
}

#[stable(feature = "std_debug", since = "1.16.0")]
impl<T> fmt::Debug for JoinHandle<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("JoinHandle").finish_non_exhaustive()
    }
}

fn _assert_sync_and_send() {
    fn _assert_both<T: Send + Sync>() {}
    _assert_both::<JoinHandle<()>>();
    _assert_both::<Thread>();
}

1694/// Returns an estimate of the default amount of parallelism a program should use.
1695///
1696/// Parallelism is a resource. A given machine provides a certain capacity for
1697/// parallelism, i.e., a bound on the number of computations it can perform
1698/// simultaneously. This number often corresponds to the amount of CPUs a
1699/// computer has, but it may diverge in various cases.
1700///
1701/// Host environments such as VMs or container orchestrators may want to
1702/// restrict the amount of parallelism made available to programs in them. This
1703/// is often done to limit the potential impact of (unintentionally)
1704/// resource-intensive programs on other programs running on the same machine.
1705///
1706/// # Limitations
1707///
1708/// The purpose of this API is to provide an easy and portable way to query
1709/// the default amount of parallelism the program should use. Among other things it
1710/// does not expose information on NUMA regions, does not account for
1711/// differences in (co)processor capabilities or current system load,
1712/// and will not modify the program's global state in order to more accurately
1713/// query the amount of available parallelism.
1714///
1715/// Where both fixed steady-state and burst limits are available the steady-state
1716/// capacity will be used to ensure more predictable latencies.
1717///
1718/// Resource limits can be changed during the runtime of a program, therefore the value is
1719/// not cached and instead recomputed every time this function is called. It should not be
1720/// called from hot code.
///
/// The value returned by this function should be considered a simplified
/// approximation of the actual amount of parallelism available at any given
/// time. To get a more detailed or precise overview of the amount of
/// parallelism available to the program, you may wish to use
/// platform-specific APIs as well. The following platform limitations currently
/// apply to `available_parallelism`:
///
/// On Windows:
/// - It may undercount the amount of parallelism available on systems with more
///   than 64 logical CPUs. However, programs typically need specific support to
///   take advantage of more than 64 logical CPUs, and in the absence of such
///   support, the number returned by this function accurately reflects the
///   number of logical CPUs the program can use by default.
/// - It may overcount the amount of parallelism available on systems limited by
///   process-wide affinity masks, or job object limitations.
///
/// On Linux:
/// - It may overcount the amount of parallelism available when limited by a
///   process-wide affinity mask or cgroup quotas and `sched_getaffinity()` or cgroup fs can't be
///   queried, e.g. due to sandboxing.
/// - It may undercount the amount of parallelism if the current thread's affinity mask
///   does not reflect the process' cpuset, e.g. due to pinned threads.
/// - If the process is in a cgroup v1 cpu controller, this may need to
///   scan mountpoints to find the corresponding cgroup v1 controller,
///   which may take time on systems with large numbers of mountpoints.
///   (This does not apply to cgroup v2, or to processes not in a
///   cgroup.)
///
/// On all targets:
/// - It may overcount the amount of parallelism available when running in a VM
///   with CPU usage limits (e.g. an overcommitted host).
///
/// # Errors
///
/// This function will return an error in the following cases, among others:
///
/// - If the amount of parallelism is not known for the target platform.
/// - If the program lacks permission to query the amount of parallelism made
///   available to it.
///
/// # Examples
///
/// ```
/// # #![allow(dead_code)]
/// use std::{io, thread};
///
/// fn main() -> io::Result<()> {
///     let count = thread::available_parallelism()?.get();
///     assert!(count >= 1_usize);
///     Ok(())
/// }
/// ```
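///
/// The returned value can be used, for example, to decide how many worker
/// threads to spawn. A sketch (real applications often prefer a dedicated
/// thread-pool crate):
///
/// ```
/// use std::thread;
///
/// let count = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
/// let handles: Vec<_> = (0..count)
///     .map(|i| {
///         thread::spawn(move || {
///             // Each worker would process its share of the work here.
///             i
///         })
///     })
///     .collect();
/// // Wait for all workers to finish.
/// for handle in handles {
///     handle.join().unwrap();
/// }
/// ```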
#[doc(alias = "available_concurrency")] // Alias for a previous name we gave this API on unstable.
#[doc(alias = "hardware_concurrency")] // Alias for C++ `std::thread::hardware_concurrency`.
#[doc(alias = "num_cpus")] // Alias for a popular ecosystem crate which provides similar functionality.
#[stable(feature = "available_parallelism", since = "1.59.0")]
pub fn available_parallelism() -> io::Result<NonZeroUsize> {
    imp::available_parallelism()
}
