1//! ## Per-Layer Filtering
2//!
3//! Per-layer filters permit individual `Layer`s to have their own filter
4//! configurations without interfering with other `Layer`s.
5//!
6//! This module is not public; the public APIs defined in this module are
7//! re-exported in the top-level `filter` module. Therefore, this documentation
8//! primarily concerns the internal implementation details. For the user-facing
9//! public API documentation, see the individual public types in this module, as
10//! well as the, see the `Layer` trait documentation's [per-layer filtering
11//! section]][1].
12//!
13//! ## How does per-layer filtering work?
14//!
15//! As described in the API documentation, the [`Filter`] trait defines a
16//! filtering strategy for a per-layer filter. We expect there will be a variety
17//! of implementations of [`Filter`], both in `tracing-subscriber` and in user
18//! code.
19//!
20//! To actually *use* a [`Filter`] implementation, it is combined with a
21//! [`Layer`] by the [`Filtered`] struct defined in this module. [`Filtered`]
22//! implements [`Layer`] by calling into the wrapped [`Layer`], or not, based on
23//! the filtering strategy. While there will be a variety of types that implement
24//! [`Filter`], all actual *uses* of per-layer filtering will occur through the
25//! [`Filtered`] struct. Therefore, most of the implementation details live
26//! there.
27//!
28//! [1]: crate::layer#per-layer-filtering
29//! [`Filter`]: crate::layer::Filter
30use crate::{
31 filter::LevelFilter,
32 layer::{self, Context, Layer},
33 registry,
34};
35use std::{
36 any::TypeId,
37 cell::{Cell, RefCell},
38 fmt,
39 marker::PhantomData,
40 ops::Deref,
41 sync::Arc,
42 thread_local,
43};
44use tracing_core::{
45 span,
46 subscriber::{Interest, Subscriber},
47 Dispatch, Event, Metadata,
48};
49pub mod combinator;
50
51/// A [`Layer`] that wraps an inner [`Layer`] and adds a [`Filter`] which
52/// controls what spans and events are enabled for that layer.
53///
54/// This is returned by the [`Layer::with_filter`] method. See the
55/// [documentation on per-layer filtering][plf] for details.
56///
57/// [`Filter`]: crate::layer::Filter
58/// [plf]: crate::layer#per-layer-filtering
59#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
60#[derive(Clone)]
61pub struct Filtered<L, F, S> {
62 filter: F,
63 layer: L,
64 id: MagicPlfDowncastMarker,
65 _s: PhantomData<fn(S)>,
66}
67
68/// Uniquely identifies an individual [`Filter`] instance in the context of
69/// a [`Subscriber`].
70///
71/// When adding a [`Filtered`] [`Layer`] to a [`Subscriber`], the [`Subscriber`]
72/// generates a `FilterId` for that [`Filtered`] layer. The [`Filtered`] layer
73/// will then use the generated ID to query whether a particular span was
74/// previously enabled by that layer's [`Filter`].
75///
76/// **Note**: Currently, the [`Registry`] type provided by this crate is the
77/// **only** [`Subscriber`] implementation capable of participating in per-layer
78/// filtering. Therefore, the `FilterId` type cannot currently be constructed by
79/// code outside of `tracing-subscriber`. In the future, new APIs will be added to `tracing-subscriber` to
80/// allow non-Registry [`Subscriber`]s to also participate in per-layer
81/// filtering. When those APIs are added, subscribers will be responsible
82/// for generating and assigning `FilterId`s.
83///
84/// [`Filter`]: crate::layer::Filter
85/// [`Subscriber`]: tracing_core::Subscriber
86/// [`Layer`]: crate::layer::Layer
87/// [`Registry`]: crate::registry::Registry
88#[cfg(feature = "registry")]
89#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
90#[derive(Copy, Clone)]
91pub struct FilterId(u64);
92
93/// A bitmap tracking which [`FilterId`]s have enabled a given span or
94/// event.
95///
96/// This is currently a private type that's used exclusively by the
97/// [`Registry`]. However, in the future, this may become a public API, in order
98/// to allow user subscribers to host [`Filter`]s.
99///
100/// [`Registry`]: crate::Registry
101/// [`Filter`]: crate::layer::Filter
102#[derive(Default, Copy, Clone, Eq, PartialEq)]
103pub(crate) struct FilterMap {
104 bits: u64,
105}
106
107/// The current state of `enabled` calls to per-layer filters on this
108/// thread.
109///
110/// When `Filtered::enabled` is called, the filter will set the bit
111/// corresponding to its ID if the filter will disable the event/span being
112/// filtered. When the event or span is recorded, the per-layer filter will
113/// check its bit to determine if it disabled that event or span, and skip
114/// forwarding the event or span to the inner layer if the bit is set. Once
115/// a span or event has been skipped by a per-layer filter, it unsets its
116/// bit, so that the `FilterMap` has been cleared for the next set of
117/// `enabled` calls.
118///
119/// FilterState is also read by the `Registry`, for two reasons:
120///
121/// 1. When filtering a span, the Registry must store the `FilterMap`
122/// generated by `Filtered::enabled` calls for that span as part of the
123/// span's per-span data. This allows `Filtered` layers to determine
124/// whether they had previously disabled a given span, and avoid showing it
125/// to the wrapped layer if it was disabled.
126///
127/// This allows `Filtered` layers to also filter out the spans they
128/// disable from span traversals (such as iterating over parents, etc).
129/// 2. If all the bits are set, then every per-layer filter has decided it
130/// doesn't want to enable that span or event. In that case, the
131/// `Registry`'s `enabled` method will return `false`, so that
132/// recording a span or event can be skipped entirely.
133#[derive(Debug)]
134pub(crate) struct FilterState {
135 enabled: Cell<FilterMap>,
136 // TODO(eliza): `Interest`s should _probably_ be `Copy`. The only reason
137 // they're not is our Obsessive Commitment to Forwards-Compatibility. If
138 // this changes in tracing-core`, we can make this a `Cell` rather than
139 // `RefCell`...
140 interest: RefCell<Option<Interest>>,
141
142 #[cfg(debug_assertions)]
143 counters: DebugCounters,
144}
145
146/// Extra counters added to `FilterState` used only to make debug assertions.
147#[cfg(debug_assertions)]
148#[derive(Debug, Default)]
149struct DebugCounters {
150 /// How many per-layer filters have participated in the current `enabled`
151 /// call?
152 in_filter_pass: Cell<usize>,
153
154 /// How many per-layer filters have participated in the current `register_callsite`
155 /// call?
156 in_interest_pass: Cell<usize>,
157}
158
159thread_local! {
160 pub(crate) static FILTERING: FilterState = FilterState::new();
161}
162
163/// Extension trait adding [combinators] for combining [`Filter`].
164///
165/// [combinators]: crate::filter::combinator
166/// [`Filter`]: crate::layer::Filter
167pub trait FilterExt<S>: layer::Filter<S> {
168 /// Combines this [`Filter`] with another [`Filter`] s so that spans and
169 /// events are enabled if and only if *both* filters return `true`.
170 ///
171 /// # Examples
172 ///
173 /// Enabling spans or events if they have both a particular target *and* are
174 /// above a certain level:
175 ///
176 /// ```
177 /// use tracing_subscriber::{
178 /// filter::{filter_fn, LevelFilter, FilterExt},
179 /// prelude::*,
180 /// };
181 ///
182 /// // Enables spans and events with targets starting with `interesting_target`:
183 /// let target_filter = filter_fn(|meta| {
184 /// meta.target().starts_with("interesting_target")
185 /// });
186 ///
187 /// // Enables spans and events with levels `INFO` and below:
188 /// let level_filter = LevelFilter::INFO;
189 ///
190 /// // Combine the two filters together, returning a filter that only enables
191 /// // spans and events that *both* filters will enable:
192 /// let filter = target_filter.and(level_filter);
193 ///
194 /// tracing_subscriber::registry()
195 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
196 /// .init();
197 ///
198 /// // This event will *not* be enabled:
199 /// tracing::info!("an event with an uninteresting target");
200 ///
201 /// // This event *will* be enabled:
202 /// tracing::info!(target: "interesting_target", "a very interesting event");
203 ///
204 /// // This event will *not* be enabled:
205 /// tracing::debug!(target: "interesting_target", "interesting debug event...");
206 /// ```
207 ///
208 /// [`Filter`]: crate::layer::Filter
209 fn and<B>(self, other: B) -> combinator::And<Self, B, S>
210 where
211 Self: Sized,
212 B: layer::Filter<S>,
213 {
214 combinator::And::new(self, other)
215 }
216
217 /// Combines two [`Filter`]s so that spans and events are enabled if *either* filter
218 /// returns `true`.
219 ///
220 /// # Examples
221 ///
222 /// Enabling spans and events at the `INFO` level and above, and all spans
223 /// and events with a particular target:
224 /// ```
225 /// use tracing_subscriber::{
226 /// filter::{filter_fn, LevelFilter, FilterExt},
227 /// prelude::*,
228 /// };
229 ///
230 /// // Enables spans and events with targets starting with `interesting_target`:
231 /// let target_filter = filter_fn(|meta| {
232 /// meta.target().starts_with("interesting_target")
233 /// });
234 ///
235 /// // Enables spans and events with levels `INFO` and below:
236 /// let level_filter = LevelFilter::INFO;
237 ///
238 /// // Combine the two filters together so that a span or event is enabled
239 /// // if it is at INFO or lower, or if it has a target starting with
240 /// // `interesting_target`.
241 /// let filter = level_filter.or(target_filter);
242 ///
243 /// tracing_subscriber::registry()
244 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
245 /// .init();
246 ///
247 /// // This event will *not* be enabled:
248 /// tracing::debug!("an uninteresting event");
249 ///
250 /// // This event *will* be enabled:
251 /// tracing::info!("an uninteresting INFO event");
252 ///
253 /// // This event *will* be enabled:
254 /// tracing::info!(target: "interesting_target", "a very interesting event");
255 ///
256 /// // This event *will* be enabled:
257 /// tracing::debug!(target: "interesting_target", "interesting debug event...");
258 /// ```
259 ///
260 /// Enabling a higher level for a particular target by using `or` in
261 /// conjunction with the [`and`] combinator:
262 ///
263 /// ```
264 /// use tracing_subscriber::{
265 /// filter::{filter_fn, LevelFilter, FilterExt},
266 /// prelude::*,
267 /// };
268 ///
269 /// // This filter will enable spans and events with targets beginning with
270 /// // `my_crate`:
271 /// let my_crate = filter_fn(|meta| {
272 /// meta.target().starts_with("my_crate")
273 /// });
274 ///
275 /// let filter = my_crate
276 /// // Combine the `my_crate` filter with a `LevelFilter` to produce a
277 /// // filter that will enable the `INFO` level and lower for spans and
278 /// // events with `my_crate` targets:
279 /// .and(LevelFilter::INFO)
280 /// // If a span or event *doesn't* have a target beginning with
281 /// // `my_crate`, enable it if it has the `WARN` level or lower:
282 /// .or(LevelFilter::WARN);
283 ///
284 /// tracing_subscriber::registry()
285 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
286 /// .init();
287 /// ```
288 ///
289 /// [`Filter`]: crate::layer::Filter
290 /// [`and`]: FilterExt::and
291 fn or<B>(self, other: B) -> combinator::Or<Self, B, S>
292 where
293 Self: Sized,
294 B: layer::Filter<S>,
295 {
296 combinator::Or::new(self, other)
297 }
298
299 /// Inverts `self`, returning a filter that enables spans and events only if
300 /// `self` would *not* enable them.
301 ///
302 /// This inverts the values returned by the [`enabled`] and [`callsite_enabled`]
303 /// methods on the wrapped filter; it does *not* invert [`event_enabled`], as
304 /// filters which do not implement filtering on event field values will return
305 /// the default `true` even for events that their [`enabled`] method disables.
306 ///
307 /// Consider a normal filter defined as:
308 ///
309 /// ```ignore (pseudo-code)
310 /// // for spans
311 /// match callsite_enabled() {
312 /// ALWAYS => on_span(),
313 /// SOMETIMES => if enabled() { on_span() },
314 /// NEVER => (),
315 /// }
316 /// // for events
317 /// match callsite_enabled() {
318 /// ALWAYS => on_event(),
319 /// SOMETIMES => if enabled() && event_enabled() { on_event() },
320 /// NEVER => (),
321 /// }
322 /// ```
323 ///
324 /// and an inverted filter defined as:
325 ///
326 /// ```ignore (pseudo-code)
327 /// // for spans
328 /// match callsite_enabled() {
329 /// ALWAYS => (),
330 /// SOMETIMES => if !enabled() { on_span() },
331 /// NEVER => on_span(),
332 /// }
333 /// // for events
334 /// match callsite_enabled() {
335 /// ALWAYS => (),
336 /// SOMETIMES => if !enabled() { on_event() },
337 /// NEVER => on_event(),
338 /// }
339 /// ```
340 ///
341 /// A proper inversion would do `!(enabled() && event_enabled())` (or
342 /// `!enabled() || !event_enabled()`), but because of the implicit `&&`
343 /// relation between `enabled` and `event_enabled`, it is difficult to
344 /// short circuit and not call the wrapped `event_enabled`.
345 ///
346 /// A combinator which remembers the result of `enabled` in order to call
347 /// `event_enabled` only when `enabled() == true` is possible, but requires
348 /// additional thread-local mutable state to support a very niche use case.
349 //
350 // Also, it'd mean the wrapped layer's `enabled()` always gets called and
351 // globally applied to events where it doesn't today, since we can't know
352 // what `event_enabled` will say until we have the event to call it with.
353 ///
354 /// [`Filter`]: crate::subscribe::Filter
355 /// [`enabled`]: crate::subscribe::Filter::enabled
356 /// [`event_enabled`]: crate::subscribe::Filter::event_enabled
357 /// [`callsite_enabled`]: crate::subscribe::Filter::callsite_enabled
358 fn not(self) -> combinator::Not<Self, S>
359 where
360 Self: Sized,
361 {
362 combinator::Not::new(self)
363 }
364
365 /// [Boxes] `self`, erasing its concrete type.
366 ///
367 /// This is equivalent to calling [`Box::new`], but in method form, so that
368 /// it can be used when chaining combinator methods.
369 ///
370 /// # Examples
371 ///
372 /// When different combinations of filters are used conditionally, they may
373 /// have different types. For example, the following code won't compile,
374 /// since the `if` and `else` clause produce filters of different types:
375 ///
376 /// ```compile_fail
377 /// use tracing_subscriber::{
378 /// filter::{filter_fn, LevelFilter, FilterExt},
379 /// prelude::*,
380 /// };
381 ///
382 /// let enable_bar_target: bool = // ...
383 /// # false;
384 ///
385 /// let filter = if enable_bar_target {
386 /// filter_fn(|meta| meta.target().starts_with("foo"))
387 /// // If `enable_bar_target` is true, add a `filter_fn` enabling
388 /// // spans and events with the target `bar`:
389 /// .or(filter_fn(|meta| meta.target().starts_with("bar")))
390 /// .and(LevelFilter::INFO)
391 /// } else {
392 /// filter_fn(|meta| meta.target().starts_with("foo"))
393 /// .and(LevelFilter::INFO)
394 /// };
395 ///
396 /// tracing_subscriber::registry()
397 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
398 /// .init();
399 /// ```
400 ///
401 /// By using `boxed`, the types of the two different branches can be erased,
402 /// so the assignment to the `filter` variable is valid (as both branches
403 /// have the type `Box<dyn Filter<S> + Send + Sync + 'static>`). The
404 /// following code *does* compile:
405 ///
406 /// ```
407 /// use tracing_subscriber::{
408 /// filter::{filter_fn, LevelFilter, FilterExt},
409 /// prelude::*,
410 /// };
411 ///
412 /// let enable_bar_target: bool = // ...
413 /// # false;
414 ///
415 /// let filter = if enable_bar_target {
416 /// filter_fn(|meta| meta.target().starts_with("foo"))
417 /// .or(filter_fn(|meta| meta.target().starts_with("bar")))
418 /// .and(LevelFilter::INFO)
419 /// // Boxing the filter erases its type, so both branches now
420 /// // have the same type.
421 /// .boxed()
422 /// } else {
423 /// filter_fn(|meta| meta.target().starts_with("foo"))
424 /// .and(LevelFilter::INFO)
425 /// .boxed()
426 /// };
427 ///
428 /// tracing_subscriber::registry()
429 /// .with(tracing_subscriber::fmt::layer().with_filter(filter))
430 /// .init();
431 /// ```
432 ///
433 /// [Boxes]: std::boxed
434 /// [`Box::new`]: std::boxed::Box::new
435 fn boxed(self) -> Box<dyn layer::Filter<S> + Send + Sync + 'static>
436 where
437 Self: Sized + Send + Sync + 'static,
438 {
439 Box::new(self)
440 }
441}
442
443// === impl Filter ===
444
445#[cfg(feature = "registry")]
446#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
447impl<S> layer::Filter<S> for LevelFilter {
448 fn enabled(&self, meta: &Metadata<'_>, _: &Context<'_, S>) -> bool {
449 meta.level() <= self
450 }
451
452 fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
453 if meta.level() <= self {
454 Interest::always()
455 } else {
456 Interest::never()
457 }
458 }
459
460 fn max_level_hint(&self) -> Option<LevelFilter> {
461 Some(*self)
462 }
463}
464
465macro_rules! filter_impl_body {
466 () => {
467 #[inline]
468 fn enabled(&self, meta: &Metadata<'_>, cx: &Context<'_, S>) -> bool {
469 self.deref().enabled(meta, cx)
470 }
471
472 #[inline]
473 fn callsite_enabled(&self, meta: &'static Metadata<'static>) -> Interest {
474 self.deref().callsite_enabled(meta)
475 }
476
477 #[inline]
478 fn max_level_hint(&self) -> Option<LevelFilter> {
479 self.deref().max_level_hint()
480 }
481 };
482}
483
484#[cfg(feature = "registry")]
485#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
486impl<S> layer::Filter<S> for Arc<dyn layer::Filter<S> + Send + Sync + 'static> {
487 filter_impl_body!();
488}
489
490#[cfg(feature = "registry")]
491#[cfg_attr(docsrs, doc(cfg(feature = "registry")))]
492impl<S> layer::Filter<S> for Box<dyn layer::Filter<S> + Send + Sync + 'static> {
493 filter_impl_body!();
494}
495
496// === impl Filtered ===
497
498impl<L, F, S> Filtered<L, F, S> {
499 /// Wraps the provided [`Layer`] so that it is filtered by the given
500 /// [`Filter`].
501 ///
502 /// This is equivalent to calling the [`Layer::with_filter`] method.
503 ///
504 /// See the [documentation on per-layer filtering][plf] for details.
505 ///
506 /// [`Filter`]: crate::layer::Filter
507 /// [plf]: crate::layer#per-layer-filtering
508 pub fn new(layer: L, filter: F) -> Self {
509 Self {
510 layer,
511 filter,
512 id: MagicPlfDowncastMarker(FilterId::disabled()),
513 _s: PhantomData,
514 }
515 }
516
517 #[inline(always)]
518 fn id(&self) -> FilterId {
519 debug_assert!(
520 !self.id.0.is_disabled(),
521 "a `Filtered` layer was used, but it had no `FilterId`; \
522 was it registered with the subscriber?"
523 );
524 self.id.0
525 }
526
527 fn did_enable(&self, f: impl FnOnce()) {
528 FILTERING.with(|filtering| filtering.did_enable(self.id(), f))
529 }
530
531 /// Borrows the [`Filter`](crate::layer::Filter) used by this layer.
532 pub fn filter(&self) -> &F {
533 &self.filter
534 }
535
536 /// Mutably borrows the [`Filter`](crate::layer::Filter) used by this layer.
537 ///
538 /// When this layer can be mutably borrowed, this may be used to mutate the filter.
539 /// Generally, this will primarily be used with the
540 /// [`reload::Handle::modify`](crate::reload::Handle::modify) method.
541 ///
542 /// # Examples
543 ///
544 /// ```
545 /// # use tracing::info;
546 /// # use tracing_subscriber::{filter,fmt,reload,Registry,prelude::*};
547 /// # fn main() {
548 /// let filtered_layer = fmt::Layer::default().with_filter(filter::LevelFilter::WARN);
549 /// let (filtered_layer, reload_handle) = reload::Layer::new(filtered_layer);
550 /// #
551 /// # // specifying the Registry type is required
552 /// # let _: &reload::Handle<filter::Filtered<fmt::Layer<Registry>,
553 /// # filter::LevelFilter, Registry>,Registry>
554 /// # = &reload_handle;
555 /// #
556 /// info!("This will be ignored");
557 /// reload_handle.modify(|layer| *layer.filter_mut() = filter::LevelFilter::INFO);
558 /// info!("This will be logged");
559 /// # }
560 /// ```
561 pub fn filter_mut(&mut self) -> &mut F {
562 &mut self.filter
563 }
564
565 /// Borrows the inner [`Layer`] wrapped by this `Filtered` layer.
566 pub fn inner(&self) -> &L {
567 &self.layer
568 }
569
570 /// Mutably borrows the inner [`Layer`] wrapped by this `Filtered` layer.
571 ///
572 /// This method is primarily expected to be used with the
573 /// [`reload::Handle::modify`](crate::reload::Handle::modify) method.
574 ///
575 /// # Examples
576 ///
577 /// ```
578 /// # use tracing::info;
579 /// # use tracing_subscriber::{filter,fmt,reload,Registry,prelude::*};
580 /// # fn non_blocking<T: std::io::Write>(writer: T) -> (fn() -> std::io::Stdout) {
581 /// # std::io::stdout
582 /// # }
583 /// # fn main() {
584 /// let filtered_layer = fmt::layer().with_writer(non_blocking(std::io::stderr())).with_filter(filter::LevelFilter::INFO);
585 /// let (filtered_layer, reload_handle) = reload::Layer::new(filtered_layer);
586 /// #
587 /// # // specifying the Registry type is required
588 /// # let _: &reload::Handle<filter::Filtered<fmt::Layer<Registry, _, _, fn() -> std::io::Stdout>,
589 /// # filter::LevelFilter, Registry>, Registry>
590 /// # = &reload_handle;
591 /// #
592 /// info!("This will be logged to stderr");
593 /// reload_handle.modify(|layer| *layer.inner_mut().writer_mut() = non_blocking(std::io::stdout()));
594 /// info!("This will be logged to stdout");
595 /// # }
596 /// ```
597 ///
598 /// [subscriber]: Subscribe
599 pub fn inner_mut(&mut self) -> &mut L {
600 &mut self.layer
601 }
602}
603
604impl<S, L, F> Layer<S> for Filtered<L, F, S>
605where
606 S: Subscriber + for<'span> registry::LookupSpan<'span> + 'static,
607 F: layer::Filter<S> + 'static,
608 L: Layer<S>,
609{
610 fn on_register_dispatch(&self, collector: &Dispatch) {
611 self.layer.on_register_dispatch(collector);
612 }
613
614 fn on_layer(&mut self, subscriber: &mut S) {
615 self.id = MagicPlfDowncastMarker(subscriber.register_filter());
616 self.layer.on_layer(subscriber);
617 }
618
619 // TODO(eliza): can we figure out a nice way to make the `Filtered` layer
620 // not call `is_enabled_for` in hooks that the inner layer doesn't actually
621 // have real implementations of? probably not...
622 //
623 // it would be cool if there was some wild rust reflection way of checking
624 // if a trait impl has the default impl of a trait method or not, but that's
625 // almsot certainly impossible...right?
626
627 fn register_callsite(&self, metadata: &'static Metadata<'static>) -> Interest {
628 let interest = self.filter.callsite_enabled(metadata);
629
630 // If the filter didn't disable the callsite, allow the inner layer to
631 // register it — since `register_callsite` is also used for purposes
632 // such as reserving/caching per-callsite data, we want the inner layer
633 // to be able to perform any other registration steps. However, we'll
634 // ignore its `Interest`.
635 if !interest.is_never() {
636 self.layer.register_callsite(metadata);
637 }
638
639 // Add our `Interest` to the current sum of per-layer filter `Interest`s
640 // for this callsite.
641 FILTERING.with(|filtering| filtering.add_interest(interest));
642
643 // don't short circuit! if the stack consists entirely of `Layer`s with
644 // per-layer filters, the `Registry` will return the actual `Interest`
645 // value that's the sum of all the `register_callsite` calls to those
646 // per-layer filters. if we returned an actual `never` interest here, a
647 // `Layered` layer would short-circuit and not allow any `Filtered`
648 // layers below us if _they_ are interested in the callsite.
649 Interest::always()
650 }
651
652 fn enabled(&self, metadata: &Metadata<'_>, cx: Context<'_, S>) -> bool {
653 let cx = cx.with_filter(self.id());
654 let enabled = self.filter.enabled(metadata, &cx);
655 FILTERING.with(|filtering| filtering.set(self.id(), enabled));
656
657 if enabled {
658 // If the filter enabled this metadata, ask the wrapped layer if
659 // _it_ wants it --- it might have a global filter.
660 self.layer.enabled(metadata, cx)
661 } else {
662 // Otherwise, return `true`. The _per-layer_ filter disabled this
663 // metadata, but returning `false` in `Layer::enabled` will
664 // short-circuit and globally disable the span or event. This is
665 // *not* what we want for per-layer filters, as other layers may
666 // still want this event. Returning `true` here means we'll continue
667 // asking the next layer in the stack.
668 //
669 // Once all per-layer filters have been evaluated, the `Registry`
670 // at the root of the stack will return `false` from its `enabled`
671 // method if *every* per-layer filter disabled this metadata.
672 // Otherwise, the individual per-layer filters will skip the next
673 // `new_span` or `on_event` call for their layer if *they* disabled
674 // the span or event, but it was not globally disabled.
675 true
676 }
677 }
678
679 fn on_new_span(&self, attrs: &span::Attributes<'_>, id: &span::Id, cx: Context<'_, S>) {
680 self.did_enable(|| {
681 let cx = cx.with_filter(self.id());
682 self.filter.on_new_span(attrs, id, cx.clone());
683 self.layer.on_new_span(attrs, id, cx);
684 })
685 }
686
687 #[doc(hidden)]
688 fn max_level_hint(&self) -> Option<LevelFilter> {
689 self.filter.max_level_hint()
690 }
691
692 fn on_record(&self, span: &span::Id, values: &span::Record<'_>, cx: Context<'_, S>) {
693 if let Some(cx) = cx.if_enabled_for(span, self.id()) {
694 self.filter.on_record(span, values, cx.clone());
695 self.layer.on_record(span, values, cx)
696 }
697 }
698
699 fn on_follows_from(&self, span: &span::Id, follows: &span::Id, cx: Context<'_, S>) {
700 // only call `on_follows_from` if both spans are enabled by us
701 if cx.is_enabled_for(span, self.id()) && cx.is_enabled_for(follows, self.id()) {
702 self.layer
703 .on_follows_from(span, follows, cx.with_filter(self.id()))
704 }
705 }
706
707 fn event_enabled(&self, event: &Event<'_>, cx: Context<'_, S>) -> bool {
708 let cx = cx.with_filter(self.id());
709 let enabled = FILTERING
710 .with(|filtering| filtering.and(self.id(), || self.filter.event_enabled(event, &cx)));
711
712 if enabled {
713 // If the filter enabled this event, ask the wrapped subscriber if
714 // _it_ wants it --- it might have a global filter.
715 self.layer.event_enabled(event, cx)
716 } else {
717 // Otherwise, return `true`. See the comment in `enabled` for why this
718 // is necessary.
719 true
720 }
721 }
722
723 fn on_event(&self, event: &Event<'_>, cx: Context<'_, S>) {
724 self.did_enable(|| {
725 self.layer.on_event(event, cx.with_filter(self.id()));
726 })
727 }
728
729 fn on_enter(&self, id: &span::Id, cx: Context<'_, S>) {
730 if let Some(cx) = cx.if_enabled_for(id, self.id()) {
731 self.filter.on_enter(id, cx.clone());
732 self.layer.on_enter(id, cx);
733 }
734 }
735
736 fn on_exit(&self, id: &span::Id, cx: Context<'_, S>) {
737 if let Some(cx) = cx.if_enabled_for(id, self.id()) {
738 self.filter.on_exit(id, cx.clone());
739 self.layer.on_exit(id, cx);
740 }
741 }
742
743 fn on_close(&self, id: span::Id, cx: Context<'_, S>) {
744 if let Some(cx) = cx.if_enabled_for(&id, self.id()) {
745 self.filter.on_close(id.clone(), cx.clone());
746 self.layer.on_close(id, cx);
747 }
748 }
749
750 // XXX(eliza): the existence of this method still makes me sad...
751 fn on_id_change(&self, old: &span::Id, new: &span::Id, cx: Context<'_, S>) {
752 if let Some(cx) = cx.if_enabled_for(old, self.id()) {
753 self.layer.on_id_change(old, new, cx)
754 }
755 }
756
757 #[doc(hidden)]
758 #[inline]
759 unsafe fn downcast_raw(&self, id: TypeId) -> Option<*const ()> {
760 match id {
761 id if id == TypeId::of::<Self>() => Some(self as *const _ as *const ()),
762 id if id == TypeId::of::<L>() => Some(&self.layer as *const _ as *const ()),
763 id if id == TypeId::of::<F>() => Some(&self.filter as *const _ as *const ()),
764 id if id == TypeId::of::<MagicPlfDowncastMarker>() => {
765 Some(&self.id as *const _ as *const ())
766 }
767 _ => self.layer.downcast_raw(id),
768 }
769 }
770}
771
772impl<F, L, S> fmt::Debug for Filtered<F, L, S>
773where
774 F: fmt::Debug,
775 L: fmt::Debug,
776{
777 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
778 f&mut DebugStruct<'_, '_>.debug_struct("Filtered")
779 .field("filter", &self.filter)
780 .field("layer", &self.layer)
781 .field(name:"id", &self.id)
782 .finish()
783 }
784}
785
786// === impl FilterId ===
787
788impl FilterId {
789 const fn disabled() -> Self {
790 Self(std::u64::MAX)
791 }
792
793 /// Returns a `FilterId` that will consider _all_ spans enabled.
794 pub(crate) const fn none() -> Self {
795 Self(0)
796 }
797
798 pub(crate) fn new(id: u8) -> Self {
799 assert!(id < 64, "filter IDs may not be greater than 64");
800 Self(1 << id as usize)
801 }
802
803 /// Combines two `FilterId`s, returning a new `FilterId` that will match a
804 /// [`FilterMap`] where the span was disabled by _either_ this `FilterId`
805 /// *or* the combined `FilterId`.
806 ///
807 /// This method is called by [`Context`]s when adding the `FilterId` of a
808 /// [`Filtered`] layer to the context.
809 ///
810 /// This is necessary for cases where we have a tree of nested [`Filtered`]
811 /// layers, like this:
812 ///
813 /// ```text
814 /// Filtered {
815 /// filter1,
816 /// Layered {
817 /// layer1,
818 /// Filtered {
819 /// filter2,
820 /// layer2,
821 /// },
822 /// }
823 /// ```
824 ///
825 /// We want `layer2` to be affected by both `filter1` _and_ `filter2`.
826 /// Without combining `FilterId`s, this works fine when filtering
827 /// `on_event`/`new_span`, because the outer `Filtered` layer (`filter1`)
828 /// won't call the inner layer's `on_event` or `new_span` callbacks if it
829 /// disabled the event/span.
830 ///
831 /// However, it _doesn't_ work when filtering span lookups and traversals
832 /// (e.g. `scope`). This is because the [`Context`] passed to `layer2`
833 /// would set its filter ID to the filter ID of `filter2`, and would skip
834 /// spans that were disabled by `filter2`. However, what if a span was
835 /// disabled by `filter1`? We wouldn't see it in `new_span`, but we _would_
836 /// see it in lookups and traversals...which we don't want.
837 ///
838 /// When a [`Filtered`] layer adds its ID to a [`Context`], it _combines_ it
839 /// with any previous filter ID that the context had, rather than replacing
840 /// it. That way, `layer2`'s context will check if a span was disabled by
841 /// `filter1` _or_ `filter2`. The way we do this, instead of representing
842 /// `FilterId`s as a number number that we shift a 1 over by to get a mask,
843 /// we just store the actual mask,so we can combine them with a bitwise-OR.
844 ///
845 /// For example, if we consider the following case (pretending that the
846 /// masks are 8 bits instead of 64 just so i don't have to write out a bunch
847 /// of extra zeroes):
848 ///
849 /// - `filter1` has the filter id 1 (`0b0000_0001`)
850 /// - `filter2` has the filter id 2 (`0b0000_0010`)
851 ///
852 /// A span that gets disabled by filter 1 would have the [`FilterMap`] with
853 /// bits `0b0000_0001`.
854 ///
855 /// If the `FilterId` was internally represented as `(bits to shift + 1),
856 /// when `layer2`'s [`Context`] checked if it enabled the span, it would
857 /// make the mask `0b0000_0010` (`1 << 1`). That bit would not be set in the
858 /// [`FilterMap`], so it would see that it _didn't_ disable the span. Which
859 /// is *true*, it just doesn't reflect the tree-like shape of the actual
860 /// subscriber.
861 ///
862 /// By having the IDs be masks instead of shifts, though, when the
863 /// [`Filtered`] with `filter2` gets the [`Context`] with `filter1`'s filter ID,
864 /// instead of replacing it, it ors them together:
865 ///
866 /// ```ignore
867 /// 0b0000_0001 | 0b0000_0010 == 0b0000_0011;
868 /// ```
869 ///
870 /// We then test if the span was disabled by seeing if _any_ bits in the
871 /// mask are `1`:
872 ///
873 /// ```ignore
874 /// filtermap & mask != 0;
875 /// 0b0000_0001 & 0b0000_0011 != 0;
876 /// 0b0000_0001 != 0;
877 /// true;
878 /// ```
879 ///
880 /// [`Context`]: crate::layer::Context
881 pub(crate) fn and(self, FilterId(other): Self) -> Self {
882 // If this mask is disabled, just return the other --- otherwise, we
883 // would always see that every span is disabled.
884 if self.0 == Self::disabled().0 {
885 return Self(other);
886 }
887
888 Self(self.0 | other)
889 }
890
891 fn is_disabled(self) -> bool {
892 self.0 == Self::disabled().0
893 }
894}
895
896impl fmt::Debug for FilterId {
897 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
898 // don't print a giant set of the numbers 0..63 if the filter ID is disabled.
899 if self.0 == Self::disabled().0 {
900 return f&mut DebugTuple<'_, '_>
901 .debug_tuple(name:"FilterId")
902 .field(&format_args!("DISABLED"))
903 .finish();
904 }
905
906 if f.alternate() {
907 f&mut DebugStruct<'_, '_>.debug_struct("FilterId")
908 .field("ids", &format_args!("{:?}", FmtBitset(self.0)))
909 .field(name:"bits", &format_args!("{:b}", self.0))
910 .finish()
911 } else {
912 f.debug_tuple(name:"FilterId").field(&FmtBitset(self.0)).finish()
913 }
914 }
915}
916
917impl fmt::Binary for FilterId {
918 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
919 f&mut DebugTuple<'_, '_>.debug_tuple(name:"FilterId")
920 .field(&format_args!("{:b}", self.0))
921 .finish()
922 }
923}
924
925// === impl FilterExt ===
926
927impl<F, S> FilterExt<S> for F where F: layer::Filter<S> {}
928
929// === impl FilterMap ===
930
931impl FilterMap {
932 pub(crate) fn set(self, FilterId(mask): FilterId, enabled: bool) -> Self {
933 if mask == std::u64::MAX {
934 return self;
935 }
936
937 if enabled {
938 Self {
939 bits: self.bits & (!mask),
940 }
941 } else {
942 Self {
943 bits: self.bits | mask,
944 }
945 }
946 }
947
948 #[inline]
949 pub(crate) fn is_enabled(self, FilterId(mask): FilterId) -> bool {
950 self.bits & mask == 0
951 }
952
953 #[inline]
954 pub(crate) fn any_enabled(self) -> bool {
955 self.bits != std::u64::MAX
956 }
957}
958
959impl fmt::Debug for FilterMap {
960 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
961 let alt: bool = f.alternate();
962 let mut s: DebugStruct<'_, '_> = f.debug_struct(name:"FilterMap");
963 s.field(name:"disabled_by", &format_args!("{:?}", &FmtBitset(self.bits)));
964
965 if alt {
966 s.field(name:"bits", &format_args!("{:b}", self.bits));
967 }
968
969 s.finish()
970 }
971}
972
973impl fmt::Binary for FilterMap {
974 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
975 f&mut DebugStruct<'_, '_>.debug_struct("FilterMap")
976 .field(name:"bits", &format_args!("{:b}", self.bits))
977 .finish()
978 }
979}
980
981// === impl FilterState ===
982
983impl FilterState {
984 fn new() -> Self {
985 Self {
986 enabled: Cell::new(FilterMap::default()),
987 interest: RefCell::new(None),
988
989 #[cfg(debug_assertions)]
990 counters: DebugCounters::default(),
991 }
992 }
993
994 fn set(&self, filter: FilterId, enabled: bool) {
995 #[cfg(debug_assertions)]
996 {
997 let in_current_pass = self.counters.in_filter_pass.get();
998 if in_current_pass == 0 {
999 debug_assert_eq!(self.enabled.get(), FilterMap::default());
1000 }
1001 self.counters.in_filter_pass.set(in_current_pass + 1);
1002 debug_assert_eq!(
1003 self.counters.in_interest_pass.get(),
1004 0,
1005 "if we are in or starting a filter pass, we must not be in an interest pass."
1006 )
1007 }
1008
1009 self.enabled.set(self.enabled.get().set(filter, enabled))
1010 }
1011
1012 fn add_interest(&self, interest: Interest) {
1013 let mut curr_interest = self.interest.borrow_mut();
1014
1015 #[cfg(debug_assertions)]
1016 {
1017 let in_current_pass = self.counters.in_interest_pass.get();
1018 if in_current_pass == 0 {
1019 debug_assert!(curr_interest.is_none());
1020 }
1021 self.counters.in_interest_pass.set(in_current_pass + 1);
1022 }
1023
1024 if let Some(curr_interest) = curr_interest.as_mut() {
1025 if (curr_interest.is_always() && !interest.is_always())
1026 || (curr_interest.is_never() && !interest.is_never())
1027 {
1028 *curr_interest = Interest::sometimes();
1029 }
1030 // If the two interests are the same, do nothing. If the current
1031 // interest is `sometimes`, stay sometimes.
1032 } else {
1033 *curr_interest = Some(interest);
1034 }
1035 }
1036
1037 pub(crate) fn event_enabled() -> bool {
1038 FILTERING
1039 .try_with(|this| {
1040 let enabled = this.enabled.get().any_enabled();
1041 #[cfg(debug_assertions)]
1042 {
1043 if this.counters.in_filter_pass.get() == 0 {
1044 debug_assert_eq!(this.enabled.get(), FilterMap::default());
1045 }
1046
1047 // Nothing enabled this event, we won't tick back down the
1048 // counter in `did_enable`. Reset it.
1049 if !enabled {
1050 this.counters.in_filter_pass.set(0);
1051 }
1052 }
1053 enabled
1054 })
1055 .unwrap_or(true)
1056 }
1057
1058 /// Executes a closure if the filter with the provided ID did not disable
1059 /// the current span/event.
1060 ///
1061 /// This is used to implement the `on_event` and `new_span` methods for
1062 /// `Filtered`.
1063 fn did_enable(&self, filter: FilterId, f: impl FnOnce()) {
1064 let map = self.enabled.get();
1065 if map.is_enabled(filter) {
1066 // If the filter didn't disable the current span/event, run the
1067 // callback.
1068 f();
1069 } else {
1070 // Otherwise, if this filter _did_ disable the span or event
1071 // currently being processed, clear its bit from this thread's
1072 // `FilterState`. The bit has already been "consumed" by skipping
1073 // this callback, and we need to ensure that the `FilterMap` for
1074 // this thread is reset when the *next* `enabled` call occurs.
1075 self.enabled.set(map.set(filter, true));
1076 }
1077 #[cfg(debug_assertions)]
1078 {
1079 let in_current_pass = self.counters.in_filter_pass.get();
1080 if in_current_pass <= 1 {
1081 debug_assert_eq!(self.enabled.get(), FilterMap::default());
1082 }
1083 self.counters
1084 .in_filter_pass
1085 .set(in_current_pass.saturating_sub(1));
1086 debug_assert_eq!(
1087 self.counters.in_interest_pass.get(),
1088 0,
1089 "if we are in a filter pass, we must not be in an interest pass."
1090 )
1091 }
1092 }
1093
1094 /// Run a second filtering pass, e.g. for Subscribe::event_enabled.
1095 fn and(&self, filter: FilterId, f: impl FnOnce() -> bool) -> bool {
1096 let map = self.enabled.get();
1097 let enabled = map.is_enabled(filter) && f();
1098 self.enabled.set(map.set(filter, enabled));
1099 enabled
1100 }
1101
1102 /// Clears the current in-progress filter state.
1103 ///
1104 /// This resets the [`FilterMap`] and current [`Interest`] as well as
1105 /// clearing the debug counters.
1106 pub(crate) fn clear_enabled() {
1107 // Drop the `Result` returned by `try_with` --- if we are in the middle
1108 // a panic and the thread-local has been torn down, that's fine, just
1109 // ignore it ratehr than panicking.
1110 let _ = FILTERING.try_with(|filtering| {
1111 filtering.enabled.set(FilterMap::default());
1112
1113 #[cfg(debug_assertions)]
1114 filtering.counters.in_filter_pass.set(0);
1115 });
1116 }
1117
1118 pub(crate) fn take_interest() -> Option<Interest> {
1119 FILTERING
1120 .try_with(|filtering| {
1121 #[cfg(debug_assertions)]
1122 {
1123 if filtering.counters.in_interest_pass.get() == 0 {
1124 debug_assert!(filtering.interest.try_borrow().ok()?.is_none());
1125 }
1126 filtering.counters.in_interest_pass.set(0);
1127 }
1128 filtering.interest.try_borrow_mut().ok()?.take()
1129 })
1130 .ok()?
1131 }
1132
1133 pub(crate) fn filter_map(&self) -> FilterMap {
1134 let map = self.enabled.get();
1135 #[cfg(debug_assertions)]
1136 {
1137 if self.counters.in_filter_pass.get() == 0 {
1138 debug_assert_eq!(map, FilterMap::default());
1139 }
1140 }
1141
1142 map
1143 }
1144}
1145/// This is a horrible and bad abuse of the downcasting system to expose
1146/// *internally* whether a layer has per-layer filtering, within
1147/// `tracing-subscriber`, without exposing a public API for it.
1148///
1149/// If a `Layer` has per-layer filtering, it will downcast to a
1150/// `MagicPlfDowncastMarker`. Since layers which contain other layers permit
1151/// downcasting to recurse to their children, this will do the Right Thing with
1152/// layers like Reload, Option, etc.
1153///
1154/// Why is this a wrapper around the `FilterId`, you may ask? Because
1155/// downcasting works by returning a pointer, and we don't want to risk
1156/// introducing UB by constructing pointers that _don't_ point to a valid
1157/// instance of the type they claim to be. In this case, we don't _intend_ for
1158/// this pointer to be dereferenced, so it would actually be fine to return one
1159/// that isn't a valid pointer...but we can't guarantee that the caller won't
1160/// (accidentally) dereference it, so it's better to be safe than sorry. We
1161/// could, alternatively, add an additional field to the type that's used only
1162/// for returning pointers to as as part of the evil downcasting hack, but I
1163/// thought it was nicer to just add a `repr(transparent)` wrapper to the
1164/// existing `FilterId` field, since it won't make the struct any bigger.
1165///
1166/// Don't worry, this isn't on the test. :)
1167#[derive(Clone, Copy)]
1168#[repr(transparent)]
1169struct MagicPlfDowncastMarker(FilterId);
1170impl fmt::Debug for MagicPlfDowncastMarker {
1171 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1172 // Just pretend that `MagicPlfDowncastMarker` doesn't exist for
1173 // `fmt::Debug` purposes...if no one *sees* it in their `Debug` output,
1174 // they don't have to know I thought this code would be a good idea.
1175 fmt::Debug::fmt(&self.0, f)
1176 }
1177}
1178
1179pub(crate) fn is_plf_downcast_marker(type_id: TypeId) -> bool {
1180 type_id == TypeId::of::<MagicPlfDowncastMarker>()
1181}
1182
1183/// Does a type implementing `Subscriber` contain any per-layer filters?
1184pub(crate) fn subscriber_has_plf<S>(subscriber: &S) -> bool
1185where
1186 S: Subscriber,
1187{
1188 (subscriber as &dyn Subscriber).is::<MagicPlfDowncastMarker>()
1189}
1190
1191/// Does a type implementing `Layer` contain any per-layer filters?
1192pub(crate) fn layer_has_plf<L, S>(layer: &L) -> bool
1193where
1194 L: Layer<S>,
1195 S: Subscriber,
1196{
1197 unsafeOption<*const ()> {
1198 // Safety: we're not actually *doing* anything with this pointer --- we
1199 // only care about the `Option`, which we're turning into a `bool`. So
1200 // even if the layer decides to be evil and give us some kind of invalid
1201 // pointer, we don't ever dereference it, so this is always safe.
1202 layer.downcast_raw(id:TypeId::of::<MagicPlfDowncastMarker>())
1203 }
1204 .is_some()
1205}
1206
1207struct FmtBitset(u64);
1208
1209impl fmt::Debug for FmtBitset {
1210 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1211 let mut set: DebugSet<'_, '_> = f.debug_set();
1212 for bit: i32 in 0..64 {
1213 // if the `bit`-th bit is set, add it to the debug set
1214 if self.0 & (1 << bit) != 0 {
1215 set.entry(&bit);
1216 }
1217 }
1218 set.finish()
1219 }
1220}
1221