/*!
This crate exposes a variety of regex engines used by the `regex` crate.
It provides a vast, sprawling and "expert" level API to each regex engine.
The regex engines provided by this crate focus heavily on finite automata
implementations and specifically guarantee worst case `O(m * n)` time
complexity for all searches. (Where `m ~ len(regex)` and `n ~ len(haystack)`.)

The primary goal of this crate is to serve as an implementation detail for the
`regex` crate. A secondary goal is to make its internals available for use by
others.

# Table of contents

* [Should I be using this crate?](#should-i-be-using-this-crate) gives some
reasons for and against using this crate.
* [Examples](#examples) provides a small selection of things you can do with
this crate.
* [Available regex engines](#available-regex-engines) provides a hyperlinked
list of all regex engines in this crate.
* [API themes](#api-themes) discusses common elements used throughout this
crate.
* [Crate features](#crate-features) documents the extensive list of Cargo
features available.
# Should I be using this crate?

If you find yourself here because you just want to use regexes, then you should
first check out whether the [`regex` crate](https://docs.rs/regex) meets
your needs. It provides a streamlined and difficult-to-misuse API for regex
searching.
If you're here because there is something specific you want to do that can't
be easily done with the `regex` crate, then you are perhaps in the right place.
It's most likely that the first stop you'll want to make is to explore the
[`meta` regex APIs](meta). Namely, the `regex` crate is just a light wrapper
over a [`meta::Regex`], so its API will probably be the easiest to transition
to. In contrast to the `regex` crate, the `meta::Regex` API supports more
search parameters and multi-pattern searches. However, it isn't quite as
ergonomic.

Otherwise, the following is an inexhaustive list of reasons to use this crate:

* You want to analyze or use a [Thompson `NFA`](nfa::thompson::NFA) directly.
* You want more powerful multi-pattern search than what is provided by
`RegexSet` in the `regex` crate. All regex engines in this crate support
multi-pattern searches.
* You want to use one of the `regex` crate's internal engines directly because
of some interesting configuration that isn't possible via the `regex` crate.
For example, a [lazy DFA's configuration](hybrid::dfa::Config) exposes a
dizzying number of options for controlling its execution.
* You want to use the lower level search APIs. For example, both the [lazy
DFA](hybrid::dfa) and [fully compiled DFAs](dfa) support searching by exploring
the automaton one state at a time. This might be useful, for example, for
stream searches or searches of strings stored non-contiguously in memory.
* You want to build a fully compiled DFA and then [use zero-copy
deserialization](dfa::dense::DFA::from_bytes) to load it into memory and use
it for searching. This use case is supported in core-only no-std/no-alloc
environments.
* You want to run [anchored searches](Input::anchored) without using the `^`
anchor in your regex pattern.
* You need to work around contention issues with
sharing a regex across multiple threads. The
[`meta::Regex::search_with`](meta::Regex::search_with) API permits bypassing
any kind of synchronization at all by requiring the caller to provide the
mutable scratch space needed during a search.
* You want to build your own regex engine on top of the `regex` crate's
infrastructure.

# Examples

This section tries to identify a few interesting things you can do with this
crate and demonstrates them.
### Multi-pattern searches with capture groups

One of the more frustrating limitations of `RegexSet` in the `regex` crate
(at the time of writing) is that it doesn't report match positions. With this
crate, multi-pattern support was intentionally designed in from the beginning,
which means it works in all regex engines, including for capture groups.

This example shows how to search for matches of multiple regexes, where each
regex uses the same capture group names to parse different key-value formats.
```
use regex_automata::{meta::Regex, PatternID};

let re = Regex::new_many(&[
    r#"(?m)^(?<key>[[:word:]]+)=(?<val>[[:word:]]+)$"#,
    r#"(?m)^(?<key>[[:word:]]+)="(?<val>[^"]+)"$"#,
    r#"(?m)^(?<key>[[:word:]]+)='(?<val>[^']+)'$"#,
    r#"(?m)^(?<key>[[:word:]]+):\s*(?<val>[[:word:]]+)$"#,
])?;
let hay = r#"
best_album="Blow Your Face Out"
best_quote='"then as it was, then again it will be"'
best_year=1973
best_simpsons_episode: HOMR
"#;
let mut kvs = vec![];
for caps in re.captures_iter(hay) {
    // N.B. One could use capture indices '1' and '2' here
    // as well. Capture indices are local to each pattern.
    // (Just like names are.)
    let key = &hay[caps.get_group_by_name("key").unwrap()];
    let val = &hay[caps.get_group_by_name("val").unwrap()];
    kvs.push((key, val));
}
assert_eq!(kvs, vec![
    ("best_album", "Blow Your Face Out"),
    ("best_quote", "\"then as it was, then again it will be\""),
    ("best_year", "1973"),
    ("best_simpsons_episode", "HOMR"),
]);

# Ok::<(), Box<dyn std::error::Error>>(())
```

### Build a full DFA and walk it manually

One of the regex engines in this crate is a fully compiled DFA. It takes worst
case exponential time to build, but once built, it can be easily explored and
used for searches. Here's a simple example that uses its lower level APIs to
implement a simple anchored search by hand.
```
use regex_automata::{dfa::{Automaton, dense}, Input};

let dfa = dense::DFA::new(r"(?-u)\b[A-Z]\w+z\b")?;
let haystack = "Quartz";

// The start state is determined by inspecting the position and the
// initial bytes of the haystack.
let mut state = dfa.start_state_forward(&Input::new(haystack))?;
// Walk all the bytes in the haystack.
for &b in haystack.as_bytes().iter() {
    state = dfa.next_state(state, b);
}
// DFAs in this crate require an explicit
// end-of-input transition if a search reaches
// the end of a haystack.
state = dfa.next_eoi_state(state);
assert!(dfa.is_match_state(state));

# Ok::<(), Box<dyn std::error::Error>>(())
```

Or do the same with a lazy DFA that avoids exponential worst case compile time,
but requires mutable scratch space to lazily build the DFA during the search.
```
use regex_automata::{hybrid::dfa::DFA, Input};

let dfa = DFA::new(r"(?-u)\b[A-Z]\w+z\b")?;
let mut cache = dfa.create_cache();
let hay = "Quartz";

// The start state is determined by inspecting the position and the
// initial bytes of the haystack.
let mut state = dfa.start_state_forward(&mut cache, &Input::new(hay))?;
// Walk all the bytes in the haystack.
for &b in hay.as_bytes().iter() {
    state = dfa.next_state(&mut cache, state, b)?;
}
// DFAs in this crate require an explicit
// end-of-input transition if a search reaches
// the end of a haystack.
state = dfa.next_eoi_state(&mut cache, state)?;
assert!(state.is_match());

# Ok::<(), Box<dyn std::error::Error>>(())
```

### Find all overlapping matches

This example shows how to build a DFA and use it to find all possible matches,
including overlapping matches. A similar example will work with a lazy DFA as
well. This also works with multiple patterns and will report all matches at the
same position where multiple patterns match.
```
use regex_automata::{
    dfa::{dense, Automaton, OverlappingState},
    Input, MatchKind,
};

let dfa = dense::DFA::builder()
    .configure(dense::DFA::config().match_kind(MatchKind::All))
    .build(r"(?-u)\w{3,}")?;
let input = Input::new("homer marge bart lisa maggie");
let mut state = OverlappingState::start();

let mut matches = vec![];
while let Some(hm) = {
    dfa.try_search_overlapping_fwd(&input, &mut state)?;
    state.get_match()
} {
    matches.push(hm.offset());
}
assert_eq!(matches, vec![
    3, 4, 5,        // hom, home, homer
    9, 10, 11,      // mar, marg, marge
    15, 16,         // bar, bart
    20, 21,         // lis, lisa
    25, 26, 27, 28, // mag, magg, maggi, maggie
]);

# Ok::<(), Box<dyn std::error::Error>>(())
```

# Available regex engines

The following is a complete list of all regex engines provided by this crate,
along with a brief description of each and why you might want to use it.
* [`dfa::regex::Regex`] is a regex engine that works on top of either
[dense](dfa::dense) or [sparse](dfa::sparse) fully compiled DFAs. You might
use a DFA if you need the fastest possible regex engine in this crate and can
afford the exorbitant memory usage usually required by DFAs. Low level APIs on
fully compiled DFAs are provided by the [`Automaton` trait](dfa::Automaton).
Fully compiled dense DFAs can handle all regexes except for searching a regex
with a Unicode word boundary on non-ASCII haystacks. A fully compiled DFA based
regex can only report the start and end of each match.
* [`hybrid::regex::Regex`] is a regex engine that works on top of a lazily
built DFA. Its performance profile is very similar to that of fully compiled
DFAs, but can be slower in some pathological cases. Fully compiled DFAs are
also amenable to more optimizations, such as state acceleration, that aren't
available in a lazy DFA. You might use this lazy DFA if you can't abide the
worst case exponential compile time of a full DFA, but still want the DFA
search performance in the vast majority of cases. A lazy DFA based regex can
only report the start and end of each match.
* [`dfa::onepass::DFA`] is a regex engine that is implemented as a DFA, but
can report the matches of each capture group in addition to the start and end
of each match. The catch is that it only works on a somewhat small subset of
regexes known as "one-pass." You'll want to use this for cases when you need
capture group matches and the regex is one-pass since it is likely to be faster
than any alternative. Within that subset, a one-pass DFA supports all regex
features (including Unicode word boundaries), but it does have some reasonable
limits on the number of capture groups it can handle.
* [`nfa::thompson::backtrack::BoundedBacktracker`] is a regex engine that uses
backtracking, but keeps track of the work it has done to avoid catastrophic
backtracking. Like the one-pass DFA, it provides the matches of each capture
group. It retains the `O(m * n)` worst case time bound. This tends to be slower
than the one-pass DFA regex engine, but faster than the PikeVM. It can handle
all types of regexes, but usually only works well with small haystacks and
small regexes due to the memory required to avoid redoing work.
* [`nfa::thompson::pikevm::PikeVM`] is a regex engine that can handle all
regexes, of all sizes, and provides capture group matches. It tends to be a
tool of last resort because it is also usually the slowest regex engine.
* [`meta::Regex`] is the meta regex engine that combines *all* of the above
engines into one. The reason for this is that each of the engines above has
its own caveats such as, "only handles a subset of regexes" or "is generally
slow." The meta regex engine accounts for all of these caveats and composes
the engines in a way that attempts to mitigate each engine's weaknesses while
emphasizing its strengths. For example, it will attempt to run a lazy DFA even
if it might fail, in which case it restarts the search with a likely slower
but more capable regex engine. The meta regex engine is what you should
default to. Use one of the above engines directly only if you have a specific
reason to.

# API themes

While each regex engine has its own APIs and configuration options, there are
some general themes followed by all of them.

### The `Input` abstraction

Most search routines in this crate accept anything that implements
`Into<Input>`. Both `&str` and `&[u8]` haystacks satisfy this constraint, which
means that things like `engine.search("foo")` will work as you would expect.

By virtue of accepting an `Into<Input>` though, callers can provide more than
just a haystack. Indeed, the [`Input`] type has more details, but briefly,
callers can use it to configure various aspects of the search:

* The span of the haystack to search via [`Input::span`] or [`Input::range`],
which might be a substring of the haystack.
* Whether to run an anchored search or not via [`Input::anchored`]. This
permits one to require matches to start at the same offset that the search
started.
* Whether to ask the regex engine to stop as soon as a match is seen via
[`Input::earliest`]. This can be used to find the offset of a match as soon
as it is known without waiting for the full leftmost-first match to be found.
This can also be used to avoid the worst case `O(m * n^2)` time complexity
of iteration.

Some lower level search routines accept an `&Input` for performance reasons.
In those cases, `&Input::new("haystack")` can be used for a simple search.

### Error reporting

Most, but not all, regex engines in this crate can fail to execute a search.
When a search fails, callers cannot determine whether or not a match exists.
That is, the result is indeterminate.

Search failure, in all cases in this crate, is represented by a [`MatchError`].
Routines that can fail start with the `try_` prefix in their name. For example,
[`hybrid::regex::Regex::try_search`] can fail for a number of reasons.
Conversely, routines that either can't fail or can panic on failure lack the
`try_` prefix. For example, [`hybrid::regex::Regex::find`] will panic in
cases where [`hybrid::regex::Regex::try_search`] would return an error, and
[`meta::Regex::find`] will never panic. Therefore, callers need to pay close
attention to the panicking conditions in the documentation.

In most cases, the reasons that a search fails are either predictable or
configurable, albeit at some additional cost.

An example of predictable failure is
[`BoundedBacktracker::try_search`](nfa::thompson::backtrack::BoundedBacktracker::try_search).
Namely, it fails whenever the product of the haystack length, the regex size
and some constant exceeds the
[configured visited capacity](nfa::thompson::backtrack::Config::visited_capacity).
Callers can predict the failure in terms of haystack length via the
[`BoundedBacktracker::max_haystack_len`](nfa::thompson::backtrack::BoundedBacktracker::max_haystack_len)
method. While this form of failure is technically avoidable by increasing the
visited capacity, it isn't practical to do so for all inputs because the
memory required for larger haystacks becomes impractically large. So in
practice, if you use the bounded backtracker, you really do have to deal
with the possibility of failure.

An example of configurable failure happens when one enables heuristic support
for Unicode word boundaries in a DFA. Namely, since the DFAs in this crate
(except for the one-pass DFA) do not support Unicode word boundaries on
non-ASCII haystacks, building a DFA from an NFA that contains a Unicode word
boundary will itself fail. However, one can configure DFAs to still be built in
this case by
[configuring heuristic support for Unicode word boundaries](hybrid::dfa::Config::unicode_word_boundary).
If the NFA the DFA is built from contains a Unicode word boundary, then the
DFA will still be built, but special transitions will be added to every state
that cause the DFA to fail if any non-ASCII byte is seen. This failure happens
at search time, and callers must opt into it.

There are other ways for regex engines to fail in this crate, but the above
two should represent the general theme of failures one can find. Dealing
with these failures is, in part, one of the responsibilities of the [meta regex
engine](meta). Notice, for example, that the meta regex engine exposes an API
that never returns an error nor panics. It carefully manages all of the ways
in which the regex engines can fail and either avoids the predictable ones
entirely (e.g., the bounded backtracker) or reacts to configured failures by
falling back to a different engine (e.g., the lazy DFA quitting because it saw
a non-ASCII byte).

### Configuration and Builders

Most of the regex engines in this crate come with two types to facilitate
building the regex engine: a `Config` and a `Builder`. A `Config` is usually
specific to that particular regex engine, but other objects such as parsing and
NFA compilation have `Config` types too. A `Builder` is the thing responsible
for taking inputs (either pattern strings or already-parsed patterns or even
NFAs directly) and turning them into an actual regex engine that can be used
for searching.

The main reason why building a regex engine is a bit complicated is the
desire to permit composition with decoupled components. For example,
you might want to [manually construct a Thompson NFA](nfa::thompson::Builder)
and then build a regex engine from it without ever using a regex parser
at all. On the other hand, you might also want to build a regex engine directly
from the concrete syntax. This is why regex engine construction is
so flexible: it needs to support not just convenient construction, but also
construction from parts built elsewhere.

This is also in turn why there are many different `Config` structs in this
crate. Let's look more closely at an example: [`hybrid::regex::Builder`]. It
accepts three different `Config` types for configuring construction of a lazy
DFA regex:

* [`hybrid::regex::Builder::syntax`] accepts a
[`util::syntax::Config`] for configuring the options found in the
[`regex-syntax`](regex_syntax) crate. For example, whether to match
case insensitively.
* [`hybrid::regex::Builder::thompson`] accepts a [`nfa::thompson::Config`] for
configuring construction of a [Thompson NFA](nfa::thompson::NFA). For example,
whether to build an NFA that matches the reverse language described by the
regex.
* [`hybrid::regex::Builder::dfa`] accepts a [`hybrid::dfa::Config`] for
configuring construction of the pair of underlying lazy DFAs that make up the
lazy DFA regex engine. For example, changing the capacity of the cache used to
store the transition table.

The lazy DFA regex engine uses all three of those configuration objects for
methods like [`hybrid::regex::Builder::build`], which accepts a pattern
string containing the concrete syntax of your regex. It uses the syntax
configuration to parse it into an AST and translate it into an HIR. Then the
NFA configuration when compiling the HIR into an NFA. And then finally the DFA
configuration when lazily determinizing the NFA into a DFA.

Notice though that the builder also has a
[`hybrid::regex::Builder::build_from_dfas`] constructor. This permits callers
to build the underlying pair of lazy DFAs themselves (one for the forward
searching to find the end of a match and one for the reverse searching to find
the start of a match), and then build the regex engine from them. The lazy
DFAs, in turn, have their own builder that permits [construction directly from
a Thompson NFA](hybrid::dfa::Builder::build_from_nfa). Continuing down the
rabbit hole, a Thompson NFA has its own compiler that permits [construction
directly from an HIR](nfa::thompson::Compiler::build_from_hir). The lazy DFA
regex engine builder lets you follow this rabbit hole all the way down, but
also provides convenience routines that do it for you when you don't need
precise control over every component.

The [meta regex engine](meta) is a good example of something that utilizes the
full flexibility of these builders. It often needs not only precise control
over each component, but also shares them across multiple regex engines.
(Most sharing is done by internal reference accounting. For example, an
[`NFA`](nfa::thompson::NFA) is reference counted internally which makes cloning
cheap.)

### Size limits

Unlike the `regex` crate, the `regex-automata` crate specifically does not
enable any size limits by default. That means users of this crate need to
be quite careful when using untrusted patterns. Namely, because bounded
repetitions can grow exponentially by stacking them, it is possible to build a
very large internal regex object from just a small pattern string. For example,
the NFA built from the pattern `a{10}{10}{10}{10}{10}{10}{10}` is over 240MB.

There are multiple size limit options in this crate. If one or more size limits
are relevant for the object you're building, they will be configurable via
methods on a corresponding `Config` type.

# Crate features

This crate has a dizzying number of features. The main idea is to be able to
control how much stuff you pull in for your specific use case, since the full
crate is quite large and can dramatically increase compile times and binary
size.

The most barebones but useful configuration is to disable all default features
and enable only `dfa-search`. This will bring in just the DFA deserialization
and search routines without any dependency on `std` or `alloc`. This does
require generating and serializing a DFA, and then storing it somewhere, but
it permits regex searches in freestanding or embedded environments.

Because there are so many features, they are split into a few groups.

The default set of features is: `std`, `syntax`, `perf`, `unicode`, `meta`,
`nfa`, `dfa` and `hybrid`. Basically, the default is to enable everything
except for development related features like `logging`.

### Ecosystem features

* **std** - Enables use of the standard library. In terms of APIs, this usually
just means that error types implement the `std::error::Error` trait. Otherwise,
`std` sometimes enables the code to be faster, for example, by using a
`HashMap` instead of a `BTreeMap`. (The `std` feature matters more for
dependencies like `aho-corasick` and `memchr`, where `std` is required to
enable certain classes of SIMD optimizations.) Enabling `std` automatically
enables `alloc`.
* **alloc** - Enables use of the `alloc` library. This is required for most
APIs in this crate. The main exception is deserializing and searching with
fully compiled DFAs.
* **logging** - Adds a dependency on the `log` crate and makes this crate emit
log messages of varying degrees of utility. The log messages are especially
useful in trying to understand what the meta regex engine is doing.

### Performance features

* **perf** - Enables all of the below features.
* **perf-inline** - When enabled, `inline(always)` is used in (many) strategic
locations to help performance at the expense of longer compile times and
increased binary size.
* **perf-literal** - Enables all literal related optimizations.
  * **perf-literal-substring** - Enables all single substring literal
  optimizations. This includes adding a dependency on the `memchr` crate.
  * **perf-literal-multisubstring** - Enables all multiple substring literal
  optimizations. This includes adding a dependency on the `aho-corasick`
  crate.

### Unicode features

* **unicode** -
  Enables all Unicode features. This feature is enabled by default, and will
  always cover all Unicode features, even if more are added in the future.
* **unicode-age** -
  Provide the data for the
  [Unicode `Age` property](https://www.unicode.org/reports/tr44/tr44-24.html#Character_Age).
  This makes it possible to use classes like `\p{Age:6.0}` to refer to all
  codepoints first introduced in Unicode 6.0.
* **unicode-bool** -
  Provide the data for numerous Unicode boolean properties. The full list
  is not included here, but contains properties like `Alphabetic`, `Emoji`,
  `Lowercase`, `Math`, `Uppercase` and `White_Space`.
* **unicode-case** -
  Provide the data for case insensitive matching using
  [Unicode's "simple loose matches" specification](https://www.unicode.org/reports/tr18/#Simple_Loose_Matches).
* **unicode-gencat** -
  Provide the data for
  [Unicode general categories](https://www.unicode.org/reports/tr44/tr44-24.html#General_Category_Values).
  This includes, but is not limited to, `Decimal_Number`, `Letter`,
  `Math_Symbol`, `Number` and `Punctuation`.
* **unicode-perl** -
  Provide the data for supporting the Unicode-aware Perl character classes,
  corresponding to `\w`, `\s` and `\d`. This is also necessary for using
  Unicode-aware word boundary assertions. Note that if this feature is
  disabled, the `\s` and `\d` character classes are still available if the
  `unicode-bool` and `unicode-gencat` features are enabled, respectively.
* **unicode-script** -
  Provide the data for
  [Unicode scripts and script extensions](https://www.unicode.org/reports/tr24/).
  This includes, but is not limited to, `Arabic`, `Cyrillic`, `Hebrew`,
  `Latin` and `Thai`.
* **unicode-segment** -
  Provide the data necessary to provide the properties used to implement the
  [Unicode text segmentation algorithms](https://www.unicode.org/reports/tr29/).
  This enables using classes like `\p{gcb=Extend}`, `\p{wb=Katakana}` and
  `\p{sb=ATerm}`.
* **unicode-word-boundary** -
  Enables support for Unicode word boundaries, i.e., `\b`, in regexes. When
  this and `unicode-perl` are enabled, then data tables from `regex-syntax` are
  used to implement Unicode word boundaries. However, if `regex-syntax` isn't
  enabled as a dependency then one can still enable this feature. It will
  cause `regex-automata` to bundle its own data table that would otherwise be
  redundant with `regex-syntax`'s table.

### Regex engine features

* **syntax** - Enables a dependency on `regex-syntax`. This makes APIs
for building regex engines from pattern strings available. Without the
`regex-syntax` dependency, the only way to build a regex engine is generally
to deserialize a previously built DFA or to hand assemble an NFA using its
[builder API](nfa::thompson::Builder). Once you have an NFA, you can build any
of the regex engines in this crate. The `syntax` feature also enables `alloc`.
* **meta** - Enables the meta regex engine. This also enables the `syntax` and
`nfa-pikevm` features, as both are the minimal requirements needed. The meta
regex engine benefits from enabling any of the other regex engines and will
use them automatically when appropriate.
* **nfa** - Enables all NFA related features below.
  * **nfa-thompson** - Enables the Thompson NFA APIs. This enables `alloc`.
  * **nfa-pikevm** - Enables the PikeVM regex engine. This enables
  `nfa-thompson`.
  * **nfa-backtrack** - Enables the bounded backtracker regex engine. This
  enables `nfa-thompson`.
* **dfa** - Enables all DFA related features below.
  * **dfa-build** - Enables APIs for determinizing DFAs from NFAs. This
  enables `nfa-thompson` and `dfa-search`.
  * **dfa-search** - Enables APIs for searching with DFAs.
  * **dfa-onepass** - Enables the one-pass DFA API. This enables
  `nfa-thompson`.
* **hybrid** - Enables the hybrid NFA/DFA or "lazy DFA" regex engine. This
enables `alloc` and `nfa-thompson`.

*/

// We are no_std.
#![no_std]
// All APIs need docs!
#![deny(missing_docs)]
// Some intra-doc links are broken when certain features are disabled, so we
// only bleat about it when most (all?) features are enabled. But when we do,
// we block the build. Links need to work.
#![cfg_attr(
    all(
        feature = "std",
        feature = "nfa",
        feature = "dfa",
        feature = "hybrid"
    ),
    deny(rustdoc::broken_intra_doc_links)
)]
// Broken rustdoc links are very easy to come by when you start disabling
// features. Namely, features tend to change imports, and imports change what's
// available to link to.
//
// Basically, we just don't support rustdoc for anything other than the maximal
// feature configuration. Other configurations will work, they just won't be
// perfect.
//
// So here, we specifically allow them so we don't even get warned about them.
#![cfg_attr(
    not(all(
        feature = "std",
        feature = "nfa",
        feature = "dfa",
        feature = "hybrid"
    )),
    allow(rustdoc::broken_intra_doc_links)
)]
// Kinda similar, but eliminating all of the dead code and unused import
// warnings for every feature combo is a fool's errand. Instead, we just
// suppress those, but still let them through in a common configuration when we
// build most of everything.
//
// This does actually suggest that when features are disabled, we are actually
// compiling more code than we need to be. And this is perhaps not so great
// because disabling features is usually done in order to reduce compile times
// by reducing the amount of code one compiles... However, usually, most of the
// time this dead code is a relatively small amount from the 'util' module.
// But... I confess... There isn't a ton of visibility on this.
//
// I'm happy to try to address this in a different way, but "let's annotate
// every function in 'util' with some non-local combination of features" just
// cannot be the way forward.
#![cfg_attr(
    not(all(
        feature = "std",
        feature = "nfa",
        feature = "dfa",
        feature = "hybrid",
        feature = "perf-literal-substring",
        feature = "perf-literal-multisubstring",
    )),
    allow(dead_code, unused_imports, unused_variables)
)]
// We generally want all types to impl Debug.
#![warn(missing_debug_implementations)]
// No clue why this thing is still unstable because it's pretty amazing. This
// adds Cargo feature annotations to items in the rustdoc output. Which is
// sadly hugely beneficial for this crate due to the number of features.
#![cfg_attr(docsrs, feature(doc_auto_cfg))]

// I have literally never tested this crate on 16-bit, so it is quite
// suspicious to advertise support for it. But... the regex crate, at time
// of writing, at least claims to support it by not doing any conditional
// compilation based on the target pointer width. So I guess I remain
// consistent with that here.
//
// If you are here because you're on a 16-bit system and you were somehow using
// the regex crate previously, please file an issue. Please be prepared to
// provide some kind of reproduction or carve out some path to getting 16-bit
// working in CI. (Via qemu?)
#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64"
)))]
compile_error!("not supported on non-{16,32,64}, please file an issue");

#[cfg(any(test, feature = "std"))]
extern crate std;

#[cfg(feature = "alloc")]
extern crate alloc;

#[cfg(doctest)]
doc_comment::doctest!("../README.md");

#[doc(inline)]
pub use crate::util::primitives::PatternID;
pub use crate::util::search::*;

#[macro_use]
mod macros;

#[cfg(any(feature = "dfa-search", feature = "dfa-onepass"))]
pub mod dfa;
#[cfg(feature = "hybrid")]
pub mod hybrid;
#[cfg(feature = "meta")]
pub mod meta;
#[cfg(feature = "nfa-thompson")]
pub mod nfa;
pub mod util;