/*!
This crate provides a robust regular expression parser.

This crate defines two primary types:

* [`Ast`](ast::Ast) is the abstract syntax of a regular expression.
  An abstract syntax corresponds to a *structured representation* of the
  concrete syntax of a regular expression, where the concrete syntax is the
  pattern string itself (e.g., `foo(bar)+`). Given some abstract syntax, it
  can be converted back to the original concrete syntax (modulo some details,
  like whitespace). To a first approximation, the abstract syntax is complex
  and difficult to analyze.
* [`Hir`](hir::Hir) is the high-level intermediate representation
  ("HIR" or "high-level IR" for short) of a regular expression. It corresponds
  to an intermediate state of a regular expression that sits between the
  abstract syntax and the low level compiled opcodes that are eventually
  responsible for executing a regular expression search. Given some high-level
  IR, it is not possible to produce the original concrete syntax (an
  equivalent concrete syntax can be produced, but it will likely scarcely
  resemble the original pattern). To a first approximation, the high-level IR
  is simple and easy to analyze.

These two types come with conversion routines:

* An [`ast::parse::Parser`] converts concrete syntax (a `&str`) to an
  [`Ast`](ast::Ast).
* A [`hir::translate::Translator`] converts an [`Ast`](ast::Ast) to a
  [`Hir`](hir::Hir).

As a convenience, the above two conversion routines are combined into one via
the top-level [`Parser`] type. This `Parser` will first convert your pattern
to an `Ast` and then convert the `Ast` to an `Hir`. It's also exposed as the
top-level [`parse`] free function.

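
If more control is needed between the two stages, they can also be driven by
hand. The following is a minimal sketch of the two-step pipeline using default
options for both stages:

```
use regex_syntax::{ast, hir, parse};

// Stage 1: concrete syntax -> `Ast`.
let ast = ast::parse::Parser::new().parse("foo|bar")?;
// Stage 2: `Ast` -> `Hir`.
let hir = hir::translate::Translator::new().translate("foo|bar", &ast)?;
// The convenience `parse` function produces the same `Hir` in one call.
assert_eq!(hir, parse("foo|bar")?);
# Ok::<(), Box<dyn std::error::Error>>(())
```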

# Example

This example shows how to parse a pattern string into its HIR:

```
use regex_syntax::{hir::Hir, parse};

let hir = parse("a|b")?;
assert_eq!(hir, Hir::alternation(vec![
    Hir::literal("a".as_bytes()),
    Hir::literal("b".as_bytes()),
]));
# Ok::<(), Box<dyn std::error::Error>>(())
```


# Concrete syntax supported

The concrete syntax is documented as part of the public API of the
[`regex` crate](https://docs.rs/regex/%2A/regex/#syntax).


# Input safety

A key feature of this library is that it is safe to use with end user facing
input. This plays a significant role in the internal implementation. In
particular:

1. Parsers provide a `nest_limit` option that permits callers to control how
   deeply nested a regular expression is allowed to be. This makes it possible
   to do case analysis over an `Ast` or an `Hir` using recursion without
   worrying about stack overflow. (See the example following this list.)
2. Since relying on a particular stack size is brittle, this crate goes to
   great lengths to ensure that all interactions with both the `Ast` and the
   `Hir` do not use recursion. Namely, they use constant stack space and heap
   space proportional to the size of the original pattern string (in bytes).
   This includes the type's corresponding destructors. (One exception to this
   is literal extraction, but this will eventually get fixed.)

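
For instance, here is a rough sketch of using the `nest_limit` option on a
[`ParserBuilder`] to reject deeply nested patterns up front (the limit of `50`
is an arbitrary value chosen for illustration):

```
use regex_syntax::ParserBuilder;

// Cap the permitted nesting depth at 50.
let mut parser = ParserBuilder::new().nest_limit(50).build();

// Shallow nesting parses fine...
assert!(parser.parse("((((a))))").is_ok());
// ...but a pattern nested 100 levels deep is rejected during parsing.
let deep = format!("{}a{}", "(".repeat(100), ")".repeat(100));
assert!(parser.parse(&deep).is_err());
```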

# Error reporting

The `Display` implementations on all `Error` types exposed in this library
provide nice human-readable errors that are suitable for showing to end users
in a monospace font.

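
For example, parsing a pattern with an unclosed group fails, and the resulting
error's `Display` output is a multi-line diagnostic that includes the pattern
and the position of the problem:

```
// `(foo` is missing its closing parenthesis, so parsing fails.
let err = regex_syntax::parse("(foo").unwrap_err();
// The `Display` output is a human-readable, multi-line diagnostic.
println!("{}", err);
```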

# Literal extraction

This crate provides limited support for [literal extraction from `Hir`
values](hir::literal). Be warned that literal extraction uses recursion, and
therefore uses stack space proportional to the size of the `Hir`.

The purpose of literal extraction is to speed up searches. That is, if you
know a regular expression must match a prefix or suffix literal, then it is
often quicker to search for instances of that literal, and then confirm or deny
the match using the full regular expression engine. These optimizations are
done automatically in the `regex` crate.

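
As a small sketch of the API, an [`Extractor`](hir::literal::Extractor) pulls
a sequence of literals out of an `Hir` (prefixes, by default):

```
use regex_syntax::{hir::literal::Extractor, parse};

let hir = parse("foo|bar")?;
let seq = Extractor::new().extract(&hir);
// For this small pattern, the extracted prefixes are exact: every match
// must begin with one of the extracted literals.
assert!(seq.is_exact());
# Ok::<(), Box<dyn std::error::Error>>(())
```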

# Crate features

An important feature provided by this crate is its Unicode support. This
includes things like case folding, boolean properties, general categories,
scripts and Unicode-aware support for the Perl classes `\w`, `\s` and `\d`.
However, a downside of this support is that it requires bundling several
Unicode data tables that are substantial in size.

A fair number of use cases do not require full Unicode support. For this
reason, this crate exposes a number of features to control which Unicode
data is available.

If a regular expression attempts to use a Unicode feature that is not available
because the corresponding crate feature was disabled, then translating that
regular expression to an `Hir` will return an error. (It is still possible to
construct an `Ast` for such a regular expression, since Unicode data is not
used until translation to an `Hir`.) Stated differently, enabling or disabling
any of the features below can only add or subtract from the total set of valid
regular expressions. Enabling or disabling a feature will never modify the
match semantics of a regular expression.

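
For example, with the default feature set the following succeeds because the
`unicode-script` tables are available; if that feature were disabled, the
translation step (though not `Ast` construction) would report an error instead:

```
use regex_syntax::parse;

// `\p{Greek}` requires the Unicode script tables provided by the
// `unicode-script` feature.
let _hir = parse(r"\p{Greek}")?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
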
The following features are available:

* **std** -
  Enables support for the standard library. This feature is enabled by default.
  When disabled, only `core` and `alloc` are used. Otherwise, enabling `std`
  generally just enables `std::error::Error` trait impls for the various error
  types.
* **unicode** -
  Enables all Unicode features. This feature is enabled by default, and will
  always cover all Unicode features, even if more are added in the future.
* **unicode-age** -
  Provide the data for the
  [Unicode `Age` property](https://www.unicode.org/reports/tr44/tr44-24.html#Character_Age).
  This makes it possible to use classes like `\p{Age:6.0}` to refer to all
  codepoints first introduced in Unicode 6.0.
* **unicode-bool** -
  Provide the data for numerous Unicode boolean properties. The full list
  is not included here, but contains properties like `Alphabetic`, `Emoji`,
  `Lowercase`, `Math`, `Uppercase` and `White_Space`.
* **unicode-case** -
  Provide the data for case insensitive matching using
  [Unicode's "simple loose matches" specification](https://www.unicode.org/reports/tr18/#Simple_Loose_Matches).
* **unicode-gencat** -
  Provide the data for
  [Unicode general categories](https://www.unicode.org/reports/tr44/tr44-24.html#General_Category_Values).
  This includes, but is not limited to, `Decimal_Number`, `Letter`,
  `Math_Symbol`, `Number` and `Punctuation`.
* **unicode-perl** -
  Provide the data for supporting the Unicode-aware Perl character classes,
  corresponding to `\w`, `\s` and `\d`. This is also necessary for using
  Unicode-aware word boundary assertions. Note that if this feature is
  disabled, the `\s` and `\d` character classes are still available if the
  `unicode-bool` and `unicode-gencat` features are enabled, respectively.
* **unicode-script** -
  Provide the data for
  [Unicode scripts and script extensions](https://www.unicode.org/reports/tr24/).
  This includes, but is not limited to, `Arabic`, `Cyrillic`, `Hebrew`,
  `Latin` and `Thai`.
* **unicode-segment** -
  Provide the data for the properties used to implement the
  [Unicode text segmentation algorithms](https://www.unicode.org/reports/tr29/).
  This enables using classes like `\p{gcb=Extend}`, `\p{wb=Katakana}` and
  `\p{sb=ATerm}`.
*/

#![no_std]
#![forbid(unsafe_code)]
#![deny(missing_docs, rustdoc::broken_intra_doc_links)]
#![warn(missing_debug_implementations)]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]

#[cfg(any(test, feature = "std"))]
extern crate std;

extern crate alloc;

pub use crate::{
    error::Error,
    parser::{parse, Parser, ParserBuilder},
    unicode::UnicodeWordError,
};

use alloc::string::String;

pub mod ast;
mod debug;
mod either;
mod error;
pub mod hir;
mod parser;
mod rank;
mod unicode;
mod unicode_tables;
pub mod utf8;

/// Escapes all regular expression meta characters in `text`.
///
/// The string returned may be safely used as a literal in a regular
/// expression.
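///
/// # Example
///
/// A small demonstration of escaping a pattern fragment:
///
/// ```
/// // `.` and `?` are meta characters and get a backslash; other characters
/// // are passed through unchanged.
/// assert_eq!(regex_syntax::escape("a.b?"), r"a\.b\?");
/// ```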
pub fn escape(text: &str) -> String {
    let mut quoted = String::new();
    escape_into(text, &mut quoted);
    quoted
}

/// Escapes all meta characters in `text` and writes the result into `buf`.
///
/// This will append escape characters into the given buffer. The characters
/// that are appended are safe to use as a literal in a regular expression.
pub fn escape_into(text: &str, buf: &mut String) {
    buf.reserve(text.len());
    for c in text.chars() {
        if is_meta_character(c) {
            buf.push('\\');
        }
        buf.push(c);
    }
}

/// Returns true if the given character has significance in a regex.
///
/// Generally speaking, these are the only characters which _must_ be escaped
/// in order to match their literal meaning. For example, to match a literal
/// `|`, one could write `\|`. However, escaping isn't always necessary. For
/// example, `-` is treated as a meta character because of its significance
/// for writing ranges inside of character classes, but the regex `-` will
/// match a literal `-` because `-` has no special meaning outside of character
/// classes.
///
/// In order to determine whether a character may be escaped at all, the
/// [`is_escapeable_character`] routine should be used. The difference between
/// `is_meta_character` and `is_escapeable_character` is that the latter will
/// return true for some characters that are _not_ meta characters. For
/// example, `%` and `\%` both match a literal `%` in all contexts. In other
/// words, `is_escapeable_character` includes "superfluous" escapes.
///
/// Note that the set of characters for which this function returns `true` or
/// `false` is fixed and won't change in a semver compatible release. (In this
/// case, "semver compatible release" actually refers to the `regex` crate
/// itself, since reducing or expanding the set of meta characters would be a
/// breaking change for not just `regex-syntax` but also `regex` itself.)
///
/// # Example
///
/// ```
/// use regex_syntax::is_meta_character;
///
/// assert!(is_meta_character('?'));
/// assert!(is_meta_character('-'));
/// assert!(is_meta_character('&'));
/// assert!(is_meta_character('#'));
///
/// assert!(!is_meta_character('%'));
/// assert!(!is_meta_character('/'));
/// assert!(!is_meta_character('!'));
/// assert!(!is_meta_character('"'));
/// assert!(!is_meta_character('e'));
/// ```
pub fn is_meta_character(c: char) -> bool {
    match c {
        '\\' | '.' | '+' | '*' | '?' | '(' | ')' | '|' | '[' | ']' | '{'
        | '}' | '^' | '$' | '#' | '&' | '-' | '~' => true,
        _ => false,
    }
}

/// Returns true if the given character can be escaped in a regex.
///
/// This returns true in all cases that `is_meta_character` returns true, but
/// also returns true in some cases where `is_meta_character` returns false.
/// For example, `%` is not a meta character, but it is escapeable. That is,
/// `%` and `\%` both match a literal `%` in all contexts.
///
/// The purpose of this routine is to provide knowledge about what characters
/// may be escaped. Namely, most regex engines permit "superfluous" escapes
/// where characters without any special significance may be escaped even
/// though there is no actual _need_ to do so.
///
/// This will return false for some characters. For example, `e` is not
/// escapeable. Therefore, `\e` will either result in a parse error (which is
/// true today), or it could backwards compatibly evolve into a new construct
/// with its own meaning. Indeed, that is the purpose of banning _some_
/// superfluous escapes: it provides a way to evolve the syntax in a compatible
/// manner.
///
/// # Example
///
/// ```
/// use regex_syntax::is_escapeable_character;
///
/// assert!(is_escapeable_character('?'));
/// assert!(is_escapeable_character('-'));
/// assert!(is_escapeable_character('&'));
/// assert!(is_escapeable_character('#'));
/// assert!(is_escapeable_character('%'));
/// assert!(is_escapeable_character('/'));
/// assert!(is_escapeable_character('!'));
/// assert!(is_escapeable_character('"'));
///
/// assert!(!is_escapeable_character('e'));
/// ```
pub fn is_escapeable_character(c: char) -> bool {
    // Certainly escapeable if it's a meta character.
    if is_meta_character(c) {
        return true;
    }
    // Any character that isn't ASCII is definitely not escapeable. There's
    // no real need to allow things like \☃ right?
    if !c.is_ascii() {
        return false;
    }
    // Otherwise, we basically say that everything is escapeable unless it's a
    // letter or digit. Things like \3 are either octal (when enabled) or an
    // error, and we should keep it that way. Otherwise, letters are reserved
    // for adding new syntax in a backwards compatible way.
    match c {
        '0'..='9' | 'A'..='Z' | 'a'..='z' => false,
        // While not currently supported, we keep these as not escapeable to
        // give us some flexibility with respect to supporting the \< and
        // \> word boundary assertions in the future. By rejecting them as
        // escapeable, \< and \> will result in a parse error. Thus, we can
        // turn them into something else in the future without it being a
        // backwards incompatible change.
        '<' | '>' => false,
        _ => true,
    }
}

/// Returns true if and only if the given character is a Unicode word
/// character.
///
/// A Unicode word character is defined by
/// [UTS#18 Annex C](https://unicode.org/reports/tr18/#Compatibility_Properties).
/// In particular, a character
/// is considered a word character if it is in either of the `Alphabetic` or
/// `Join_Control` properties, or is in one of the `Decimal_Number`, `Mark`
/// or `Connector_Punctuation` general categories.
///
/// # Panics
///
/// If the `unicode-perl` feature is not enabled, then this function
/// panics. For this reason, it is recommended that callers use
/// [`try_is_word_character`] instead.
pub fn is_word_character(c: char) -> bool {
    try_is_word_character(c).expect("unicode-perl feature must be enabled")
}

/// Returns true if and only if the given character is a Unicode word
/// character.
///
/// A Unicode word character is defined by
/// [UTS#18 Annex C](https://unicode.org/reports/tr18/#Compatibility_Properties).
/// In particular, a character
/// is considered a word character if it is in either of the `Alphabetic` or
/// `Join_Control` properties, or is in one of the `Decimal_Number`, `Mark`
/// or `Connector_Punctuation` general categories.
///
/// # Errors
///
/// If the `unicode-perl` feature is not enabled, then this function always
/// returns an error.
pub fn try_is_word_character(
    c: char,
) -> core::result::Result<bool, UnicodeWordError> {
    unicode::is_word_character(c)
}

/// Returns true if and only if the given character is an ASCII word character.
///
/// An ASCII word character is defined by the following character class:
/// `[_0-9a-zA-Z]`.
pub fn is_word_byte(c: u8) -> bool {
    match c {
        b'_' | b'0'..=b'9' | b'a'..=b'z' | b'A'..=b'Z' => true,
        _ => false,
    }
}

#[cfg(test)]
mod tests {
    use alloc::string::ToString;

    use super::*;

    #[test]
    fn escape_meta() {
        assert_eq!(
            escape(r"\.+*?()|[]{}^$#&-~"),
            r"\\\.\+\*\?\(\)\|\[\]\{\}\^\$\#\&\-\~".to_string()
        );
    }

    #[test]
    fn word_byte() {
        assert!(is_word_byte(b'a'));
        assert!(!is_word_byte(b'-'));
    }

    #[test]
    #[cfg(feature = "unicode-perl")]
    fn word_char() {
        assert!(is_word_character('a'), "ASCII");
        assert!(is_word_character('à'), "Latin-1");
        assert!(is_word_character('β'), "Greek");
        assert!(is_word_character('\u{11011}'), "Brahmi (Unicode 6.0)");
        assert!(is_word_character('\u{11611}'), "Modi (Unicode 7.0)");
        assert!(is_word_character('\u{11711}'), "Ahom (Unicode 8.0)");
        assert!(is_word_character('\u{17828}'), "Tangut (Unicode 9.0)");
        assert!(is_word_character('\u{1B1B1}'), "Nushu (Unicode 10.0)");
        assert!(is_word_character('\u{16E40}'), "Medefaidrin (Unicode 11.0)");
        assert!(!is_word_character('-'));
        assert!(!is_word_character('☃'));
    }

    #[test]
    #[should_panic]
    #[cfg(not(feature = "unicode-perl"))]
    fn word_char_disabled_panic() {
        assert!(is_word_character('a'));
    }

    #[test]
    #[cfg(not(feature = "unicode-perl"))]
    fn word_char_disabled_error() {
        assert!(try_is_word_character('a').is_err());
    }
}