Michael Davis 3f0aa5cab0 Implement "ngram" suggestions
This is the last part of the `suggester`. Hunspell has a bespoke
string-similarity metric called "ngram similarity." Conceptually
it's like Jaro or Levenshtein similarity: a measure of how close
two strings are.

The suggester resorts to ngram suggestions when it believes that the
simple string edits in `suggest_low` are not high quality. Ngram
suggestions are a pipeline:

* Iterate on all stems in the wordlist. Take the 100 most promising
  according to a basic ngram similarity score.
* Expand all affixes for each stem and give each expanded form a score
  based on another ngram-similarity-based metric. Take up to the top 200
  most promising candidates.
* Determine a threshold and eliminate lower-quality candidates.
* Return the remaining, most promising candidates.

Notably, because we iterate on the entire wordlist, ngram
suggestions are far slower than the basic edit-based suggestions.
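The kind of similarity score the first pipeline step relies on can be sketched roughly as follows. This is an illustrative toy, not Hunspell's or Spellbook's exact formula; `ngram_similarity` is a hypothetical helper: for each n up to some maximum, count how many n-grams of one string also occur in the other.

```rust
/// Toy ngram-style similarity (illustrative, not the real implementation):
/// for each n from 1 to `max_n`, count how many n-grams of `left`
/// also occur somewhere in `right`, and sum the counts.
fn ngram_similarity(max_n: usize, left: &str, right: &str) -> usize {
    let left: Vec<char> = left.chars().collect();
    let mut score = 0;
    for n in 1..=max_n {
        // `windows(n)` yields every length-n slice of characters.
        for window in left.windows(n) {
            let gram: String = window.iter().collect();
            if right.contains(&gram) {
                score += 1;
            }
        }
    }
    score
}

fn main() {
    // Similar strings share many short substrings, so they score higher.
    assert!(ngram_similarity(3, "adventure", "adventures")
        > ngram_similarity(3, "adventure", "bicycle"));
    println!("ok");
}
```

Because a score like this must be computed against every stem, the cost scales with the size of the wordlist, which is why this pass is reserved for when the cheap edit-based suggestions look poor.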
2024-11-11 17:25:37 -05:00

Spellbook


Spellbook is a Rust spellchecking library compatible with the Hunspell dictionary format.

fn main() {
    let aff = std::fs::read_to_string("en_US.aff").unwrap();
    let dic = std::fs::read_to_string("en_US.dic").unwrap();
    let dict = spellbook::Dictionary::new(&aff, &dic).unwrap();

    let word = std::env::args().nth(1).expect("expected a word to check");

    if dict.check(&word) {
        println!("{word:?} is in the dictionary.");
    } else {
        let mut suggestions = Vec::new();
        dict.suggest(&word, &mut suggestions);
        eprintln!("{word:?} is NOT in the dictionary. Did you mean {suggestions:?}?");
        std::process::exit(1);
    }
}

Spellbook is no_std and only requires hashbrown as a dependency. (Note that ahash is included by default, see the feature flags section below.) This may change in the future for performance tweaks like small-string optimizations and maybe memchr but the hope is to keep this library as lightweight as possible: new dependencies must considerably improve performance or correctness.

Maturity

Spellbook is a work in progress and might see breaking changes to any part of the API as well as updates to the MSRV and dependencies.

Currently the check API works well for en_US - a relatively simple dictionary - though it should work reasonably well for most other dictionaries. Some dictionaries which use complex compounding directives may work less well. The suggest API is a work in progress.

Spellbook should be considered to be in alpha. Part of the Hunspell test corpus has been successfully ported and there are a healthy number of unit tests, but there are certainly bugs to be found.

Feature flags

The only feature flag currently is default-hasher, which pulls in ahash and is enabled by default, similar to the equivalent flag in hashbrown.

A non-cryptographic hash significantly improves the time it takes to initialize a dictionary and to check and suggest words. Denial-of-service attacks are not typically a concern for this use-case since you would rarely accept dictionary files as arbitrary input, so a non-cryptographic hash is probably ok. (I am not a cryptologist.) Note that hashbrown v0.15 and above use foldhash instead of aHash. In my runs of the Spellbook benchmarks foldhash doesn't make a perceptible difference.

You can easily drop this default feature:

[dependencies]
spellbook = { version = "1.0", default-features = false }

and specify a hasher of your choosing instead:

use std::hash::BuildHasherDefault;
type Dictionary = spellbook::Dictionary<BuildHasherDefault<ahash::AHasher>>;

How does it work?

For a more in depth overview, check out @zverok's blog series Rebuilding the spellchecker.

Hunspell dictionaries are split into two files: <lang>.dic and <lang>.aff. The .dic file lists stems and the flags associated with each stem. For example en_US.dic contains the entry adventure/DRSMZG, meaning that "adventure" is a stem in the dictionary with the flags D, R, S, M, Z and G. The .aff file contains a bunch of rules to use when determining whether a word is correct or figuring out which words to suggest. The most intuitive of these are prefixes and suffixes. en_US contains suffixes like R and G:

SFX R Y 4
SFX R   0     r          e
SFX R   y     ier        [^aeiou]y
SFX R   0     er         [aeiou]y
SFX R   0     er         [^ey]

SFX G Y 2
SFX G   e     ing        e
SFX G   0     ing        [^e]

Since "adventure" has these flags, these suffixes can be applied. The rules are structured as tables that define the flag (like R), what to strip from the end of the stem (0 for nothing), what to add to the end ("er" for example), and under what condition the suffix applies ([^aeiou]y for example, meaning any character except 'a', 'e', 'i', 'o' or 'u', followed by 'y'). The first clause of R applies to "adventure" since it ends in 'e', so we may add an 'r' to produce "adventurer". Checking runs in reverse: given a word like "adventurer", find any suffixes whose "add" portion matches the end of the word, strip that portion, restore the "strip" portion, and test the condition on the result. Similarly with G, the first clause matches "adventuring": "adventure" ends with 'e', so we strip the 'e' and add "ing".
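The reverse-checking step can be sketched in simplified Rust. The `Suffix` type and `stem_of` helper here are illustrative inventions, not Spellbook's actual API, and the condition is modeled as a plain predicate rather than a parsed character class:

```rust
/// A simplified suffix table entry (field names are illustrative).
struct Suffix {
    strip: &'static str,          // what the suffix removes from the stem
    add: &'static str,            // what it appends
    condition: fn(&str) -> bool,  // does the stem qualify?
}

/// Checking runs in reverse: strip the "add" portion from the candidate,
/// restore the "strip" portion, and test the condition on the result.
fn stem_of(candidate: &str, sfx: &Suffix) -> Option<String> {
    let base = candidate.strip_suffix(sfx.add)?;
    let stem = format!("{base}{}", sfx.strip);
    if (sfx.condition)(&stem) { Some(stem) } else { None }
}

fn main() {
    // First clause of R: strip nothing, add "r", stem must end in 'e'.
    let r1 = Suffix { strip: "", add: "r", condition: |s| s.ends_with('e') };
    // First clause of G: strip "e", add "ing", stem must end in 'e'.
    let g1 = Suffix { strip: "e", add: "ing", condition: |s| s.ends_with('e') };

    assert_eq!(stem_of("adventurer", &r1).as_deref(), Some("adventure"));
    assert_eq!(stem_of("adventuring", &g1).as_deref(), Some("adventure"));
    println!("ok");
}
```

Once a candidate reduces to a stem this way, the checker still has to confirm the stem is in the wordlist and carries the suffix's flag.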

Hunspell dictionaries use these prefixing and suffixing rules to compress the dictionary. Without prefixes and suffixes we'd need a big set of every possible conjugation of every word in the dictionary. That might be possible with the gigabytes of RAM we have today but it certainly isn't efficient.

Another way Hunspell dictionaries "compress" words like this is compounding. For example with the COMPOUNDRULE directive:

# compound rules:
# 1. [0-9]*1[0-9]th (10th, 11th, 12th, 56714th, etc.)
# 2. [0-9]*[02-9](1st|2nd|3rd|[4-9]th) (21st, 22nd, 123rd, 1234th, etc.)
COMPOUNDRULE 2
COMPOUNDRULE n*1t
COMPOUNDRULE n*mp

en_US.dic has entries for digits like 0/nm, 0th/pt, 1/n1, 1st/p, etc. The COMPOUNDRULE directive describes a regex-like pattern over flags with * (zero-or-more) and ? (zero-or-one) modifiers. For example the first compound rule in the table, n*1t, allows a word like "10th": it matches the n flag zero times, then "1" (the stem carrying the 1 flag in the .dic file), then "0th" (which carries the t flag). The n* at the front allows prepending any number of other digit stems, so this rule also allows words like "110th" or "10000th".
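The flag-pattern matching can be sketched as a tiny recursive matcher. This is a toy, not Spellbook's real compounding machinery; the `Tok` type and `matches` function are hypothetical, and each stem in a candidate compound is reduced to the single flag it contributes:

```rust
/// One token of a COMPOUNDRULE pattern (illustrative types).
#[allow(dead_code)]
enum Tok {
    One(char),  // flag must appear exactly once
    Star(char), // flag may repeat zero or more times (`*`)
    Opt(char),  // flag may appear zero or one time (`?`)
}

/// Does the sequence of flags (one per stem) match the pattern?
fn matches(pattern: &[Tok], flags: &[char]) -> bool {
    match (pattern.first(), flags.first()) {
        (None, None) => true,
        (None, Some(_)) => false,
        (Some(Tok::One(c)), Some(f)) => c == f && matches(&pattern[1..], &flags[1..]),
        (Some(Tok::One(_)), None) => false,
        // For `*`: either consume one matching flag and stay on this token,
        // or skip the token entirely.
        (Some(Tok::Star(c)), _) => {
            (flags.first() == Some(c) && matches(pattern, &flags[1..]))
                || matches(&pattern[1..], flags)
        }
        // For `?`: consume at most one matching flag.
        (Some(Tok::Opt(c)), _) => {
            (flags.first() == Some(c) && matches(&pattern[1..], &flags[1..]))
                || matches(&pattern[1..], flags)
        }
    }
}

fn main() {
    // n*1t: any number of digit stems (flag n), then a stem with flag 1,
    // then a stem with flag t.
    let rule = [Tok::Star('n'), Tok::One('1'), Tok::One('t')];
    // "10th" = "1" (flag 1) + "0th" (flag t)
    assert!(matches(&rule, &['1', 't']));
    // "110th" = "1" (flag n) + "1" (flag 1) + "0th" (flag t)
    assert!(matches(&rule, &['n', '1', 't']));
    // "0th" alone doesn't match: the 1 flag is required.
    assert!(!matches(&rule, &['t']));
    println!("ok");
}
```

The real directive operates over every way of splitting the candidate word into dictionary stems, which is what makes compounding expensive to check.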

Other docs

Credits

  • @zverok's blog series on rebuilding Hunspell was an invaluable resource during early prototypes. The old spylls-like prototype can be found on the spylls branch.
  • Ultimately, though, Nuspell's codebase became the reference for Spellbook, as C++ idioms map to Rust more naturally than Python's do. Nuspell's code is in great shape and much more readable than Hunspell's, so for now Spellbook is essentially a Rust rewrite of Nuspell (though it may diverge in the future).
  • The parser for .dic and .aff files is loosely based on ZSpell.