mirror of https://github.com/solaeus/nucleo.git (synced 2024-12-22 01:47:49 +00:00)

fix typos in documentation (#27)

This commit is contained in:
parent 8992c5c787
commit 89e32f915e
@@ -94,7 +94,7 @@ Compared to `skim` nucleo does couple simpler (but arguably even more impactful)
 <!-- * [x] substring/prefix/postfix/exact matcher -->
 <!-- * [ ] case mismatch penalty. This doesn't seem like a good idea to me. `FZF` doesn't do this (only skin), smart case should cover most cases. .would be nice for fully case-insensitive matching without smart case like in autocompletion tough. Realistically there won't be more than 3 items that are identical with different casing tough, so I don't think it matters too much. It is a bit annoying to implement since you can no longer pre-normalize queries(or need two queries) :/ -->
 <!-- * [ ] high level API (worker thread, query parsing, sorting), in progress -->
-<!-- * apparently sorting is superfast (at most 5% of match time for `nucleo` matcher with a highly selective query, otherwise its completely negligible compared to fuzzy matching). All the bending over backwards `fzf` does (and `skim` copied but way worse) seems a little silly. I think `fzf` does it because go doesn't have a good parallel sort. `Fzf` divides the matches into a couple fairly large chunks and sorts those on each worker thread and then lazily merges the result. That makes the sorting without the merging `Nlog(N/M)` which is basically equivalent for large `N` and small `M` as is the case here. Atleast its parallel tough. In rust we have a great pattern defeating parallel quicksort tough (rayon) which is way easier. -->
+<!-- * apparently sorting is superfast (at most 5% of match time for `nucleo` matcher with a highly selective query, otherwise its completely negligible compared to fuzzy matching). All the bending over backwards `fzf` does (and `skim` copied but way worse) seems a little silly. I think `fzf` does it because go doesn't have a good parallel sort. `Fzf` divides the matches into a couple fairly large chunks and sorts those on each worker thread and then lazily merges the result. That makes the sorting without the merging `Nlog(N/M)` which is basically equivalent for large `N` and small `M` as is the case here. At least its parallel tough. In rust we have a great pattern defeating parallel quicksort tough (rayon) which is way easier. -->
 <!-- * [x] basic implementation (workers, streaming, invalidation) -->
 <!-- * [x] verify it actually works -->
 <!-- * [x] query paring -->
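The sorting note in the hunk above points at rayon's parallel pattern-defeating quicksort as the easier alternative to `fzf`'s chunk-and-merge scheme. As a rough illustration only (not nucleo's actual code; the `(score, index)` tuple is a made-up stand-in for a real match record), the whole "parallel sort of scored matches" idea is essentially a one-liner with the external `rayon` crate:

```rust
// Sketch: parallel sort of scored matches using rayon (external crate).
use rayon::slice::ParallelSliceMut;

// Hypothetical match record: (score, item index).
fn sort_matches(matches: &mut [(u32, u32)]) {
    // Unstable parallel sort, descending by score; rayon splits the work
    // across its thread pool, no manual chunking or lazy merging needed.
    matches.par_sort_unstable_by(|a, b| b.0.cmp(&a.0));
}

fn main() {
    let mut matches = vec![(10, 0), (42, 1), (7, 2)];
    sort_matches(&mut matches);
    assert_eq!(matches, vec![(42, 1), (10, 0), (7, 2)]);
}
```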
@@ -49,10 +49,10 @@ pub struct Item<'a, T> {
     pub matcher_columns: &'a [Utf32String],
 }
 
-/// A handle that allow adding new items [`Nucleo`] worker.
+/// A handle that allows adding new items to a [`Nucleo`] worker.
 ///
 /// It's internally reference counted and can be cheaply cloned
-/// and send acsorss tread
+/// and sent across threads.
 pub struct Injector<T> {
     items: Arc<boxcar::Vec<T>>,
     notify: Arc<(dyn Fn() + Sync + Send)>,
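The corrected doc comment describes how `Injector` is meant to be used: cloned cheaply and sent to other threads to feed new items into a `Nucleo` worker. A rough usage sketch follows, assuming nucleo's high-level API (`Nucleo::new`, `Nucleo::injector`, `Injector::push`, `Nucleo::tick`); the exact signatures, in particular the shape of the `push` closure, may differ between versions.

```rust
use std::sync::Arc;

use nucleo::{Config, Nucleo};

fn main() {
    // One matcher column, a no-op notify callback, default worker thread count.
    // (Signature assumed: Nucleo::new(config, notify, num_threads, columns).)
    let mut nucleo: Nucleo<String> =
        Nucleo::new(Config::DEFAULT, Arc::new(|| {}), None, 1);

    // The injector handle is reference counted, so it can be cloned and
    // moved to another thread while the worker keeps matching.
    let injector = nucleo.injector();
    std::thread::spawn(move || {
        for name in ["foo.rs", "bar.rs", "baz.rs"] {
            // Assumed push signature: the closure fills the matcher columns
            // for the new item.
            injector.push(name.to_string(), |item, columns| {
                columns[0] = item.as_str().into();
            });
        }
    })
    .join()
    .unwrap();

    // Drive the worker once (10 ms budget) so the injected items are matched.
    nucleo.tick(10);
}
```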