Compare commits

...

78 Commits

Author SHA1 Message Date
Blaž Hrastnik
ae5ecfdf66 Release v0.2.0 2021-06-13 22:35:13 +09:00
Blaž Hrastnik
d545e61644 ui: Prompt should figure out a reasonable column width
Fixes #192
Refs #225
2021-06-13 22:28:18 +09:00
Wojciech Kępka
df217f71c1 Fix wq 2021-06-13 20:48:18 +09:00
Wojciech Kępka
d008e86037 Document::is_modified should not check if path is set
If there is a new document we still want to know if there are unsaved changes
2021-06-13 20:48:18 +09:00
Wojciech Kępka
b9100fbd44 Fix clippy 2021-06-13 20:48:18 +09:00
Wojciech Kępka
52d3c29244 Deduplicate code 2021-06-13 20:48:18 +09:00
Wojciech Kępka
17c9a8499e Add qa and qa! 2021-06-13 20:48:18 +09:00
Wojciech Kępka
62e6232a32 Update write_all 2021-06-13 20:48:18 +09:00
Wojciech Kępka
d8b5d1181f Add Copy derive to PromptEvent 2021-06-13 20:48:18 +09:00
Wojciech Kępka
b500a2a138 commands: Add more write commands 2021-06-13 20:48:18 +09:00
Yusuf Bera Ertan
a3f01503e2 build(nix): use nix-cargo-integration, make shell.nix use flake devshell 2021-06-13 14:46:51 +09:00
Ivan Tham
9640ed1425 Add clarification to last buffer 2021-06-13 09:58:50 +09:00
Robin
9baf1ecc90 add symbol picker (#230)
* add symbol picker

use the lsp document_symbol request

* fix errors from merging in master

* add docs for symbol picker
2021-06-12 21:45:21 +09:00
Robin
44cc0d8eb0 add alternate file (#223)
* add alternate file

inspired by vim ctrl-6/kak ga commands. the alternate file is kept per view

* apply feedback from #223

* rename to last_accessed

* add ga doc

* add fail message for ga
2021-06-12 21:21:06 +09:00
Ivan Tham
1953588873 Change picker horizontal split to h
Follow window mode and vim behavior, x seemed weird.
2021-06-12 21:17:48 +09:00
Wojciech Kępka
45793d7c09 Update README 2021-06-12 17:26:41 +08:00
Wojciech Kępka
4b6aff8c66 Use runtime dir when defaulting to executable location 2021-06-12 17:26:41 +08:00
Wojciech Kępka
4a40e935de Make runtime_dir private 2021-06-12 17:26:41 +08:00
Wojciech Kępka
716067ba05 Add more ways to detect runtime directory 2021-06-12 17:26:41 +08:00
Wojciech Kępka
c754df12b3 lsp: Check bounds when converting lsp positions (#204)
* lsp: Make position conversion funcs return `Option`

* Add tests

* Fixes

* Revert pos_to_lsp_pos to panic
2021-06-12 16:04:30 +09:00
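The `Option`-returning conversion from PR #204 can be sketched as follows. This is a hypothetical standalone version (`lsp_pos_to_char_index` is an invented name; the real Helix functions live in helix-lsp and operate on ropey's `RopeSlice`):

```rust
// Hypothetical sketch: convert an LSP (line, character) position into a char
// index, returning None instead of panicking when the position is out of
// bounds. Not the actual Helix implementation.
fn lsp_pos_to_char_index(text: &str, line: usize, character: usize) -> Option<usize> {
    let mut offset = 0;
    for (i, l) in text.split_inclusive('\n').enumerate() {
        if i == line {
            // Reject columns past the end of the line instead of panicking.
            let line_len = l.trim_end_matches('\n').chars().count();
            if character > line_len {
                return None;
            }
            return Some(offset + character);
        }
        offset += l.chars().count();
    }
    None // line number past the end of the document
}

fn main() {
    let text = "hello\nworld\n";
    assert_eq!(lsp_pos_to_char_index(text, 1, 0), Some(6));
    assert_eq!(lsp_pos_to_char_index(text, 1, 99), None);
}
```

As the last bullet notes, `pos_to_lsp_pos` was reverted to panic: a document position produced by the editor should always be valid, so failure there indicates a bug rather than a bad server response.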
Blaž Hrastnik
1bf5b103b0 Add bug report template 2021-06-12 10:39:46 +09:00
Blaž Hrastnik
1665bac1b6 Fix broken test 2021-06-12 10:24:48 +09:00
Blaž Hrastnik
278361a086 Only auto-format for certain languages
Fixes #53
Fixes #207
2021-06-12 10:20:37 +09:00
Jakub Bartodziej
69fe46a122 Add :earlier and :later commands that can be used to navigate the full edit history. (#194)
* Disable deleting from an empty buffer which can cause a crash.

* Improve on the fix for deleting from the end of the buffer.

* Clean up leftover log.

* Avoid theoretical underflow.

* Implement :before which accepts a time interval and moves the editor to
the closest history state to the commit of the current time minus that
interval. Current time is now by default, or the commit time if :before
has just been used.

* Add :earlier and :later commands that can move through
the edit history and retrieve changes hidden by undoing
and committing new changes. The commands accept a number
of steps or a time period relative to the current change.

* Fix clippy lint error.

* Remove the dependency on parse_duration, add a custom parser instead.

* Fix clippy errors.

* Make helix_core::history a public module.

* Use the helper for getting the current document and view.

* Handled some PR comments.

* Fix the logic in :later n.

Co-authored-by: Ivan Tham <pickfire@riseup.net>

* Add an alias for :earlier.

Co-authored-by: Ivan Tham <pickfire@riseup.net>

* Add an alias for later.

Co-authored-by: Ivan Tham <pickfire@riseup.net>

* Run cargo fmt.

* Add some tests for earlier and later.

* Add more tests and restore the fix for later that disappeared somehow.

* Use ? instead of a match on an option.

Co-authored-by: Ivan Tham <pickfire@riseup.net>

* Rename to UndoKind.

* Remove the leftover match.

* Handle a bunch of review comments.

* More systemd.time compliant time units and additional description for the new commands.

* A more concise rewrite of the time span parser using ideas from PR discussion.

* Replace a match with map_err().

Co-authored-by: Ivan Tham <pickfire@riseup.net>

Co-authored-by: Jakub Bartodziej <jqb@google.com>
Co-authored-by: Ivan Tham <pickfire@riseup.net>
2021-06-11 22:06:13 +09:00
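The custom duration parser that replaced the `parse_duration` dependency could look roughly like this. A minimal sketch only: `parse_span` is an invented name, and the real parser accepts more systemd.time-style units and combinations:

```rust
use std::time::Duration;

// Minimal sketch of parsing time spans like "30s", "5m", "2h" for commands
// such as :earlier 10m. Hypothetical; not the actual Helix parser.
fn parse_span(s: &str) -> Result<Duration, String> {
    let s = s.trim();
    // Split the leading digits from the unit suffix.
    let split = s
        .find(|c: char| !c.is_ascii_digit())
        .ok_or_else(|| "missing unit".to_string())?;
    let (num, unit) = s.split_at(split);
    let n: u64 = num.parse().map_err(|_| "invalid number".to_string())?;
    let secs = match unit.trim() {
        "s" | "seconds" => n,
        "m" | "minutes" => n * 60,
        "h" | "hours" => n * 3600,
        "d" | "days" => n * 86_400,
        other => return Err(format!("unknown unit: {other}")),
    };
    Ok(Duration::from_secs(secs))
}

fn main() {
    assert_eq!(parse_span("5m").unwrap(), Duration::from_secs(300));
    assert!(parse_span("10").is_err()); // bare number has no unit
}
```

A bare step count (e.g. `:earlier 3`) is handled separately from a time span, which is why the sketch treats a missing unit as an error.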
PabloMansanet
86af55c379 Movement fixes, refactor and unit test suite (#217)
* Add convenience/clarity wrapper for Range initialization

* Test horizontal moves

* Add column jumping tests

* Add failing movement conditions for multi-word moves

* Refactor skip_over_next

* Add complex forward movement unit tests

* Add strict whitespace checks and edge case tests

* Restore formatting

* Remove unused function

* Add empty test case for deletion and fix nth_prev_word_boundary

* Add tests for backward motion

* Refactor word movement

* Address review comments and finish refactoring backwards move

* Finish unit test suite

* Fmt pass

* Fix lint errors

* Clean up diff restoring bad 'cargo fmt' actions

* Simplify movement closures (thanks Pickfire)

* Fmt pass

* Replace index-based movement with iterator based movement, ensuring that each move incurs a single call to the RopeSlice API

* Break down tuple function

* Extract common logic to all movement functions

* Split iterator helpers away into their own module

* WIP reducing clones

* Operate on spans

* WIP simplifying iterators

* Simplify motion helpers

* Fix iterator

* Fix all unit tests

* Refactor and simplify

* Simplify fold
2021-06-11 21:57:07 +09:00
Wojciech Kępka
0c2b99327a commands: Handle t<ENTER> as till newline 2021-06-11 18:34:46 +09:00
Blaž Hrastnik
a8a5bcd13d Temporarily disable workDone
Blows up on gopls because we don't handle receiving window/workDoneProgress/create method calls
2021-06-11 13:30:21 +09:00
Wojciech Kępka
098806ce2a lsp: Display LSP progress messages (#216) 2021-06-11 12:42:16 +09:00
Robin van Dijk
c0d32707d0 move to first nonwhitespace on shift-i
This matches the behaviour in vim and kak
2021-06-10 22:02:38 +09:00
Timothy DeHerrera
d8df10f295 Add Nix runtime 2021-06-10 22:01:48 +09:00
Timothy DeHerrera
38073fd64c Add Nix syntax 2021-06-10 22:01:48 +09:00
Timothy DeHerrera
01760c3845 embed runtime 2021-06-10 22:00:53 +09:00
Timothy DeHerrera
8590f6a912 ignore Nix outputs 2021-06-10 22:00:53 +09:00
Timothy DeHerrera
69378382c3 add overlay 2021-06-10 22:00:53 +09:00
Timothy DeHerrera
1a774d61bb Fix flake package 2021-06-10 22:00:53 +09:00
notoria
1b14e9a19a Downgrade unicode-segmentation 2021-06-10 22:00:08 +09:00
notoria
e46346c907 Correct tree-sitter-haskell submodule 2021-06-10 22:00:08 +09:00
notoria
9887b1275a Implement missing Debug and update Cargo.lock 2021-06-10 22:00:08 +09:00
Ivan Tham
7cc13fefe9 Derive debug without feature
Note that this also removed those `finish_non_exhaustive()`.
2021-06-10 22:00:08 +09:00
notoria
1a3a924634 Implement Debug for data structure as a feature 2021-06-10 22:00:08 +09:00
Blaž Hrastnik
aebdef8257 Reuse a cursor from the pool if available (fixes #202) 2021-06-10 12:49:34 +09:00
Ivan Tham
6b3c9d8ed3 Fix jump behavior, goto_implementation now jump
Better jump behavior since we override the first jump if it's on the
first document. At the same time, ctrl-i is now working with gd jumps.
2021-06-10 11:08:18 +08:00
wojciechkepka
4dbc23ff1c Fix documentation popup panic 2021-06-10 11:26:03 +09:00
Kevin Sjöberg
b20e4a108c Only enforce limit outside of .git 2021-06-09 10:06:31 +09:00
Kevin Sjöberg
1bb9977faf Match keybindings of menu 2021-06-09 09:54:22 +09:00
Kevin Sjöberg
29962a5bd9 Fix Shift-Tab for moving upwards in menu 2021-06-09 09:53:40 +09:00
Kevin Sjöberg
7ef0e2cab6 Don't panic on empty document 2021-06-09 09:43:21 +09:00
Corey Powell
35feb614b6 Updated elixir queries to fix crash 2021-06-09 00:07:44 +09:00
Ivan Tham
5e2ba28e0e Fix panic on ctrl-w empty document 2021-06-08 23:08:08 +09:00
Blaž Hrastnik
83723957fe Fix crash when too many completions available
Refs #81
2021-06-08 21:58:26 +09:00
Zheming Li
ae51065213 Support go to line 1 2021-06-08 17:27:21 +09:00
Wojciech Kępka
4e3a343602 Make r<ENTER> work 2021-06-08 17:23:38 +09:00
Wojciech Kępka
81e02e1ba4 Remove unwanted as_str 2021-06-08 17:23:38 +09:00
Wojciech Kępka
c349ceb61f Don't replace newlines 2021-06-08 17:23:38 +09:00
Wojciech Kępka
2e4a338944 Add bounds checks to replace 2021-06-08 17:23:38 +09:00
Wojciech Kępka
9c83a98469 commands: Replace all characters in selection 2021-06-08 17:23:38 +09:00
Wojciech Kępka
1bffb34350 Make matching bracket dimmed, prevent out of bounds rendering 2021-06-08 17:23:05 +09:00
Wojciech Kępka
c978d811d9 Cleanup find_first_non_whitespace_char funcs 2021-06-08 17:22:37 +09:00
Wojciech Kępka
48df05b16d commands: Add goto first non-whitespace char of line 2021-06-08 17:22:37 +09:00
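The goto command above boils down to a first-non-whitespace scan over the current line; a standalone sketch (the real helper operates on a ropey `RopeSlice` rather than `&str`):

```rust
// Sketch of a first-non-whitespace lookup over a single line; returns the
// char position of the first non-blank character, or None for blank lines.
fn find_first_non_whitespace_char(line: &str) -> Option<usize> {
    line.chars().position(|ch| !ch.is_whitespace())
}

fn main() {
    assert_eq!(find_first_non_whitespace_char("    let x = 1;"), Some(4));
    assert_eq!(find_first_non_whitespace_char("   "), None);
}
```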
Kirawi
b873fb9897 Fix Unicode (#135)
* init

* wip

* wip

* fix unicode break

* fix unicode break

* Update helix-core/src/transaction.rs

Co-authored-by: Benoît Cortier <benoit.cortier@fried-world.eu>

* clippy

* fix

* add changes

* added test

* wip

* wip

* wip

* wip

* fix

* fix view

* fix #88

Co-authored-by: Benoît Cortier <benoit.cortier@fried-world.eu>
2021-06-08 13:20:15 +09:00
Kelly Thomas Kline
8f1eb7b2b0 Add trace log primer to the Contributing section 2021-06-08 12:21:25 +09:00
Ivan Tham
82fdfdc38e Add missing newline to end of file on load
Fix #152
2021-06-08 11:38:56 +09:00
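The fix for #152 amounts to appending a newline when a loaded file lacks one. A hedged sketch (the function name is invented; the real change happens while building the document's rope on load):

```rust
// Sketch: ensure loaded text ends with a trailing newline, as described in
// the commit above. Hypothetical helper, not the actual Helix code.
fn with_trailing_newline(mut text: String) -> String {
    if !text.ends_with('\n') {
        text.push('\n');
    }
    text
}

fn main() {
    assert_eq!(with_trailing_newline("fn main() {}".into()), "fn main() {}\n");
    assert_eq!(with_trailing_newline("done\n".into()), "done\n");
}
```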
Egor Karavaev
ea6667070f helix-lsp cleanup 2021-06-08 10:56:46 +09:00
Egor Karavaev
960bc9f134 Don't panic on LSP not starting 2021-06-08 10:02:41 +09:00
Kevin Sjöberg
08f50310bd Bump file picker limit 2021-06-08 09:51:50 +09:00
Wojciech Kępka
4bec87ad18 Update keymap 2021-06-08 09:50:14 +09:00
Wojciech Kępka
c65b4dea09 commands: Add replace with yanked as R 2021-06-08 09:50:14 +09:00
Wojciech Kępka
6fc0e0b5fb completion: Fix unimplemented autocomplete 2021-06-08 09:38:53 +09:00
Blaž Hrastnik
0201ef9205 ui: completion: Use the correct type_name
Fixes #166
2021-06-08 01:38:57 +09:00
Wojciech Kępka
037f45f24e Create all parent directories for config and cache 2021-06-08 01:07:30 +09:00
Blaž Hrastnik
9821beb5c4 Make gh/gl extend selection in select mode 2021-06-07 23:32:44 +09:00
Blaž Hrastnik
3cee0bf200 Address clippy lint 2021-06-07 23:08:51 +09:00
Blaž Hrastnik
4fd38f82a3 Disable failing doctest 2021-06-07 23:05:39 +09:00
Ivan Tham
b5682f984b Separate helix-term as a library
helix-term stuff will now be documented in rustdoc.
2021-06-07 21:35:31 +08:00
Benoît CORTIER
68affa3c59 Implement register selection
User can select register to yank into with the " command.
A new state is added to `Editor` and `commands::Context` structs.
This state is managed by leveraging a new struct `RegisterSelection`.
2021-06-07 21:52:09 +09:00
Blaž Hrastnik
d5de9183ef Use upstream jsonrpc again 2021-06-07 21:33:17 +09:00
Blaž Hrastnik
8d6fad4cac lsp: Provide workspace root on client.initialize() 2021-06-07 21:32:01 +09:00
Blaž Hrastnik
14830e75ff Revert the line number rendering change, we were correct before 2021-06-07 13:24:03 +09:00
59 changed files with 2802 additions and 1114 deletions

.envrc (5 changed lines)

@@ -1,2 +1,5 @@
 watch_file shell.nix
-use flake
+watch_file flake.lock
+
+# try to use flakes, if it fails use normal nix (ie. shell.nix)
+use flake || use nix

.github/ISSUE_TEMPLATE/bug_report.md (new file, 28 changed lines)

@@ -0,0 +1,28 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: C-bug
assignees: ''
---
<!-- Your issue may already be reported!
Please search on the issue tracker before creating one. -->
### Reproduction steps
<!-- Ideally provide a key sequence and/or asciinema.org recording. -->
### Environment
- Platform: <!-- macOS / Windows / Linux -->
- Helix version: <!-- 'hx -v' if using a release, 'git describe' if building from master -->
<details><summary>~/.cache/helix/helix.log</summary>
```
please provide a copy of `~/.cache/helix/helix.log` here if possible, you may need to redact some of the lines
```
</details>

.gitignore (1 changed line)

@@ -2,3 +2,4 @@ target
 .direnv
 helix-term/rustfmt.toml
 helix-syntax/languages/
+result

.gitmodules (3 changed lines)

@@ -86,3 +86,6 @@
 	path = helix-syntax/languages/tree-sitter-elixir
 	url = https://github.com/IceDragon200/tree-sitter-elixir
 	shallow = true
+[submodule "helix-syntax/languages/tree-sitter-nix"]
+	path = helix-syntax/languages/tree-sitter-nix
+	url = https://github.com/cstrahan/tree-sitter-nix

Cargo.lock (generated, 53 changed lines)

@@ -79,11 +79,10 @@ dependencies = [
 [[package]]
 name = "crossbeam-utils"
-version = "0.8.4"
+version = "0.8.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4feb231f0d4d6af81aed15928e58ecf5816aa62a2393e2c82f46973e92a9a278"
+checksum = "d82cfc11ce7f2c3faef78d8a684447b40d503d9681acebed6cb728d45940c4db"
 dependencies = [
- "autocfg",
  "cfg-if",
  "lazy_static",
 ]
@@ -238,12 +237,6 @@ dependencies = [
  "wasi",
 ]
 
-[[package]]
-name = "glob"
-version = "0.3.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9b919933a397b79c37e33b77bb2aa3dc8eb6e165ad809e58ff75bc7db2e34574"
-
 [[package]]
 name = "globset"
 version = "0.4.6"
@@ -259,7 +252,7 @@ dependencies = [
 [[package]]
 name = "helix-core"
-version = "0.0.10"
+version = "0.2.0"
 dependencies = [
  "etcetera",
  "helix-syntax",
@@ -272,35 +265,32 @@ dependencies = [
  "tendril",
  "toml",
  "tree-sitter",
+ "unicode-general-category",
  "unicode-segmentation",
  "unicode-width",
 ]
 
 [[package]]
 name = "helix-lsp"
-version = "0.0.10"
+version = "0.2.0"
 dependencies = [
  "anyhow",
  "futures-executor",
  "futures-util",
- "glob",
  "helix-core",
  "jsonrpc-core",
  "log",
  "lsp-types",
- "once_cell",
- "pathdiff",
  "serde",
  "serde_json",
  "thiserror",
  "tokio",
  "tokio-stream",
- "url",
 ]
 
 [[package]]
 name = "helix-syntax"
-version = "0.0.10"
+version = "0.2.0"
 dependencies = [
  "cc",
  "serde",
@@ -310,7 +300,7 @@ dependencies = [
 [[package]]
 name = "helix-term"
-version = "0.0.10"
+version = "0.2.0"
 dependencies = [
  "anyhow",
  "chrono",
@@ -336,7 +326,7 @@ dependencies = [
 [[package]]
 name = "helix-tui"
-version = "0.0.10"
+version = "0.2.0"
 dependencies = [
  "bitflags",
  "cassowary",
@@ -348,7 +338,7 @@ dependencies = [
 [[package]]
 name = "helix-view"
-version = "0.0.10"
+version = "0.2.0"
 dependencies = [
  "anyhow",
  "crossterm",
@@ -429,7 +419,8 @@ dependencies = [
 [[package]]
 name = "jsonrpc-core"
 version = "17.1.0"
-source = "git+https://github.com/paritytech/jsonrpc#609d7a6cc160742d035510fa89fb424ccf077660"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d4467ab6dfa369b69e52bd0692e480c4d117410538526a57a304a0f2250fd95e"
 dependencies = [
  "futures-util",
  "log",
@@ -596,12 +587,6 @@ dependencies = [
  "winapi",
 ]
 
-[[package]]
-name = "pathdiff"
-version = "0.2.0"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "877630b3de15c0b64cc52f659345724fbf6bdad9bd9566699fc53688f3c34a34"
-
 [[package]]
 name = "percent-encoding"
 version = "2.1.0"
@@ -803,9 +788,9 @@ dependencies = [
 [[package]]
 name = "signal-hook-registry"
-version = "1.3.0"
+version = "1.4.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "16f1d0fef1604ba8f7a073c7e701f213e056707210e9020af4528e0101ce11a6"
+checksum = "e51e73328dc4ac0c7ccbda3a494dfa03df1de2f46018127f60c693f2648455b0"
 dependencies = [
  "libc",
 ]
@@ -986,10 +971,16 @@ dependencies = [
 ]
 
 [[package]]
-name = "unicode-normalization"
-version = "0.1.17"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "07fbfce1c8a97d547e8b5334978438d9d6ec8c20e38f56d4a4374d181493eaef"
+name = "unicode-general-category"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "07547e3ee45e28326cc23faac56d44f58f16ab23e413db526debce3b0bfd2742"
+
+[[package]]
+name = "unicode-normalization"
+version = "0.1.19"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d54590932941a9e9266f0832deed84ebe1bf2e4c9e4a3554d393d18f5e854bf9"
 dependencies = [
  "tinyvec",
 ]


@@ -41,14 +41,12 @@ cargo install --path helix-term
 This will install the `hx` binary to `$HOME/.cargo/bin`.
 
-Now copy the `runtime/` directory somewhere. Helix will by default look for the
-runtime inside the same folder as the executable, but that can be overriden via
-the `HELIX_RUNTIME` environment variable.
+Now copy the `runtime/` directory somewhere. Helix will by default look for the runtime
+inside the config directory or the same directory as executable, but that can be overriden
+via the `HELIX_RUNTIME` environment variable.
 
-> NOTE: You should set this to <path to repository>/runtime in development (if
-> running via cargo).
->
-> `export HELIX_RUNTIME=$PWD/runtime`
+> NOTE: running via cargo doesn't require setting explicit `HELIX_RUNTIME` path, it will automatically
+> detect the `runtime` directory in the project root.
 
 If you want to embed the `runtime/` directory into the Helix binary you can build
 it with:
@@ -78,6 +76,9 @@ Some suggestions to get started:
 - You can look at the [good first issue](https://github.com/helix-editor/helix/labels/good%20first%20issue) label on the issue tracker.
 - Help with packaging on various distributions needed!
+- To use print debugging to the `~/.cache/helix/helix.log` file, you must:
+  * Print using `log::info!`, `warn!`, or `error!`. (`log::info!("helix!")`)
+  * Pass the appropriate verbosity level option for the desired log level. (`hx -v <file>` for info, more `v`s for higher severity inclusive)
 - If your preferred language is missing, integrating a tree-sitter grammar for
   it and defining syntax highlight queries for it is straight forward and
   doesn't require much knowledge of the internals.


@@ -19,7 +19,7 @@
 | F | find previous char |
 | Home | move to the start of the line |
 | End | move to the end of the line |
 | m | Jump to matching bracket |
 | PageUp | Move page up |
 | PageDown | Move page down |
 | ctrl-u | Move half page up |
@@ -38,13 +38,14 @@
 | Key | Description |
 |-----|-----------|
-| r | replace (single character change) |
+| r | replace with a character |
+| R | replace with yanked text |
 | i | Insert before selection |
 | a | Insert after selection (append) |
 | I | Insert at the start of the line |
 | A | Insert at the end of the line |
 | o | Open new line below selection |
 | o | Open new line above selection |
 | u | Undo change |
 | U | Redo change |
 | y | Yank selection |
@@ -53,26 +54,26 @@
 | > | Indent selection |
 | < | Unindent selection |
 | = | Format selection |
 | d | Delete selection |
 | c | Change selection (delete and enter insert mode) |
 
 ### Selection manipulation
 
 | Key | Description |
 |-----|-----------|
 | s | Select all regex matches inside selections |
 | S | Split selection into subselections on regex matches |
 | alt-s | Split selection on newlines |
 | ; | Collapse selection onto a single cursor |
 | alt-; | Flip selection cursor and anchor |
 | % | Select entire file |
 | x | Select current line |
 | X | Extend to next line |
 | [ | Expand selection to parent syntax node TODO: pick a key |
 | J | join lines inside selection |
 | K | keep selections matching the regex TODO: overlapped by hover help |
 | space | keep only the primary selection TODO: overlapped by space mode |
 | ctrl-c | Comment/uncomment the selections |
 
 ### Search
@@ -81,10 +82,10 @@ in reverse, or searching via smartcase.
 | Key | Description |
 |-----|-----------|
 | / | Search for regex pattern |
 | n | Select next search match |
 | N | Add next search match to selection |
 | * | Use current selection as the search pattern |
 
 ### Diagnostics
@@ -132,6 +133,7 @@ Jumps to various locations.
 | e | Go to the end of the file |
 | h | Go to the start of the line |
 | l | Go to the end of the line |
+| s | Go to first non-whitespace character of the line |
 | t | Go to the top of the screen |
 | m | Go to the middle of the screen |
 | b | Go to the bottom of the screen |
@@ -139,6 +141,7 @@ Jumps to various locations.
 | y | Go to type definition |
 | r | Go to references |
 | i | Go to implementation |
+| a | Go to the last accessed/alternate file |
 
 ## Object mode
@@ -163,5 +166,6 @@ This layer is a kludge of mappings I had under leader key in neovim.
 |-----|-----------|
 | f | Open file picker |
 | b | Open buffer picker |
+| s | Open symbol picker (current document)|
 | w | Enter window mode |
 | space | Keep primary selection TODO: it's here because space mode replaced it |

flake.lock (generated, 124 changed lines)

@@ -1,73 +1,83 @@
 {
   "nodes": {
-    "flake-utils": {
+    "devshell": {
       "locked": {
-        "lastModified": 1620759905,
-        "narHash": "sha256-WiyWawrgmyN0EdmiHyG2V+fqReiVi8bM9cRdMaKQOFg=",
+        "lastModified": 1622711433,
+        "narHash": "sha256-rGjXz7FA7HImAT3TtoqwecByLO5yhVPSwPdaYPBFRQw=",
         "owner": "numtide",
-        "repo": "flake-utils",
-        "rev": "b543720b25df6ffdfcf9227afafc5b8c1fabfae8",
+        "repo": "devshell",
+        "rev": "1f4fb67b662b65fa7cfe696fc003fcc1e8f7cc36",
         "type": "github"
       },
       "original": {
         "owner": "numtide",
-        "repo": "flake-utils",
+        "repo": "devshell",
         "type": "github"
       }
     },
-    "flake-utils_2": {
+    "flakeCompat": {
+      "flake": false,
       "locked": {
-        "lastModified": 1614513358,
-        "narHash": "sha256-LakhOx3S1dRjnh0b5Dg3mbZyH0ToC9I8Y2wKSkBaTzU=",
-        "owner": "numtide",
-        "repo": "flake-utils",
-        "rev": "5466c5bbece17adaab2d82fae80b46e807611bf3",
+        "lastModified": 1606424373,
+        "narHash": "sha256-oq8d4//CJOrVj+EcOaSXvMebvuTkmBJuT5tzlfewUnQ=",
+        "owner": "edolstra",
+        "repo": "flake-compat",
+        "rev": "99f1c2157fba4bfe6211a321fd0ee43199025dbf",
         "type": "github"
       },
       "original": {
-        "owner": "numtide",
-        "repo": "flake-utils",
+        "owner": "edolstra",
+        "repo": "flake-compat",
         "type": "github"
       }
     },
-    "naersk": {
+    "helix": {
+      "flake": false,
+      "locked": {
+        "lastModified": 1623545930,
+        "narHash": "sha256-14ASoYbxXHU/qPGctiUymb4fMRCoih9c7YujjxqEkdU=",
+        "ref": "master",
+        "rev": "9640ed1425f2db904fb42cd0c54dc6fbc05ca292",
+        "revCount": 821,
+        "submodules": true,
+        "type": "git",
+        "url": "https://github.com/helix-editor/helix.git"
+      },
+      "original": {
+        "submodules": true,
+        "type": "git",
+        "url": "https://github.com/helix-editor/helix.git"
+      }
+    },
+    "nixCargoIntegration": {
       "inputs": {
-        "nixpkgs": "nixpkgs"
+        "devshell": "devshell",
+        "nixpkgs": [
+          "nixpkgs"
+        ],
+        "rustOverlay": "rustOverlay"
       },
       "locked": {
-        "lastModified": 1620316130,
-        "narHash": "sha256-sU0VS5oJS1FsHsZsLELAXc7G2eIelVuucRw+q5B1x9k=",
-        "owner": "nmattia",
-        "repo": "naersk",
-        "rev": "a3f40fe42cc6d267ff7518fa3199e99ff1444ac4",
+        "lastModified": 1623560601,
+        "narHash": "sha256-H1Dq461b2m8v/FxmPphd8pOAx4pPja0UE/xvcMUYwwY=",
+        "owner": "yusdacra",
+        "repo": "nix-cargo-integration",
+        "rev": "1238fd751e5d6eb030aee244da9fee6c3ad8b316",
         "type": "github"
       },
       "original": {
-        "owner": "nmattia",
-        "repo": "naersk",
+        "owner": "yusdacra",
+        "repo": "nix-cargo-integration",
         "type": "github"
       }
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1622059058,
-        "narHash": "sha256-t1/ZMtyxClVSfcV4Pt5C1YpkeJ/UwFF3oitLD7Ch/UA=",
-        "path": "/nix/store/2gam4i1fa1v19k3n5rc9vgvqac1c2xj5-source",
-        "rev": "84aa23742f6c72501f9cc209f29c438766f5352d",
-        "type": "path"
-      },
-      "original": {
-        "id": "nixpkgs",
-        "type": "indirect"
-      }
-    },
-    "nixpkgs_2": {
-      "locked": {
-        "lastModified": 1622194753,
-        "narHash": "sha256-76qtvFp/vFEz46lz5iZMJ0mnsWQYmuGYlb0fHgKqqMg=",
+        "lastModified": 1623324058,
+        "narHash": "sha256-Jm9GUTXdjXz56gWDKy++EpFfjrBaxqXlLvTLfgEi8lo=",
         "owner": "nixos",
         "repo": "nixpkgs",
-        "rev": "540dccb2aeaffa9dc69bfdc41c55abd7ccc6baa3",
+        "rev": "432fc2d9a67f92e05438dff5fdc2b39d33f77997",
         "type": "github"
       },
       "original": {
@@ -77,40 +87,22 @@
         "type": "github"
       }
     },
-    "nixpkgs_3": {
-      "locked": {
-        "lastModified": 1617325113,
-        "narHash": "sha256-GksR0nvGxfZ79T91UUtWjjccxazv6Yh/MvEJ82v1Xmw=",
-        "owner": "nixos",
-        "repo": "nixpkgs",
-        "rev": "54c1e44240d8a527a8f4892608c4bce5440c3ecb",
-        "type": "github"
-      },
-      "original": {
-        "owner": "NixOS",
-        "repo": "nixpkgs",
-        "type": "github"
-      }
-    },
     "root": {
       "inputs": {
-        "flake-utils": "flake-utils",
-        "naersk": "naersk",
-        "nixpkgs": "nixpkgs_2",
-        "rust-overlay": "rust-overlay"
+        "flakeCompat": "flakeCompat",
+        "helix": "helix",
+        "nixCargoIntegration": "nixCargoIntegration",
+        "nixpkgs": "nixpkgs"
       }
     },
-    "rust-overlay": {
-      "inputs": {
-        "flake-utils": "flake-utils_2",
-        "nixpkgs": "nixpkgs_3"
-      },
+    "rustOverlay": {
+      "flake": false,
       "locked": {
-        "lastModified": 1622257069,
-        "narHash": "sha256-+QVnS/es9JCRZXphoHL0fOIUhpGqB4/wreBsXWArVck=",
+        "lastModified": 1623550815,
+        "narHash": "sha256-RumRrkE6OTJDndHV4qZNZv8kUGnzwRHZQSyzx29r6/g=",
         "owner": "oxalica",
         "repo": "rust-overlay",
-        "rev": "8aa5f93c0b665e5357af19c5631a3450bff4aba5",
+        "rev": "9824f142cbd7bc3e2a92eefbb79addfff8704cd3",
         "type": "github"
       },
       "original": {

@@ -3,31 +3,54 @@
   inputs = {
     nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
-    flake-utils.url = "github:numtide/flake-utils";
-    rust-overlay.url = "github:oxalica/rust-overlay";
-    naersk.url = "github:nmattia/naersk";
+    nixCargoIntegration = {
+      url = "github:yusdacra/nix-cargo-integration";
+      inputs.nixpkgs.follows = "nixpkgs";
+    };
+    flakeCompat = {
+      url = "github:edolstra/flake-compat";
+      flake = false;
+    };
+    helix = {
+      url = "https://github.com/helix-editor/helix.git";
+      type = "git";
+      flake = false;
+      submodules = true;
+    };
   };
 
-  outputs = inputs@{ self, nixpkgs, naersk, rust-overlay, flake-utils, ... }:
-    flake-utils.lib.eachDefaultSystem (system:
-      let
-        pkgs = import nixpkgs { inherit system; overlays = [ rust-overlay.overlay ]; };
-        rust = (pkgs.rustChannelOf {
-          date = "2021-05-01";
-          channel = "nightly";
-        }).minimal; # cargo, rustc and rust-std
-        naerskLib = naersk.lib."${system}".override {
-          # naersk can't build with stable?!
-          # inherit (pkgs.rust-bin.stable.latest) rustc cargo;
-          rustc = rust;
-          cargo = rust;
-        };
-      in rec {
-        packages.helix = naerskLib.buildPackage {
-          pname = "helix";
-          root = ./.;
-        };
-        defaultPackage = packages.helix;
-        devShell = pkgs.callPackage ./shell.nix {};
-      });
+  outputs = inputs@{ nixCargoIntegration, helix, ... }:
+    nixCargoIntegration.lib.makeOutputs {
+      root = ./.;
+      buildPlatform = "crate2nix";
+      renameOutputs = { "helix-term" = "helix"; };
+      # Set default app to hx (binary is from helix-term release build)
+      # Set default package to helix-term release build
+      defaultOutputs = { app = "hx"; package = "helix"; };
+      overrides = {
+        crateOverrides = common: _: {
+          helix-term = prev: { buildInputs = (prev.buildInputs or [ ]) ++ [ common.cCompiler.cc.lib ]; };
+          # link runtime since helix-core expects it because of embed_runtime feature
+          helix-core = _: { preConfigure = "ln -s ${common.root + "/runtime"} ../runtime"; };
+          # link languages and theme toml files since helix-view expects them
+          helix-view = _: { preConfigure = "ln -s ${common.root}/{languages.toml,theme.toml} .."; };
+          helix-syntax = prev: {
+            src = common.pkgs.runCommand prev.src.name { } ''
+              mkdir -p $out
+              ln -s ${prev.src}/* $out
+              ln -sf ${helix}/helix-syntax/languages $out
+            '';
+          };
+        };
+        shell = common: prev: {
+          packages = prev.packages ++ (with common.pkgs; [ lld_10 lldb ]);
+          env = prev.env ++ [
+            { name = "HELIX_RUNTIME"; eval = "$PWD/runtime"; }
+            { name = "RUST_BACKTRACE"; value = "1"; }
+            { name = "RUSTFLAGS"; value = "-C link-arg=-fuse-ld=lld -C target-cpu=native"; }
+          ];
+        };
+        build = _: prev: { rootFeatures = prev.rootFeatures ++ [ "embed_runtime" ]; };
+      };
+    };
 }


@@ -1,6 +1,6 @@
 [package]
 name = "helix-core"
-version = "0.0.10"
+version = "0.2.0"
 authors = ["Blaž Hrastnik <blaz@mxxn.io>"]
 edition = "2018"
 license = "MPL-2.0"
@@ -17,8 +17,9 @@ helix-syntax = { path = "../helix-syntax" }
 ropey = "1.2"
 smallvec = "1.4"
 tendril = "0.4.2"
-unicode-segmentation = "1.6"
+unicode-segmentation = "1.7.1"
 unicode-width = "0.1"
+unicode-general-category = "0.4.0"
 # slab = "0.4.2"
 tree-sitter = "0.19"
 once_cell = "1.4"


@@ -67,7 +67,7 @@ fn handle_open(
     let mut offs = 0;
 
-    let mut transaction = Transaction::change_by_selection(doc, selection, |range| {
+    let transaction = Transaction::change_by_selection(doc, selection, |range| {
         let pos = range.head;
         let next = next_char(doc, pos);
@@ -109,7 +109,7 @@ fn handle_close(doc: &Rope, selection: &Selection, _open: char, close: char) ->
     let mut offs = 0;
 
-    let mut transaction = Transaction::change_by_selection(doc, selection, |range| {
+    let transaction = Transaction::change_by_selection(doc, selection, |range| {
         let pos = range.head;
         let next = next_char(doc, pos);


@@ -1,5 +1,5 @@
 use crate::{
-    find_first_non_whitespace_char2, Change, Rope, RopeSlice, Selection, Tendril, Transaction,
+    find_first_non_whitespace_char, Change, Rope, RopeSlice, Selection, Tendril, Transaction,
 };
 use core::ops::Range;
 use std::borrow::Cow;
@@ -14,7 +14,7 @@ fn find_line_comment(
     let mut min = usize::MAX; // minimum col for find_first_non_whitespace_char
     for line in lines {
         let line_slice = text.line(line);
-        if let Some(pos) = find_first_non_whitespace_char2(line_slice) {
+        if let Some(pos) = find_first_non_whitespace_char(line_slice) {
             let len = line_slice.len_chars();
             if pos < min {


@@ -1,4 +1,4 @@
-#[derive(Eq, PartialEq)]
+#[derive(Debug, Eq, PartialEq)]
 pub enum Severity {
     Error,
     Warning,
@@ -6,10 +6,13 @@ pub enum Severity {
     Hint,
 }
 
+#[derive(Debug)]
 pub struct Range {
     pub start: usize,
     pub end: usize,
 }
 
+#[derive(Debug)]
 pub struct Diagnostic {
     pub range: Range,
     pub line: usize,


@@ -3,6 +3,8 @@ use ropey::{iter::Chunks, str_utils::byte_to_char_idx, RopeSlice};
 use unicode_segmentation::{GraphemeCursor, GraphemeIncomplete};
 use unicode_width::UnicodeWidthStr;
 
+use std::fmt;
+
 #[must_use]
 pub fn grapheme_width(g: &str) -> usize {
     if g.as_bytes()[0] <= 127 {
@@ -156,6 +158,18 @@ pub struct RopeGraphemes<'a> {
     cursor: GraphemeCursor,
 }
 
+impl<'a> fmt::Debug for RopeGraphemes<'a> {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        f.debug_struct("RopeGraphemes")
+            .field("text", &self.text)
+            .field("chunks", &self.chunks)
+            .field("cur_chunk", &self.cur_chunk)
+            .field("cur_chunk_start", &self.cur_chunk_start)
+            // .field("cursor", &self.cursor)
+            .finish()
+    }
+}
+
 impl<'a> RopeGraphemes<'a> {
     #[must_use]
     pub fn new(slice: RopeSlice) -> RopeGraphemes {
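The manual `Debug` impl above exists because `GraphemeCursor` does not implement `Debug`, so the field is skipped instead of derived. A minimal self-contained sketch of the same pattern (`Wrapper` and `Opaque` are hypothetical stand-ins, not types from the codebase):

```rust
use std::fmt;

// Stands in for a field type (like GraphemeCursor) that has no Debug impl.
struct Opaque;

struct Wrapper {
    name: &'static str,
    cursor: Opaque,
}

impl fmt::Debug for Wrapper {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Wrapper")
            .field("name", &self.name)
            // `cursor` is skipped because Opaque is not Debug.
            .finish()
    }
}

fn main() {
    let w = Wrapper { name: "hi", cursor: Opaque };
    // Only the listed fields appear in the output.
    println!("{:?}", w);
}
```

`debug_struct` with `finish()` simply omits the unlisted field from the rendered output, which is why the commented-out `.field("cursor", ...)` line can stay as documentation of the omission.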


@@ -1,19 +1,61 @@
 use crate::{ChangeSet, Rope, State, Transaction};
+use once_cell::sync::Lazy;
+use regex::Regex;
 use smallvec::{smallvec, SmallVec};
+use std::num::NonZeroUsize;
+use std::time::{Duration, Instant};
 
-/// Undo-tree style history store.
+// Stores the history of changes to a buffer.
+//
+// Currently the history is represented as a vector of revisions. The vector
+// always has at least one element: the empty root revision. Each revision
+// with the exception of the root has a parent revision, a [Transaction]
+// that can be applied to its parent to transition from the parent to itself,
+// and an inversion of that transaction to transition from the parent to its
+// latest child.
+//
+// When using `u` to undo a change, an inverse of the stored transaction will
+// be applied which will transition the buffer to the parent state.
+//
+// Each revision with the exception of the last in the vector also has a
+// last child revision. When using `U` to redo a change, the last child transaction
+// will be applied to the current state of the buffer.
+//
+// The current revision is the one currently displayed in the buffer.
+//
+// Committing a new revision to the history will update the last child of the
+// current revision, and push a new revision to the end of the vector.
+//
+// Revisions are committed with a timestamp. :earlier and :later can be used
+// to jump to the closest revision to a moment in time relative to the timestamp
+// of the current revision plus (:later) or minus (:earlier) the duration
+// given to the command. If a single integer is given, the editor will instead
+// jump the given number of revisions in the vector.
+//
+// Limitations:
+//  * Changes in selections currently don't commit history changes. The selection
+//    will only be updated to the state after a committed buffer change.
+//  * The vector of history revisions is currently unbounded. This might
+//    cause the memory consumption to grow significantly large during long
+//    editing sessions.
+//  * Because delete transactions currently don't store the text that they
+//    delete, we also store an inversion of the transaction.
+#[derive(Debug)]
 pub struct History {
     revisions: Vec<Revision>,
-    cursor: usize,
+    current: usize,
 }
 
+// A single point in history. See [History] for more information.
 #[derive(Debug)]
 struct Revision {
     parent: usize,
-    children: SmallVec<[(usize, Transaction); 1]>,
-    /// The transaction to revert to previous state.
-    revert: Transaction,
-    // selection before, selection after?
+    last_child: Option<NonZeroUsize>,
+    transaction: Transaction,
+    // We need an inversion for undos because delete transactions don't store
+    // the deleted text.
+    inversion: Transaction,
+    timestamp: Instant,
 }
 
 impl Default for History {
@@ -22,72 +64,253 @@ impl Default for History {
         Self {
             revisions: vec![Revision {
                 parent: 0,
-                children: SmallVec::new(),
-                revert: Transaction::from(ChangeSet::new(&Rope::new())),
+                last_child: None,
+                transaction: Transaction::from(ChangeSet::new(&Rope::new())),
+                inversion: Transaction::from(ChangeSet::new(&Rope::new())),
+                timestamp: Instant::now(),
             }],
-            cursor: 0,
+            current: 0,
         }
     }
 }
 
 impl History {
     pub fn commit_revision(&mut self, transaction: &Transaction, original: &State) {
-        // TODO: could store a single transaction, if deletes also stored the text they delete
-        let revert = transaction
+        self.commit_revision_at_timestamp(transaction, original, Instant::now());
+    }
+
+    pub fn commit_revision_at_timestamp(
+        &mut self,
+        transaction: &Transaction,
+        original: &State,
+        timestamp: Instant,
+    ) {
+        let inversion = transaction
             .invert(&original.doc)
             // Store the current cursor position
             .with_selection(original.selection.clone());
 
-        let new_cursor = self.revisions.len();
+        let new_current = self.revisions.len();
+        self.revisions[self.current].last_child = NonZeroUsize::new(new_current);
         self.revisions.push(Revision {
-            parent: self.cursor,
-            children: SmallVec::new(),
-            revert,
+            parent: self.current,
+            last_child: None,
+            transaction: transaction.clone(),
+            inversion,
+            timestamp,
         });
-
-        // add a reference to the parent
-        self.revisions
-            .get_mut(self.cursor)
-            .unwrap() // TODO: get_unchecked_mut
-            .children
-            .push((new_cursor, transaction.clone()));
-
-        self.cursor = new_cursor;
+        self.current = new_current;
     }
 
     #[inline]
     pub fn current_revision(&self) -> usize {
-        self.cursor
+        self.current
     }
 
     #[inline]
     pub const fn at_root(&self) -> bool {
-        self.cursor == 0
+        self.current == 0
     }
 
     pub fn undo(&mut self) -> Option<&Transaction> {
         if self.at_root() {
-            // We're at the root of undo, nothing to do.
             return None;
         }
 
-        let current_revision = &self.revisions[self.cursor];
-        self.cursor = current_revision.parent;
-        Some(&current_revision.revert)
+        let current_revision = &self.revisions[self.current];
+        self.current = current_revision.parent;
+        Some(&current_revision.inversion)
     }
 
     pub fn redo(&mut self) -> Option<&Transaction> {
-        let current_revision = &self.revisions[self.cursor];
-
-        // for now, simply pick the latest child (linear undo / redo)
-        if let Some((index, transaction)) = current_revision.children.last() {
-            self.cursor = *index;
-            return Some(&transaction);
-        }
-
-        None
-    }
-}
+        let current_revision = &self.revisions[self.current];
+        let last_child = current_revision.last_child?;
+        self.current = last_child.get();
+
+        let last_child_revision = &self.revisions[last_child.get()];
+        Some(&last_child_revision.transaction)
+    }
+
+    fn lowest_common_ancestor(&self, mut a: usize, mut b: usize) -> usize {
+        use std::collections::HashSet;
+        let mut a_path_set = HashSet::new();
+        let mut b_path_set = HashSet::new();
+        loop {
+            a_path_set.insert(a);
+            b_path_set.insert(b);
+            if a_path_set.contains(&b) {
+                return b;
+            }
+            if b_path_set.contains(&a) {
+                return a;
+            }
+            a = self.revisions[a].parent; // Relies on the parent of 0 being 0.
+            b = self.revisions[b].parent; // Same as above.
+        }
+    }
+
+    // List of nodes on the way from `n` to `a`. Doesn't include `a`.
+    // Includes `n` unless `a == n`. `a` must be an ancestor of `n`.
+    fn path_up(&self, mut n: usize, a: usize) -> Vec<usize> {
+        let mut path = Vec::new();
+        while n != a {
+            path.push(n);
+            n = self.revisions[n].parent;
+        }
+        path
+    }
+
+    fn jump_to(&mut self, to: usize) -> Vec<Transaction> {
+        let lca = self.lowest_common_ancestor(self.current, to);
+        let up = self.path_up(self.current, lca);
+        let down = self.path_up(to, lca);
+        self.current = to;
+        let up_txns = up.iter().map(|&n| self.revisions[n].inversion.clone());
+        let down_txns = down
+            .iter()
+            .rev()
+            .map(|&n| self.revisions[n].transaction.clone());
+        up_txns.chain(down_txns).collect()
+    }
+
+    fn jump_backward(&mut self, delta: usize) -> Vec<Transaction> {
+        self.jump_to(self.current.saturating_sub(delta))
+    }
+
+    fn jump_forward(&mut self, delta: usize) -> Vec<Transaction> {
+        self.jump_to(
+            self.current
+                .saturating_add(delta)
+                .min(self.revisions.len() - 1),
+        )
+    }
+
+    // Helper for a binary search case below.
+    fn revision_closer_to_instant(&self, i: usize, instant: Instant) -> usize {
+        let dur_im1 = instant.duration_since(self.revisions[i - 1].timestamp);
+        let dur_i = self.revisions[i].timestamp.duration_since(instant);
+        use std::cmp::Ordering::*;
+        match dur_im1.cmp(&dur_i) {
+            Less => i - 1,
+            Equal | Greater => i,
+        }
+    }
+
+    fn jump_instant(&mut self, instant: Instant) -> Vec<Transaction> {
+        let search_result = self
+            .revisions
+            .binary_search_by(|rev| rev.timestamp.cmp(&instant));
+        let revision = match search_result {
+            Ok(revision) => revision,
+            Err(insert_point) => match insert_point {
+                0 => 0,
+                n if n == self.revisions.len() => n - 1,
+                i => self.revision_closer_to_instant(i, instant),
+            },
+        };
+        self.jump_to(revision)
+    }
+
+    fn jump_duration_backward(&mut self, duration: Duration) -> Vec<Transaction> {
+        match self.revisions[self.current].timestamp.checked_sub(duration) {
+            Some(instant) => self.jump_instant(instant),
+            None => self.jump_to(0),
+        }
+    }
+
+    fn jump_duration_forward(&mut self, duration: Duration) -> Vec<Transaction> {
+        match self.revisions[self.current].timestamp.checked_add(duration) {
+            Some(instant) => self.jump_instant(instant),
+            None => self.jump_to(self.revisions.len() - 1),
+        }
+    }
+
+    pub fn earlier(&mut self, uk: UndoKind) -> Vec<Transaction> {
+        use UndoKind::*;
+        match uk {
+            Steps(n) => self.jump_backward(n),
+            TimePeriod(d) => self.jump_duration_backward(d),
+        }
+    }
+
+    pub fn later(&mut self, uk: UndoKind) -> Vec<Transaction> {
+        use UndoKind::*;
+        match uk {
+            Steps(n) => self.jump_forward(n),
+            TimePeriod(d) => self.jump_duration_forward(d),
+        }
+    }
+}
+
+#[derive(Debug, PartialEq)]
+pub enum UndoKind {
+    Steps(usize),
+    TimePeriod(std::time::Duration),
+}
+
+// A subset of systemd.time time span syntax units.
+const TIME_UNITS: &[(&[&str], &str, u64)] = &[
+    (&["seconds", "second", "sec", "s"], "seconds", 1),
+    (&["minutes", "minute", "min", "m"], "minutes", 60),
+    (&["hours", "hour", "hr", "h"], "hours", 60 * 60),
+    (&["days", "day", "d"], "days", 24 * 60 * 60),
+];
+
+static DURATION_VALIDATION_REGEX: Lazy<Regex> =
+    Lazy::new(|| Regex::new(r"^(?:\d+\s*[a-z]+\s*)+$").unwrap());
+
+static NUMBER_UNIT_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(\d+)\s*([a-z]+)").unwrap());
+
+fn parse_human_duration(s: &str) -> Result<Duration, String> {
+    if !DURATION_VALIDATION_REGEX.is_match(s) {
+        return Err("duration should be composed \
+        of positive integers followed by time units"
+            .to_string());
+    }
+
+    let mut specified = [false; TIME_UNITS.len()];
+    let mut seconds = 0u64;
+    for cap in NUMBER_UNIT_REGEX.captures_iter(s) {
+        let (n, unit_str) = (&cap[1], &cap[2]);
+        let n: u64 = n.parse().map_err(|_| format!("integer too large: {}", n))?;
+
+        let time_unit = TIME_UNITS
+            .iter()
+            .enumerate()
+            .find(|(_, (forms, _, _))| forms.iter().any(|f| f == &unit_str));
+
+        if let Some((i, (_, unit, mul))) = time_unit {
+            if specified[i] {
+                return Err(format!("{} specified more than once", unit));
+            }
+            specified[i] = true;
+
+            let new_seconds = n.checked_mul(*mul).and_then(|s| seconds.checked_add(s));
+            match new_seconds {
+                Some(ns) => seconds = ns,
+                None => return Err("duration too large".to_string()),
+            }
+        } else {
+            return Err(format!("incorrect time unit: {}", unit_str));
+        }
+    }
+
+    Ok(Duration::from_secs(seconds))
+}
+
+impl std::str::FromStr for UndoKind {
+    type Err = String;
+
+    fn from_str(s: &str) -> Result<Self, Self::Err> {
+        let s = s.trim();
+        if s.is_empty() {
+            Ok(Self::Steps(1usize))
+        } else if let Ok(n) = s.parse::<usize>() {
+            Ok(UndoKind::Steps(n))
+        } else {
+            Ok(Self::TimePeriod(parse_human_duration(s)?))
+        }
+    }
+}
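The time-travel jump in this commit works by finding the lowest common ancestor of the current and target revisions, undoing up to it, then redoing down. A standalone sketch of that ancestor search, with a hypothetical `parent` index table in place of the `revisions` vector (as in `History`, the parent of the root 0 is 0 itself):

```rust
use std::collections::HashSet;

// Walk both nodes up their parent links, recording each visited node,
// until one walk reaches a node the other walk has already seen.
fn lowest_common_ancestor(parent: &[usize], mut a: usize, mut b: usize) -> usize {
    let mut a_path = HashSet::new();
    let mut b_path = HashSet::new();
    loop {
        a_path.insert(a);
        b_path.insert(b);
        if a_path.contains(&b) {
            return b;
        }
        if b_path.contains(&a) {
            return a;
        }
        a = parent[a]; // relies on the parent of 0 being 0
        b = parent[b];
    }
}

fn main() {
    // Tree: 0 is root; 1 hangs off 0; 2 and 3 branch off 1; 4 extends 2.
    let parent = [0, 0, 1, 1, 2];
    println!("{}", lowest_common_ancestor(&parent, 3, 4)); // the branch point
    println!("{}", lowest_common_ancestor(&parent, 2, 4)); // an ancestor of 4
}
```

Because the walks advance in lockstep, the loop terminates once both paths have reached the root even in the worst case, without needing depths or a recursive traversal.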
@@ -143,4 +366,191 @@ mod test {
         undo(&mut history, &mut state);
         assert_eq!("hello", state.doc);
     }
+
+    #[test]
+    fn test_earlier_later() {
+        let mut history = History::default();
+        let doc = Rope::from("a\n");
+        let mut state = State::new(doc);
+
+        fn undo(history: &mut History, state: &mut State) {
+            if let Some(transaction) = history.undo() {
+                transaction.apply(&mut state.doc);
+            }
+        };
+
+        fn earlier(history: &mut History, state: &mut State, uk: UndoKind) {
+            let txns = history.earlier(uk);
+            for txn in txns {
+                txn.apply(&mut state.doc);
+            }
+        };
+
+        fn later(history: &mut History, state: &mut State, uk: UndoKind) {
+            let txns = history.later(uk);
+            for txn in txns {
+                txn.apply(&mut state.doc);
+            }
+        };
+
+        fn commit_change(
+            history: &mut History,
+            state: &mut State,
+            change: crate::transaction::Change,
+            instant: Instant,
+        ) {
+            let txn = Transaction::change(&state.doc, vec![change.clone()].into_iter());
+            history.commit_revision_at_timestamp(&txn, &state, instant);
+            txn.apply(&mut state.doc);
+        };
+
+        let t0 = Instant::now();
+        let t = |n| t0.checked_add(Duration::from_secs(n)).unwrap();
+
+        commit_change(&mut history, &mut state, (1, 1, Some(" b".into())), t(0));
+        assert_eq!("a b\n", state.doc);
+
+        commit_change(&mut history, &mut state, (3, 3, Some(" c".into())), t(10));
+        assert_eq!("a b c\n", state.doc);
+
+        commit_change(&mut history, &mut state, (5, 5, Some(" d".into())), t(20));
+        assert_eq!("a b c d\n", state.doc);
+
+        undo(&mut history, &mut state);
+        assert_eq!("a b c\n", state.doc);
+
+        commit_change(&mut history, &mut state, (5, 5, Some(" e".into())), t(30));
+        assert_eq!("a b c e\n", state.doc);
+
+        undo(&mut history, &mut state);
+        undo(&mut history, &mut state);
+        assert_eq!("a b\n", state.doc);
+
+        commit_change(&mut history, &mut state, (1, 3, None), t(40));
+        assert_eq!("a\n", state.doc);
+
+        commit_change(&mut history, &mut state, (1, 1, Some(" f".into())), t(50));
+        assert_eq!("a f\n", state.doc);
+
+        use UndoKind::*;
+
+        earlier(&mut history, &mut state, Steps(3));
+        assert_eq!("a b c d\n", state.doc);
+
+        later(&mut history, &mut state, TimePeriod(Duration::new(20, 0)));
+        assert_eq!("a\n", state.doc);
+
+        earlier(&mut history, &mut state, TimePeriod(Duration::new(19, 0)));
+        assert_eq!("a b c d\n", state.doc);
+
+        earlier(
+            &mut history,
+            &mut state,
+            TimePeriod(Duration::new(10000, 0)),
+        );
+        assert_eq!("a\n", state.doc);
+
+        later(&mut history, &mut state, Steps(50));
+        assert_eq!("a f\n", state.doc);
+
+        earlier(&mut history, &mut state, Steps(4));
+        assert_eq!("a b c\n", state.doc);
+
+        later(&mut history, &mut state, TimePeriod(Duration::new(1, 0)));
+        assert_eq!("a b c\n", state.doc);
+
+        later(&mut history, &mut state, TimePeriod(Duration::new(5, 0)));
+        assert_eq!("a b c d\n", state.doc);
+
+        later(&mut history, &mut state, TimePeriod(Duration::new(6, 0)));
+        assert_eq!("a b c e\n", state.doc);
+
+        later(&mut history, &mut state, Steps(1));
+        assert_eq!("a\n", state.doc);
+    }
+
+    #[test]
+    fn test_parse_undo_kind() {
+        use UndoKind::*;
+
+        // Default is one step.
+        assert_eq!("".parse(), Ok(Steps(1)));
+
+        // An integer means the number of steps.
+        assert_eq!("1".parse(), Ok(Steps(1)));
+        assert_eq!(" 16 ".parse(), Ok(Steps(16)));
+
+        // Duration has a strict format.
+        let validation_err = Err("duration should be composed \
+        of positive integers followed by time units"
+            .to_string());
+        assert_eq!(" 16 33".parse::<UndoKind>(), validation_err);
+        assert_eq!(" seconds 22 ".parse::<UndoKind>(), validation_err);
+        assert_eq!(" -4 m".parse::<UndoKind>(), validation_err);
+        assert_eq!("5s 3".parse::<UndoKind>(), validation_err);
+
+        // Units are u64.
+        assert_eq!(
+            "18446744073709551616minutes".parse::<UndoKind>(),
+            Err("integer too large: 18446744073709551616".to_string())
+        );
+
+        // Units are validated.
+        assert_eq!(
+            "1 millenium".parse::<UndoKind>(),
+            Err("incorrect time unit: millenium".to_string())
+        );
+
+        // Units can't be specified twice.
+        assert_eq!(
+            "2 seconds 6s".parse::<UndoKind>(),
+            Err("seconds specified more than once".to_string())
+        );
+
+        // Various formats are correctly handled.
+        assert_eq!(
+            "4s".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(4)))
+        );
+        assert_eq!(
+            "2m".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(120)))
+        );
+        assert_eq!(
+            "5h".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(5 * 60 * 60)))
+        );
+        assert_eq!(
+            "3d".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(3 * 24 * 60 * 60)))
+        );
+        assert_eq!(
+            "1m30s".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(90)))
+        );
+        assert_eq!(
+            "1m 20 seconds".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(80)))
+        );
+        assert_eq!(
+            " 2 minute 1day".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(24 * 60 * 60 + 2 * 60)))
+        );
+        assert_eq!(
+            "3 d 2hour 5 minutes 30sec".parse::<UndoKind>(),
+            Ok(TimePeriod(Duration::from_secs(
+                3 * 24 * 60 * 60 + 2 * 60 * 60 + 5 * 60 + 30
+            )))
+        );
+
+        // Sum overflow is handled.
+        assert_eq!(
+            "18446744073709551615minutes".parse::<UndoKind>(),
+            Err("duration too large".to_string())
+        );
+        assert_eq!(
+            "1 minute 18446744073709551615 seconds".parse::<UndoKind>(),
+            Err("duration too large".to_string())
+        );
+    }
 }
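The systemd-style time-span syntax that `parse_human_duration` accepts for `:earlier`/`:later` can be shown with a std-only sketch. `parse_span` is a hypothetical simplification: the regex pre-validation and the duplicate-unit check from the commit are omitted, and the unit table mirrors `TIME_UNITS`:

```rust
use std::time::Duration;

// Unit spellings and their multiplier in seconds, as in the TIME_UNITS table.
const UNITS: &[(&[&str], u64)] = &[
    (&["seconds", "second", "sec", "s"], 1),
    (&["minutes", "minute", "min", "m"], 60),
    (&["hours", "hour", "hr", "h"], 60 * 60),
    (&["days", "day", "d"], 24 * 60 * 60),
];

fn parse_span(s: &str) -> Result<Duration, String> {
    let mut chars = s.trim().chars().peekable();
    let mut seconds: u64 = 0;
    while chars.peek().is_some() {
        // Read the integer part.
        let mut num = String::new();
        while let Some(&c) = chars.peek() {
            if !c.is_ascii_digit() {
                break;
            }
            num.push(c);
            chars.next();
        }
        if num.is_empty() {
            return Err("expected a positive integer".to_string());
        }
        while chars.peek().map_or(false, |c| c.is_whitespace()) {
            chars.next();
        }
        // Read the unit word and look up its multiplier.
        let mut unit = String::new();
        while let Some(&c) = chars.peek() {
            if !c.is_ascii_alphabetic() {
                break;
            }
            unit.push(c);
            chars.next();
        }
        let mul = UNITS
            .iter()
            .find(|(forms, _)| forms.contains(&unit.as_str()))
            .map(|(_, mul)| *mul)
            .ok_or_else(|| format!("incorrect time unit: {}", unit))?;
        let n: u64 = num.parse().map_err(|_| format!("integer too large: {}", num))?;
        // Overflow is an error rather than a silent saturation.
        seconds = n
            .checked_mul(mul)
            .and_then(|v| seconds.checked_add(v))
            .ok_or_else(|| "duration too large".to_string())?;
        while chars.peek().map_or(false, |c| c.is_whitespace()) {
            chars.next();
        }
    }
    Ok(Duration::from_secs(seconds))
}

fn main() {
    assert_eq!(parse_span("1m30s"), Ok(Duration::from_secs(90)));
    assert_eq!(
        parse_span("2 minute 1day"),
        Ok(Duration::from_secs(24 * 60 * 60 + 2 * 60))
    );
    assert!(parse_span("1 fortnight").is_err());
    println!("ok");
}
```

The checked multiply-and-add chain is the same overflow-handling shape the commit uses; the real parser additionally rejects strings that fail the whole-input regex and units given more than once.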


@@ -105,11 +105,14 @@ fn suggested_indent_for_line(
     line_num: usize,
     tab_width: usize,
 ) -> usize {
-    let line = text.line(line_num);
-    let current = indent_level_for_line(line, tab_width);
-
-    if let Some(start) = find_first_non_whitespace_char(text, line_num) {
-        return suggested_indent_for_pos(Some(language_config), syntax, text, start, false);
+    if let Some(start) = find_first_non_whitespace_char(text.line(line_num)) {
+        return suggested_indent_for_pos(
+            Some(language_config),
+            syntax,
+            text,
+            start + text.line_to_char(line_num),
+            false,
+        );
     };
 
     // if the line is blank, indent should be zero
@@ -260,6 +263,7 @@ where
             highlight_config: OnceCell::new(),
             //
             roots: vec![],
+            auto_format: false,
             language_server: None,
             indent: Some(IndentationConfiguration {
                 tab_width: 4,


@@ -3,7 +3,7 @@ pub mod auto_pairs;
 pub mod comment;
 pub mod diagnostic;
 pub mod graphemes;
-mod history;
+pub mod history;
 pub mod indent;
 pub mod macros;
 pub mod match_brackets;
@@ -16,46 +16,60 @@ pub mod selection;
 mod state;
 pub mod syntax;
 mod transaction;
+pub mod words;
 
-pub(crate) fn find_first_non_whitespace_char2(line: RopeSlice) -> Option<usize> {
-    // find first non-whitespace char
-    for (start, ch) in line.chars().enumerate() {
-        // TODO: could use memchr with chunks?
-        if ch != ' ' && ch != '\t' && ch != '\n' {
-            return Some(start);
-        }
-    }
-
-    None
-}
-
-pub(crate) fn find_first_non_whitespace_char(text: RopeSlice, line_num: usize) -> Option<usize> {
-    let line = text.line(line_num);
-    let mut start = text.line_to_char(line_num);
-
-    // find first non-whitespace char
-    for ch in line.chars() {
-        // TODO: could use memchr with chunks?
-        if ch != ' ' && ch != '\t' && ch != '\n' {
-            return Some(start);
-        }
-        start += 1;
-    }
-
-    None
-}
+static RUNTIME_DIR: once_cell::sync::Lazy<std::path::PathBuf> =
+    once_cell::sync::Lazy::new(runtime_dir);
+
+pub fn find_first_non_whitespace_char(line: RopeSlice) -> Option<usize> {
+    line.chars().position(|ch| !ch.is_whitespace())
+}
+
+pub fn find_root(root: Option<&str>) -> Option<std::path::PathBuf> {
+    let current_dir = std::env::current_dir().expect("unable to determine current directory");
+
+    let root = match root {
+        Some(root) => {
+            let root = std::path::Path::new(root);
+            if root.is_absolute() {
+                root.to_path_buf()
+            } else {
+                current_dir.join(root)
+            }
+        }
+        None => current_dir,
+    };
+
+    for ancestor in root.ancestors() {
+        // TODO: also use defined roots if git isn't found
+        if ancestor.join(".git").is_dir() {
+            return Some(ancestor.to_path_buf());
+        }
+    }
+    None
+}
 
 #[cfg(not(embed_runtime))]
-pub fn runtime_dir() -> std::path::PathBuf {
-    // runtime env var || dir where binary is located
-    std::env::var("HELIX_RUNTIME")
-        .map(|path| path.into())
-        .unwrap_or_else(|_| {
-            std::env::current_exe()
-                .ok()
-                .and_then(|path| path.parent().map(|path| path.to_path_buf()))
-                .unwrap()
-        })
+fn runtime_dir() -> std::path::PathBuf {
+    if let Ok(dir) = std::env::var("HELIX_RUNTIME") {
+        return dir.into();
+    }
+
+    const RT_DIR: &str = "runtime";
+    let conf_dir = config_dir().join(RT_DIR);
+    if conf_dir.exists() {
+        return conf_dir;
+    }
+
+    if let Ok(dir) = std::env::var("CARGO_MANIFEST_DIR") {
+        // this is the directory of the crate being run by cargo, we need the workspace path so we take the parent
+        return std::path::PathBuf::from(dir).parent().unwrap().join(RT_DIR);
+    }
+
+    // fallback to location of the executable being run
+    std::env::current_exe()
+        .ok()
+        .and_then(|path| path.parent().map(|path| path.to_path_buf().join(RT_DIR)))
+        .unwrap()
 }
 
 pub fn config_dir() -> std::path::PathBuf {
@@ -89,7 +103,6 @@ pub use smallvec::SmallVec;
 pub use syntax::Syntax;
 
 pub use diagnostic::Diagnostic;
-pub use history::History;
 pub use state::State;
 pub use transaction::{Assoc, Change, ChangeSet, Operation, Transaction};
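The reworked `runtime_dir` tries sources in a fixed order: the `HELIX_RUNTIME` environment variable, then a `runtime/` directory under the config dir, then (during development) the cargo workspace, and finally the directory next to the executable. A sketch of that fallback chain with the environment abstracted into parameters, so it can be exercised deterministically; `runtime_dir_sketch` and its arguments are hypothetical stand-ins, and the `CARGO_MANIFEST_DIR` step is left out for brevity:

```rust
use std::path::PathBuf;

const RT_DIR: &str = "runtime";

// First match wins: explicit env override, then the config dir's runtime/
// (if it exists on disk), then a runtime/ beside the executable.
fn runtime_dir_sketch(
    env_override: Option<&str>,
    config_runtime_exists: bool,
    config_dir: &str,
    exe_dir: &str,
) -> PathBuf {
    if let Some(dir) = env_override {
        return dir.into();
    }
    if config_runtime_exists {
        return PathBuf::from(config_dir).join(RT_DIR);
    }
    PathBuf::from(exe_dir).join(RT_DIR)
}

fn main() {
    // The env var short-circuits everything else.
    println!(
        "{:?}",
        runtime_dir_sketch(Some("/opt/helix/runtime"), true, "/cfg", "/bin")
    );
    // Without it, the config dir is preferred over the executable dir.
    println!("{:?}", runtime_dir_sketch(None, true, "/cfg", "/bin"));
    println!("{:?}", runtime_dir_sketch(None, false, "/cfg", "/bin"));
}
```

Encoding the chain as early returns keeps the priority order readable, which is the same shape the commit uses in place of the previous single `unwrap_or_else`.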


@@ -1,45 +1,62 @@
use crate::graphemes::{nth_next_grapheme_boundary, nth_prev_grapheme_boundary, RopeGraphemes}; use std::iter::{self, from_fn, Peekable, SkipWhile};
use crate::{coords_at_pos, pos_at_coords, ChangeSet, Position, Range, Rope, RopeSlice, Selection};
#[derive(Copy, Clone, PartialEq, Eq)] use ropey::iter::Chars;
use crate::{
coords_at_pos,
graphemes::{nth_next_grapheme_boundary, nth_prev_grapheme_boundary},
pos_at_coords, Position, Range, RopeSlice,
};
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum Direction { pub enum Direction {
Forward, Forward,
Backward, Backward,
} }
#[derive(Copy, Clone, PartialEq, Eq)]
pub enum Movement {
Extend,
Move,
}
pub fn move_horizontally( pub fn move_horizontally(
text: RopeSlice, slice: RopeSlice,
range: Range, range: Range,
dir: Direction, dir: Direction,
count: usize, count: usize,
extend: bool, behaviour: Movement,
) -> Range { ) -> Range {
let pos = range.head; let pos = range.head;
let line = text.char_to_line(pos); let line = slice.char_to_line(pos);
// TODO: we can optimize clamping by passing in RopeSlice limited to current line. that way // TODO: we can optimize clamping by passing in RopeSlice limited to current line. that way
// we stop calculating past start/end of line. // we stop calculating past start/end of line.
let pos = match dir { let pos = match dir {
Direction::Backward => { Direction::Backward => {
let start = text.line_to_char(line); let start = slice.line_to_char(line);
nth_prev_grapheme_boundary(text, pos, count).max(start) nth_prev_grapheme_boundary(slice, pos, count).max(start)
} }
Direction::Forward => { Direction::Forward => {
// Line end is pos at the start of next line - 1 // Line end is pos at the start of next line - 1
let end = text.line_to_char(line + 1).saturating_sub(1); let end = slice.line_to_char(line + 1).saturating_sub(1);
nth_next_grapheme_boundary(text, pos, count).min(end) nth_next_grapheme_boundary(slice, pos, count).min(end)
} }
}; };
Range::new(if extend { range.anchor } else { pos }, pos) let anchor = match behaviour {
Movement::Extend => range.anchor,
Movement::Move => pos,
};
Range::new(anchor, pos)
} }
pub fn move_vertically( pub fn move_vertically(
text: RopeSlice, slice: RopeSlice,
range: Range, range: Range,
dir: Direction, dir: Direction,
count: usize, count: usize,
extend: bool, behaviour: Movement,
) -> Range { ) -> Range {
let Position { row, col } = coords_at_pos(text, range.head); let Position { row, col } = coords_at_pos(slice, range.head);
let horiz = range.horiz.unwrap_or(col as u32); let horiz = range.horiz.unwrap_or(col as u32);
@@ -47,143 +64,83 @@ pub fn move_vertically(
Direction::Backward => row.saturating_sub(count), Direction::Backward => row.saturating_sub(count),
Direction::Forward => std::cmp::min( Direction::Forward => std::cmp::min(
row.saturating_add(count), row.saturating_add(count),
text.len_lines().saturating_sub(2), slice.len_lines().saturating_sub(2),
), ),
}; };
// convert to 0-indexed, subtract another 1 because len_chars() counts \n // convert to 0-indexed, subtract another 1 because len_chars() counts \n
let new_line_len = text.line(new_line).len_chars().saturating_sub(2); let new_line_len = slice.line(new_line).len_chars().saturating_sub(2);
let new_col = std::cmp::min(horiz as usize, new_line_len); let new_col = std::cmp::min(horiz as usize, new_line_len);
let pos = pos_at_coords(text, Position::new(new_line, new_col)); let pos = pos_at_coords(slice, Position::new(new_line, new_col));
let mut range = Range::new(if extend { range.anchor } else { pos }, pos); let anchor = match behaviour {
Movement::Extend => range.anchor,
Movement::Move => pos,
};
let mut range = Range::new(anchor, pos);
range.horiz = Some(horiz); range.horiz = Some(horiz);
range range
} }
pub fn move_next_word_start(slice: RopeSlice, mut begin: usize, count: usize) -> Option<Range> { pub fn move_next_word_start(slice: RopeSlice, range: Range, count: usize) -> Range {
let mut end = begin; word_move(slice, range, count, WordMotionTarget::NextWordStart)
for _ in 0..count {
if begin + 1 == slice.len_chars() {
return None;
}
let mut ch = slice.char(begin);
let next = slice.char(begin + 1);
// if we're at the end of a word, or on whitespce right before new one
if categorize(ch) != categorize(next) {
begin += 1;
}
if !skip_over_next(slice, &mut begin, |ch| ch == '\n') {
return None;
};
ch = slice.char(begin);
end = begin + 1;
if is_word(ch) {
skip_over_next(slice, &mut end, is_word);
} else if ch.is_ascii_punctuation() {
skip_over_next(slice, &mut end, |ch| ch.is_ascii_punctuation());
}
skip_over_next(slice, &mut end, is_horiz_blank);
}
Some(Range::new(begin, end - 1))
} }
pub fn move_prev_word_start(slice: RopeSlice, mut begin: usize, count: usize) -> Option<Range> { pub fn move_next_word_end(slice: RopeSlice, range: Range, count: usize) -> Range {
let mut with_end = false; word_move(slice, range, count, WordMotionTarget::NextWordEnd)
let mut end = begin;
for _ in 0..count {
if begin == 0 {
return None;
}
let ch = slice.char(begin);
let prev = slice.char(begin - 1);
if categorize(ch) != categorize(prev) {
begin -= 1;
}
// return if not skip while?
skip_over_prev(slice, &mut begin, |ch| ch == '\n');
end = begin;
with_end = skip_over_prev(slice, &mut end, is_horiz_blank);
// refetch
let ch = slice.char(end);
if is_word(ch) {
with_end = skip_over_prev(slice, &mut end, is_word);
} else if ch.is_ascii_punctuation() {
with_end = skip_over_prev(slice, &mut end, |ch| ch.is_ascii_punctuation());
}
}
Some(Range::new(begin, if with_end { end } else { end + 1 }))
} }
pub fn move_next_word_end(slice: RopeSlice, mut begin: usize, count: usize) -> Option<Range> { pub fn move_prev_word_start(slice: RopeSlice, range: Range, count: usize) -> Range {
let mut end = begin; word_move(slice, range, count, WordMotionTarget::PrevWordStart)
}
for _ in 0..count { fn word_move(slice: RopeSlice, mut range: Range, count: usize, target: WordMotionTarget) -> Range {
if begin + 2 >= slice.len_chars() { (0..count).fold(range, |range, _| {
return None; slice.chars_at(range.head).range_to_target(target, range)
} })
let ch = slice.char(begin);
let next = slice.char(begin + 1);
if categorize(ch) != categorize(next) {
begin += 1;
}
if !skip_over_next(slice, &mut begin, |ch| ch == '\n') {
return None;
};
end = begin;
skip_over_next(slice, &mut end, is_horiz_blank);
// refetch
let ch = slice.char(end);
if is_word(ch) {
skip_over_next(slice, &mut end, is_word);
} else if ch.is_ascii_punctuation() {
skip_over_next(slice, &mut end, |ch| ch.is_ascii_punctuation());
}
}
Some(Range::new(begin, end - 1))
} }
// ---- util ------------ // ---- util ------------
#[inline]
// used for by-word movement
pub(crate) fn is_word(ch: char) -> bool { pub(crate) fn is_word(ch: char) -> bool {
ch.is_alphanumeric() || ch == '_' ch.is_alphanumeric() || ch == '_'
} }
pub(crate) fn is_horiz_blank(ch: char) -> bool { #[inline]
matches!(ch, ' ' | '\t') pub(crate) fn is_end_of_line(ch: char) -> bool {
ch == '\n'
}
#[inline]
// Whitespace, but not end of line
pub(crate) fn is_strict_whitespace(ch: char) -> bool {
ch.is_whitespace() && !is_end_of_line(ch)
}
#[inline]
pub(crate) fn is_punctuation(ch: char) -> bool {
use unicode_general_category::{get_general_category, GeneralCategory};
matches!(
get_general_category(ch),
GeneralCategory::OtherPunctuation
| GeneralCategory::OpenPunctuation
| GeneralCategory::ClosePunctuation
| GeneralCategory::InitialPunctuation
| GeneralCategory::FinalPunctuation
| GeneralCategory::ConnectorPunctuation
| GeneralCategory::DashPunctuation
| GeneralCategory::MathSymbol
| GeneralCategory::CurrencySymbol
| GeneralCategory::ModifierSymbol
)
} }
#[derive(Debug, Eq, PartialEq)] #[derive(Debug, Eq, PartialEq)]
pub(crate) enum Category { pub enum Category {
Whitespace, Whitespace,
Eol, Eol,
Word, Word,
@@ -191,14 +148,15 @@ pub(crate) enum Category {
Unknown, Unknown,
} }
#[inline]
pub(crate) fn categorize(ch: char) -> Category {
    if is_end_of_line(ch) {
        Category::Eol
    } else if ch.is_whitespace() {
        Category::Whitespace
    } else if is_word(ch) {
        Category::Word
    } else if is_punctuation(ch) {
        Category::Punctuation
    } else {
        Category::Unknown
@@ -206,44 +164,160 @@ pub(crate) fn categorize(ch: char) -> Category {
    }
}
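The `categorize` precedence introduced in this hunk (end-of-line before whitespace, word before punctuation) can be sketched with std-only checks. The real diff uses the `unicode_general_category` crate for `is_punctuation`; `is_ascii_punctuation` stands in here, so non-ASCII punctuation is not covered by this sketch:

```rust
// Minimal std-only sketch of the categorize dispatch order from the diff.
#[derive(Debug, PartialEq)]
enum Category {
    Whitespace,
    Eol,
    Word,
    Punctuation,
    Unknown,
}

fn is_word(ch: char) -> bool {
    ch.is_alphanumeric() || ch == '_'
}

fn categorize(ch: char) -> Category {
    if ch == '\n' {
        // Checked first so '\n' is never classified as plain whitespace.
        Category::Eol
    } else if ch.is_whitespace() {
        Category::Whitespace
    } else if is_word(ch) {
        Category::Word
    } else if ch.is_ascii_punctuation() {
        // Stand-in for the unicode_general_category based check.
        Category::Punctuation
    } else {
        Category::Unknown
    }
}

fn main() {
    assert_eq!(categorize('\n'), Category::Eol);
    assert_eq!(categorize(' '), Category::Whitespace);
    assert_eq!(categorize('_'), Category::Word);
    assert_eq!(categorize('!'), Category::Punctuation);
}
```

Note that because `'_'` satisfies `is_word`, the underscore never reaches the punctuation branch, which is what keeps identifiers with underscores a single word for motions.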
#[inline]
/// Returns true if there are more characters left after the new position.
pub fn skip_over_next<F>(slice: RopeSlice, pos: &mut usize, fun: F) -> bool
where
    F: Fn(char) -> bool,
{
    let mut chars = slice.chars_at(*pos);

    while let Some(ch) = chars.next() {
        if !fun(ch) {
            break;
        }
        *pos += 1;
    }
    chars.next().is_some()
}

#[inline]
/// Returns first index that doesn't satisfy a given predicate when
/// advancing the character index.
///
/// Returns none if all characters satisfy the predicate.
pub fn skip_while<F>(slice: RopeSlice, pos: usize, fun: F) -> Option<usize>
where
    F: Fn(char) -> bool,
{
    let mut chars = slice.chars_at(pos).enumerate();
    chars.find_map(|(i, c)| if !fun(c) { Some(pos + i) } else { None })
}
#[inline]
/// Returns true if the final pos matches the predicate.
pub fn skip_over_prev<F>(slice: RopeSlice, pos: &mut usize, fun: F) -> bool
where
    F: Fn(char) -> bool,
{
    // need to +1 so that prev() includes current char
    let mut chars = slice.chars_at(*pos + 1);

    while let Some(ch) = chars.prev() {
        if !fun(ch) {
            break;
        }
        *pos = pos.saturating_sub(1);
    }
    fun(slice.char(*pos))
}

#[inline]
/// Returns first index that doesn't satisfy a given predicate when
/// retreating the character index, saturating if all elements satisfy
/// the condition.
pub fn backwards_skip_while<F>(slice: RopeSlice, pos: usize, fun: F) -> Option<usize>
where
    F: Fn(char) -> bool,
{
    let mut chars_starting_from_next = slice.chars_at(pos + 1);
    let mut backwards = iter::from_fn(|| chars_starting_from_next.prev()).enumerate();
    backwards.find_map(|(i, c)| {
        if !fun(c) {
            Some(pos.saturating_sub(i))
        } else {
            None
        }
    })
}
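The `skip_while` contract that replaces `skip_over_next` can be illustrated over a plain `&str` (the real function takes a `ropey::RopeSlice`; `&str` keeps this sketch self-contained and only matches char indices for the characters iterated):

```rust
// Sketch of the new skip_while contract: return the first index at or
// after `pos` whose character fails the predicate, or None if every
// remaining character satisfies it.
fn skip_while<F>(s: &str, pos: usize, fun: F) -> Option<usize>
where
    F: Fn(char) -> bool,
{
    s.chars()
        .skip(pos)
        .enumerate()
        .find_map(|(i, c)| if !fun(c) { Some(pos + i) } else { None })
}

fn main() {
    // Skipping alphabetic chars from index 0 stops at the space (index 5).
    assert_eq!(skip_while("hello world", 0, |c| c.is_alphabetic()), Some(5));
    // Every remaining char satisfies the predicate, so there is no stop index.
    assert_eq!(skip_while("aaaa", 0, |c| c == 'a'), None);
}
```

Returning `Option<usize>` instead of mutating `&mut usize` (as the old `skip_over_next` did) makes the "ran off the end" case explicit at the call site.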
/// Possible targets of a word motion
#[derive(Copy, Clone, Debug)]
pub enum WordMotionTarget {
NextWordStart,
NextWordEnd,
PrevWordStart,
}
pub trait CharHelpers {
fn range_to_target(&mut self, target: WordMotionTarget, origin: Range) -> Range;
}
enum WordMotionPhase {
Start,
SkipNewlines,
ReachTarget,
}
impl CharHelpers for Chars<'_> {
fn range_to_target(&mut self, target: WordMotionTarget, origin: Range) -> Range {
let range = origin;
// Characters are iterated forward or backwards depending on the motion direction.
let characters: Box<dyn Iterator<Item = char>> = match target {
WordMotionTarget::PrevWordStart => {
self.next();
Box::new(from_fn(|| self.prev()))
}
_ => Box::new(self),
};
// Index advancement also depends on the direction.
let advance: &dyn Fn(&mut usize) = match target {
WordMotionTarget::PrevWordStart => &|u| *u = u.saturating_sub(1),
_ => &|u| *u += 1,
};
let mut characters = characters.peekable();
let mut phase = WordMotionPhase::Start;
let mut head = origin.head;
let mut anchor: Option<usize> = None;
let is_boundary = |a: char, b: Option<char>| categorize(a) != categorize(b.unwrap_or(a));
while let Some(peek) = characters.peek().copied() {
phase = match phase {
WordMotionPhase::Start => {
characters.next();
if characters.peek().is_none() {
break; // We're at the end, so there's nothing to do.
}
// Anchor may remain here if the head wasn't at a boundary
if !is_boundary(peek, characters.peek().copied()) && !is_end_of_line(peek) {
anchor = Some(head);
}
// First character is always skipped by the head
advance(&mut head);
WordMotionPhase::SkipNewlines
}
WordMotionPhase::SkipNewlines => {
if is_end_of_line(peek) {
characters.next();
if characters.peek().is_some() {
advance(&mut head);
}
WordMotionPhase::SkipNewlines
} else {
WordMotionPhase::ReachTarget
}
}
WordMotionPhase::ReachTarget => {
characters.next();
anchor = anchor.or(Some(head));
if reached_target(target, peek, characters.peek()) {
break;
} else {
advance(&mut head);
}
WordMotionPhase::ReachTarget
}
}
}
Range::new(anchor.unwrap_or(origin.anchor), head)
}
}
fn reached_target(target: WordMotionTarget, peek: char, next_peek: Option<&char>) -> bool {
let next_peek = match next_peek {
Some(next_peek) => next_peek,
None => return true,
};
match target {
WordMotionTarget::NextWordStart => {
((categorize(peek) != categorize(*next_peek))
&& (is_end_of_line(*next_peek) || !next_peek.is_whitespace()))
}
WordMotionTarget::NextWordEnd | WordMotionTarget::PrevWordStart => {
((categorize(peek) != categorize(*next_peek))
&& (!peek.is_whitespace() || is_end_of_line(*next_peek)))
        }
    }
}
#[cfg(test)]
mod test {
    use std::array::{self, IntoIter};
    use ropey::Rope;

    use super::*;
const SINGLE_LINE_SAMPLE: &str = "This is a simple alphabetic line";
const MULTILINE_SAMPLE: &str = "\
Multiline\n\
text sample\n\
which\n\
is merely alphabetic\n\
and whitespaced\n\
";
const MULTIBYTE_CHARACTER_SAMPLE: &str = "\
パーティーへ行かないか\n\
The text above is Japanese\n\
";
    #[test]
    fn test_vertical_move() {
        let text = Rope::from("abcd\nefg\nwrs");
@@ -254,9 +328,477 @@ mod test {
        assert_eq!(
            coords_at_pos(
                slice,
                move_vertically(slice, range, Direction::Forward, 1, Movement::Move).head
            ),
            (1, 2).into()
        );
    }
#[test]
fn horizontal_moves_through_single_line_in_single_line_text() {
let text = Rope::from(SINGLE_LINE_SAMPLE);
let slice = text.slice(..);
let position = pos_at_coords(slice, (0, 0).into());
let mut range = Range::point(position);
let moves_and_expected_coordinates = [
((Direction::Forward, 1usize), (0, 1)),
((Direction::Forward, 2usize), (0, 3)),
((Direction::Forward, 0usize), (0, 3)),
((Direction::Forward, 999usize), (0, 31)),
((Direction::Forward, 999usize), (0, 31)),
((Direction::Backward, 999usize), (0, 0)),
];
for ((direction, amount), coordinates) in IntoIter::new(moves_and_expected_coordinates) {
range = move_horizontally(slice, range, direction, amount, Movement::Move);
assert_eq!(coords_at_pos(slice, range.head), coordinates.into())
}
}
#[test]
fn horizontal_moves_through_single_line_in_multiline_text() {
let text = Rope::from(MULTILINE_SAMPLE);
let slice = text.slice(..);
let position = pos_at_coords(slice, (0, 0).into());
let mut range = Range::point(position);
let moves_and_expected_coordinates = IntoIter::new([
((Direction::Forward, 1usize), (0, 1)), // M_ltiline
((Direction::Forward, 2usize), (0, 3)), // Mul_iline
((Direction::Backward, 6usize), (0, 0)), // _ultiline
((Direction::Backward, 999usize), (0, 0)), // _ultiline
((Direction::Forward, 3usize), (0, 3)), // Mul_iline
((Direction::Forward, 0usize), (0, 3)), // Mul_iline
((Direction::Backward, 0usize), (0, 3)), // Mul_iline
((Direction::Forward, 999usize), (0, 9)), // Multilin_
((Direction::Forward, 999usize), (0, 9)), // Multilin_
]);
for ((direction, amount), coordinates) in moves_and_expected_coordinates {
range = move_horizontally(slice, range, direction, amount, Movement::Move);
assert_eq!(coords_at_pos(slice, range.head), coordinates.into());
assert_eq!(range.head, range.anchor);
}
}
#[test]
fn selection_extending_moves_in_single_line_text() {
let text = Rope::from(SINGLE_LINE_SAMPLE);
let slice = text.slice(..);
let position = pos_at_coords(slice, (0, 0).into());
let mut range = Range::point(position);
let original_anchor = range.anchor;
let moves = IntoIter::new([
(Direction::Forward, 1usize),
(Direction::Forward, 5usize),
(Direction::Backward, 3usize),
]);
for (direction, amount) in moves {
range = move_horizontally(slice, range, direction, amount, Movement::Extend);
assert_eq!(range.anchor, original_anchor);
}
}
#[test]
fn vertical_moves_in_single_column() {
let text = Rope::from(MULTILINE_SAMPLE);
let slice = dbg!(&text).slice(..);
let position = pos_at_coords(slice, (0, 0).into());
let mut range = Range::point(position);
let moves_and_expected_coordinates = IntoIter::new([
((Direction::Forward, 1usize), (1, 0)),
((Direction::Forward, 2usize), (3, 0)),
((Direction::Backward, 999usize), (0, 0)),
((Direction::Forward, 3usize), (3, 0)),
((Direction::Forward, 0usize), (3, 0)),
((Direction::Backward, 0usize), (3, 0)),
((Direction::Forward, 5), (4, 0)),
((Direction::Forward, 999usize), (4, 0)),
]);
for ((direction, amount), coordinates) in moves_and_expected_coordinates {
range = move_vertically(slice, range, direction, amount, Movement::Move);
assert_eq!(coords_at_pos(slice, range.head), coordinates.into());
assert_eq!(range.head, range.anchor);
}
}
#[test]
fn vertical_moves_jumping_column() {
let text = Rope::from(MULTILINE_SAMPLE);
let slice = text.slice(..);
let position = pos_at_coords(slice, (0, 0).into());
let mut range = Range::point(position);
enum Axis {
H,
V,
}
let moves_and_expected_coordinates = IntoIter::new([
// Places cursor at the end of line
((Axis::H, Direction::Forward, 8usize), (0, 8)),
// First descent preserves column as the target line is wider
((Axis::V, Direction::Forward, 1usize), (1, 8)),
// Second descent clamps column as the target line is shorter
((Axis::V, Direction::Forward, 1usize), (2, 4)),
// Third descent restores the original column
((Axis::V, Direction::Forward, 1usize), (3, 8)),
// Behaviour is preserved even through long jumps
((Axis::V, Direction::Backward, 999usize), (0, 8)),
((Axis::V, Direction::Forward, 999usize), (4, 8)),
]);
for ((axis, direction, amount), coordinates) in moves_and_expected_coordinates {
range = match axis {
Axis::H => move_horizontally(slice, range, direction, amount, Movement::Move),
Axis::V => move_vertically(slice, range, direction, amount, Movement::Move),
};
assert_eq!(coords_at_pos(slice, range.head), coordinates.into());
assert_eq!(range.head, range.anchor);
}
}
#[test]
fn multibyte_character_column_jumps() {
let text = Rope::from(MULTIBYTE_CHARACTER_SAMPLE);
let slice = text.slice(..);
let position = pos_at_coords(slice, (0, 0).into());
let mut range = Range::point(position);
// FIXME: The behaviour captured in this test diverges from both Kakoune and Vim. These
// will attempt to preserve the horizontal position of the cursor, rather than
// placing it at the same character index.
enum Axis {
H,
V,
}
let moves_and_expected_coordinates = IntoIter::new([
// Places cursor at the fourth kana
((Axis::H, Direction::Forward, 4), (0, 4)),
// Descent places cursor at the fourth character.
((Axis::V, Direction::Forward, 1usize), (1, 4)),
]);
for ((axis, direction, amount), coordinates) in moves_and_expected_coordinates {
range = match axis {
Axis::H => move_horizontally(slice, range, direction, amount, Movement::Move),
Axis::V => move_vertically(slice, range, direction, amount, Movement::Move),
};
assert_eq!(coords_at_pos(slice, range.head), coordinates.into());
assert_eq!(range.head, range.anchor);
}
}
#[test]
#[should_panic]
fn nonsensical_ranges_panic_on_forward_movement_attempt_in_debug_mode() {
move_next_word_start(Rope::from("Sample").slice(..), Range::point(99999999), 1);
}
#[test]
#[should_panic]
fn nonsensical_ranges_panic_on_forward_to_end_movement_attempt_in_debug_mode() {
move_next_word_end(Rope::from("Sample").slice(..), Range::point(99999999), 1);
}
#[test]
#[should_panic]
fn nonsensical_ranges_panic_on_backwards_movement_attempt_in_debug_mode() {
move_prev_word_start(Rope::from("Sample").slice(..), Range::point(99999999), 1);
}
#[test]
fn test_behaviour_when_moving_to_start_of_next_words() {
let tests = array::IntoIter::new([
("Basic forward motion stops at the first space",
vec![(1, Range::new(0, 0), Range::new(0, 5))]),
(" Starting from a boundary advances the anchor",
vec![(1, Range::new(0, 0), Range::new(1, 9))]),
("Long whitespace gap is bridged by the head",
vec![(1, Range::new(0, 0), Range::new(0, 10))]),
("Previous anchor is irrelevant for forward motions",
vec![(1, Range::new(12, 0), Range::new(0, 8))]),
(" Starting from whitespace moves to last space in sequence",
vec![(1, Range::new(0, 0), Range::new(0, 3))]),
("Starting from mid-word leaves anchor at start position and moves head",
vec![(1, Range::new(3, 3), Range::new(3, 8))]),
("Identifiers_with_underscores are considered a single word",
vec![(1, Range::new(0, 0), Range::new(0, 28))]),
("Jumping\n into starting whitespace selects the spaces before 'into'",
vec![(1, Range::new(0, 6), Range::new(8, 11))]),
("alphanumeric.!,and.?=punctuation are considered 'words' for the purposes of word motion",
vec![
(1, Range::new(0, 0), Range::new(0, 11)),
(1, Range::new(0, 11), Range::new(12, 14)),
(1, Range::new(12, 14), Range::new(15, 17))
]),
("... ... punctuation and spaces behave as expected",
vec![
(1, Range::new(0, 0), Range::new(0, 5)),
(1, Range::new(0, 5), Range::new(6, 9)),
]),
(".._.._ punctuation is not joined by underscores into a single block",
vec![(1, Range::new(0, 0), Range::new(0, 1))]),
("Newlines\n\nare bridged seamlessly.",
vec![
(1, Range::new(0, 0), Range::new(0, 7)),
(1, Range::new(0, 7), Range::new(10, 13)),
]),
("Jumping\n\n\n\n\n\n from newlines to whitespace selects whitespace.",
vec![
(1, Range::new(0, 8), Range::new(13, 15)),
]),
("A failed motion does not modify the range",
vec![
(3, Range::new(37, 41), Range::new(37, 41)),
]),
("oh oh oh two character words!",
vec![
(1, Range::new(0, 0), Range::new(0, 2)),
(1, Range::new(0, 2), Range::new(3, 5)),
(1, Range::new(0, 1), Range::new(2, 2)),
]),
("Multiple motions at once resolve correctly",
vec![
(3, Range::new(0, 0), Range::new(17, 19)),
]),
("Excessive motions are performed partially",
vec![
(999, Range::new(0, 0), Range::new(32, 40)),
]),
("", // Edge case of moving forward in empty string
vec![
(1, Range::new(0, 0), Range::new(0, 0)),
]),
("\n\n\n\n\n", // Edge case of moving forward in all newlines
vec![
(1, Range::new(0, 0), Range::new(0, 4)),
]),
("\n \n \n Jumping through alternated space blocks and newlines selects the space blocks",
vec![
(1, Range::new(0, 0), Range::new(1, 3)),
(1, Range::new(1, 3), Range::new(5, 7)),
]),
("ヒーリクス multibyte characters behave as normal characters",
vec![
(1, Range::new(0, 0), Range::new(0, 5)),
]),
]);
for (sample, scenario) in tests {
for (count, begin, expected_end) in scenario.into_iter() {
let range = move_next_word_start(Rope::from(sample).slice(..), begin, count);
assert_eq!(range, expected_end, "Case failed: [{}]", sample);
}
}
}
#[test]
fn test_behaviour_when_moving_to_start_of_previous_words() {
let tests = array::IntoIter::new([
("Basic backward motion from the middle of a word",
vec![(1, Range::new(3, 3), Range::new(3, 0))]),
("Starting from after boundary retreats the anchor",
vec![(1, Range::new(0, 8), Range::new(7, 0))]),
(" Jump to start of a word preceded by whitespace",
vec![(1, Range::new(5, 5), Range::new(5, 4))]),
(" Jump to start of line from start of word preceded by whitespace",
vec![(1, Range::new(4, 4), Range::new(3, 0))]),
("Previous anchor is irrelevant for backward motions",
vec![(1, Range::new(12, 5), Range::new(5, 0))]),
(" Starting from whitespace moves to first space in sequence",
vec![(1, Range::new(0, 3), Range::new(3, 0))]),
("Identifiers_with_underscores are considered a single word",
vec![(1, Range::new(0, 20), Range::new(20, 0))]),
("Jumping\n \nback through a newline selects whitespace",
vec![(1, Range::new(0, 13), Range::new(11, 8))]),
("Jumping to start of word from the end selects the word",
vec![(1, Range::new(6, 6), Range::new(6, 0))]),
("alphanumeric.!,and.?=punctuation are considered 'words' for the purposes of word motion",
vec![
(1, Range::new(30, 30), Range::new(30, 21)),
(1, Range::new(30, 21), Range::new(20, 18)),
(1, Range::new(20, 18), Range::new(17, 15))
]),
("... ... punctuation and spaces behave as expected",
vec![
(1, Range::new(0, 10), Range::new(9, 6)),
(1, Range::new(9, 6), Range::new(5, 0)),
]),
(".._.._ punctuation is not joined by underscores into a single block",
vec![(1, Range::new(0, 5), Range::new(4, 3))]),
("Newlines\n\nare bridged seamlessly.",
vec![
(1, Range::new(0, 10), Range::new(7, 0)),
]),
("Jumping \n\n\n\n\nback from within a newline group selects previous block",
vec![
(1, Range::new(0, 13), Range::new(10, 0)),
]),
("Failed motions do not modify the range",
vec![
(0, Range::new(3, 0), Range::new(3, 0)),
]),
("Multiple motions at once resolve correctly",
vec![
(3, Range::new(18, 18), Range::new(8, 0)),
]),
("Excessive motions are performed partially",
vec![
(999, Range::new(40, 40), Range::new(9, 0)),
]),
("", // Edge case of moving backwards in empty string
vec![
(1, Range::new(0, 0), Range::new(0, 0)),
]),
("\n\n\n\n\n", // Edge case of moving backwards in all newlines
vec![
(1, Range::new(0, 0), Range::new(0, 0)),
]),
(" \n \nJumping back through alternated space blocks and newlines selects the space blocks",
vec![
(1, Range::new(0, 7), Range::new(6, 4)),
(1, Range::new(6, 4), Range::new(2, 0)),
]),
("ヒーリクス multibyte characters behave as normal characters",
vec![
(1, Range::new(0, 5), Range::new(4, 0)),
]),
]);
for (sample, scenario) in tests {
for (count, begin, expected_end) in scenario.into_iter() {
let range = move_prev_word_start(Rope::from(sample).slice(..), begin, count);
assert_eq!(range, expected_end, "Case failed: [{}]", sample);
}
}
}
#[test]
fn test_behaviour_when_moving_to_end_of_next_words() {
let tests = array::IntoIter::new([
("Basic forward motion from the start of a word to the end of it",
vec![(1, Range::new(0, 0), Range::new(0, 4))]),
("Basic forward motion from the end of a word to the end of the next",
vec![(1, Range::new(0, 4), Range::new(5, 12))]),
("Basic forward motion from the middle of a word to the end of it",
vec![(1, Range::new(2, 2), Range::new(2, 4))]),
(" Jumping to end of a word preceded by whitespace",
vec![(1, Range::new(0, 0), Range::new(0, 10))]),
(" Starting from a boundary advances the anchor",
vec![(1, Range::new(0, 0), Range::new(1, 8))]),
("Previous anchor is irrelevant for end of word motion",
vec![(1, Range::new(12, 2), Range::new(2, 7))]),
("Identifiers_with_underscores are considered a single word",
vec![(1, Range::new(0, 0), Range::new(0, 27))]),
("Jumping\n into starting whitespace selects up to the end of next word",
vec![(1, Range::new(0, 6), Range::new(8, 15))]),
("alphanumeric.!,and.?=punctuation are considered 'words' for the purposes of word motion",
vec![
(1, Range::new(0, 0), Range::new(0, 11)),
(1, Range::new(0, 11), Range::new(12, 14)),
(1, Range::new(12, 14), Range::new(15, 17))
]),
("... ... punctuation and spaces behave as expected",
vec![
(1, Range::new(0, 0), Range::new(0, 2)),
(1, Range::new(0, 2), Range::new(3, 8)),
]),
(".._.._ punctuation is not joined by underscores into a single block",
vec![(1, Range::new(0, 0), Range::new(0, 1))]),
("Newlines\n\nare bridged seamlessly.",
vec![
(1, Range::new(0, 0), Range::new(0, 7)),
(1, Range::new(0, 7), Range::new(10, 12)),
]),
("Jumping\n\n\n\n\n\n from newlines to whitespace selects to end of next word.",
vec![
(1, Range::new(0, 8), Range::new(13, 19)),
]),
("A failed motion does not modify the range",
vec![
(3, Range::new(37, 41), Range::new(37, 41)),
]),
("Multiple motions at once resolve correctly",
vec![
(3, Range::new(0, 0), Range::new(16, 18)),
]),
("Excessive motions are performed partially",
vec![
(999, Range::new(0, 0), Range::new(31, 40)),
]),
("", // Edge case of moving forward in empty string
vec![
(1, Range::new(0, 0), Range::new(0, 0)),
]),
("\n\n\n\n\n", // Edge case of moving forward in all newlines
vec![
(1, Range::new(0, 0), Range::new(0, 4)),
]),
("\n \n \n Jumping through alternated space blocks and newlines selects the space blocks",
vec![
(1, Range::new(0, 0), Range::new(1, 3)),
(1, Range::new(1, 3), Range::new(5, 7)),
]),
("ヒーリクス multibyte characters behave as normal characters",
vec![
(1, Range::new(0, 0), Range::new(0, 4)),
]),
]);
for (sample, scenario) in tests {
for (count, begin, expected_end) in scenario.into_iter() {
let range = move_next_word_end(Rope::from(sample).slice(..), begin, count);
assert_eq!(range, expected_end, "Case failed: [{}]", sample);
}
}
}
#[test]
fn test_categorize() {
const WORD_TEST_CASE: &'static str =
"_hello_world_あいうえおー1234567890";
const PUNCTUATION_TEST_CASE: &'static str =
"!\"#$%&\'()*+,-./:;<=>?@[\\]^`{|}~!”#$%&’()*+、。:;<=>?@「」^`{|}~";
const WHITESPACE_TEST_CASE: &'static str = "  ";
assert_eq!(Category::Eol, categorize('\n'));
for ch in WHITESPACE_TEST_CASE.chars() {
assert_eq!(
Category::Whitespace,
categorize(ch),
"Testing '{}', but got `{:?}` instead of `Category::Whitespace`",
ch,
categorize(ch)
);
}
for ch in WORD_TEST_CASE.chars() {
assert_eq!(
Category::Word,
categorize(ch),
"Testing '{}', but got `{:?}` instead of `Category::Word`",
ch,
categorize(ch)
);
}
for ch in PUNCTUATION_TEST_CASE.chars() {
assert_eq!(
Category::Punctuation,
categorize(ch),
"Testing '{}', but got `{:?}` instead of `Category::Punctuation`",
ch,
categorize(ch)
);
}
}
}


@@ -6,16 +6,15 @@ use std::{collections::HashMap, sync::RwLock};
static REGISTRY: Lazy<RwLock<HashMap<char, Vec<String>>>> =
    Lazy::new(|| RwLock::new(HashMap::new()));

/// Read register values.
pub fn get(register_name: char) -> Option<Vec<String>> {
    let registry = REGISTRY.read().unwrap();
    registry.get(&register_name).cloned() // TODO: no cloning
}

// restoring: bool
pub fn set(register_name: char, values: Vec<String>) {
    let mut registry = REGISTRY.write().unwrap();
    registry.insert(register_name, values);
}
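The register API diffed above (renamed parameter `register` → `register_name`) can be sketched with only the standard library; the real module uses `once_cell::sync::Lazy` for the global map, and `std::sync::OnceLock` stands in here so the sketch compiles without external crates:

```rust
// Std-only sketch of the global register map: a lazily initialised
// RwLock<HashMap<char, Vec<String>>> with get/set accessors.
use std::collections::HashMap;
use std::sync::{OnceLock, RwLock};

static REGISTRY: OnceLock<RwLock<HashMap<char, Vec<String>>>> = OnceLock::new();

fn registry() -> &'static RwLock<HashMap<char, Vec<String>>> {
    REGISTRY.get_or_init(|| RwLock::new(HashMap::new()))
}

/// Read register values (cloned, as in the diff's TODO).
pub fn get(register_name: char) -> Option<Vec<String>> {
    registry().read().unwrap().get(&register_name).cloned()
}

pub fn set(register_name: char, values: Vec<String>) {
    registry().write().unwrap().insert(register_name, values);
}

fn main() {
    set('"', vec!["yanked text".to_string()]);
    assert_eq!(get('"'), Some(vec!["yanked text".to_string()]));
    assert_eq!(get('z'), None);
}
```

The read guard is a statement-scoped temporary, so `cloned()` completes before the lock is released; avoiding that clone is exactly what the `// TODO: no cloning` comment in the diff is about.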


@@ -35,6 +35,10 @@ impl Range {
        }
    }

    pub fn point(head: usize) -> Self {
        Self::new(head, head)
    }

    /// Start of the range.
    #[inline]
    #[must_use]
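The added `Range::point(head)` constructor is shorthand for a zero-width selection whose anchor and head coincide; a minimal sketch with a stand-in `Range` (the real type lives in helix-core's selection module and carries more methods):

```rust
// Stand-in Range with just the two fields the diff's constructor touches.
#[derive(Debug, PartialEq)]
struct Range {
    anchor: usize,
    head: usize,
}

impl Range {
    fn new(anchor: usize, head: usize) -> Self {
        Self { anchor, head }
    }

    // A "point" is a collapsed range: anchor == head.
    fn point(head: usize) -> Self {
        Self::new(head, head)
    }
}

fn main() {
    assert_eq!(Range::point(4), Range::new(4, 4));
}
```

The movement tests in this same compare view (`Range::point(position)`) rely on this helper to start each motion from a collapsed cursor.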


@@ -1,7 +1,7 @@
use crate::{Rope, Selection};

/// A state represents the current editor state of a single buffer.
#[derive(Debug, Clone)]
pub struct State {
    pub doc: Rope,
    pub selection: Selection,


@@ -5,6 +5,7 @@ use std::{
    borrow::Cow,
    cell::RefCell,
    collections::{HashMap, HashSet},
    fmt,
    path::{Path, PathBuf},
    sync::Arc,
};
@@ -12,13 +13,13 @@ use std::{
use once_cell::sync::{Lazy, OnceCell};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct Configuration {
    pub language: Vec<LanguageConfiguration>,
}
// largely based on tree-sitter/cli/src/loader.rs
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct LanguageConfiguration {
    #[serde(rename = "name")]
@@ -27,8 +28,8 @@ pub struct LanguageConfiguration {
    pub file_types: Vec<String>, // filename ends_with? <Gemfile, rb, etc>
    pub roots: Vec<String>, // these indicate project roots <.git, Cargo.toml>

    #[serde(default)]
    pub auto_format: bool,

    // content_regex
    // injection_regex
@@ -46,7 +47,7 @@ pub struct LanguageConfiguration {
    pub(crate) indent_query: OnceCell<Option<IndentQuery>>,
}

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct LanguageServerConfiguration {
    pub command: String,
@@ -55,14 +56,14 @@ pub struct LanguageServerConfiguration {
    pub args: Vec<String>,
}

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct IndentationConfiguration {
    pub tab_width: usize,
    pub unit: String,
}

#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct IndentQuery {
    #[serde(default)]
@@ -75,8 +76,10 @@ pub struct IndentQuery {
#[cfg(not(feature = "embed_runtime"))]
fn load_runtime_file(language: &str, filename: &str) -> Result<String, std::io::Error> {
    let path = crate::RUNTIME_DIR
        .join("queries")
        .join(language)
        .join(filename);
    std::fs::read_to_string(&path)
}
@@ -189,6 +192,7 @@ impl LanguageConfiguration {
pub static LOADER: OnceCell<Loader> = OnceCell::new();

#[derive(Debug)]
pub struct Loader {
    // highlight_names ?
    language_configs: Vec<Arc<LanguageConfiguration>>,
@@ -256,6 +260,12 @@ pub struct TsParser {
    cursors: Vec<QueryCursor>,
}

impl fmt::Debug for TsParser {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("TsParser").finish()
    }
}

// could also just use a pool, or a single instance?
thread_local! {
    pub static PARSER: RefCell<TsParser> = RefCell::new(TsParser {
@@ -264,6 +274,7 @@ thread_local! {
    })
}

#[derive(Debug)]
pub struct Syntax {
    config: Arc<HighlightConfiguration>,
@@ -366,7 +377,11 @@ impl Syntax {
        // prevents them from being moved. But both of these values are really just
        // pointers, so it's actually ok to move them.

        // reuse a cursor from the pool if possible
        let mut cursor = PARSER.with(|ts_parser| {
            let highlighter = &mut ts_parser.borrow_mut();
            highlighter.cursors.pop().unwrap_or_else(QueryCursor::new)
        });
        let tree_ref = unsafe { mem::transmute::<_, &'static Tree>(self.tree()) };
        let cursor_ref = unsafe { mem::transmute::<_, &'static mut QueryCursor>(&mut cursor) };
        let query_ref = unsafe { mem::transmute::<_, &'static Query>(&self.config.query) };
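The cursor reuse introduced in this hunk follows a thread-local pool pattern: pop a cursor from the pool if one is available, otherwise allocate a fresh one. A std-only sketch with a stand-in `QueryCursor` (the real type comes from the tree-sitter crate):

```rust
// Sketch of a thread-local object pool: take from the pool or create,
// and push back when done to avoid repeated allocation.
use std::cell::RefCell;

struct QueryCursor; // stand-in for tree_sitter::QueryCursor

impl QueryCursor {
    fn new() -> Self {
        QueryCursor
    }
}

thread_local! {
    static CURSORS: RefCell<Vec<QueryCursor>> = RefCell::new(Vec::new());
}

fn take_cursor() -> QueryCursor {
    // Pop a pooled cursor if available, otherwise allocate a new one.
    CURSORS.with(|pool| pool.borrow_mut().pop().unwrap_or_else(QueryCursor::new))
}

fn return_cursor(cursor: QueryCursor) {
    CURSORS.with(|pool| pool.borrow_mut().push(cursor));
}

fn main() {
    let cursor = take_cursor(); // pool empty: allocates
    return_cursor(cursor); // pool now holds one cursor for reuse
    let _reused = take_cursor(); // pool hit: no new allocation
}
```

Being thread-local, the pool needs no locking; each thread that highlights text keeps its own small stack of cursors, which is the trade-off the `// could also just use a pool, or a single instance?` comment in the diff alludes to.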
@@ -440,6 +455,7 @@ impl Syntax {
        // buffer_range_for_scope_at_pos
    }

    #[derive(Debug)]
    pub struct LanguageLayer {
    // mode
    // grammar
@@ -748,6 +764,7 @@ pub enum HighlightEvent {
/// Contains the data needed to highlight code written in a particular language.
///
/// This struct is immutable and can be shared between threads.
#[derive(Debug)]
pub struct HighlightConfiguration {
    pub language: Grammar,
    pub query: Query,
@@ -778,6 +795,7 @@ struct LocalScope<'a> {
    local_defs: Vec<LocalDef<'a>>,
}

#[derive(Debug)]
struct HighlightIter<'a, 'tree: 'a, F>
where
    F: FnMut(&str) -> Option<&'a HighlightConfiguration> + 'a,
@@ -803,6 +821,12 @@ struct HighlightIterLayer<'a, 'tree: 'a> {
    depth: usize,
}

impl<'a, 'tree: 'a> fmt::Debug for HighlightIterLayer<'a, 'tree> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("HighlightIterLayer").finish()
    }
}

impl HighlightConfiguration {
    /// Creates a `HighlightConfiguration` for a given `Grammar` and set of highlighting
    /// queries.
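Several hunks in this file add manual `fmt::Debug` impls for types whose fields (tree-sitter cursors and query state) do not themselves implement `Debug`, so `#[derive(Debug)]` cannot be used. A minimal std-only sketch of that pattern with a stand-in field type:

```rust
// Manual Debug for a struct containing a non-Debug foreign type:
// report the struct name and skip the opaque field, as the diff does
// for TsParser and HighlightIterLayer.
use std::fmt;

struct Cursor; // stand-in for a foreign type without a Debug impl

struct TsParser {
    cursors: Vec<Cursor>,
}

impl fmt::Debug for TsParser {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("TsParser").finish()
    }
}

fn main() {
    let parser = TsParser { cursors: Vec::new() };
    // debug_struct(...).finish() with no fields prints just the name.
    assert_eq!(format!("{:?}", parser), "TsParser");
}
```

This unblocks `#[derive(Debug)]` on every containing type (`Syntax`, `Loader`, `HighlightConfiguration`), which is presumably why the impls appear alongside the new derives in the same commit.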


@@ -15,7 +15,7 @@ pub enum Operation {
    Insert(Tendril),
}

#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum Assoc {
    Before,
    After,
@@ -758,7 +758,7 @@ mod test {
    #[test]
    fn combine_with_utf8() {
        const TEST_CASE: &'static str = "Hello, これはヘリックスエディターです!";
        let empty = Rope::from("");
        let mut a = ChangeSet::new(&empty);


@@ -1,65 +0,0 @@
use crate::movement::{categorize, is_horiz_blank, is_word, skip_over_prev};
use ropey::RopeSlice;
#[must_use]
pub fn nth_prev_word_boundary(slice: RopeSlice, mut char_idx: usize, count: usize) -> usize {
let mut with_end = false;
for _ in 0..count {
if char_idx == 0 {
break;
}
// return if not skip while?
skip_over_prev(slice, &mut char_idx, |ch| ch == '\n');
with_end = skip_over_prev(slice, &mut char_idx, is_horiz_blank);
// refetch
let ch = slice.char(char_idx);
if is_word(ch) {
with_end = skip_over_prev(slice, &mut char_idx, is_word);
} else if ch.is_ascii_punctuation() {
with_end = skip_over_prev(slice, &mut char_idx, |ch| ch.is_ascii_punctuation());
}
}
if with_end {
char_idx
} else {
char_idx + 1
}
}
#[test]
fn different_prev_word_boundary() {
use ropey::Rope;
let t = |x, y| {
let text = Rope::from(x);
let out = nth_prev_word_boundary(text.slice(..), text.len_chars() - 1, 1);
assert_eq!(text.slice(..out), y, r#"from "{}""#, x);
};
t("abcd\nefg\nwrs", "abcd\nefg\n");
t("abcd\nefg\n", "abcd\n");
t("abcd\n", "");
t("hello, world!", "hello, world");
t("hello, world", "hello, ");
t("hello, ", "hello");
t("hello", "");
t("こんにちは、世界!", "こんにちは、世界!"); // TODO: punctuation
t("こんにちは、世界", "こんにちは、");
t("こんにちは、", "こんにちは、"); // what?
t("こんにちは", "");
t("この世界。", "この世界。"); // what?
t("この世界", "");
t("お前はもう死んでいる", "");
t("その300円です", ""); // TODO: should stop at 300
t("唱k", ""); // TODO: should stop at 唱
t("1 + 1 = 2", "1 + 1 = ");
t("1 + 1 =", "1 + 1 ");
t("1 + 1", "1 + ");
t("1 + ", "1 ");
t("1 ", "");
t("1+1=2", "1+1=");
}


@@ -1,6 +1,6 @@
[package]
name = "helix-lsp"
version = "0.2.0"
authors = ["Blaž Hrastnik <blaz@mxxn.io>"]
edition = "2018"
license = "MPL-2.0"
@@ -10,20 +10,14 @@ license = "MPL-2.0"
[dependencies]
helix-core = { path = "../helix-core" }

anyhow = "1.0"
futures-executor = "0.3"
futures-util = { version = "0.3", features = ["std", "async-await"], default-features = false }
jsonrpc-core = { version = "17.1", default-features = false } # don't pull in all of futures
log = "0.4"
lsp-types = { version = "0.89", features = ["proposed"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
thiserror = "1.0"
tokio = { version = "1.6", features = ["full"] }
tokio-stream = "0.1.6"


@@ -3,31 +3,24 @@ use crate::{
     Call, Error, OffsetEncoding, Result,
 };
-use helix_core::{ChangeSet, Rope};
+use helix_core::{find_root, ChangeSet, Rope};

-// use std::collections::HashMap;
-use std::future::Future;
-use std::sync::atomic::{AtomicU64, Ordering};
 use jsonrpc_core as jsonrpc;
 use lsp_types as lsp;
 use serde_json::Value;
+use std::future::Future;
 use std::process::Stdio;
+use std::sync::atomic::{AtomicU64, Ordering};
 use tokio::{
     io::{BufReader, BufWriter},
-    // prelude::*,
     process::{Child, Command},
     sync::mpsc::{channel, UnboundedReceiver, UnboundedSender},
 };

+#[derive(Debug)]
 pub struct Client {
     _process: Child,
-    outgoing: UnboundedSender<Payload>,
-    // pub incoming: Receiver<Call>,
-    pub request_counter: AtomicU64,
+    server_tx: UnboundedSender<Payload>,
+    request_counter: AtomicU64,
     capabilities: Option<lsp::ServerCapabilities>,
     offset_encoding: OffsetEncoding,
 }
@@ -43,40 +36,27 @@ impl Client {
             .kill_on_drop(true)
             .spawn();

-        // use std::io::ErrorKind;
-        let mut process = match process {
-            Ok(process) => process,
-            Err(err) => match err.kind() {
-                // ErrorKind::NotFound | ErrorKind::PermissionDenied => {
-                //     return Err(Error::Other(err.into()))
-                // }
-                _kind => return Err(Error::Other(err.into())),
-            },
-        };
+        let mut process = process?;

         // TODO: do we need bufreader/writer here? or do we use async wrappers on unblock?
         let writer = BufWriter::new(process.stdin.take().expect("Failed to open stdin"));
         let reader = BufReader::new(process.stdout.take().expect("Failed to open stdout"));
         let stderr = BufReader::new(process.stderr.take().expect("Failed to open stderr"));

-        let (incoming, outgoing) = Transport::start(reader, writer, stderr);
+        let (server_rx, server_tx) = Transport::start(reader, writer, stderr);

         let client = Self {
             _process: process,
-            outgoing,
-            // incoming,
+            server_tx,
             request_counter: AtomicU64::new(0),
             capabilities: None,
-            // diagnostics: HashMap::new(),
             offset_encoding: OffsetEncoding::Utf8,
         };

         // TODO: async client.initialize()
         // maybe use an arc<atomic> flag

-        Ok((client, incoming))
+        Ok((client, server_rx))
     }

     fn next_request_id(&self) -> jsonrpc::Id {
@@ -106,7 +86,7 @@ impl Client {
     }

     /// Execute a RPC request on the language server.
-    pub async fn request<R: lsp::request::Request>(&self, params: R::Params) -> Result<R::Result>
+    async fn request<R: lsp::request::Request>(&self, params: R::Params) -> Result<R::Result>
     where
         R::Params: serde::Serialize,
         R::Result: core::fmt::Debug, // TODO: temporary
@@ -118,17 +98,20 @@ impl Client {
     }

     /// Execute a RPC request on the language server.
-    pub fn call<R: lsp::request::Request>(
+    fn call<R: lsp::request::Request>(
         &self,
         params: R::Params,
     ) -> impl Future<Output = Result<Value>>
     where
         R::Params: serde::Serialize,
     {
-        let outgoing = self.outgoing.clone();
+        let server_tx = self.server_tx.clone();
         let id = self.next_request_id();

         async move {
+            use std::time::Duration;
+            use tokio::time::timeout;
+
             let params = serde_json::to_value(params)?;

             let request = jsonrpc::MethodCall {
@@ -140,32 +123,29 @@ impl Client {

             let (tx, mut rx) = channel::<Result<Value>>(1);

-            outgoing
+            server_tx
                 .send(Payload::Request {
                     chan: tx,
                     value: request,
                 })
                 .map_err(|e| Error::Other(e.into()))?;

-            use std::time::Duration;
-            use tokio::time::timeout;
-
             timeout(Duration::from_secs(2), rx.recv())
                 .await
                 .map_err(|_| Error::Timeout)? // return Timeout
-                .unwrap() // TODO: None if channel closed
+                .ok_or(Error::StreamClosed)?
         }
     }

     /// Send a RPC notification to the language server.
-    pub fn notify<R: lsp::notification::Notification>(
+    fn notify<R: lsp::notification::Notification>(
         &self,
         params: R::Params,
     ) -> impl Future<Output = Result<()>>
     where
         R::Params: serde::Serialize,
     {
-        let outgoing = self.outgoing.clone();
+        let server_tx = self.server_tx.clone();

         async move {
             let params = serde_json::to_value(params)?;
@@ -176,7 +156,7 @@ impl Client {
                 params: Self::value_into_params(params),
             };

-            outgoing
+            server_tx
                 .send(Payload::Notification(notification))
                 .map_err(|e| Error::Other(e.into()))?;
@@ -205,7 +185,7 @@ impl Client {
             }),
         };

-        self.outgoing
+        self.server_tx
             .send(Payload::Response(output))
             .map_err(|e| Error::Other(e.into()))?;
@@ -216,15 +196,16 @@ impl Client {
     // General messages
     // -------------------------------------------------------------------------------------------

-    pub async fn initialize(&mut self) -> Result<()> {
+    pub(crate) async fn initialize(&mut self) -> Result<()> {
         // TODO: delay any requests that are triggered prior to initialize
+        let root = find_root(None).and_then(|root| lsp::Url::from_file_path(root).ok());

         #[allow(deprecated)]
         let params = lsp::InitializeParams {
             process_id: Some(std::process::id()),
-            // root_path is obsolete, use root_uri
             root_path: None,
-            // root_uri: Some(lsp_types::Url::parse("file://localhost/")?),
-            root_uri: None, // set to project root in the future
+            root_uri: root,
             initialization_options: None,
             capabilities: lsp::ClientCapabilities {
                 text_document: Some(lsp::TextDocumentClientCapabilities {
@@ -247,6 +228,11 @@ impl Client {
                     }),
                     ..Default::default()
                 }),
+                window: Some(lsp::WindowClientCapabilities {
+                    // TODO: temporarily disabled until we implement handling for window/workDoneProgress/create
+                    // work_done_progress: Some(true),
+                    ..Default::default()
+                }),
                 ..Default::default()
             },
             trace: None,
@@ -674,4 +660,17 @@ impl Client {
         self.call::<lsp::request::References>(params)
     }
+
+    pub fn document_symbols(
+        &self,
+        text_document: lsp::TextDocumentIdentifier,
+    ) -> impl Future<Output = Result<Value>> {
+        let params = lsp::DocumentSymbolParams {
+            text_document,
+            work_done_progress_params: lsp::WorkDoneProgressParams::default(),
+            partial_result_params: lsp::PartialResultParams::default(),
+        };
+
+        self.call::<lsp::request::DocumentSymbolRequest>(params)
+    }
 }


@@ -1,25 +1,27 @@
 mod client;
 mod transport;

-pub use jsonrpc_core as jsonrpc;
-pub use lsp_types as lsp;
-
 pub use client::Client;
+pub use futures_executor::block_on;
+pub use jsonrpc::Call;
+pub use jsonrpc_core as jsonrpc;
 pub use lsp::{Position, Url};
+pub use lsp_types as lsp;

-pub type Result<T> = core::result::Result<T, Error>;
+use futures_util::stream::select_all::SelectAll;

 use helix_core::syntax::LanguageConfiguration;

-use thiserror::Error;
-
-use std::{collections::HashMap, sync::Arc};
+use std::{
+    collections::{hash_map::Entry, HashMap},
+    sync::Arc,
+};

 use serde::{Deserialize, Serialize};
+use thiserror::Error;
 use tokio_stream::wrappers::UnboundedReceiverStream;

-pub use futures_executor::block_on;
+pub type Result<T> = core::result::Result<T, Error>;
+type LanguageId = String;

 #[derive(Error, Debug)]
 pub enum Error {
@@ -27,8 +29,14 @@ pub enum Error {
     Rpc(#[from] jsonrpc::Error),
     #[error("failed to parse: {0}")]
     Parse(#[from] serde_json::Error),
+    #[error("IO Error: {0}")]
+    IO(#[from] std::io::Error),
     #[error("request timed out")]
     Timeout,
+    #[error("server closed the stream")]
+    StreamClosed,
+    #[error("LSP not defined")]
+    LspNotDefined,
     #[error(transparent)]
     Other(#[from] anyhow::Error),
 }
@@ -47,23 +55,54 @@ pub mod util {
     use super::*;
     use helix_core::{Range, Rope, Transaction};

+    /// Converts [`lsp::Position`] to a position in the document.
+    ///
+    /// Returns `None` if position exceeds document length or an operation overflows.
     pub fn lsp_pos_to_pos(
         doc: &Rope,
         pos: lsp::Position,
         offset_encoding: OffsetEncoding,
-    ) -> usize {
+    ) -> Option<usize> {
+        let max_line = doc.lines().count().saturating_sub(1);
+        let pos_line = pos.line as usize;
+        let pos_line = if pos_line > max_line {
+            return None;
+        } else {
+            pos_line
+        };
         match offset_encoding {
             OffsetEncoding::Utf8 => {
-                let line = doc.line_to_char(pos.line as usize);
-                line + pos.character as usize
+                let max_char = doc
+                    .line_to_char(max_line)
+                    .checked_add(doc.line(max_line).len_chars())?;
+                let line = doc.line_to_char(pos_line);
+                let pos = line.checked_add(pos.character as usize)?;
+                if pos <= max_char {
+                    Some(pos)
+                } else {
+                    None
+                }
             }
             OffsetEncoding::Utf16 => {
-                let line = doc.line_to_char(pos.line as usize);
+                let max_char = doc
+                    .line_to_char(max_line)
+                    .checked_add(doc.line(max_line).len_chars())?;
+                let max_cu = doc.char_to_utf16_cu(max_char);
+                let line = doc.line_to_char(pos_line);
                 let line_start = doc.char_to_utf16_cu(line);
-                doc.utf16_cu_to_char(line_start + pos.character as usize)
+                let pos = line_start.checked_add(pos.character as usize)?;
+                if pos <= max_cu {
+                    Some(doc.utf16_cu_to_char(pos))
+                } else {
+                    None
+                }
             }
         }
     }

+    /// Converts position in the document to [`lsp::Position`].
+    ///
+    /// Panics when `pos` is out of `doc` bounds or operation overflows.
     pub fn pos_to_lsp_pos(
         doc: &Rope,
         pos: usize,
@@ -87,6 +126,7 @@ pub mod util {
         }
     }

+    /// Converts a range in the document to [`lsp::Range`].
     pub fn range_to_lsp_range(
         doc: &Rope,
         range: Range,
@@ -98,6 +138,17 @@ pub mod util {
         lsp::Range::new(start, end)
     }

+    pub fn lsp_range_to_range(
+        doc: &Rope,
+        range: lsp::Range,
+        offset_encoding: OffsetEncoding,
+    ) -> Option<Range> {
+        let start = lsp_pos_to_pos(doc, range.start, offset_encoding)?;
+        let end = lsp_pos_to_pos(doc, range.end, offset_encoding)?;
+
+        Some(Range::new(start, end))
+    }
+
     pub fn generate_transaction_from_edits(
         doc: &Rope,
         edits: Vec<lsp::TextEdit>,
@@ -113,14 +164,21 @@ pub mod util {
                     None
                 };

-                let start = lsp_pos_to_pos(doc, edit.range.start, offset_encoding);
-                let end = lsp_pos_to_pos(doc, edit.range.end, offset_encoding);
+                let start =
+                    if let Some(start) = lsp_pos_to_pos(doc, edit.range.start, offset_encoding) {
+                        start
+                    } else {
+                        return (0, 0, None);
+                    };
+                let end = if let Some(end) = lsp_pos_to_pos(doc, edit.range.end, offset_encoding) {
+                    end
+                } else {
+                    return (0, 0, None);
+                };

                 (start, end, replacement)
             }),
         )
     }
-
-    // apply_insert_replace_edit
 }
 #[derive(Debug, PartialEq, Clone)]
@@ -128,6 +186,7 @@ pub enum Notification {
     PublishDiagnostics(lsp::PublishDiagnosticsParams),
     ShowMessage(lsp::ShowMessageParams),
     LogMessage(lsp::LogMessageParams),
+    ProgressMessage(lsp::ProgressParams),
 }

 impl Notification {
@@ -145,17 +204,20 @@ impl Notification {
             }

             lsp::notification::ShowMessage::METHOD => {
-                let params: lsp::ShowMessageParams =
-                    params.parse().expect("Failed to parse ShowMessage params");
+                let params: lsp::ShowMessageParams = params.parse().ok()?;

                 Self::ShowMessage(params)
             }
             lsp::notification::LogMessage::METHOD => {
-                let params: lsp::LogMessageParams =
-                    params.parse().expect("Failed to parse ShowMessage params");
+                let params: lsp::LogMessageParams = params.parse().ok()?;

                 Self::LogMessage(params)
             }
+            lsp::notification::Progress::METHOD => {
+                let params: lsp::ProgressParams = params.parse().ok()?;
+
+                Self::ProgressMessage(params)
+            }
             _ => {
                 log::error!("unhandled LSP notification: {}", method);
                 return None;
@@ -166,14 +228,9 @@ impl Notification {
     }
 }

-pub use jsonrpc::Call;
-
-type LanguageId = String;
-
-use futures_util::stream::select_all::SelectAll;
-
+#[derive(Debug)]
 pub struct Registry {
-    inner: HashMap<LanguageId, Option<Arc<Client>>>,
+    inner: HashMap<LanguageId, Arc<Client>>,

     pub incoming: SelectAll<UnboundedReceiverStream<Call>>,
 }
@@ -192,35 +249,29 @@ impl Registry {
         }
     }

-    pub fn get(&mut self, language_config: &LanguageConfiguration) -> Option<Arc<Client>> {
-        // TODO: propagate the error
+    pub fn get(&mut self, language_config: &LanguageConfiguration) -> Result<Arc<Client>> {
         if let Some(config) = &language_config.language_server {
             // avoid borrow issues
             let inner = &mut self.inner;
             let s_incoming = &mut self.incoming;

-            let language_server = inner
-                .entry(language_config.scope.clone()) // can't use entry with Borrow keys: https://github.com/rust-lang/rfcs/pull/1769
-                .or_insert_with(|| {
-                    // TODO: lookup defaults for id (name, args)
-
+            match inner.entry(language_config.scope.clone()) {
+                Entry::Occupied(language_server) => Ok(language_server.get().clone()),
+                Entry::Vacant(entry) => {
                     // initialize a new client
-                    let (mut client, incoming) =
-                        Client::start(&config.command, &config.args).ok()?;
+                    let (mut client, incoming) = Client::start(&config.command, &config.args)?;
                     // TODO: run this async without blocking
-                    futures_executor::block_on(client.initialize()).unwrap();
+                    futures_executor::block_on(client.initialize())?;
                     s_incoming.push(UnboundedReceiverStream::new(incoming));
+                    let client = Arc::new(client);

-                    Some(Arc::new(client))
-                })
-                .clone();
-
-            return language_server;
+                    entry.insert(client.clone());
+                    Ok(client)
+                }
+            }
+        } else {
+            Err(Error::LspNotDefined)
         }
-
-        None
     }
 }
@@ -249,3 +300,34 @@ impl Registry {
 // there needs to be a way to process incoming lsp messages from all clients.
 //  -> notifications need to be dispatched to wherever
 //  -> requests need to generate a reply and travel back to the same lsp!
+
+#[cfg(test)]
+mod tests {
+    use super::{lsp, util::*, OffsetEncoding};
+    use helix_core::Rope;
+
+    #[test]
+    fn converts_lsp_pos_to_pos() {
+        macro_rules! test_case {
+            ($doc:expr, ($x:expr, $y:expr) => $want:expr) => {
+                let doc = Rope::from($doc);
+                let pos = lsp::Position::new($x, $y);
+                assert_eq!($want, lsp_pos_to_pos(&doc, pos, OffsetEncoding::Utf16));
+                assert_eq!($want, lsp_pos_to_pos(&doc, pos, OffsetEncoding::Utf8))
+            };
+        }
+        test_case!("", (0, 0) => Some(0));
+        test_case!("", (0, 1) => None);
+        test_case!("", (1, 0) => None);
+        test_case!("\n\n", (0, 0) => Some(0));
+        test_case!("\n\n", (1, 0) => Some(1));
+        test_case!("\n\n", (1, 1) => Some(2));
+        test_case!("\n\n", (2, 0) => Some(2));
+        test_case!("\n\n", (3, 0) => None);
+        test_case!("test\n\n\n\ncase", (4, 3) => Some(11));
+        test_case!("test\n\n\n\ncase", (4, 4) => Some(12));
+        test_case!("test\n\n\n\ncase", (4, 5) => None);
+        test_case!("", (u32::MAX, u32::MAX) => None);
+    }
+}
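The bounds checking these tests exercise can be sketched outside the repo as well. The snippet below mirrors the new Option-returning conversion over a plain `&str` instead of a `Rope` (the function name and line-index handling here are illustrative, not the crate's API): a position is valid only if its line exists and the resulting char index does not pass the end of the document.

```rust
// Hedged sketch of the bounds-checked LSP position conversion,
// using a plain &str instead of ropey's Rope. Names are illustrative.
fn lsp_pos_to_char_idx(doc: &str, line: usize, character: usize) -> Option<usize> {
    // Record the char index at which each line starts.
    let mut line_starts = vec![0usize];
    for (i, ch) in doc.chars().enumerate() {
        if ch == '\n' {
            line_starts.push(i + 1);
        }
    }
    let max_line = line_starts.len() - 1;
    if line > max_line {
        return None; // line exceeds document length
    }
    let pos = line_starts[line].checked_add(character)?;
    // A position is valid only up to the total char count.
    if pos <= doc.chars().count() {
        Some(pos)
    } else {
        None
    }
}

fn main() {
    // Same cases as the test above.
    assert_eq!(lsp_pos_to_char_idx("", 0, 0), Some(0));
    assert_eq!(lsp_pos_to_char_idx("", 1, 0), None);
    assert_eq!(lsp_pos_to_char_idx("test\n\n\n\ncase", 4, 3), Some(11));
    assert_eq!(lsp_pos_to_char_idx("test\n\n\n\ncase", 4, 5), None);
    println!("ok");
}
```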


@@ -1,15 +1,9 @@
-use std::collections::HashMap;
-use std::io;
-
-use log::{error, info};
-
-use crate::Error;
-
-type Result<T> = core::result::Result<T, Error>;
+use crate::Result;

 use jsonrpc_core as jsonrpc;
+use log::{error, info};
+use serde::{Deserialize, Serialize};
 use serde_json::Value;
+use std::collections::HashMap;
 use tokio::{
     io::{AsyncBufRead, AsyncBufReadExt, AsyncReadExt, AsyncWriteExt, BufReader, BufWriter},
     process::{ChildStderr, ChildStdin, ChildStdout},
@@ -26,47 +20,45 @@ pub enum Payload {
     Response(jsonrpc::Output),
 }

-use serde::{Deserialize, Serialize};
-
 /// A type representing all possible values sent from the server to the client.
 #[derive(Debug, PartialEq, Clone, Deserialize, Serialize)]
 #[serde(deny_unknown_fields)]
 #[serde(untagged)]
-enum Message {
+enum ServerMessage {
     /// A regular JSON-RPC request output (single response).
     Output(jsonrpc::Output),
     /// A JSON-RPC request or notification.
     Call(jsonrpc::Call),
 }

+#[derive(Debug)]
 pub struct Transport {
-    incoming: UnboundedSender<jsonrpc::Call>,
-    outgoing: UnboundedReceiver<Payload>,
+    client_tx: UnboundedSender<jsonrpc::Call>,
+    client_rx: UnboundedReceiver<Payload>,

     pending_requests: HashMap<jsonrpc::Id, Sender<Result<Value>>>,
-    headers: HashMap<String, String>,

-    writer: BufWriter<ChildStdin>,
-    reader: BufReader<ChildStdout>,
-    stderr: BufReader<ChildStderr>,
+    server_stdin: BufWriter<ChildStdin>,
+    server_stdout: BufReader<ChildStdout>,
+    server_stderr: BufReader<ChildStderr>,
 }

 impl Transport {
     pub fn start(
-        reader: BufReader<ChildStdout>,
-        writer: BufWriter<ChildStdin>,
-        stderr: BufReader<ChildStderr>,
+        server_stdout: BufReader<ChildStdout>,
+        server_stdin: BufWriter<ChildStdin>,
+        server_stderr: BufReader<ChildStderr>,
     ) -> (UnboundedReceiver<jsonrpc::Call>, UnboundedSender<Payload>) {
-        let (incoming, rx) = unbounded_channel();
-        let (tx, outgoing) = unbounded_channel();
+        let (client_tx, rx) = unbounded_channel();
+        let (tx, client_rx) = unbounded_channel();

         let transport = Self {
-            reader,
-            writer,
-            stderr,
-            incoming,
-            outgoing,
+            server_stdout,
+            server_stdin,
+            server_stderr,
+            client_tx,
+            client_rx,
             pending_requests: HashMap::default(),
-            headers: HashMap::default(),
         };

         tokio::spawn(transport.duplex());
@@ -74,105 +66,104 @@ impl Transport {
         (rx, tx)
     }

-    async fn recv(
+    async fn recv_server_message(
         reader: &mut (impl AsyncBufRead + Unpin + Send),
-        headers: &mut HashMap<String, String>,
-    ) -> core::result::Result<Message, std::io::Error> {
-        // read headers
+        buffer: &mut String,
+    ) -> Result<ServerMessage> {
+        let mut content_length = None;
         loop {
-            let mut header = String::new();
-            // detect pipe closed if 0
-            reader.read_line(&mut header).await?;
-            let header = header.trim();
+            buffer.truncate(0);
+            reader.read_line(buffer).await?;
+            let header = buffer.trim();

             if header.is_empty() {
                 break;
             }

-            let parts: Vec<&str> = header.split(": ").collect();
-            if parts.len() != 2 {
-                return Err(std::io::Error::new(
-                    std::io::ErrorKind::Other,
-                    "Failed to parse header",
-                ));
-            }
-            headers.insert(parts[0].to_string(), parts[1].to_string());
+            let mut parts = header.split(": ");
+
+            match (parts.next(), parts.next(), parts.next()) {
+                (Some("Content-Length"), Some(value), None) => {
+                    content_length = Some(value.parse().unwrap());
+                }
+                (Some(_), Some(_), None) => {}
+                _ => {
+                    return Err(std::io::Error::new(
+                        std::io::ErrorKind::Other,
+                        "Failed to parse header",
+                    )
+                    .into());
+                }
+            }
         }

-        // find content-length
-        let content_length = headers.get("Content-Length").unwrap().parse().unwrap();
+        let content_length = content_length.unwrap();

+        //TODO: reuse vector
         let mut content = vec![0; content_length];
         reader.read_exact(&mut content).await?;
         let msg = String::from_utf8(content).unwrap();

-        // read data
         info!("<- {}", msg);

         // try parsing as output (server response) or call (server request)
-        let output: serde_json::Result<Message> = serde_json::from_str(&msg);
+        let output: serde_json::Result<ServerMessage> = serde_json::from_str(&msg);

         Ok(output?)
     }

-    async fn err(
+    async fn recv_server_error(
         err: &mut (impl AsyncBufRead + Unpin + Send),
-    ) -> core::result::Result<(), std::io::Error> {
-        let mut line = String::new();
-        err.read_line(&mut line).await?;
-        error!("err <- {}", line);
+        buffer: &mut String,
+    ) -> Result<()> {
+        buffer.truncate(0);
+        err.read_line(buffer).await?;
+        error!("err <- {}", buffer);

         Ok(())
     }

-    pub async fn send_payload(&mut self, payload: Payload) -> io::Result<()> {
-        match payload {
+    async fn send_payload_to_server(&mut self, payload: Payload) -> Result<()> {
+        //TODO: reuse string
+        let json = match payload {
             Payload::Request { chan, value } => {
                 self.pending_requests.insert(value.id.clone(), chan);
-
-                let json = serde_json::to_string(&value)?;
-                self.send(json).await
+                serde_json::to_string(&value)?
             }
-            Payload::Notification(value) => {
-                let json = serde_json::to_string(&value)?;
-                self.send(json).await
-            }
-            Payload::Response(error) => {
-                let json = serde_json::to_string(&error)?;
-                self.send(json).await
-            }
-        }
+            Payload::Notification(value) => serde_json::to_string(&value)?,
+            Payload::Response(error) => serde_json::to_string(&error)?,
+        };
+        self.send_string_to_server(json).await
     }

-    pub async fn send(&mut self, request: String) -> io::Result<()> {
+    async fn send_string_to_server(&mut self, request: String) -> Result<()> {
         info!("-> {}", request);

         // send the headers
-        self.writer
+        self.server_stdin
             .write_all(format!("Content-Length: {}\r\n\r\n", request.len()).as_bytes())
             .await?;

         // send the body
-        self.writer.write_all(request.as_bytes()).await?;
-        self.writer.flush().await?;
+        self.server_stdin.write_all(request.as_bytes()).await?;
+        self.server_stdin.flush().await?;

         Ok(())
     }

-    async fn recv_msg(&mut self, msg: Message) -> anyhow::Result<()> {
+    async fn process_server_message(&mut self, msg: ServerMessage) -> Result<()> {
         match msg {
-            Message::Output(output) => self.recv_response(output).await?,
-            Message::Call(call) => {
-                self.incoming.send(call).unwrap();
+            ServerMessage::Output(output) => self.process_request_response(output).await?,
+            ServerMessage::Call(call) => {
+                self.client_tx.send(call).unwrap();
                 // let notification = Notification::parse(&method, params);
             }
         };

         Ok(())
     }

-    async fn recv_response(&mut self, output: jsonrpc::Output) -> io::Result<()> {
+    async fn process_request_response(&mut self, output: jsonrpc::Output) -> Result<()> {
         let (id, result) = match output {
             jsonrpc::Output::Success(jsonrpc::Success { id, result, .. }) => {
                 info!("<- {}", result);
@@ -200,29 +191,33 @@ impl Transport {
         Ok(())
     }

-    pub async fn duplex(mut self) {
+    async fn duplex(mut self) {
+        let mut recv_buffer = String::new();
+        let mut err_buffer = String::new();
         loop {
             tokio::select! {
                 // client -> server
-                msg = self.outgoing.recv() => {
-                    if msg.is_none() {
-                        break;
+                msg = self.client_rx.recv() => {
+                    match msg {
+                        Some(msg) => {
+                            self.send_payload_to_server(msg).await.unwrap()
+                        },
+                        None => break
                     }
-                    let msg = msg.unwrap();
-
-                    self.send_payload(msg).await.unwrap();
                 }
-                // server <- client
-                msg = Self::recv(&mut self.reader, &mut self.headers) => {
-                    if msg.is_err() {
-                        error!("err: <- {:?}", msg);
-                        break;
+                // server -> client
+                msg = Self::recv_server_message(&mut self.server_stdout, &mut recv_buffer) => {
+                    match msg {
+                        Ok(msg) => {
+                            self.process_server_message(msg).await.unwrap();
+                        }
+                        Err(_) => {
+                            error!("err: <- {:?}", msg);
+                            break;
+                        },
                     }
-                    let msg = msg.unwrap();
-                    self.recv_msg(msg).await.unwrap();
                 }
-                _msg = Self::err(&mut self.stderr) => {}
+                _msg = Self::recv_server_error(&mut self.server_stderr, &mut err_buffer) => {}
             }
         }
     }
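The transport above frames every message with the LSP base protocol's `Content-Length` headers, exactly what `send_string_to_server` writes and `recv_server_message` parses back. A standalone sketch of that framing (synchronous, over an in-memory cursor instead of the child process pipes; function names here are illustrative):

```rust
use std::io::{BufRead, Read};

// Frame a message the way the transport's send path does:
// headers, a blank line, then the JSON body.
fn frame(body: &str) -> String {
    format!("Content-Length: {}\r\n\r\n{}", body.len(), body)
}

// Parse one framed message, mirroring the header loop in the receive path.
fn read_message(reader: &mut impl BufRead) -> Option<String> {
    let mut content_length = None;
    let mut buffer = String::new();
    loop {
        buffer.clear();
        if reader.read_line(&mut buffer).ok()? == 0 {
            return None; // pipe closed
        }
        let header = buffer.trim();
        if header.is_empty() {
            break; // blank line ends the header section
        }
        let mut parts = header.split(": ");
        if let (Some("Content-Length"), Some(value)) = (parts.next(), parts.next()) {
            content_length = value.parse().ok();
        }
    }
    // Read exactly Content-Length bytes of body.
    let mut content = vec![0u8; content_length?];
    reader.read_exact(&mut content).ok()?;
    String::from_utf8(content).ok()
}

fn main() {
    let framed = frame(r#"{"jsonrpc":"2.0","method":"initialized","params":{}}"#);
    let mut cursor = std::io::Cursor::new(framed.into_bytes());
    let msg = read_message(&mut cursor).unwrap();
    assert!(msg.contains("initialized"));
    println!("{}", msg);
}
```

The real transport does the same work with tokio's async readers and also multiplexes stderr and the client channel through `tokio::select!`.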


@@ -1,6 +1,6 @@
 [package]
 name = "helix-syntax"
-version = "0.0.10"
+version = "0.2.0"
 authors = ["Blaž Hrastnik <blaz@mxxn.io>"]
 edition = "2018"
 license = "MPL-2.0"


@@ -68,18 +68,19 @@ mk_langs!(
     // 2) tree-sitter function to call to get a Language
     (Agda, tree_sitter_agda),
     (Bash, tree_sitter_bash),
+    (C, tree_sitter_c),
+    (CSharp, tree_sitter_c_sharp),
     (Cpp, tree_sitter_cpp),
-    (CSharp, tree_sitter_c_sharp),
     (Css, tree_sitter_css),
-    (C, tree_sitter_c),
     (Elixir, tree_sitter_elixir),
     (Go, tree_sitter_go),
     // (Haskell, tree_sitter_haskell),
     (Html, tree_sitter_html),
+    (Java, tree_sitter_java),
     (Javascript, tree_sitter_javascript),
-    (Java, tree_sitter_java),
     (Json, tree_sitter_json),
     (Julia, tree_sitter_julia),
+    (Nix, tree_sitter_nix),
     (Php, tree_sitter_php),
     (Python, tree_sitter_python),
     (Ruby, tree_sitter_ruby),


@@ -1,12 +1,14 @@
 [package]
 name = "helix-term"
-version = "0.0.10"
+version = "0.2.0"
 description = "A post-modern text editor."
 authors = ["Blaž Hrastnik <blaz@mxxn.io>"]
 edition = "2018"
 license = "MPL-2.0"

-# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
+[package.metadata.nix]
+build = true
+app = true

 [features]
 embed_runtime = ["helix-core/embed_runtime"]


@@ -1,6 +1,7 @@
+use helix_lsp::lsp;
 use helix_view::{document::Mode, Document, Editor, Theme, View};

-use crate::{compositor::Compositor, ui, Args};
+use crate::{args::Args, compositor::Compositor, ui};

 use log::{error, info};
@@ -177,7 +178,7 @@ impl Application {
                         let diagnostics = params
                             .diagnostics
                             .into_iter()
-                            .map(|diagnostic| {
+                            .filter_map(|diagnostic| {
                                 use helix_core::{
                                     diagnostic::{Range, Severity, Severity::*},
                                     Diagnostic,
@@ -188,18 +189,29 @@ impl Application {
                                 let language_server = doc.language_server().unwrap();

                                 // TODO: convert inside server
-                                let start = lsp_pos_to_pos(
+                                let start = if let Some(start) = lsp_pos_to_pos(
                                     text,
                                     diagnostic.range.start,
                                     language_server.offset_encoding(),
-                                );
-                                let end = lsp_pos_to_pos(
+                                ) {
+                                    start
+                                } else {
+                                    log::warn!("lsp position out of bounds - {:?}", diagnostic);
+                                    return None;
+                                };
+
+                                let end = if let Some(end) = lsp_pos_to_pos(
                                     text,
                                     diagnostic.range.end,
                                     language_server.offset_encoding(),
-                                );
+                                ) {
+                                    end
+                                } else {
+                                    log::warn!("lsp position out of bounds - {:?}", diagnostic);
+                                    return None;
+                                };

-                                Diagnostic {
+                                Some(Diagnostic {
                                     range: Range { start, end },
                                     line: diagnostic.range.start.line as usize,
                                     message: diagnostic.message,
@@ -213,7 +225,7 @@ impl Application {
                                     ),
                                     // code
                                     // source
-                                }
+                                })
                             })
                             .collect();
@@ -228,6 +240,59 @@ impl Application {
Notification::LogMessage(params) => { Notification::LogMessage(params) => {
log::warn!("unhandled window/logMessage: {:?}", params); log::warn!("unhandled window/logMessage: {:?}", params);
} }
Notification::ProgressMessage(params) => {
let token = match params.token {
lsp::NumberOrString::Number(n) => n.to_string(),
lsp::NumberOrString::String(s) => s,
};
let msg = {
let lsp::ProgressParamsValue::WorkDone(work) = params.value;
let parts = match work {
lsp::WorkDoneProgress::Begin(lsp::WorkDoneProgressBegin {
title,
message,
percentage,
..
}) => (Some(title), message, percentage.map(|n| n.to_string())),
lsp::WorkDoneProgress::Report(lsp::WorkDoneProgressReport {
message,
percentage,
..
}) => (None, message, percentage.map(|n| n.to_string())),
lsp::WorkDoneProgress::End(lsp::WorkDoneProgressEnd {
message,
}) => {
if let Some(message) = message {
(None, Some(message), None)
} else {
self.editor.clear_status();
return;
}
}
};
match parts {
(Some(title), Some(message), Some(percentage)) => {
format!("{}% {} - {}", percentage, title, message)
}
(Some(title), None, Some(percentage)) => {
format!("{}% {}", percentage, title)
}
(Some(title), Some(message), None) => {
format!("{} - {}", title, message)
}
(None, Some(message), Some(percentage)) => {
format!("{}% {}", percentage, message)
}
(Some(title), None, None) => title,
(None, Some(message), None) => message,
(None, None, Some(percentage)) => format!("{}%", percentage),
(None, None, None) => "".into(),
}
};
let status = format!("[{}] {}", token, msg);
self.editor.set_status(status);
self.render();
}
 _ => unreachable!(),
 }
 }

helix-term/src/args.rs (new file, 53 lines)

@@ -0,0 +1,53 @@
use anyhow::{Error, Result};
use std::path::PathBuf;
#[derive(Default)]
pub struct Args {
pub display_help: bool,
pub display_version: bool,
pub verbosity: u64,
pub files: Vec<PathBuf>,
}
impl Args {
pub fn parse_args() -> Result<Args> {
let mut args = Args::default();
let argv: Vec<String> = std::env::args().collect();
let mut iter = argv.iter();
iter.next(); // skip the program, we don't care about that
while let Some(arg) = iter.next() {
match arg.as_str() {
"--" => break, // stop parsing at this point treat the remaining as files
"--version" => args.display_version = true,
"--help" => args.display_help = true,
arg if arg.starts_with("--") => {
return Err(Error::msg(format!(
"unexpected double dash argument: {}",
arg
)))
}
arg if arg.starts_with('-') => {
let arg = arg.get(1..).unwrap().chars();
for chr in arg {
match chr {
'v' => args.verbosity += 1,
'V' => args.display_version = true,
'h' => args.display_help = true,
_ => return Err(Error::msg(format!("unexpected short arg {}", chr))),
}
}
}
arg => args.files.push(PathBuf::from(arg)),
}
}
// push the remaining args, if any to the files
for filename in iter {
args.files.push(PathBuf::from(filename));
}
Ok(args)
}
}

File diff suppressed because it is too large


@@ -182,10 +182,8 @@ pub trait AnyComponent {
 /// # Examples
 ///
 /// ```rust
-/// # use cursive_core::views::TextComponent;
-/// # use cursive_core::view::Component;
-/// let boxed: Box<Component> = Box::new(TextComponent::new("text"));
-/// let text: Box<TextComponent> = boxed.as_boxed_any().downcast().unwrap();
+/// // let boxed: Box<Component> = Box::new(TextComponent::new("text"));
+/// // let text: Box<TextComponent> = boxed.as_boxed_any().downcast().unwrap();
 /// ```
 fn as_boxed_any(self: Box<Self>) -> Box<dyn Any>;
 }


@@ -11,7 +11,8 @@ use std::collections::HashMap;
 // W = next WORD
 // e = end of word
 // E = end of WORD
-// r =
+// r = replace
+// R = replace with yanked
 // t = 'till char
 // y = yank
 // u = undo
@@ -156,6 +157,7 @@ pub fn default() -> Keymaps {
 // and matching set for select mode (extend)
 //
 key!('r') => commands::replace,
+key!('R') => commands::replace_with_yanked,
 KeyEvent {
 code: KeyCode::Home,
@@ -278,12 +280,17 @@ pub fn default() -> Keymaps {
 // z family for save/restore/combine from/to sels from register
-ctrl!('i') => commands::jump_forward, // TODO: ctrl-i conflicts tab
+KeyEvent { // supposedly ctrl!('i') but did not work
+    code: KeyCode::Tab,
+    modifiers: KeyModifiers::NONE,
+} => commands::jump_forward,
 ctrl!('o') => commands::jump_backward,
 // ctrl!('s') => commands::save_selection,
 key!(' ') => commands::space_mode,
 key!('z') => commands::view_mode,
+key!('"') => commands::select_register,
 );
 // TODO: decide whether we want normal mode to also be select mode (kakoune-like), or whether
 // we keep this separate select mode. More keys can fit into normal mode then, but it's weird

helix-term/src/lib.rs (new file, 8 lines)

@@ -0,0 +1,8 @@
#![allow(unused)]
pub mod application;
pub mod args;
pub mod commands;
pub mod compositor;
pub mod keymap;
pub mod ui;


@@ -1,16 +1,9 @@
-#![allow(unused)]
+use helix_term::application::Application;
+use helix_term::args::Args;
-mod application;
-mod commands;
-mod compositor;
-mod keymap;
-mod ui;
-use application::Application;
 use std::path::PathBuf;
-use anyhow::{Context, Error, Result};
+use anyhow::{Context, Result};
 fn setup_logging(logpath: PathBuf, verbosity: u64) -> Result<()> {
 let mut base_config = fern::Dispatch::new();
@@ -45,58 +38,11 @@ fn setup_logging(logpath: PathBuf, verbosity: u64) -> Result<()> {
 Ok(())
 }
-pub struct Args {
-    display_help: bool,
-    display_version: bool,
-    verbosity: u64,
-    files: Vec<PathBuf>,
-}
-fn parse_args(mut args: Args) -> Result<Args> {
-    let argv: Vec<String> = std::env::args().collect();
-    let mut iter = argv.iter();
-    iter.next(); // skip the program, we don't care about that
-    while let Some(arg) = iter.next() {
-        match arg.as_str() {
-            "--" => break, // stop parsing at this point treat the remaining as files
-            "--version" => args.display_version = true,
-            "--help" => args.display_help = true,
-            arg if arg.starts_with("--") => {
-                return Err(Error::msg(format!(
-                    "unexpected double dash argument: {}",
-                    arg
-                )))
-            }
-            arg if arg.starts_with('-') => {
-                let arg = arg.get(1..).unwrap().chars();
-                for chr in arg {
-                    match chr {
-                        'v' => args.verbosity += 1,
-                        'V' => args.display_version = true,
-                        'h' => args.display_help = true,
-                        _ => return Err(Error::msg(format!("unexpected short arg {}", chr))),
-                    }
-                }
-            }
-            arg => args.files.push(PathBuf::from(arg)),
-        }
-    }
-    // push the remaining args, if any to the files
-    for filename in iter {
-        args.files.push(PathBuf::from(filename));
-    }
-    Ok(args)
-}
 #[tokio::main]
 async fn main() -> Result<()> {
 let cache_dir = helix_core::cache_dir();
 if !cache_dir.exists() {
-std::fs::create_dir(&cache_dir);
+std::fs::create_dir_all(&cache_dir).ok();
 }
 let logpath = cache_dir.join("helix.log");
@@ -125,14 +71,7 @@ FLAGS:
 logpath.display(),
 );
-let mut args: Args = Args {
-    display_help: false,
-    display_version: false,
-    verbosity: 0,
-    files: [].to_vec(),
-};
-args = parse_args(args).context("could not parse arguments")?;
+let args = Args::parse_args().context("could not parse arguments")?;
 // Help has a higher priority and should be handled separately.
 if args.display_help {
@@ -147,14 +86,14 @@ FLAGS:
 let conf_dir = helix_core::config_dir();
 if !conf_dir.exists() {
-std::fs::create_dir(&conf_dir);
+std::fs::create_dir_all(&conf_dir).ok();
 }
 setup_logging(logpath, args.verbosity).context("failed to initialize logging")?;
 // TODO: use the thread local executor to spawn the application task separately from the work pool
 let mut app = Application::new(args).context("unable to create new appliction")?;
-app.run().await;
+app.run().await.unwrap();
 Ok(())
 }


@@ -108,20 +108,6 @@ impl Completion {
 let item = item.unwrap();
 use helix_lsp::{lsp, util};
-// determine what to insert: text_edit | insert_text | label
-let edit = if let Some(edit) = &item.text_edit {
-    match edit {
-        lsp::CompletionTextEdit::Edit(edit) => edit.clone(),
-        lsp::CompletionTextEdit::InsertAndReplace(item) => {
-            unimplemented!("completion: insert_and_replace {:?}", item)
-        }
-    }
-} else {
-    item.insert_text.as_ref().unwrap_or(&item.label);
-    unimplemented!();
-    // lsp::TextEdit::new(); TODO: calculate a TextEdit from insert_text
-    // and we insert at position.
-};
 // if more text was entered, remove it
 let cursor = doc.selection(view.id).cursor();
@@ -134,11 +120,27 @@ impl Completion {
 }
 use helix_lsp::OffsetEncoding;
-let transaction = util::generate_transaction_from_edits(
-    doc.text(),
-    vec![edit],
-    offset_encoding, // TODO: should probably transcode in Client
-);
+let transaction = if let Some(edit) = &item.text_edit {
+    let edit = match edit {
+        lsp::CompletionTextEdit::Edit(edit) => edit.clone(),
+        lsp::CompletionTextEdit::InsertAndReplace(item) => {
+            unimplemented!("completion: insert_and_replace {:?}", item)
+        }
+    };
+    util::generate_transaction_from_edits(
+        doc.text(),
+        vec![edit],
+        offset_encoding, // TODO: should probably transcode in Client
+    )
+} else {
+    let text = item.insert_text.as_ref().unwrap_or(&item.label);
+    let cursor = doc.selection(view.id).cursor();
+    Transaction::change(
+        doc.text(),
+        vec![(cursor, cursor, Some(text.as_str().into()))].into_iter(),
+    )
+};
 doc.apply(&transaction, view.id);
 // TODO: merge edit with additional_text_edits


@@ -34,6 +34,12 @@ pub struct EditorView {
 const OFFSET: u16 = 7; // 1 diagnostic + 5 linenr + 1 gutter
+impl Default for EditorView {
+    fn default() -> Self {
+        Self::new()
+    }
+}
 impl EditorView {
 pub fn new() -> Self {
 Self {
@@ -328,11 +334,13 @@ impl EditorView {
 if let Some(pos) = pos {
 let pos = view.screen_coords_at_pos(doc, text, pos);
 if let Some(pos) = pos {
-// this only prevents panic due to painting selection too far
-// TODO: prevent painting when scroll past x or in gutter
-// TODO: use a more correct width check
-if (pos.col as u16) < viewport.width {
-    let style = Style::default().add_modifier(Modifier::REVERSED);
+if (pos.col as u16) < viewport.width + view.first_col as u16
+    && pos.col >= view.first_col
+{
+    let style = Style::default()
+        .add_modifier(Modifier::REVERSED)
+        .add_modifier(Modifier::DIM);
 surface
 .get_mut(
 viewport.x + pos.col as u16,
@@ -355,7 +363,7 @@ impl EditorView {
 let info: Style = theme.get("info");
 let hint: Style = theme.get("hint");
-for (i, line) in (view.first_line..=last_line).enumerate() {
+for (i, line) in (view.first_line..last_line).enumerate() {
 use helix_core::diagnostic::Severity;
 if let Some(diagnostic) = doc.diagnostics().iter().find(|d| d.line == line) {
 surface.set_stringn(
@@ -519,7 +527,8 @@ impl EditorView {
 // count handling
 key!(i @ '0'..='9') => {
 let i = i.to_digit(10).unwrap() as usize;
-cxt.editor.count = Some(cxt.editor.count.map_or(i, |c| c * 10 + i));
+cxt.editor.count =
+    std::num::NonZeroUsize::new(cxt.editor.count.map_or(i, |c| c.get() * 10 + i));
 }
 // special handling for repeat operator
 key!('.') => {
@@ -532,11 +541,14 @@ impl EditorView {
 }
 _ => {
 // set the count
-cxt.count = cxt.editor.count.take().unwrap_or(1);
+cxt._count = cxt.editor.count.take();
 // TODO: edge case: 0j -> reset to 1
 // if this fails, count was Some(0)
 // debug_assert!(cxt.count != 0);
+// set the register
+cxt.register = cxt.editor.register.take();
 if let Some(command) = self.keymap[&mode].get(&event) {
 command(cxt);
 }
@@ -575,11 +587,12 @@ impl Component for EditorView {
 let mode = doc.mode();
 let mut cxt = commands::Context {
+register: helix_view::RegisterSelection::default(),
 editor: &mut cx.editor,
-count: 1,
+_count: None,
 callback: None,
+callbacks: cx.callbacks,
 on_next_key_callback: None,
-callbacks: cx.callbacks,
 };
 if let Some(on_next_key) = self.on_next_key.take() {


@@ -166,8 +166,8 @@ impl<T: Item + 'static> Component for Menu<T> {
 }
 // arrow up/ctrl-p/shift-tab prev completion choice (including updating the doc)
 KeyEvent {
-code: KeyCode::Tab,
-modifiers: KeyModifiers::SHIFT,
+code: KeyCode::BackTab,
+..
 }
 | KeyEvent {
 code: KeyCode::Up, ..


@@ -85,10 +85,15 @@ pub fn file_picker(root: PathBuf) -> Picker<PathBuf> {
 Err(_err) => None,
 });
-const MAX: usize = 2048;
+let files = if root.join(".git").is_dir() {
+    files.collect()
+} else {
+    const MAX: usize = 8192;
+    files.take(MAX).collect()
+};
 Picker::new(
-files.take(MAX).collect(),
+files,
 move |path: &PathBuf| {
 // format_fn
 path.strip_prefix(&root)


@@ -151,7 +151,11 @@ impl<T: 'static> Component for Picker<T> {
 code: KeyCode::Up, ..
 }
 | KeyEvent {
-code: KeyCode::Char('k'),
+code: KeyCode::BackTab,
+..
+}
+| KeyEvent {
+code: KeyCode::Char('p'),
 modifiers: KeyModifiers::CONTROL,
 } => self.move_up(),
 KeyEvent {
@@ -159,11 +163,18 @@ impl<T: 'static> Component for Picker<T> {
 ..
 }
 | KeyEvent {
-code: KeyCode::Char('j'),
+code: KeyCode::Tab, ..
+}
+| KeyEvent {
+code: KeyCode::Char('n'),
 modifiers: KeyModifiers::CONTROL,
 } => self.move_down(),
 KeyEvent {
 code: KeyCode::Esc, ..
+}
+| KeyEvent {
+code: KeyCode::Char('c'),
+modifiers: KeyModifiers::CONTROL,
 } => {
 return close_fn;
 }
@@ -177,7 +188,7 @@ impl<T: 'static> Component for Picker<T> {
 return close_fn;
 }
 KeyEvent {
-code: KeyCode::Char('x'),
+code: KeyCode::Char('h'),
 modifiers: KeyModifiers::CONTROL,
 } => {
 if let Some(option) = self.selection() {


@@ -125,13 +125,13 @@ impl<T: Component> Component for Popup<T> {
 let mut rel_x = position.col as u16;
 let mut rel_y = position.row as u16;
 if viewport.width <= rel_x + width {
-rel_x -= ((rel_x + width) - viewport.width)
+rel_x = rel_x.saturating_sub((rel_x + width).saturating_sub(viewport.width));
 };
 // TODO: be able to specify orientation preference. We want above for most popups, below
 // for menus/autocomplete.
 if height <= rel_y {
-rel_y -= height // position above point
+rel_y = rel_y.saturating_sub(height) // position above point
 } else {
 rel_y += 1 // position below point
 }


@@ -18,7 +18,7 @@ pub struct Prompt {
 pub doc_fn: Box<dyn Fn(&str) -> Option<&'static str>>,
 }
-#[derive(PartialEq)]
+#[derive(Clone, Copy, PartialEq)]
 pub enum PromptEvent {
 /// The prompt input has been updated.
 Update,
@@ -126,8 +126,21 @@ impl Prompt {
 let selected_color = theme.get("ui.menu.selected");
 // completion
-let max_col = std::cmp::max(1, area.width / BASE_WIDTH);
-let height = ((self.completion.len() as u16 + max_col - 1) / max_col);
+let max_len = self
+    .completion
+    .iter()
+    .map(|(_, completion)| completion.len() as u16)
+    .max()
+    .unwrap_or(BASE_WIDTH)
+    .max(BASE_WIDTH);
+let cols = std::cmp::max(1, area.width / max_len);
+let col_width = (area.width - (cols)) / cols;
+let height = ((self.completion.len() as u16 + cols - 1) / cols)
+    .min(10) // at most 10 rows (or less)
+    .min(area.height);
 let completion_area = Rect::new(
 area.x,
 (area.height - height).saturating_sub(1),
@@ -144,7 +157,13 @@ impl Prompt {
 let mut row = 0;
 let mut col = 0;
-for (i, (_range, completion)) in self.completion.iter().enumerate() {
+// TODO: paginate
+for (i, (_range, completion)) in self
+    .completion
+    .iter()
+    .enumerate()
+    .take(height as usize * cols as usize)
+{
 let color = if Some(i) == self.selection {
 // Style::default().bg(Color::Rgb(104, 60, 232))
 selected_color // TODO: just invert bg
@@ -152,10 +171,10 @@ impl Prompt {
 text_color
 };
 surface.set_stringn(
-area.x + 1 + col * BASE_WIDTH,
+area.x + col * (1 + col_width),
 area.y + row,
 &completion,
-BASE_WIDTH as usize - 1,
+col_width.saturating_sub(1) as usize,
 color,
 );
 row += 1;
@@ -163,9 +182,6 @@ impl Prompt {
 row = 0;
 col += 1;
 }
-if col > max_col {
-    break;
-}
 }
 }


@@ -1,6 +1,6 @@
 [package]
 name = "helix-tui"
-version = "0.0.10"
+version = "0.2.0"
 authors = ["Blaž Hrastnik <blaz@mxxn.io>"]
 description = """
 A library to build rich terminal user interfaces or dashboards


@@ -1,6 +1,6 @@
 [package]
 name = "helix-view"
-version = "0.0.10"
+version = "0.2.0"
 authors = ["Blaž Hrastnik <blaz@mxxn.io>"]
 edition = "2018"
 license = "MPL-2.0"


@@ -5,15 +5,16 @@ use std::path::{Component, Path, PathBuf};
 use std::sync::Arc;
 use helix_core::{
+history::History,
 syntax::{LanguageConfiguration, LOADER},
-ChangeSet, Diagnostic, History, Rope, Selection, State, Syntax, Transaction,
+ChangeSet, Diagnostic, Rope, Selection, State, Syntax, Transaction,
 };
 use crate::{DocumentId, ViewId};
 use std::collections::HashMap;
-#[derive(Copy, Clone, PartialEq, Eq, Hash)]
+#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
 pub enum Mode {
 Normal,
 Select,
@@ -52,6 +53,29 @@ pub struct Document {
 language_server: Option<Arc<helix_lsp::Client>>,
 }
use std::fmt;
impl fmt::Debug for Document {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Document")
.field("id", &self.id)
.field("text", &self.text)
.field("selections", &self.selections)
.field("path", &self.path)
.field("mode", &self.mode)
.field("restore_cursor", &self.restore_cursor)
.field("syntax", &self.syntax)
.field("language", &self.language)
.field("changes", &self.changes)
.field("old_state", &self.old_state)
// .field("history", &self.history)
.field("last_saved_revision", &self.last_saved_revision)
.field("version", &self.version)
.field("diagnostics", &self.diagnostics)
// .field("language_server", &self.language_server)
.finish()
}
}
 /// Like std::mem::replace() except it allows the replacement value to be mapped from the
 /// original value.
 fn take_with<T, F>(mut_ref: &mut T, closure: F)
@@ -147,7 +171,12 @@ impl Document {
 Rope::from("\n")
 } else {
 let file = File::open(&path).context(format!("unable to open {:?}", path))?;
-Rope::from_reader(BufReader::new(file))?
+let mut doc = Rope::from_reader(BufReader::new(file))?;
+// add missing newline at the end of file
+if doc.len_bytes() == 0 || doc.byte(doc.len_bytes() - 1) != b'\n' {
+    doc.insert_char(doc.len_chars(), '\n');
+}
+doc
 };
 let mut doc = Self::new(doc);
@@ -359,7 +388,7 @@ impl Document {
 success
 }
-pub fn undo(&mut self, view_id: ViewId) -> bool {
+pub fn undo(&mut self, view_id: ViewId) {
 let mut history = self.history.take();
 let success = if let Some(transaction) = history.undo() {
 self._apply(&transaction, view_id)
@@ -372,11 +401,9 @@ impl Document {
 // reset changeset to fix len
 self.changes = ChangeSet::new(self.text());
 }
-success
 }
-pub fn redo(&mut self, view_id: ViewId) -> bool {
+pub fn redo(&mut self, view_id: ViewId) {
 let mut history = self.history.take();
 let success = if let Some(transaction) = history.redo() {
 self._apply(&transaction, view_id)
@@ -389,8 +416,20 @@ impl Document {
 // reset changeset to fix len
 self.changes = ChangeSet::new(self.text());
 }
-false
 }
+pub fn earlier(&mut self, view_id: ViewId, uk: helix_core::history::UndoKind) {
+    let txns = self.history.get_mut().earlier(uk);
+    for txn in txns {
+        self._apply(&txn, view_id);
+    }
+}
+pub fn later(&mut self, view_id: ViewId, uk: helix_core::history::UndoKind) {
+    let txns = self.history.get_mut().later(uk);
+    for txn in txns {
+        self._apply(&txn, view_id);
+    }
+}
 pub fn append_changes_to_history(&mut self, view_id: ViewId) {
@@ -423,8 +462,7 @@ impl Document {
 let history = self.history.take();
 let current_revision = history.current_revision();
 self.history.set(history);
-self.path.is_some()
-    && (current_revision != self.last_saved_revision || !self.changes.is_empty())
+current_revision != self.last_saved_revision || !self.changes.is_empty()
 }
 #[inline]


@@ -1,4 +1,4 @@
-use crate::{theme::Theme, tree::Tree, Document, DocumentId, View, ViewId};
+use crate::{theme::Theme, tree::Tree, Document, DocumentId, RegisterSelection, View, ViewId};
 use tui::layout::Rect;
 use std::path::PathBuf;
@@ -9,17 +9,19 @@ use anyhow::Error;
 pub use helix_core::diagnostic::Severity;
+#[derive(Debug)]
 pub struct Editor {
 pub tree: Tree,
 pub documents: SlotMap<DocumentId, Document>,
-pub count: Option<usize>,
+pub count: Option<std::num::NonZeroUsize>,
+pub register: RegisterSelection,
 pub theme: Theme,
 pub language_servers: helix_lsp::Registry,
 pub status_msg: Option<(String, Severity)>,
 }
-#[derive(Copy, Clone)]
+#[derive(Debug, Copy, Clone)]
 pub enum Action {
 Replace,
 HorizontalSplit,
@@ -57,12 +59,17 @@ impl Editor {
 tree: Tree::new(area),
 documents: SlotMap::with_key(),
 count: None,
+register: RegisterSelection::default(),
 theme,
 language_servers,
 status_msg: None,
 }
 }
+pub fn clear_status(&mut self) {
+    self.status_msg = None;
+}
 pub fn set_status(&mut self, status: String) {
 self.status_msg = Some((status, Severity::Info));
 }
@@ -81,6 +88,12 @@ impl Editor {
 pub fn switch(&mut self, id: DocumentId, action: Action) {
 use crate::tree::Layout;
 use helix_core::Selection;
+if !self.documents.contains_key(id) {
+    log::error!("cannot switch to document that does not exist (anymore)");
+    return;
+}
 match action {
 Action::Replace => {
 let view = self.view();
@@ -91,6 +104,7 @@ impl Editor {
 let view = self.view_mut();
 view.jumps.push(jump);
+view.last_accessed_doc = Some(view.doc);
 view.doc = id;
 view.first_line = 0;
@@ -153,7 +167,7 @@ impl Editor {
 let language_server = doc
 .language
 .as_ref()
-.and_then(|language| self.language_servers.get(language));
+.and_then(|language| self.language_servers.get(language).ok());
 if let Some(language_server) = language_server {
 doc.set_language_server(Some(language_server.clone()));
@@ -194,7 +208,7 @@ impl Editor {
 let language_server = doc
 .language
 .as_ref()
-.and_then(|language| language_servers.get(language));
+.and_then(|language| language_servers.get(language).ok());
 if let Some(language_server) = language_server {
 tokio::spawn(language_server.text_document_did_close(doc.identifier()));
 }


@@ -1,5 +1,6 @@
 pub mod document;
 pub mod editor;
+pub mod register_selection;
 pub mod theme;
 pub mod tree;
 pub mod view;
@@ -10,5 +11,6 @@ new_key_type! { pub struct ViewId; }
 pub use document::Document;
 pub use editor::Editor;
+pub use register_selection::RegisterSelection;
 pub use theme::Theme;
 pub use view::View;


@@ -0,0 +1,48 @@
/// Register selection and configuration
///
/// This is a kind of specialized `Option<char>` for register selection.
/// The point is to track whether the register selection has been explicitly
/// set or not, while staying convenient by knowing the default register name.
#[derive(Debug)]
pub struct RegisterSelection {
selected: char,
default_name: char,
}
impl RegisterSelection {
pub fn new(default_name: char) -> Self {
Self {
selected: default_name,
default_name,
}
}
pub fn select(&mut self, name: char) {
self.selected = name;
}
pub fn take(&mut self) -> Self {
Self {
selected: std::mem::replace(&mut self.selected, self.default_name),
default_name: self.default_name,
}
}
pub fn is_default(&self) -> bool {
self.selected == self.default_name
}
pub fn name(&self) -> char {
self.selected
}
}
impl Default for RegisterSelection {
fn default() -> Self {
let default_name = '"';
Self {
selected: default_name,
default_name,
}
}
}


@@ -4,6 +4,7 @@ use tui::layout::Rect;
 // the dimensions are recomputed on windo resize/tree change.
 //
+#[derive(Debug)]
 pub struct Tree {
 root: ViewId,
 // (container, index inside the container)
@@ -17,11 +18,13 @@ pub struct Tree {
 stack: Vec<(ViewId, Rect)>,
 }
+#[derive(Debug)]
 pub struct Node {
 parent: ViewId,
 content: Content,
 }
+#[derive(Debug)]
 pub enum Content {
 View(Box<View>),
 Container(Box<Container>),
@@ -45,13 +48,14 @@ impl Node {
 // TODO: screen coord to container + container coordinate helpers
-#[derive(PartialEq, Eq)]
+#[derive(Debug, PartialEq, Eq)]
 pub enum Layout {
 Horizontal,
 Vertical,
 // could explore stacked/tabbed
 }
+#[derive(Debug)]
 pub struct Container {
 layout: Layout,
 children: Vec<ViewId>,
@@ -432,6 +436,7 @@ impl Tree {
 }
 }
+#[derive(Debug)]
 pub struct Traverse<'a> {
 tree: &'a Tree,
 stack: Vec<ViewId>, // TODO: reuse the one we use on update


@@ -12,6 +12,7 @@ pub const PADDING: usize = 5;
type Jump = (DocumentId, Selection); type Jump = (DocumentId, Selection);
#[derive(Debug)]
 pub struct JumpList {
     jumps: Vec<Jump>,
     current: usize,
@@ -37,20 +38,28 @@ impl JumpList {
     pub fn forward(&mut self, count: usize) -> Option<&Jump> {
         if self.current + count < self.jumps.len() {
             self.current += count;
-            return self.jumps.get(self.current);
+            self.jumps.get(self.current)
+        } else {
+            None
         }
-        None
     }

-    pub fn backward(&mut self, count: usize) -> Option<&Jump> {
-        if self.current.checked_sub(count).is_some() {
-            self.current -= count;
-            return self.jumps.get(self.current);
+    // Taking view and doc to prevent unnecessary cloning when jump is not required.
+    pub fn backward(&mut self, view_id: ViewId, doc: &mut Document, count: usize) -> Option<&Jump> {
+        if let Some(current) = self.current.checked_sub(count) {
+            if self.current == self.jumps.len() {
+                let jump = (doc.id(), doc.selection(view_id).clone());
+                self.push(jump);
+            }
+            self.current = current;
+            self.jumps.get(self.current)
+        } else {
+            None
         }
-        None
     }
 }

+#[derive(Debug)]
 pub struct View {
     pub id: ViewId,
     pub doc: DocumentId,
@@ -58,6 +67,8 @@ pub struct View {
     pub first_col: usize,
     pub area: Rect,
     pub jumps: JumpList,
+    /// the last accessed file before the current one
+    pub last_accessed_doc: Option<DocumentId>,
 }

 impl View {
@@ -69,6 +80,7 @@ impl View {
             first_col: 0,
             area: Rect::default(), // will get calculated upon inserting into tree
             jumps: JumpList::new((doc, Selection::point(0))), // TODO: use actual sel
+            last_accessed_doc: None,
         }
     }

@@ -106,7 +118,7 @@ impl View {
     /// Calculates the last visible line on screen
     #[inline]
     pub fn last_line(&self, doc: &Document) -> usize {
-        let height = self.area.height.saturating_sub(2); // - 2 for statusline
+        let height = self.area.height.saturating_sub(1); // - 1 for statusline
         std::cmp::min(
             self.first_line + height as usize,
             doc.text().len_lines() - 1,
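For readers outside the codebase, the jumplist navigation in the hunk above can be sketched as a self-contained example. Everything here is a simplified stand-in, not Helix's actual implementation: the real `Jump` pairs a `DocumentId` with a `Selection`, the real `backward` also snapshots the current selection before moving, and the `push` behavior is reduced to the minimum needed to show the cursor semantics.

```rust
// Illustrative sketch of jumplist semantics; `Jump` is simplified to usize.
type Jump = usize;

struct JumpList {
    jumps: Vec<Jump>,
    current: usize, // equals jumps.len() when sitting "past the end"
}

impl JumpList {
    fn new(initial: Jump) -> Self {
        Self { jumps: vec![initial], current: 0 }
    }

    // Simplified: always append and move past the end.
    fn push(&mut self, jump: Jump) {
        self.jumps.push(jump);
        self.current = self.jumps.len();
    }

    fn forward(&mut self, count: usize) -> Option<&Jump> {
        if self.current + count < self.jumps.len() {
            self.current += count;
            self.jumps.get(self.current)
        } else {
            None
        }
    }

    fn backward(&mut self, count: usize) -> Option<&Jump> {
        // checked_sub avoids underflow when stepping before the first jump.
        if let Some(current) = self.current.checked_sub(count) {
            self.current = current;
            self.jumps.get(self.current)
        } else {
            None
        }
    }
}

fn main() {
    let mut list = JumpList::new(10);
    list.push(20);
    list.push(30);
    // current is past the end; one step back lands on the newest jump.
    assert_eq!(list.backward(1), Some(&30));
    assert_eq!(list.backward(2), Some(&10));
    assert_eq!(list.forward(1), Some(&20));
    println!("ok");
}
```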


@@ -4,6 +4,7 @@ scope = "source.rust"
 injection-regex = "rust"
 file-types = ["rs"]
 roots = []
+auto-format = true
 language-server = { command = "rust-analyzer" }
 indent = { tab-width = 4, unit = "    " }

@@ -61,6 +62,7 @@ scope = "source.go"
 injection-regex = "go"
 file-types = ["go"]
 roots = ["Gopkg.toml", "go.mod"]
+auto-format = true
 language-server = { command = "gopls" }
 # TODO: gopls needs utf-8 offsets?

@@ -116,6 +118,15 @@ language-server = { command = "pyls" }
 # TODO: pyls needs utf-8 offsets
 indent = { tab-width = 2, unit = "  " }

+[[language]]
+name = "nix"
+scope = "source.nix"
+injection-regex = "nix"
+file-types = ["nix"]
+roots = []
+indent = { tab-width = 2, unit = "  " }
+
 [[language]]
 name = "ruby"
 scope = "source.ruby"
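The language configuration above adds an opt-in `auto-format` key for rust and go and registers nix. As a hedged sketch only, the snippet below models how such entries might be represented in an editor, with the flag defaulting to off when absent; `LanguageConfig` and `language_for` are hypothetical names, not Helix's actual config types.

```rust
// Hypothetical per-language entries mirroring the languages.toml fragment;
// auto_format defaults to false unless the entry opts in.
struct LanguageConfig {
    name: &'static str,
    file_types: &'static [&'static str],
    auto_format: bool,
}

const LANGUAGES: &[LanguageConfig] = &[
    LanguageConfig { name: "rust", file_types: &["rs"], auto_format: true },
    LanguageConfig { name: "go", file_types: &["go"], auto_format: true },
    LanguageConfig { name: "nix", file_types: &["nix"], auto_format: false },
];

// Resolve a language by file extension, as an editor might on file open.
fn language_for(ext: &str) -> Option<&'static LanguageConfig> {
    LANGUAGES.iter().find(|l| l.file_types.contains(&ext))
}

fn main() {
    assert!(language_for("rs").unwrap().auto_format);
    assert!(!language_for("nix").unwrap().auto_format);
    assert!(language_for("exs").is_none());
    println!("ok");
}
```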


@@ -17,7 +17,8 @@
  (atom_content)
  (atom_end)] @tag

-(comment) @comment
+[(comment)
+ (unused_identifier)] @comment

 (escape_sequence) @escape

@@ -50,8 +51,7 @@
    left: (identifier) @variable.parameter
    operator: _ @function
    right: (identifier) @variable.parameter)]
- (#match? @keyword "^(defp|def|defmacrop|defmacro|defguardp|defguard|defdelegate)$")
- (#match? @variable.parameter "^[^_]"))
+ (#match? @keyword "^(defp|def|defmacrop|defmacro|defguardp|defguard|defdelegate)$"))

 (call (function_identifier) @keyword
   [(call

@@ -73,8 +73,7 @@
    (_ (_ (identifier) @variable.parameter))
    (_ (_ (_ (identifier) @variable.parameter)))
    (_ (_ (_ (_ (identifier) @variable.parameter))))
-   (_ (_ (_ (_ (_ (identifier) @variable.parameter)))))]))
-  (#match? @variable.parameter "^[^_]"))
+   (_ (_ (_ (_ (_ (identifier) @variable.parameter)))))])))

@@ -134,13 +133,6 @@
   ">>"
 ] @punctuation.bracket

-[(identifier) @function.special
- (#match? @function.special "^__.+__$")]
-
-[(remote_identifier) @function.special
- (#match? @function.special "^__.+__$")]
-
-[(identifier) @comment
- (#match? @comment "^_")]

 (ERROR) @warning


@@ -0,0 +1,87 @@
(comment) @comment
[
"if"
"then"
"else"
"let"
"inherit"
"in"
"rec"
"with"
"assert"
] @keyword
((identifier) @variable.builtin
(#match? @variable.builtin "^(__currentSystem|__currentTime|__nixPath|__nixVersion|__storeDir|builtins|false|null|true)$")
(#is-not? local))
((identifier) @function.builtin
(#match? @function.builtin "^(__add|__addErrorContext|__all|__any|__appendContext|__attrNames|__attrValues|__bitAnd|__bitOr|__bitXor|__catAttrs|__compareVersions|__concatLists|__concatMap|__concatStringsSep|__deepSeq|__div|__elem|__elemAt|__fetchurl|__filter|__filterSource|__findFile|__foldl'|__fromJSON|__functionArgs|__genList|__genericClosure|__getAttr|__getContext|__getEnv|__hasAttr|__hasContext|__hashFile|__hashString|__head|__intersectAttrs|__isAttrs|__isBool|__isFloat|__isFunction|__isInt|__isList|__isPath|__isString|__langVersion|__length|__lessThan|__listToAttrs|__mapAttrs|__match|__mul|__parseDrvName|__partition|__path|__pathExists|__readDir|__readFile|__replaceStrings|__seq|__sort|__split|__splitVersion|__storePath|__stringLength|__sub|__substring|__tail|__toFile|__toJSON|__toPath|__toXML|__trace|__tryEval|__typeOf|__unsafeDiscardOutputDependency|__unsafeDiscardStringContext|__unsafeGetAttrPos|__valueSize|abort|baseNameOf|derivation|derivationStrict|dirOf|fetchGit|fetchMercurial|fetchTarball|fromTOML|import|isNull|map|placeholder|removeAttrs|scopedImport|throw|toString)$")
(#is-not? local))
[
(string)
(indented_string)
] @string
[
(path)
(hpath)
(spath)
] @string.special.path
(uri) @string.special.uri
[
(integer)
(float)
] @number
(interpolation
"${" @punctuation.special
"}" @punctuation.special) @embedded
(escape_sequence) @escape
(function
universal: (identifier) @variable.parameter
)
(formal
name: (identifier) @variable.parameter
"?"? @punctuation.delimiter)
(app
function: [
(identifier) @function
(select
attrpath: (attrpath
attr: (attr_identifier) @function .))])
(unary
operator: _ @operator)
(binary
operator: _ @operator)
(attr_identifier) @property
(inherit attrs: (attrs_inherited (identifier) @property) )
[
";"
"."
","
] @punctuation.delimiter
[
"("
")"
"["
"]"
"{"
"}"
] @punctuation.bracket
(identifier) @variable
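The `@variable.builtin` and `@function.builtin` captures in the new nix query file are gated by `#match?` predicates that test a captured identifier against a fixed alternation, plus `#is-not? local`. As an illustration only, the check the first predicate effectively performs can be sketched like this; `is_nix_builtin_variable` is a hypothetical helper, not part of Helix or tree-sitter.

```rust
// What the #match? predicate for @variable.builtin boils down to:
// membership of the captured identifier text in a fixed set.
fn is_nix_builtin_variable(ident: &str) -> bool {
    const BUILTINS: &[&str] = &[
        "__currentSystem", "__currentTime", "__nixPath", "__nixVersion",
        "__storeDir", "builtins", "false", "null", "true",
    ];
    BUILTINS.contains(&ident)
}

fn main() {
    assert!(is_nix_builtin_variable("builtins"));
    assert!(is_nix_builtin_variable("__nixVersion"));
    assert!(!is_nix_builtin_variable("myVar"));
    println!("ok");
}
```

In the real query engine the regex is anchored with `^...$`, so this whole-string set lookup is the equivalent behavior.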


@@ -0,0 +1,9 @@
indent = [
"if",
"let",
"function",
"attrset",
"list",
"indented_string",
"parenthesized"
]


@@ -1,20 +1,6 @@
-{ stdenv, pkgs }:
-
-pkgs.mkShell {
-  nativeBuildInputs = with pkgs; [
-    (rust-bin.stable.latest.default.override { extensions = ["rust-src"]; })
-    lld_10
-    lldb
-    # pythonPackages.six
-    stdenv.cc.cc.lib
-    # pkg-config
-  ];
-  RUSTFLAGS = "-C link-arg=-fuse-ld=lld -C target-cpu=native";
-  RUST_BACKTRACE = "1";
-  # https://github.com/rust-lang/rust/issues/55979
-  LD_LIBRARY_PATH="${stdenv.cc.cc.lib}/lib64:$LD_LIBRARY_PATH";
-  shellHook = ''
-    export HELIX_RUNTIME=$PWD/runtime
-  '';
-}
+# Flake's devShell for non-flake-enabled nix instances
+let
+  src = (builtins.fromJSON (builtins.readFile ./flake.lock)).nodes.flakeCompat.locked;
+  compat = fetchTarball { url = "https://github.com/edolstra/flake-compat/archive/${src.rev}.tar.gz"; sha256 = src.narHash; };
+in
+(import compat { src = ./.; }).shellNix.default