
Update generated neovim config

2024-09-22 20:41:25 +02:00
parent 1743764e48
commit aa1271c42c
1247 changed files with 26512 additions and 15067 deletions

@@ -95,7 +95,7 @@ If your contribution updates annotations used to generate help file, please rege
## Testing
If your contribution updates code and you use Linux (not Windows or MacOS), please make sure that it doesn't break existing tests. If it adds new functionality or fixes a recognized bug, add new test case(s). There are two ways of running tests:
If your contribution updates code, please make sure that it doesn't break existing tests. If it adds new functionality or fixes a recognized bug, add new test case(s). There are two ways of running tests:
- From command line:
- Execute `make test` to run all tests (with `nvim` as executable).
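For orientation, here is a hedged sketch of running tests interactively from inside Neovim with 'mini.test'; the exact file name passed to `MiniTest.run_file()` is a placeholder, and setup details live in the repository's own scripts.

```lua
-- Minimal sketch, assuming the current directory is the repository root
-- and 'mini.test' is available on 'runtimepath'
require('mini.test').setup()

-- Run the whole suite (by default test files are collected from 'tests/')
MiniTest.run()

-- Run only one test file; the file name here is a placeholder
MiniTest.run_file('tests/test_statusline.lua')
```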
@@ -107,11 +107,13 @@ If your contribution updates code and you use Linux (not Windows or MacOS), plea
This plugin uses 'mini.test' to manage its tests. For a more hands-on introduction, see [TESTING.md](TESTING.md).
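To make this concrete, below is a minimal hedged sketch of what a 'mini.test' test file can look like; the tested module ('mini.surround'), the init script path, and the case names are illustrative assumptions rather than code from this repository (see TESTING.md for the real conventions).

```lua
-- Illustrative 'mini.test' file sketch; module name and init path are assumptions
local new_set = MiniTest.new_set
local expect = MiniTest.expect

-- Run cases inside a separate child Neovim process for isolation
local child = MiniTest.new_child_neovim()

local T = new_set({
  hooks = {
    -- Restart child process before every case to get a clean state
    pre_case = function()
      child.restart({ '-u', 'scripts/minimal_init.lua' })
      child.lua([[require('mini.surround').setup()]])
    end,
    -- Stop child process after all cases are done
    post_once = child.stop,
  },
})

T['setup()'] = function()
  -- Calling setup() should create a global table with the module's functionality
  expect.equality(child.lua_get([[type(_G.MiniSurround)]]), 'table')
end

return T
```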
**Notes**:
- If you use Windows or MacOS and want to contribute a code related change, make your best effort to not break existing behavior. It will later be tested automatically after making a Pull Request. The reason for this distinction is that tests are not well designed to be run on those operating systems.
- If new functionality relies on an external dependency (`git` CLI tool, LSP server, etc.), use mocking (writing Lua code which emulates dependency usage as closely as reasonably possible). For examples, take a look at tests for 'mini.pick', 'mini.completion', and 'mini.statusline'; a minimal mocking sketch is also included after this list.
- A certain number of tests are flaky (i.e. they will sometimes report an error for reasons other than the actual functionality being broken). These are usually the ones which test time related functionality (i.e. that a certain action was done after a specific amount of delay).
A common way to recognize a flaky test is that it fails on a non-nightly Neovim version even though there were no changes to its tested module since it last passed. For example, some 'mini.animate' test is shown to break although there were no changes to it since the test passed in CI a couple of days earlier.
If a test breaks when it reasonably should not, rerun that test (or the whole file) at least several times.
- Advice for writing more robust tests:
- To test asynchronous or slow execution, use the common `sleep()` test helper. For more robust testing code, **never** use raw numbers directly to compute sleep time. Use precomputed time delay constants, which should always take different testing OSs into account (for example, be bigger on Windows). If module testing requires its extensive use and tests cannot be made robust enough (examples are 'mini.animate', 'mini.jump', etc.), consider using it with an argument that skips the entire test case if `sleep()` is called in a slow context.
- Take into account that Windows uses "\" as the default path separator instead of Unix "/". This should be accounted for either in the module's code (preferably) or in test files (for example, by computing the path separator and relying on it), as sketched below.
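As a loose illustration of the mocking advice above, here is a hedged sketch. It assumes the module under test fetches CLI output through `vim.fn.systemlist`, which is an assumption about the implementation, and `child` is the child Neovim process created in the test file (as in the earlier sketch).

```lua
-- Hedged mocking sketch: replace the call into the external `git` CLI with canned
-- output inside the child process (assumes the module uses `vim.fn.systemlist`)
local mock_git_files = function(child)
  child.lua([[
    vim.fn.systemlist = function(cmd)
      -- Emulate `git ls-files` without requiring `git` to be installed
      if type(cmd) == 'table' and cmd[1] == 'git' then
        return { 'lua/init.lua', 'README.md' }
      end
      return {}
    end
  ]])
end
```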
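A second hedged sketch covers the robustness advice itself; the constant names, the concrete millisecond values, and the blocking `sleep()` implementation are illustrative, not the ones used in this repository.

```lua
-- Precomputed delay constants instead of raw numbers, scaled per testing OS
local is_windows = vim.loop.os_uname().sysname == 'Windows_NT'
local small_time = is_windows and 60 or 20 -- illustrative values
local animation_time = 10 * small_time

-- The common `sleep()` helper mentioned above, modeled here as a blocking wait
local sleep = function(ms) vim.loop.sleep(ms) end

-- Compute the path separator once and rely on it when building expected paths
local sep = package.config:sub(1, 1)
local ref_path = table.concat({ 'tests', 'dir-tests', 'file.lua' }, sep)

-- Example usage inside a test case
sleep(animation_time)
```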
## Formatting
@@ -226,6 +228,7 @@ Here is a list of all highlight groups defined inside 'mini.nvim' modules. See d
- `MiniPickBorder`
- `MiniPickBorderBusy`
- `MiniPickBorderText`
- `MiniPickCursor`
- `MiniPickIconDirectory`
- `MiniPickIconFile`
- `MiniPickHeader`