# Development

## Requirements

I believe some of these requirements make the project unfit for development under Windows & PowerShell.

I'll be happy to update the setup to work there if an interested contributor is willing to drive this change. (And I'm willing to pair program on this.)

On the other hand, the Windows Subsystem for Linux (WSL) should do just fine.

## Sublime Syntaxes

I cannot say much here that isn't better explained in the official documentation.

More precisely, the targeted interpreter for the syntax is syntect, a Rust implementation of Sublime Syntaxes used by bat. For all intents and purposes, the Sublime Syntax documentation for v3.2 also applies to syntect.

Both Sublime Syntaxes and syntect use the Oniguruma regex engine, which follows this specification. (No need to read through this though, just keep it as reference.)

### This syntax

You don't have to build or compile the syntax. The testing scripts take care of that, isolated within a Docker container. You only need to check the test results.

You should take a look at the Principles doc. It contains guidelines on the syntax's scope, coding style, etc.

I owe an Architecture doc on how the syntax is structured. In the meantime, if you get confused by how things work, ask me and I shall explain. (And I'll use that session to write the doc!)

You can also configure your IDE to highlight `.sublime-syntax` files like YAML code. For instance, I have this in my VS Code settings:

```json
"files.associations": {
    "*.sublime-syntax": "yaml"
},
```

## Testing

Tests in this project are cheap as fuck (in time & space), so don't hesitate to run them often and add new ones as required.

The project uses two classes of tests:

- syntax tests: much like unit tests
- regression tests: catch-all integration tests

There are a number of command-line utils I use to streamline development, tests included. You can load them with `$ source scripts/utils`

### Syntax tests

These are assertion-based. They live in `tests/syntax/`. You can learn how they work from the official documentation.
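To give a flavor: a syntax test is a plain source file whose first line declares the syntax under test, followed by comment lines asserting the scopes of the characters above them. A hypothetical sketch (the syntax path and scope names here are made up for illustration, not this project's actual scopes):

```
# SYNTAX TEST "Packages/cmd-help/cmd-help.sublime-syntax"
Usage: foo [--verbose] FILE
# <- keyword.other.usage
#           ^^^^^^^^^ constant.other.option
```

The `# <-` form asserts the scope of the character above the comment punctuation, while the `^` carets assert the scopes of the characters directly above them.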

You can run them with `$ tests/syntax.py`

Syntax tests should always pass. If your changes break an assertion, update that assertion within the same commit.

If you expect the tests to pass, it's better to run them with the summary option: `$ tests/syntax.py -s`

Syntax tests are also useful for debugging the workings of the syntax line by line. If you can't make heads or tails of why something doesn't work, run `$ tests/syntax.py -d tests/syntax/<TEST_FILE>`

`scripts/utils` also contains a util to run with debug and pipe the output to `less`: `$ debug tests/syntax/<TEST_FILE>`
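I haven't reproduced the script here, but judging by the description, `debug` is likely a thin wrapper along these lines (a hypothetical sketch, not the actual `scripts/utils` code):

```shell
# Hypothetical sketch of the `debug` util:
# run the syntax tests in debug mode and page the (colored) output.
debug() {
  tests/syntax.py -d "$1" | less -R
}
```

So `debug tests/syntax/<TEST_FILE>` would amount to running the `-d` command above and paging the result.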

The `-d` output is a bit hard to grasp unless you understand Sublime Syntaxes very well (not my case), so have patience and ask for help if you get stuck.

Lastly, there's a util to create a syntax test from a `tests/source/` help message: `$ mksyn tests/source/<SOURCE_FILE>`

As a rule of thumb, if there's a syntax test for a command, it should also be in `tests/source/`.

Sublime syntax regexes can get quite obscure, but syntax test changes are a quick & easy reference for the implications of a syntax change. For that reason, I like to pair both within the same commit, so the former serves as an example of the latter.

### Highlight regression tests

There are a bunch of samples (help messages from actual commands) in `tests/source/`.

Regression tests take all these samples and run them through bat + cmd-help, storing the result (syntax-highlighted text) in `tests/highlighted/`.

Run them with `$ tests/highlight_regression`

This script runs the entire suite of highlight regression tests. They usually take < 5s, but it depends on how fast Docker + bat update the syntax theme.

They're good for quickly validating whether your change breaks existing functionality.

#### Displaying regression test differences

Because the result files in `tests/highlighted/` contain ANSI color escape sequences (e.g. `ESC[0m`), they should be opened with `less -R`.
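To see why a plain pager mangles these files, you can reproduce the effect with a scratch file (the `/tmp` path is just an example location):

```shell
# Write a tiny file wrapping a word in ANSI color escape sequences
printf '\033[31mred\033[0m plain\n' > /tmp/sample.ansi

# cat -v shows the raw escape bytes (ESC renders as ^[):
cat -v /tmp/sample.ansi        # → ^[[31mred^[[0m plain

# less -R interprets the sequences and renders the colors instead:
# less -R /tmp/sample.ansi
```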

To check highlight regression changes with respect to the git index, load `scripts/utils` and run `$ reg`

There are a few more utils based on `reg`:

- For the regression diff between the index and the last commit (so staged changes), do `$ regs`
- For the regression diff between your branch and main, do `$ regm`
- To show the regression diff from a commit (HEAD by default), do `$ regshow`
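I haven't checked the implementations, but going by the descriptions above, the `reg` family is probably a set of thin `git diff` wrappers, roughly like this (hypothetical sketch; the exact flags and paths are assumptions):

```shell
# Hypothetical sketches of the reg-family utils from scripts/utils
reg()     { git diff -- tests/highlighted/ | less -R; }           # worktree vs index
regs()    { git diff --staged -- tests/highlighted/ | less -R; }  # index vs last commit
regm()    { git diff main... -- tests/highlighted/ | less -R; }   # branch vs main
regshow() { git show "${1:-HEAD}" -- tests/highlighted/ | less -R; }
```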

#### Syntax tests > regression tests to illustrate changes

After a big syntax change that causes many highlight test files to update, it's OK not to include them all in the same commit as the syntax change.

Instead, I focus on a syntax test example, or include just 2-3 representative regression tests in the commit. Then I update the rest of the highlight tests in a follow-up commit.

For instance, see commit 6b0171b and its follow-up d158798.

### Theme regression tests

These track the syntax's coverage of the themes included with bat. Their motivation is documented in Principles.md.

You probably don't need to look into these unless you change the scopes that we assign to tokens. If you do, do follow the guidelines in the Principles doc.

You can run them with `$ tests/theme_regression`

It runs a synthetic help message through bat + cmd-help, twice for each theme: with and without italics enabled. It then stores the results in `tests/theme_regression/`, deleting the italics version when it makes no difference.

Everything I mentioned about highlight_regression applies here; just replace `highlight` with `theme`.

As for committing scope name changes, I bundle the syntax change with the `tests/theme/` changes and leave the syntax + highlight test changes for a follow-up commit.

For example, see commit 47a0e8e and its follow-up 4689def.

## Finding pending tasks

I use comments in syntax tests as to-do markers, to indicate pending work and/or known issues.

The keywords I use are, in order of severity/priority: `fixme`, `todo`, `nice`, `nit`, `wontfix`. For instance, `#todo: handle option patterns like this`.

If you search for these keywords in the project, you will find a lot of pending tasks.
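For instance, a plain `grep -rnE` turns them up. The sketch below greps a scratch directory so it's self-contained; in the repo you'd point it at `tests/` or the syntax file itself:

```shell
# Create a scratch file carrying a to-do marker, as found in syntax tests
mkdir -p /tmp/marker-demo
printf '#todo: handle option patterns like this\n' > /tmp/marker-demo/sample

# List pending-task markers with file name and line number
grep -rnE '(fixme|todo|nice|nit|wontfix):' /tmp/marker-demo
# → /tmp/marker-demo/sample:1:#todo: handle option patterns like this
```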

I should probably turn these into GitHub Issues, but I haven't had the chance to do it yet. You're welcome to lend a hand with this if you want!

There's also a Roadmap doc that defines some long-term goals for the project. Look there if you want to break entirely new ground.

### Spotting issues in regression tests

The corpus of highlighted help messages in `tests/highlighted/` is also a great place to look for pending work:

- do `$ less -R tests/highlighted/*`
- iterate through the files with `:n` (next) and `:p` (previous)
- scroll through the highlighted text and look for tokens that should (not) be colorized

## Sample development workflow

1. Search pending tasks with the to-do marker keywords. Choose one.
2. Update the assertion(s) related to that task, negating them so they fail.
3. Make whatever changes to the syntax to try and fix that test case.
4. Run `tests/syntax.py` to check that the assertion now passes without breaking anything else.
5. Run `tests/highlight_regression` to ensure the changes don't have unintended consequences in the larger body of help messages.
6. Repeat steps 3-5 until you get it right.
7. Commit, early and often!
8. Run `tests/theme_regression` to check that you didn't break support for any theme.
9. Submit a Pull Request.

## Chores

There are some How-To documents in the project's wiki explaining how to perform some maintenance tasks.

## Contact

Contact me (@victor-gp) if you have trouble getting started, get stuck during development, or want to ask for direction.

You can find my email address in the commit messages.