Shellac Protocol Proposal

(Back to Shell Autocompletion)

This is a DRAFT. Don't circulate yet!

May 2019: Shellac Protocol Proposal V2

Shellac Protocol

Shellac is a protocol for shell-agnostic autocompletion. Shells and command line tools written in any language can communicate with each other.

Motivation

The status quo is that you can only expect upstream authors to maintain autocompletions for bash, the most popular shell in the world.

Shellac is a simple protocol that aims to change this dynamic. The author of a CLI tool can easily implement it, and their completions will work in all shells that are Shellac clients.

The author of a shell can implement Shellac and get many common completions "for free". (These may be basic bash-style completions, or more elaborate zsh/fish-style ones.)

In addition, existing corpuses of completion logic like the bash-completion project, the zsh core, and zsh-completions can be wrapped in this protocol, and reused by alternative shells like Oil or Elvish.

Overview

Roughly speaking, Shellac plays the same role for shells as the Language Server Protocol does for editors, but it looks more like CGI or FastCGI.

Shellac clients request completions, and Shellac servers provide them.

  • A client is typically a shell like Elvish, ZSH, Oil.
    • It could also be an editor that's editing a shell script! (Vim, Emacs, VS Code, etc.)
    • Clients know how to find server binaries, send them requests, and parse their replies.
  • A server could be the binary itself (git, npm, clang) OR a shell!
    • That is, the completion logic could be written in C, JavaScript, or Python -- or it could be written in Elvish, ZSH, or Oil (or a compleat-like DSL).
    • So note that shells are both clients and servers. They may request completions or they may provide them.
    • Servers have logic about the syntax of specific commands. They may shell out to additional binaries.
    • (You can also call a server that runs in single-shot "batch" mode a provider.)

Rough Example 1

Let's use the example of busybox ash, which is derived from the dash code. I've heard some people complain that you have to use bash on Alpine Linux to get completions, because ash/dash have no support for it. The Shellac protocol potentially provides a migration path out of that situation.

Type this in ash:

$ git --git-dir . a<TAB>

ash will act as a Shellac client. It forms a request that looks something like this (encoding to be discussed):

{ "SHELLAC_ARGV": ["git", "--git-dir", ".", "a"]
  "SHELLAC_ARGV_INDEX": 3,
  "SHELLAC_CHAR_INDEX": 1,
}

ash just needs a way of associating a command with a binary that supports the Shellac protocol. It doesn't need its own completion API.

It invokes the Shellac server/provider. Servers come in two flavors: SHELLAC_MODE=batch and SHELLAC_MODE=coprocess:

  • batch starts and stops a process every time you hit <TAB>, like complete -C in bash.
  • coprocess maintains a persistent process that reads and writes from pipes.

In this case, let's say we have a batch provider. It can just be the bash interpreter itself running git-completion.bash! We should be able to write a tiny wrapper shcomp_provider.bash that adapts between the bash completion API and the Shellac protocol.
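Here's a rough sketch of what such a wrapper could look like, assuming the batch framing discussed under "Request format" below (NUL-delimited argv on stdin, scalar fields in the environment). The install path of git-completion.bash and the name of its entry point vary by version, so treat every detail as an assumption:

#!/bin/bash
# shcomp_provider.bash -- hypothetical adapter from the Shellac protocol to the
# bash completion API.  Sketch only; a real adapter needs more care
# (COMP_LINE, COMP_POINT, quoting, error handling, ...).

# Read the NUL-delimited argv from stdin.
argv=()
while IFS= read -r -d '' word; do
  argv+=("$word")
done

# Populate the variables that bash completion functions expect.
COMP_WORDS=("${argv[@]}")
COMP_CWORD=${SHELLAC_ARGV_INDEX:-$(( ${#argv[@]} - 1 ))}

# Load the stock git completion (path is distro-dependent).
source /usr/share/bash-completion/completions/git

COMPREPLY=()
__git_main    # entry point; named _git in older versions of git-completion.bash

# Reply: each candidate as a NUL-terminated string.
printf '%s\0' "${COMPREPLY[@]}"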

The response is:

{ "candidates": ["add", "am", "annotate", "apply", "archive"] }

ash displays these alternatives to the user.

NOTE: I've written the protocol like JSON, but the encoding will most likely not be JSON.

Rough Example 2

Like the above, but perhaps Clang decides to implement Shellac. Then you have ash invoking Clang itself, not ash invoking bash.

Request format

SHELLAC_* environment variable prefix.

SHELLAC_ARGV@, SHELLAC_ARGV_INDEX, SHELLAC_CHAR_INDEX ?

problem: environment variables can't contain NUL bytes, so they can't carry the argv array directly. Maybe the array comes on stdin then? Can bash deal with that? (See the sketch after this list.)

  • read -d $'' ? Yes -- read -r -d '' reads up to the next NUL byte.

  • $SHELLAC_VERSION environment variable for detection.

  • $SHELLAC_MODE=batch, or coprocess, or even JSON-RPC. Perhaps text editors that already use the Language Server Protocol will want to use JSON-RPC. I think the xi-editor uses JSON-RPC.
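To make those questions concrete, here's one possible batch-mode framing: scalar fields go in the environment, and the argv array goes on stdin as NUL-terminated fields, which bash can read back with read -r -d '' or mapfile -d ''. All of this is illustrative, not decided:

# environment:
#   SHELLAC_VERSION=0.1   SHELLAC_MODE=batch
#   SHELLAC_ARGV_INDEX=3  SHELLAC_CHAR_INDEX=1
#
# stdin (NUL-delimited argv), which the client can produce with printf:
printf '%s\0' git --git-dir . a    # git\0--git-dir\0.\0a\0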

Response format

Types of responses:

  • {"candidates": ["doc", "doc2"]}
  • {"candidates": [ {"value": "--all", "description": "list all"}, ... ]}
  • candidate stream
  • {"compgen": {"what": "file", "prefix": "w"}} -- delegate back

Status/Progress communication

(from Ilya Sher)

There needs to be status/progress communication. During the response-building phase, imagine the completion server needs to talk to all AWS regions (even if it does so in parallel, it is not fast). It would be nice to have something like "Listing instances. Found X instances in R out of RR regions". Why all regions? It's a real use case: there is an ec2din.ngs script with an --allreg switch.

We can go further, to the semantic level. Status would be an arbitrary string. Progress can be more structured, such as "X out of Y items done". This more structured approach lets shells display the information in a meaningful way, maybe as a progress bar.

The server should also be able to communicate ETA (I think this is less important).
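One possible shape for such a message, in the same JSON-like notation used elsewhere in this doc (every field name here is illustrative, nothing is settled):

{ "status": "Listing instances",
  "progress": {"done": 3, "total": 16, "unit": "regions"},
  "eta_seconds": 40
}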

Needs thinking and discussion.

Response Encoding

  • netstrings are out because bash can't easily compute the byte length of a string (${#s} counts characters, not bytes).
  • We don't want newline-delimited strings, because newlines can appear in filenames: touch $'\n'.
  • So we use NUL-delimited strings. Maybe we also have a length prefix for the array count, i.e. ${#COMPREPLY[@]}.
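For example, a bash-based provider could emit its reply like this (the length prefix is just one option under discussion):

printf '%s\0' "${#COMPREPLY[@]}"    # optional length prefix: the array count
printf '%s\0' "${COMPREPLY[@]}"     # each candidate, NUL-terminated
                                    # (an empty COMPREPLY would need a special case)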

Rich Completions

The request and response format have a JSON-like data model, so ZSH-like descriptions can also be returned:

ls --a
--all                                      -- list entries starting with .
--almost-all                               -- list all except . and ..
--author                                   -- print the author of each file

This display corresponds to a response like:

{ "candidates": [
    {"value": "--all", "desc": "list entries starting with ."},
    ...
  ]
}

This kind of structured data should handle the following:

  • Per-match descriptions
  • Grouping of matches
  • Per-group descriptions
  • Sorted/unsorted groups
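For instance, grouped matches might look something like this (field names are illustrative only):

{ "groups": [
    { "desc": "long options", "sorted": true,
      "candidates": [
        {"value": "--all",        "desc": "list entries starting with ."},
        {"value": "--almost-all", "desc": "list all except . and .."}
      ]
    },
    { "desc": "files", "sorted": false,
      "candidates": [ {"value": "README.md"}, {"value": "Makefile"} ]
    }
  ]
}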

Delegating Back to the Shell For Rich Completions

Filename completion could be fuzzy or case-insensitive. Instead of returning candidates, the completion server can specify a type of completion:

{ "compgen": { "what": "files", "prefix": "RE" }}  # complete files beginning with RE

{ "compgen": { "what": "dirs", "prefix": "foo/testdata/c" }}  # complete dirs

This is similar to a bash completion function invoking compgen. It's user-defined code delegating back to the shell.
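In bash terms, a client could satisfy those two delegations roughly like this (a sketch; whether the mapping is exactly compgen's is not decided):

compgen -f -- 'RE'               # {"what": "files", "prefix": "RE"}
compgen -d -- 'foo/testdata/c'   # {"what": "dirs",  "prefix": "foo/testdata/c"}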

Other ZSH Like Features

(from Oliver Kiddle)

  • auto-remove: diff --col gets completed to diff --color=, but you might want to press space and remove the =.

    • this might also remove commas and trailing slashes
    • hm not sure I like this feature. problem: bash redraws the prompt a lot.
  • completion of part of a word:

    • for diff --color=a<TAB> -C1, auto is a suggestion that replaces a, not the whole word.
    • but why not just complete --color=auto then?
    • TODO: how does this relate to the fact that readline redraws the entire command line?
      • if we send over an argv array and not a shell string, how does this get handled?
    • how about we just say that only a whole word is completed?
  • color highlighting: I think anything that happens on every keypress is out of scope for Shellac?

    • we mostly care about things that happen when you hit TAB.

Modes

  • CLI providers - stdin, environment variables, stdout
  • Coprocess providers
  • Maybe later: JSON-RPC like the language server protocol. I don't necessarily see the need for multi-threaded servers, but we'll see.
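A rough picture of the coprocess flavor, using bash's coproc as the client (the provider name is made up, and some per-request framing -- a length prefix or sentinel -- is still needed so the server knows where each request ends):

# Start a persistent provider once per session.
coproc GIT_SHELLAC { SHELLAC_VERSION=0.1 SHELLAC_MODE=coprocess ./git_shellac_provider; }

# On each <TAB>, write one request to its stdin ...
printf '%s\0' git --git-dir . a >&"${GIT_SHELLAC[1]}"

# ... and read the NUL-delimited reply from its stdout.
while IFS= read -r -d '' c <&"${GIT_SHELLAC[0]}"; do
  [[ -z $c ]] && break        # empty string as a possible end-of-reply sentinel
  candidates+=("$c")
done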

Character Encodings

Shellac clients and servers should prefer UTF-8 where possible. But file system paths are often the things being completed, and they are just byte strings. So technically most of the strings in the request and response format are NUL-terminated byte strings, and UTF-8 is a special case of that.

Dispatch

  • Should this be done via the file system, or in the shell itself with registration functions?
    • complete -C git_completion_command git already registers a command. It could be complete -S for Shellac.

Typical Client Algorithm

  • Partially parse the shell language into argv, perhaps also performing variable and tilde substitution. The last argv entry may be incomplete or empty. (TODO: does it make sense to complete in the middle?)
  • Dispatch to the right binary that implements Shellac
  • Start it up with SHELLAC_VERSION=0.1 to make sure it supports the protocol.
  • Send over ARGV, as NUL-terminated strings. Maybe an array length prefix.
  • Receive a response.
  • Quote the candidates into shell syntax -- e.g. ${x@Q} in bash -- and then display them to the user.
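A rough bash sketch of the last two steps (mapfile -d '' needs bash 4.4+; send_shellac_request is a hypothetical helper that does the dispatch and writes the request):

# Collect the NUL-delimited reply into an array.
mapfile -t -d '' candidates < <(send_shellac_request git --git-dir . a)

# Quote each candidate into shell syntax before displaying it.
for c in "${candidates[@]}"; do
  printf '%s\n' "${c@Q}"
done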

Typical Server Algorithm

  • Check if you were started with SHELLAC_VERSION=<non-empty>.
  • Check if you were started with SHELLAC_MODE=batch or SHELLAC_MODE=coprocess and behave as appropriate.
  • Receive ARGV.
  • Determine completions. Example strategies:
    • Run an existing command line parser or use its data structures to figure out what we need to complete
    • Dynamically grep --help output (or a cached copy of it); the bash-completion project already does this kind of grepping (sketched after this list).
  • Send back a response header?
  • Send back REPLY
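A very rough sketch of a batch server written in shell, using the --help-grepping strategy (the framing, the flag regex, and the use of mapfile -d '' -- bash 4.4+ -- are all illustrative, not specified):

#!/bin/bash
# Hypothetical batch-mode Shellac server (sketch only).

[[ -n $SHELLAC_VERSION ]] || exit 2       # not invoked via the protocol
[[ $SHELLAC_MODE == batch ]] || exit 2    # coprocess mode not handled here

mapfile -t -d '' argv                     # NUL-delimited ARGV from stdin
cur=${argv[$SHELLAC_ARGV_INDEX]}

# Strategy: scrape long options out of the command's --help output.
flags=$("${argv[0]}" --help 2>/dev/null | grep -Eo -- '--[A-Za-z-]+' | sort -u)

# Reply: the matching candidates, NUL-delimited.
for c in $(compgen -W "$flags" -- "$cur"); do
  printf '%s\0' "$c"
done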

Design and Implementation Issues

  • Shells should NOT consult a Shellac completion server for $<TAB> and ${<TAB>. They should complete their own variables!
  • If you have something like ls $(echo long-time; sleep 100) --ref=<TAB>, then the $(echo ...) part can be replaced with DUMMY before sending it to the completion server.
  • What about tilde expansion? That can be done beforehand? Or the completion provider has to know about it?
  • Are the key-value pairs in arbitrary order?

Streaming Responses

  • Low latency for shells is important. A user might want to accept a completion before all candidates are generated (e.g. from a distributed file system or cloud storage service). So we need to support streaming.

  • Instead of length-prefixed arrays, we can have arrays terminated by sentinels. The sentinel could just be an additional \0 byte? That is like the empty string.
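For example, a streaming provider could emit candidates as it finds them and finish with an empty string (one extra NUL byte) as the sentinel; this is just one possible convention:

# Provider side: stream candidates, then terminate with an empty string.
printf '%s\0' add am annotate apply    # emitted as they are discovered
printf '\0'                            # sentinel: empty string ends the array

# Client side: stop at the first empty candidate.
while IFS= read -r -d '' c; do
  [[ -z $c ]] && break
  candidates+=("$c")
done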

Security

  • To prevent resource exhaustion attacks, shells may truncate long strings.
  • Completion servers can be sandboxed since they only communicate over stdin and stdout.

Why Coprocesses?

For low-latency responses. Process startup time is significant, especially for Python, Ruby, the JVM, Julia, etc.

Why not Multithreaded Servers?

  • Because most CLI tools use global variables, making this difficult.
  • Because shells need to modify global process state, like the file descriptor state, calling wait(), etc. It would be very difficult to have two threads each running a shell interpreter, both calling wait(). Single-threaded is more robust and easier to implement.

Why not put one completion per line?

Because touch $'\n' breaks that protocol.

Risks

  • If there are N different completion servers, does that lead to an inconsistent user experience?

    • This can be somewhat mitigated by delegating back to the shell for more behavior (e.g. completion of filenames).
  • What about deployment of completions? Instead of zsh or bash scripts, they're now arbitrary code in other languages. This could lead to greater requirements for sandboxing.

    • On the other hand, if the completion is packaged with the binary, it leads to FEWER deployment problems and fewer versioning problems.
  • Maybe we can have a Shellac option for static descriptions, like --help or --helpxml.

Related
