
Command Line Usage

For the most convenient usage, it is recommended to use an external GUI (e.g. Cute Chess, XBoard, or WinBoard) if possible.

However, to check if the engine is working properly, or for instance to run a position analysis, you can use the command line interface of the engine instead.

To start the engine from the command line, open a new command-line window in the same directory as the engine executable (e.g. on Windows: navigate to the release folder -> SHIFT+Right-Click -> Open Shell here).

Now you can start the engine (Linux: ./CrazyAra, Windows: .\CrazyAra.exe, Python version: python crazyara.py).

After CrazyAra's welcome banner, you can use

uci

to show all available UCI options.
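
If you prefer to script this handshake instead of typing it, the same commands can be sent over standard input. Below is a minimal sketch in Python; the engine path ./CrazyAra is an assumption, so adjust it for your platform. It launches the engine and prints everything up to the uciok reply defined by the UCI protocol.

```python
import subprocess

# Launch the engine; the path is an assumption -- use .\CrazyAra.exe on
# Windows or the Python version (python crazyara.py) instead.
engine = subprocess.Popen(
    ["./CrazyAra"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

engine.stdin.write("uci\n")          # ask for engine identity and options
engine.stdin.flush()

while True:
    line = engine.stdout.readline()  # includes the welcome banner
    print(line, end="")
    if line.startswith("uciok"):     # UCI-defined end of the option list
        break
```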

To modify a specific option, use e.g.:

setoption name UCI_Variant value crazyhouse

The default options should suffice in most cases, and you are free to skip this step. To load the neural network into memory, use the command:

isready

If the neural network was loaded successfully, it will reply:

readyok
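
In a script, the option and loading steps can be automated the same way. The sketch below (Python, with the ./CrazyAra path again an assumption) sets the variant and blocks until readyok arrives.

```python
import subprocess

# Launch the engine; the path is an assumption (see the sketch above).
engine = subprocess.Popen(["./CrazyAra"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)

def send(cmd):
    engine.stdin.write(cmd + "\n")
    engine.stdin.flush()

send("setoption name UCI_Variant value crazyhouse")
send("isready")                        # triggers the neural-network load
while True:
    line = engine.stdout.readline()    # skip banner lines until readyok
    if line.startswith("readyok"):     # network is in memory, engine ready
        print("engine is ready")
        break
```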

By default, CrazyAra uses the regular starting position of the configured UCI_Variant. To set up a specific game position in Forsyth–Edwards Notation (FEN), use:

position fen rnbqkb1r/pppp1ppp/5n2/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3

Alternatively, you can define a position by a series of moves:

position startpos moves e2e4 e7e5

To search a position for a given time, up to a certain depth, or for a given number of nodes, use the go command:

go movetime <milliseconds>
go depth <ply-depth>
go nodes <nodes>
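
Combining the position and go commands in a scripted session looks like the following sketch (assuming the ./CrazyAra binary as above). It sets up a position from moves, searches it for one second, and extracts the bestmove reply defined by the UCI protocol.

```python
import subprocess

# Launch the engine; the path is an assumption (see the sketches above).
engine = subprocess.Popen(["./CrazyAra"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)

def send(cmd):
    engine.stdin.write(cmd + "\n")
    engine.stdin.flush()

def wait_for(prefix):
    while True:
        line = engine.stdout.readline()
        print(line, end="")            # echo banner and "info ..." lines
        if line.startswith(prefix):
            return line

send("isready")
wait_for("readyok")                    # make sure the network is loaded
send("position startpos moves e2e4 e7e5")
send("go movetime 1000")               # search for 1000 ms
best = wait_for("bestmove")
print("engine plays:", best.split()[1])
```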

To start an infinite search, use:

go infinite

The search can also be stopped using:

stop
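
Scripted, the infinite/stop pair looks like the sketch below (again assuming the ./CrazyAra binary); per the UCI protocol, a stopped search still answers with a final bestmove line.

```python
import subprocess, time

# Launch the engine; the path is an assumption (see the sketches above).
engine = subprocess.Popen(["./CrazyAra"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)

def send(cmd):
    engine.stdin.write(cmd + "\n")
    engine.stdin.flush()

send("isready")
while not engine.stdout.readline().startswith("readyok"):
    pass                               # skip banner lines until readyok

send("position startpos")
send("go infinite")
time.sleep(1)                          # think for ~1 s; for long searches,
                                       # drain stdout in a separate thread
                                       # so the pipe buffer cannot fill up
send("stop")
while True:
    line = engine.stdout.readline()
    if line.startswith("bestmove"):    # final answer after "stop"
        print(line.strip())
        break
```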

Additional non-standard UCI commands

The following non-standard UCI commands are available in releases >= 0.6.0:

| Command | Description | Required Build Parameter | Supported Versions |
| --- | --- | --- | --- |
| `root` | Shows a list of the available moves with their respective visits (n), initial probability (p), and evaluation (q). By default, all moves are ordered by the initial network policy without any search. The p (policy/probability) entry tells how likely a certain move is to be explored during search. If the UCI options Centi_Dirichlet_Epsilon and Centi_Node_Temperature are both set to zero, you get the raw neural network probabilities, unaffected by modifications or external noise (noise and temperature scaling are used to encourage exploration). The column n tells how often each move was explored; the move with the most visits becomes the best move after the search by default. The column q scores each move from -1 (lost) to +1 (won) from the perspective of the player to move and is the average of all value evaluations of the respective subtree. Moves with higher q-values are explored more often and therefore often end up with the most visits. | - | ≥0.6.0 |
| `benchmark <X>` | Runs a benchmark for `<X>` milliseconds on a predefined list of positions. Afterwards, a summary of the average nodes per second (nps) and average search depth is given. | - | ≥0.6.0 |
| `selfplay <N>` | Generates `<N>` new games in self-play mode using the currently loaded model. The training samples are exported to the directory `data_<device>_<device_id>.zarr`. If `<N>` == 0, games are generated until all samples of a single export file are filled; the number of samples per file is Selfplay_Number_Chunks * Selfplay_Chunk_Size, both defined as UCI options. | -DUSE_RL=ON | ≥0.7.0 |
| `arena <N>` | Loads the contender model from the directory specified in Model_Directory_Contender and runs `<N>` comparison matches against the currently loaded model. Afterwards, a summary is given and either keep or replace is returned; replace suggests that the contender network should become the new generating network. | -DUSE_RL=ON | ≥0.7.0 |
| `match <1_TYPE> <1_MODEL> <2_TYPE> <2_MODEL> <N>` | Lets you test two models against each other; an extension of the arena command that allows two different agents or models to be specified. The command takes five parameters: parameters 1 and 2 define the first agent by its type `<1_TYPE>` (default 0: MCTS) and its model folder `<1_MODEL>`; parameters 3 and 4 define the second agent analogously; `<N>` is the number of matches to play. The models must be placed in folders next to the binary following the scheme m1, m2, m3, ... Example: match 0 1 0 2 100 plays 100 matches between agent 1 (model in folder m1) and agent 2 (model in folder m2), both using the standard MCTS agent. | -DUSE_RL=ON | ≥1.0.1 |
| `tournament <N> <1_TYPE> <1_MODEL> <2_TYPE> <2_MODEL> <N_TYPE> <N_MODEL>` | An extension of the arena command to compare multiple (>2) agents as easily as possible: a round-robin tournament is played between all defined agents. The syntax is similar to the match command: the first parameter `<N>` defines the number of games played between each pair of agents; afterwards, any number (>2) of agents can be passed as (type, folder) tuples. Example: tournament 100 0 1 0 2 0 3 0 4 0 5 defines five agents that all use the same agent type (0: MCTS, default) but different models (folders m1 to m5); with 10 pairings of 100 games each, a total of 1000 games are played. | -DUSE_RL=ON | ≥1.0.1 |
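
As a quick sanity check of the tournament example above, the total number of games follows directly from the number of round-robin pairings, as this short Python computation shows:

```python
# A round robin between k agents plays C(k, 2) pairings of n games each.
from math import comb

k, n = 5, 100                  # 5 agents, 100 games per pairing
print(comb(k, 2) * n)          # -> 1000 games in total
```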

More information about the UCI protocol can be found in the UCI protocol specification.
