Wasp Chess Engine


The following sections give just a few details for some of the more important functions in Wasp. Note: This will probably be mostly illegible to anyone who doesn't have some prior knowledge of chess programming. A really good resource is the Chess Programming Wiki.

Board Representation

Wasp uses a board[64] array indexed by square where square a1 has an index of 0 and square h8 has an index of 63.

56 57 58 59 60 61 62 63
48 49 50 51 52 53 54 55
40 41 42 43 44 45 46 47
32 33 34 35 36 37 38 39
24 25 26 27 28 29 30 31
16 17 18 19 20 21 22 23
 8  9 10 11 12 13 14 15
 0  1  2  3  4  5  6  7
The value in board[sq] is a number representing the color and type of piece on that square. An empty square has a value of 0, white pieces have a value of 1..6 and black pieces have a value of 9..14.
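
As an illustration of this encoding (the identifiers and the exact color/type bit split below are assumptions, not Wasp's actual definitions), the piece codes can be thought of as a 3-bit piece type plus a color flag in bit 3:

/* Illustrative piece codes: empty = 0, white = 1..6, black = 9..14.
   Treating bit 3 as the color flag is an assumption for this sketch. */
enum { EMPTY = 0,
       WPAWN = 1, WKNIGHT, WBISHOP, WROOK, WQUEEN, WKING,   /* 1..6  */
       BPAWN = 9, BKNIGHT, BBISHOP, BROOK, BQUEEN, BKING }; /* 9..14 */

#define PIECE_COLOR(p) (((p) & 8) ? 1 : 0)  /* 0 = white, 1 = black */
#define PIECE_TYPE(p)  ((p) & 7)            /* 1 = pawn .. 6 = king */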

Wasp makes extensive use of 64-bit integer "bitboards" where each bit represents a square on the chessboard. Square a1 is represented by the LSB of the bitboard and square h8 by the MSB. Wasp has an array of bitboards called PieceBB[2][7] where PieceBB[white][0] has bits set for the squares of all white pieces, PieceBB[white][1] has bits set for the squares of all white pawns, and so on. The board[] array and the piece bitboards are updated whenever a move is made or unmade.
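
As a rough sketch of how the two representations stay in sync (a minimal illustration, not Wasp's actual make-move code; the color/type packing is assumed), moving a piece updates board[] and toggles the from/to bits in the relevant bitboards:

typedef unsigned long long BITBOARD;   /* assumed 64-bit bitboard type */

/* Illustrative update for a quiet move; captures, promotions, and castling
   need extra bookkeeping.  type is 1..6 for pawn..king, color is 0 or 1. */
void MovePiece(int board[64], BITBOARD PieceBB[2][7],
               int color, int type, int fsq, int tsq)
{
  BITBOARD fromto = (1ULL << fsq) | (1ULL << tsq);
  board[fsq] = 0;                    /* origin square is now empty   */
  board[tsq] = (color << 3) | type;  /* assumed color/type encoding  */
  PieceBB[color][0]    ^= fromto;    /* all pieces of this color     */
  PieceBB[color][type] ^= fromto;    /* bitboard for this piece type */
}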

Attack Generation

Upon startup, Wasp initializes bitboards such as

BITBOARD PawnRays[2][64], KnightRays[64], BishopRays[64], RookRays[64], KingRays[64],
         RayN[64], RayS[64], RayE[64], RayW[64], RayNE[64], RaySW[64], RayNW[64], RaySE[64],
         Between[64][64];
to assist in quick computation of attacks for a given type of piece on a given square. For example, to find all squares attacked by a knight on g5, a simple lookup of KnightRays[sq_g5] gives a bitboard with bits set for squares e4, f3, h3, h7, f7, and e6. Attacks for pawns and kings are done by similar lookups. Sliders are a bit harder since some squares may be blocked by other pieces. To generate attacks for sliders Wasp uses the simple Classical Approach as shown below.
BITBOARD GenAttacksB(SQUARE sq, BITBOARD occ)
{
  return(BishopRays[sq]
         ^ RayNW[GetLSB((RayNW[sq] & occ) | 0x8000000000000000ULL)]
         ^ RayNE[GetLSB((RayNE[sq] & occ) | 0x8000000000000000ULL)]
         ^ RaySW[GetMSB((RaySW[sq] & occ) | 0x0000000000000001ULL)]
         ^ RaySE[GetMSB((RaySE[sq] & occ) | 0x0000000000000001ULL)]);
}
This is probably slower than methods such as magic bitboards, but the calls to GenAttacksB() and GenAttacksR() represent a very small fraction of the execution time for Wasp, so I haven't bothered to work on this. Also, I like the simplicity of this approach.
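
By analogy, the rook version would combine the orthogonal rays; the sketch below shows how GenAttacksR() might look given the tables above (the actual function may differ in details). North and east rays run toward higher square indices, so the first blocker is found with GetLSB; south and west rays use GetMSB.

BITBOARD GenAttacksR(SQUARE sq, BITBOARD occ)
{
  return(RookRays[sq]
         ^ RayN[GetLSB((RayN[sq] & occ) | 0x8000000000000000ULL)]
         ^ RayE[GetLSB((RayE[sq] & occ) | 0x8000000000000000ULL)]
         ^ RayS[GetMSB((RayS[sq] & occ) | 0x0000000000000001ULL)]
         ^ RayW[GetMSB((RayW[sq] & occ) | 0x0000000000000001ULL)]);
}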

Checking whether a rook on square "fsq" attacks a piece on square "tsq" does not require calculating all rook attacks from "fsq". A simple statement

BITBOARD blocked = (Between[fsq][tsq] & occupied);
is all that is needed to see whether the ray between the two squares is blocked.
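
A small hedged sketch of how such a test might be wrapped (the helper name is illustrative, not Wasp's):

/* A rook on fsq attacks tsq if tsq lies on a rook ray from fsq and no
   occupied square sits between the two squares. */
int RookAttacksSquare(SQUARE fsq, SQUARE tsq, BITBOARD occupied)
{
  if (!(RookRays[fsq] & (1ULL << tsq)))        /* squares not rook-aligned */
    return 0;
  return ((Between[fsq][tsq] & occupied) == 0); /* ray is clear            */
}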

Move Generation

Moves are generated for a piece on square fsq by first getting the attack bitboard for fsq and then extracting each of the destination squares. For example, bishop moves are generated like this:

BITBOARD bb = (GenAttacksB(fsq,occ) & target);
while (bb)
  {
    SQUARE tsq = PopLSB(&bb);
    AddMove(fsq,tsq);
  }
For generating captures, the target bitboard in the above code snippet contains only the squares occupied by enemy pieces. For generating quiet moves the target bitboard contains only empty squares.
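
For instance, assuming PieceBB[side][0] holds all pieces of a side (the variable names enemy and occ below are placeholders), the two target sets could be formed like this:

BITBOARD occ     = PieceBB[white][0] | PieceBB[black][0]; /* all occupied squares    */
BITBOARD capture = PieceBB[enemy][0];                     /* targets for captures    */
BITBOARD quiet   = ~occ;                                  /* targets for quiet moves */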

Pawn non-captures, en passant, and castling moves require a little special handling but are also efficiently generated using bitboards.

Wasp stores a move in a 16-bit integer where bits 0-5 contain the destination square, bits 8-13 contain the origin square, and bits 6, 7, 14, and 15 are used for the promotion piece type, or the type of castling move if it is a king move. Wasp has an "extended move" structure containing a 16-bit move, a 16-bit sort score, and 8-bit integers for the moving and captured piece types.
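
A minimal sketch of packing and unpacking such a move word, assuming only the layout of bits 0-5 and 8-13 described above (the macro names are illustrative, and the use of bits 6, 7, 14, and 15 is left out):

typedef unsigned short MOVE16;   /* assumed 16-bit move type */

#define MAKE_MOVE(fsq, tsq)  ((MOVE16)(((fsq) << 8) | (tsq)))
#define MOVE_TO(m)           ((m) & 0x3F)          /* bits 0-5  */
#define MOVE_FROM(m)         (((m) >> 8) & 0x3F)   /* bits 8-13 */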

Move ordering is extremely important in reducing the size of the search tree. Wasp assigns sort scores for captures using the "most valuable victim / least valuable aggressor" (MVV-LVA) heuristic and applies a static exchange evaluator (SEE) to detect losing captures. Wasp uses killer and refutation move tables and a fairly simple history table to assign sort scores to quiet moves. These tables are updated each time a new best move is found in the search.
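
As a hedged illustration of MVV-LVA scoring (the values and the function are not Wasp's actual code), a capture's sort score can be built so that the victim's value dominates and the attacker's value breaks ties:

/* Piece values for ordering only; the king never appears as a victim. */
static const int PieceVal[7] = { 0, 1, 3, 3, 5, 9, 10 };  /* -,P,N,B,R,Q,K */

int MvvLvaScore(int attacker_type, int victim_type)
{
  /* more valuable victims first; among equal victims, cheaper attackers first */
  return(16 * PieceVal[victim_type] - PieceVal[attacker_type]);
}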

Moves are generated in stages in order to speed up the search: if a move from an early stage gives a cutoff, then no more moves need to be generated. The move generator returns moves in the following order: hash move, winning captures and promotions, quiet moves with a decent sort score, losing captures, and quiet moves with a poor sort score. The best untried move is selected and returned to the search until a cutoff occurs or all moves have been generated and tried.
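
A sketch of what the staged flow could look like (the stage names are illustrative, not Wasp's):

/* Each stage is generated and tried only if no earlier stage produced a cutoff. */
enum MoveStage {
  STAGE_HASH_MOVE,        /* move from the hash table, no generation needed */
  STAGE_GOOD_CAPTURES,    /* winning captures and promotions                */
  STAGE_GOOD_QUIETS,      /* quiet moves with a decent sort score           */
  STAGE_LOSING_CAPTURES,  /* captures that SEE says lose material           */
  STAGE_BAD_QUIETS        /* remaining quiet moves with poor sort scores    */
};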

Search

Wasp uses a fairly conventional iteratively-deepened alpha-beta negamax search with a quiescence search (only captures, checks, and promotions are tried). To reduce the size of the search tree (at the expense of occasionally missing a critical move), Wasp utilizes many of the currently popular pruning and reduction techniques.

Wasp increases the depth by one ply after checks, pawn push to the 7th rank, and capture of the last remaining piece. For some reason, the popular Singular Extension does not seem beneficial for Wasp.
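
A minimal sketch of that extension logic, assuming the three condition flags are computed elsewhere (they are placeholders, not Wasp's actual variables):

int ComputeExtension(int gives_check, int pawn_push_to_7th, int captures_last_piece)
{
  if (gives_check || pawn_push_to_7th || captures_last_piece)
    return 1;     /* search this move one ply deeper */
  return 0;
}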

Wasp uses the "lazy SMP" approach where up to 160 threads are launched simultaneously to search the root position using a shared hash table. Each thread performs its own iterative search starting at depth 1, but if enough threads are already searching a given iteration, a thread will skip that iteration.
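
A rough sketch of the iteration-skipping idea, assuming a shared counter of how many threads are working on each depth (all names are placeholders, and real code would need atomic updates):

for (int depth = 1; depth <= max_depth; depth++)
{
  if (iterating[depth] >= enough_threads && depth < max_depth)
    continue;                /* enough helpers already on this iteration */
  iterating[depth]++;
  SearchRoot(depth);         /* all threads share one hash table */
  iterating[depth]--;
}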

Position Evaluation

Starting with Wasp 5.00, position evaluation is done using a neural network. Wasp's NN consists of 1572 input nodes feeding a hidden layer of 1536 nodes, which in turn feeds 7 output nodes. There are 2x6x64 inputs for friendly and enemy pieces on each of the 64 squares and another 2x6x64 inputs if the kings are on opposite flanks. The file for each white piece is mirrored if the white king is on the queenside. Black pieces are mirrored similarly. If the friendly side is black (it is black's turn to move), the ranks for both white and black pieces are mirrored. For example, a black pawn on a7 with black to move would have the same weights as a white pawn on a2 with white to move. There are 36 inputs for the difference in material between friendly and enemy pieces. The material is computed using the traditional 1, 3, 3, 5, 9 values for pawn..queen. For a given position, one of the 7 outputs is selected based on the amount of material on the board. For example, in the early opening output 0 is used, while for king and pawn endgames output 6 is used. "Leaky ReLU" activation is used for the hidden layer and sigmoid activation is used for the output.
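
A hedged sketch of the output selection step; basing the bucket on non-pawn material and these particular thresholds is an assumption made purely for illustration:

int SelectOutputBucket(int nonpawn_material)  /* 3,3,5,9 values, both sides */
{
  static const int lower_bound[6] = { 50, 40, 30, 20, 12, 6 };  /* made-up thresholds */
  for (int bucket = 0; bucket < 6; bucket++)
    if (nonpawn_material >= lower_bound[bucket])
      return bucket;         /* bucket 0 = early opening */
  return 6;                  /* e.g. king-and-pawn endgames */
}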

To create and train a new network, the weights are initially assigned random values between -1 and 1. A set of about 220M positions taken from over 20M games between Wasp and other engines of similar strength is used for training. Most of these games were played at ultra-fast time controls such as Game in 10 seconds with a 150 millisecond increment. Positions are extracted randomly from each game and Wasp does a depth 4 search and saves the position at the end of the principal variation along with the game result. Next, Wasp searches each of these positions with a short search and saves the position, search score, and game result to a file. For training, a target value is calculated based on a weighted average of the game result and the search score. During training, positions are randomly selected from the training data and evaluated by the NN. The error between the NN eval and the target score is calculated and propagated backward to adjust the network weights by a small amount determined by the learn_rate. This process is repeated for about 6M "mini-batches" of 30K positions each. During training, 4-byte floating point weights and node values are used. It takes about 2-3 days to train a new network using 20 threads. The NN weights are converted to 16-bit integers and embedded into the Wasp binary. I am hoping that the neural network will continue to improve if I increase the number of training positions, maybe using more FRC games or games from different opening books. I also hope that using a search with newer and stronger neural net evaluation to score the training positions will continue to improve playing strength.
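
As a sketch of the target calculation, assuming the search score is first squashed to a win probability (lambda and the scaling constant are made-up placeholders, not Wasp's training parameters):

#include <math.h>

double TrainingTarget(double game_result,   /* 0.0 = loss, 0.5 = draw, 1.0 = win */
                      double search_score,  /* score in pawns                    */
                      double lambda)        /* weight given to the game result   */
{
  double score_prob = 1.0 / (1.0 + exp(-search_score / 2.0));  /* sigmoid */
  return lambda * game_result + (1.0 - lambda) * score_prob;
}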

For normal play Wasp uses the embedded 16-bit integer weights and integer math to calculate the node values. To speed up calculation, a list of inputs and the corresponding hidden-layer node vector is cached after every evaluation. When a position is evaluated, instead of computing the hidden-layer node vector from scratch, the cached hidden-layer vector is copied and then updated based on the difference in piece locations between the current board and the cached board. Functions for vector copy, add, subtract, and dot-product use SIMD (single instruction, multiple data) instructions to process up to 16 calculations simultaneously.
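
A rough sketch of the incremental update, assuming row-major input-to-hidden weights and 16-bit accumulators (the layout and names are assumptions, and the real code would use the SIMD helpers mentioned above):

#define HIDDEN 1536

/* Adjust a cached hidden-layer vector for the inputs that changed between
   the cached board and the current board. */
void UpdateHidden(short hidden[HIDDEN], const short weights[][HIDDEN],
                  const int *added, int n_added,
                  const int *removed, int n_removed)
{
  for (int i = 0; i < n_added; i++)
    for (int j = 0; j < HIDDEN; j++)
      hidden[j] += weights[added[i]][j];
  for (int i = 0; i < n_removed; i++)
    for (int j = 0; j < HIDDEN; j++)
      hidden[j] -= weights[removed[i]][j];
}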

Wasp has gained over 300 Elo by moving from a hand-crafted eval to the NN eval!

Hash Table

Wasp uses a "hash table" where the score, best move, and depth are stored after a position is searched. At each position in the search tree Wasp first probes the hash table. If the current position is found in the hash table and the depth of the hashed entry is sufficient, Wasp may be able to take a cutoff without searching this node. Otherwise, it will search the best move from the hash table first, which helps to reduce the size of the subtree. When storing a position in the hash table, Wasp calculates an index from some of the 64 bits of the current position hash, then probes four consecutive entries starting at that index and replaces the least useful of those entries based on the depth and "age" of the entries. When probing, Wasp calculates the index for the current position, retrieves up to 4 consecutive entries, and checks them for a full hash signature match. To minimize the chance of collisions when using multi-threaded search, Wasp uses the "lockless" hash signature approach developed by Bob Hyatt. I found out the hard way from crashes in several games at TCEC (Top Chess Engine Competition) that the move from the hash table must be fully verified, since an entry can be corrupted if more than one thread tries to write to the same hash table location.
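
The lockless idea can be sketched as follows: the stored key is XORed with the data word, so a probe that passes the signature check is guaranteed to see data that was written together with that key. The entry layout here is illustrative, not Wasp's actual format.

typedef unsigned long long U64;

typedef struct { U64 key_xor_data; U64 data; } HashEntry;

void HashStore(HashEntry *e, U64 key, U64 data)
{
  e->key_xor_data = key ^ data;
  e->data = data;
}

int HashProbe(const HashEntry *e, U64 key, U64 *data)
{
  U64 d = e->data;
  if ((e->key_xor_data ^ d) != key)
    return 0;     /* signature mismatch or torn write: ignore this entry */
  *data = d;
  return 1;
}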

Opening Book

Wasp can use polyglot ".bin" books. The command line "wasp -mkbook pgnfile" will read games from file "pgnfile" and output a polyglot book called "tmp_book.bin". The UCI options "OwnBook", "OwnBook_File", "OwnBook_Depth", "OwnBook_Variety", "OwnBook2_File", and "OwnBook2_Depth" control which book(s) are used, the depth of use, and the randomness of the book move selection.

The book file in the Downloads section called "wasp_book_20200514.bin" was created from a large PGN file of games which mostly came from computer-computer matches using Frank Quisinsky's FEOBOS opening book. The moves are weighted using the formula:

 weight = 4*nwins - 2*nlosses + ndraws;
This is intended to be a "general" book with good variety rather than a narrow, deep book for competition.

Endgame Tablebases

Wasp uses the Syzygy Tablebases created by Ronald de Man and the "Pyrrhic" library developed by Basil Falcinelli, Jon Dart, and Andrew Grant to probe these tables. The "rtbz" (distance to zero) tables are accessed only at the root so that Wasp can know which move(s) give the shortest path to a win or draw. The "rtbw" (win,draw,loss) tables are accessed during the search at all positions where depth >= 0 and #pieces <= 7.




Last revised: February 28, 2023