Design: AI (General Notes)
This page contains general notes on AI intended to inform our design philosophy. As far as I know, none of us are AI designers, so we're learning as we go.
One may ask: AI is so advanced nowadays, can't we just find an AI program and turn it loose on our game? I believe the answer is "no" for several reasons. One is that it would have to be able to interact with the game, e.g. knowing which buttons are which. Another is that we haven't added any goal conditions (you can't win yet). A third is that the game has a combinatorial explosion of possibilities. A fourth is that it would have to play an extremely large number of games to figure out which strategies work, and we don't have any supercomputers. If it were that easy, Civilization VI would probably have taken a similar approach and shipped with a better AI.
With that out of the way, this page contains notes on AI design, some of which may be decades old but still helpful, since the fundamental building blocks in some cases carry forward. The approach of "study old things first" is not just to get those fundamental building blocks, but a recognition that the things modern AI is good at - big data, identifying birds in photographs, displaying ads that are more likely to result in a human buying something - are not necessarily the things that will help it win a strategy game. Even where AI is good at winning a strategy game, such as chess, the game is different from ours.
Search Space / Limiting Search
First, some terminology. The search space is the set of all possible states that could be explored. The search graph or search tree is the portion of that search space that has actually been explored (Barr, p. 26).
In chess, the search space is the set of all positions reachable from the starting position. In our game, it's the set of all possible games resulting from one starting configuration. Even on a tiny (60x60) map, you can see that this is an astronomically large space - not just which cities are settled where, but which units and improvements are built, which tiles are worked, and where units are moved. Chess has an estimated 10^120 possible plays in an average-length game (Shannon, 1950, 1956), and our game will have more. We can't explore all the possibilities.
Instead, we must limit search. Several high-level possibilities exist to help with this.
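Before getting to those, here is a minimal sketch of what "limiting search" means in practice: cap how far ahead we look, and fall back on a heuristic evaluation at the cutoff instead of searching to the end of the game. Everything below is a toy stand-in, not C7 code; `legal_moves`, `apply`, and `evaluate` are hypothetical placeholders.

```python
# Illustrative sketch only: depth-limited search with an evaluation cutoff.
# The "game" here is a toy stand-in; none of these names are C7 APIs.

def legal_moves(state):
    # Toy: from any state we can "invest" in science, gold, or production.
    return ["science", "gold", "production"]

def apply(state, move):
    # Toy: a state is just a dict of accumulated yields.
    new_state = dict(state)
    new_state[move] = new_state.get(move, 0) + 1
    return new_state

def evaluate(state):
    # Heuristic estimate of how good a state is, used when we stop searching.
    return state.get("science", 0) * 3 + state.get("gold", 0) + state.get("production", 0) * 2

def depth_limited_search(state, depth):
    """Return (score, first_move) for the best line found within `depth` plies."""
    if depth == 0:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in legal_moves(state):
        score, _ = depth_limited_search(apply(state, move), depth - 1)
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

print(depth_limited_search({}, 3))  # e.g. (9, 'science')
```

The key trade-off this shows: the shallower the cutoff, the more weight rests on the evaluation heuristic being a good judge of positions. The approaches below are ways of making that trade-off smarter.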
Planning
Planning is "the sense of defining and solving problems in a search space from which unimportant details have been omitted. The details of the solution are filled in (by smaller searches within the more detailed space) only after a satisfactory outline of the solution, or plan, has been found" (Barr, p. 28).
This fits in well with our plan to have a Strategic Priority AI. That part of the AI's responsibility is to figure out at the highest level what its goal is. The planning approach suggests that a logical next step to improving it would be for the Strategic Priority AI to figure out a rough outline of that plan. If it identifies its goal as "eliminate these three barbarian camps in order", that may be a satisfactory outline. If it identifies its goal as "secure a source of iron, then a source of horses", that may again be an outline. The details would be filled in by a military AI that finds the units to accomplish those goals, or perhaps by a city AI if the goal is to pursue scientific research.
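As a rough sketch of how that hand-off could be structured (the class and goal names here are hypothetical illustrations, not existing C7 classes), the strategic level produces an ordered outline of goals with the details deliberately left out, and a lower-level AI fills them in:

```python
# Illustrative sketch only: a strategic-level plan as an ordered outline of goals,
# with the details delegated to lower-level AIs. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Goal:
    kind: str        # e.g. "clear_barb_camp", "secure_resource", "research"
    target: str      # e.g. a camp location or a resource name

class StrategicPriorityAI:
    def outline_plan(self, situation):
        # High level: decide *what* to do, and in what order, without unit-level detail.
        if situation.get("barb_camps"):
            return [Goal("clear_barb_camp", camp) for camp in situation["barb_camps"]]
        return [Goal("secure_resource", "iron"), Goal("secure_resource", "horses")]

class MilitaryAI:
    def execute(self, goal):
        # Lower level: fill in the details (which units, which paths) for one goal.
        print(f"Assigning units to {goal.kind}: {goal.target}")

strategic = StrategicPriorityAI()
military = MilitaryAI()
for goal in strategic.outline_plan({"barb_camps": ["(12, 7)", "(30, 4)", "(41, 22)"]}):
    military.execute(goal)
```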
Heuristic Knowledge
To be added: heuristics
This will be important for pathing (especially on larger maps), but likely other areas as well.
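As a placeholder illustration of what heuristic knowledge buys us in pathing, here is a toy A* sketch using a Manhattan-distance heuristic to steer the search toward the goal instead of expanding tiles in every direction. The square grid and all names are assumptions for the example, not C7's map code.

```python
# Illustrative sketch only: A* pathfinding on a toy square grid with a
# Manhattan-distance heuristic. C7's real map is more complex than this.

import heapq

def a_star(start, goal, passable, width, height):
    """Return a list of (x, y) steps from start to goal, or None if unreachable."""
    def heuristic(pos):
        # Admissible estimate of remaining cost: never overestimates on a 4-way grid.
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and passable((nx, ny)):
                step = (nx, ny)
                heapq.heappush(frontier, (cost + 1 + heuristic(step), cost + 1, step, path + [step]))
    return None

# Toy usage: a 5x5 map where column x=2 is impassable except for a pass at (2, 4).
blocked = {(2, 0), (2, 1), (2, 2), (2, 3)}
print(a_star((0, 0), (4, 0), lambda p: p not in blocked, 5, 5))
```

The same idea generalizes beyond pathing: a good heuristic lets us expand only the promising parts of the search space rather than all of it.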
Problem Representations
Barr (p. 27) quotes a great example of problem representations:
Suppose two diagonally opposite corner squares are removed from a standard 8 by 8 square chessboard. Can 31 rectangular dominoes, each the size of exactly two squares, be so placed as to cover precisely the remaining board? (Raphael, 1976, p. 31)
The straightforward, but inefficient, approach is to start placing dominoes in various configurations. But as Barr notes, if one realizes that each domino must cover one black and one red square, and that the two removed corners are the same color, it becomes immediately apparent that the answer is "no". This is an example of representing the problem differently.
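To make the shift in representation concrete, the whole problem reduces to counting colors; a quick sketch (purely illustrative, board coordinates are my own convention):

```python
# Illustrative sketch only: re-representing the mutilated-chessboard problem as a
# count of square colors instead of a search over domino placements.

def color_counts(removed):
    """Count the black and red squares remaining after removing the given squares."""
    black = red = 0
    for x in range(8):
        for y in range(8):
            if (x, y) in removed:
                continue
            if (x + y) % 2 == 0:
                black += 1
            else:
                red += 1
    return black, red

# Diagonally opposite corners share a color, so the counts come out unbalanced.
black, red = color_counts({(0, 0), (7, 7)})
print(black, red)  # 30 32 -- each domino covers one of each, so 31 dominoes can't fit
```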
Barr notes that "Unfortunately, little theory exists about how to find good problem representations," and cites Amarel (1968) for more on the topic. Still, it is a good reminder that if we're struggling to get a performant AI (in either results or time efficiency), it may help to consider whether there's another way to look at the problem.
Bibliography
Barr = The Handbook of Artificial Intelligence, Barr & Feigenbaum
If not listed above, the citations refer to papers written by the authors in the given year.