# Minimax algorithm is too slow

Hello guys!
I’ve implemented a minimax algorithm for my game with depth-limited search (“DLS”) and transposition tables, but it is suspiciously slow, even on a 3x3 board.

The goal of the game is to get the grid to an order which is determined at the start of the game.

Square is a class which represents a cell in the grid and checks for invalid switch attempts.

SwitchGrid is a function which switches 2 cells in the grid.

SwitchStorage is a class which stores 2 cell positions (used for the transposition table).

tTable is the transposition table.

notPossible can be ignored.

Here is my code for the minimax function. Can you catch any errors I did?

```csharp
public void InitZobrist(int[,] grid)
{
    System.Random rand = new System.Random();

    int cols = Conversion.GetCols(grid);
    int rows = Conversion.GetRows(grid);

    // One random number per (column, row, cell value) triple.
    ZobristTable = new int[cols][][];
    for (int i = 0; i < cols; i++)
    {
        ZobristTable[i] = new int[rows][];
        for (int j = 0; j < rows; j++)
        {
            ZobristTable[i][j] = new int[grid.Length];
            for (int k = 0; k < grid.Length; k++)
            {
                ZobristTable[i][j][k] = rand.Next(0, int.MaxValue);
            }
        }
    }
}

public long GetZobristKey(int[,] grid)
{
    if (ZobristTable == null)
        InitZobrist(grid);

    long hash = 0;
    for (int i = 0; i < grid.GetLength(0); i++)
    {
        for (int j = 0; j < grid.GetLength(1); j++)
        {
            int index = grid[i, j];
            hash ^= ZobristTable[i][j][index];
        }
    }
    return hash;
}

int GetBestMove(int[,] gridA, int[][] notPossible)
{
    // Iterative deepening: try increasing depth limits until a solution is found.
    for (int depth = 0; depth < 15; depth++)
    {
        if (GetBestMoveDLS(gridA, notPossible, depth) == 10)
        {
            return depth;
        }
    }
    return -1;
}

int GetBestMoveDLS(int[,] grid, int[][] notPossible, int depth)
{
    if (GetCloseToFinish(grid) == 1) // game end
    {
        return 10;
    }

    if (depth <= 0)
        return 0;

    long hash = GetZobristKey(grid);
    if (tTable.ContainsKey(hash))
    {
        int pos1 = tTable[hash].pos1;
        int pos2 = tTable[hash].pos2;

        int score = GetBestMoveDLS(LevelCreator.SwitchGrid(grid, pos1, pos2), notPossible, depth - 1);
        return score;
    }

    for (int i = 0; i < grid.Length; i++) // from 0 -> end
    {
        for (int j = (i + 1); j < grid.Length; j++)
        {
            if (!Square.CanSwitch(i, j, grid, notPossible)) continue; // invalid switch attempt

            int score = GetBestMoveDLS(LevelCreator.SwitchGrid(grid, i, j), notPossible, depth - 1);
            if (score == 10)
            {
                return 10;
            }
        }
    }
    return 0;
}
```

That is quite a lot of complicated code to follow. But it’s also incomplete (for example, it isn’t obvious what `Square`, `SwitchGrid` or `SwitchStorage` do).

I’m not sure from reading it all exactly what the goal is or the overview of the method you are using.

However, if you can create a 2x2 grid, it should be possible to step through that simple scenario in the debugger. Maybe you will find, for example, that you are checking the same grid settings multiple times.

You are right, it wasn’t clear what the code meant. I edited the post and added some definitions and comments to the code.

So is this a sliding tile puzzle solver?

If so, create a 2x2 grid with the puzzle being rigged as one single move away from the solution. Then step through your code in the debugger. Does the code find that single move very quickly? If so, then try setting the puzzle two moves from the solution and repeat the testing process.

That way, you should be able to determine if it is causing itself an unnecessarily heavy calculation load to work through.

Another possible way to approach this is to try using the profiler. Is there one particular part of the logic that is taking the lion’s share of the overall time? Maybe that could help offer a clue as to what’s happening.

Also, if this is a sliding puzzle then, rather than thinking of the board as being a 2D set of tiles individually checked in nested loops, think of it as a ‘hole’ that has a limited set of possible moves.

If the hole is at a given x,y position, it will have, at most, 4 possible moves (irrespective of grid size). Sometimes, it will have only 3, or even 2, possible moves. In terms of calculation efficiency, that will greatly reduce the number of tests that have to be performed.
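As a sketch of that idea (the `MovesFor` helper below is a made-up name for illustration, not part of the posted code), enumerating the hole’s candidate moves directly would look something like this:

```csharp
using System;
using System.Collections.Generic;

class HoleMoves
{
    // Enumerate the legal moves for the hole at (x, y) on a w x h board:
    // at most 4 (up/down/left/right), fewer on edges and corners.
    public static List<(int, int)> MovesFor(int x, int y, int w, int h)
    {
        var moves = new List<(int, int)>();
        if (x > 0) moves.Add((x - 1, y));
        if (x < w - 1) moves.Add((x + 1, y));
        if (y > 0) moves.Add((x, y - 1));
        if (y < h - 1) moves.Add((x, y + 1));
        return moves;
    }

    static void Main()
    {
        Console.WriteLine(MovesFor(1, 1, 3, 3).Count); // centre: 4
        Console.WriteLine(MovesFor(0, 0, 3, 3).Count); // corner: 2
        Console.WriteLine(MovesFor(1, 0, 3, 3).Count); // edge: 3
    }
}
```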

However, in this particular case, I would still expect a brute-force algorithm working over a 3x3 grid to complete fairly quickly on modern hardware.

No, it’s not a sliding tile puzzle, but it is similar: you can swap each tile with adjacent tiles (including diagonals).

So this is a 3x3 grid with each cell occupied (no sliding hole). All cells can be swapped in any one of (up to) 8 directions with a neighbouring cell. The objective is to get from a given start grid layout to a given target grid layout. Is that correct?

If so, then on the first round, the number of permutations to check is:

• 4 corners, each with 3 possible moves.
• 2 middle edges each with 5 possible moves (minus the 2 already covered by step 1).
• The remaining 2 middle edges each have 1 possible swap (with the middle).
• The middle does not need a specific check as it is covered by the 3 previous steps.

For the first move, that’s: (4 * 3) + (2 * (5 - 2)) + (2 * 1) = 20 permutations.
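That count of 20 can be double-checked by brute force. A quick sketch (my own code, not from the posted project) that enumerates every unordered pair of cells on the 3x3 board and keeps the adjacent ones, diagonals included:

```csharp
using System;

class PairCount
{
    // Count unordered pairs of cells on a 3x3 board that are adjacent,
    // including diagonals (i.e. the legal swaps available on the first move).
    public static int CountAdjacentPairs()
    {
        int count = 0;
        for (int a = 0; a < 9; a++)
        {
            for (int b = a + 1; b < 9; b++)
            {
                int dx = Math.Abs(a % 3 - b % 3); // column distance
                int dy = Math.Abs(a / 3 - b / 3); // row distance
                if (dx <= 1 && dy <= 1)
                    count++;
            }
        }
        return count;
    }

    static void Main()
    {
        Console.WriteLine(CountAdjacentPairs()); // prints 20
    }
}
```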

With the 2nd move, you now have these 20 permutations to check. As each has been arrived at by 1 particular move, that means this round, for each of the 20 permutations, there are only 19 moves to check.

So I’m thinking (maybe incorrectly) that this is a factorial scenario. And given you seem to have a depth limit of 15, my maths is telling me that the overall number of permutations to check is, therefore, 20! - 5!. However, that number is so phenomenally large (2 million trillion!), my logic / maths has to be off somewhere!!

I have just tested on my laptop, a simple for loop counting to one trillion in a Start() method. It took 36 seconds. So to count up to 2 million times that is around 833 days by my reckoning.

So when you say it is running slow … is it running at 833 days per frame type of slow??? (I am presuming not.)

Well, I’m afraid so… (well, not 833 days)…
Do you think there is a better algorithm for that?

Is there really no way to simplify the problem? Even just removing the ability for diagonal moves will drastically reduce the number of permutations.

If this problem really can’t be simplified, the answer here is still most definitely “yes”. The problem is with the follow up question “what is that better solution?”.

Basically, I don’t know what that will be but it will probably be some form of heuristic algorithm (think travelling salesman).

So maybe a potential solution could look something (emphasis on “something”) like this:

• Create a container (called previousGrids). It will contain a grid plus a link to the parent from which it was generated.
• Create a container (called checkMoves). It will contain a list of ‘currently in play’ moves to be checked and also a reference into previousGrids of the grid to which this move applies.
• Pick 3 different valid moves at random and add them to checkMoves and to previousGrids (with no parents at this point).
• Check if any of the moves in checkMoves reaches the solution. If so, hurrah! Done! Trace the parents back through previousGrids and you have a route to the solution.
• If not done, replace each move in checkMoves with (up to) 3 valid random moves for its grid that are not already in previousGrids. Add these new moves into previousGrids. If all 20 possible moves for a current grid are taken, then just drop the current item from checkMoves.
• Go back to step 4.
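For comparison, the bookkeeping in the first two steps (store each grid with a link to its parent, then trace the route back) is exactly what a plain breadth-first search does, and since a 3x3 swap grid has at most 9! = 362,880 distinct states, an exhaustive BFS with a visited set is actually feasible here and finds the shortest solution. This is my own sketch of that exhaustive variant, not the randomised one above; all names (`BfsSolver`, `Solve`, `BuildMoves`) are made up:

```csharp
using System;
using System.Collections.Generic;

class BfsSolver
{
    // The 20 legal swaps on a 3x3 board (adjacent cells, incl. diagonals),
    // as pairs of flat indices.
    static readonly List<(int, int)> Moves = BuildMoves();

    static List<(int, int)> BuildMoves()
    {
        var moves = new List<(int, int)>();
        for (int a = 0; a < 9; a++)
            for (int b = a + 1; b < 9; b++)
                if (Math.Abs(a % 3 - b % 3) <= 1 && Math.Abs(a / 3 - b / 3) <= 1)
                    moves.Add((a, b));
        return moves;
    }

    // Breadth-first search from start to target. The parent dictionary
    // doubles as the visited set and would let you trace the route back.
    // Returns the minimum number of swaps, or -1 if unreachable.
    public static int Solve(string start, string target)
    {
        var parent = new Dictionary<string, string> { [start] = null };
        var dist = new Dictionary<string, int> { [start] = 0 };
        var queue = new Queue<string>();
        queue.Enqueue(start);
        while (queue.Count > 0)
        {
            string g = queue.Dequeue();
            if (g == target) return dist[g];
            foreach (var (a, b) in Moves)
            {
                char[] next = g.ToCharArray();
                char t = next[a]; next[a] = next[b]; next[b] = t;
                string s = new string(next);
                if (!dist.ContainsKey(s))
                {
                    dist[s] = dist[g] + 1;
                    parent[s] = g; // link back for route reconstruction
                    queue.Enqueue(s);
                }
            }
        }
        return -1;
    }

    static void Main()
    {
        // A grid two swaps from solved: swap (0,1) then (4,8) on ABCDEFGHI.
        Console.WriteLine(Solve("BACDIFGHE", "ABCDEFGHI")); // 2
    }
}
```

Representing each grid as a string keeps the visited-set lookups simple; a Zobrist hash would work just as well as the dictionary key.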

I think I’ve found a way to simplify the problem. You don’t need to search the problem space or use transposition tables, just reframe it in terms of directed graphs.

Firstly, diagonal moves are redundant; any two diagonally adjacent nodes can be swapped by decomposing the move into a set of horizontal/vertical swaps. In 2D, imagine two nodes A and C sharing a corner, with B sharing an edge with both. The diagonal swap can be decomposed into three moves: Swap(A, B), Swap(A, C) and Swap(B, C). This generalises to any dimension.
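That decomposition can be demonstrated on a tiny flattened grid. A minimal sketch (the `Swap`/`SwapDiagonal` helper names are mine):

```csharp
using System;

class DiagonalSwap
{
    // Swap the contents of two cells in a flattened grid.
    static void Swap(char[] g, int i, int j)
    {
        char t = g[i]; g[i] = g[j]; g[j] = t;
    }

    // Exchange two diagonally adjacent cells a and c using only
    // edge swaps through their shared neighbour b.
    public static void SwapDiagonal(char[] grid, int a, int b, int c)
    {
        Swap(grid, a, b); // Swap(A, B)
        Swap(grid, b, c); // Swap(A, C) - A has moved to b by now
        Swap(grid, a, b); // Swap(B, C) - restores B to its cell
    }

    static void Main()
    {
        // Top-left 2x2 corner of a grid, flattened row by row:
        //   A B
        //   . C      indices: A=0, B=1, C=3
        char[] grid = { 'A', 'B', '.', 'C' };
        SwapDiagonal(grid, 0, 1, 3);
        Console.WriteLine(new string(grid)); // "CB.A" - A and C swapped, B untouched
    }
}
```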

In fact, ANY two nodes in the grid can be swapped in this way without disturbing any other node’s position. Imagine a string of nodes, all connected by faces only:
ABCDEF

If you want to swap A and F, then you can shift A all the way to the right:
BACDEF
BCDAEF
BCDEAF
BCDEFA

Followed by swapping F all the way to the left:
BCDFEA
BCFDEA
BFCDEA
FBCDEA

A and F are now swapped without disturbing any other node. This is trivial to generate from two positions by simply swapping along the X, Y and Z axes, which means the number of swaps is 2 * Dist(A, B) - 1, where Dist is the Manhattan distance between the two nodes.
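The shifting procedure above can be sketched directly in code for the 1D case; `SwapFar` is a hypothetical name, and the example reproduces the ABCDEF string from the post:

```csharp
using System;

class BigSwapDemo
{
    // Swap the elements at positions a and b of a 1D row using only
    // adjacent swaps, leaving everything in between undisturbed.
    // Total swaps used: 2 * |a - b| - 1.
    public static int SwapFar(char[] row, int a, int b)
    {
        if (a > b) { int t = a; a = b; b = t; }
        int swaps = 0;
        // Shift row[a] right until it reaches position b.
        for (int i = a; i < b; i++)
        {
            char tmp = row[i]; row[i] = row[i + 1]; row[i + 1] = tmp;
            swaps++;
        }
        // The element that started at b is now at b - 1; shift it left to a.
        for (int i = b - 1; i > a; i--)
        {
            char tmp = row[i]; row[i] = row[i - 1]; row[i - 1] = tmp;
            swaps++;
        }
        return swaps;
    }

    static void Main()
    {
        char[] row = "ABCDEF".ToCharArray();
        int swaps = SwapFar(row, 0, 5);
        Console.WriteLine(new string(row)); // "FBCDEA"
        Console.WriteLine(swaps);           // 9 = 2 * 5 - 1
    }
}
```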

For the next part, disregard the whole grid structure and just think of it as a directed graph for now. Every node has a position it needs to get to in the end. Imagine we have a starting 3x3 grid, say:
CIG
BAH
EDF

and we want to solve it by turning it into the ordered grid ABCDEFGHI.

This creates two cycles, ACGE and BIFHD, where each node needs to move to the position currently occupied by the next node in the cycle. Using the fact that we can swap any two nodes, start at some node in the cycle (say, E) and swap it backwards until it reaches the position it needs to get to. Since every node needs to get to the next node’s position, and the last needs to get to the first’s, this will put the whole cycle in an ordered state. For the first cycle, that would be:
bigswap(E, G)
bigswap(E, C)
bigswap(E, A)

Similarly, every other cycle in the graph can be solved this way. This doesn’t guarantee the optimal sequence of moves, but it should be extremely quick and guaranteed to find a solution. You can even get the number of swaps it will generate by summing the number of swaps in every big swap (which is also trivial to calculate, just sum the distance between them on every axis).
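The cycle-by-cycle procedure can be sketched end to end. This is my own simplified version (names like `CycleSolver` and `targetPos` are mine): each big swap is performed here as a single direct swap of two cells, whereas on the real grid it would itself expand into adjacent swaps as described earlier. The start grid is chosen so that its two cycles are ACGE and BIFHD; a 4-cycle then takes 3 big swaps and a 5-cycle takes 4:

```csharp
using System;
using System.Collections.Generic;

class CycleSolver
{
    // Solve the flattened grid by cycle decomposition: walk each tile
    // around its permutation cycle, big-swapping it with whatever
    // currently occupies its target cell until it is home.
    public static List<(int, int)> Solve(char[] grid, char[] target)
    {
        var moves = new List<(int, int)>();
        // targetPos[tile] = index where that tile must end up.
        var targetPos = new Dictionary<char, int>();
        for (int i = 0; i < target.Length; i++)
            targetPos[target[i]] = i;

        for (int start = 0; start < grid.Length; start++)
        {
            int pos = start;
            // Each iteration homes one tile of the cycle through `start`.
            while (targetPos[grid[pos]] != pos)
            {
                int dest = targetPos[grid[pos]];
                char t = grid[pos]; grid[pos] = grid[dest]; grid[dest] = t;
                moves.Add((pos, dest)); // one big swap
            }
        }
        return moves;
    }

    static void Main()
    {
        // Start grid (rows CIG / BAH / EDF) and target ABCDEFGHI.
        char[] grid = "CIGBAHEDF".ToCharArray();
        var moves = Solve(grid, "ABCDEFGHI".ToCharArray());
        Console.WriteLine(new string(grid)); // "ABCDEFGHI"
        Console.WriteLine(moves.Count);      // 7 big swaps (3 for ACGE + 4 for BIFHD)
    }
}
```

The total number of adjacent swaps would then be the sum of 2 * Dist(a, b) - 1 over the recorded pairs, which can be computed up front without performing any of them.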


Nice. Looks like a really neat solution.


Wow! I’ll check that out! Thanks