GPGPU in Unity?

I’m not sure if my game’s logic needs it, but I would like to test my logic being processed by the GPU. It’s mostly heavy array maths, which I feel might be processed faster by my GPU.

So, is it possible? And how do I do this?
(documentation files are just fine :slight_smile: )

Just noticed this is in the wrong subforum… If a mod happens to see this, please move it to the scripting section :slight_smile:

Is it highly parallelizable (i.e. do you have hundreds to thousands of independent operations to perform)? That’s the only way it will be worth computing on the GPU.

This series has a few videos that can give you an introduction. If you’re talking about CUDA/OpenCL, Unity’s not going to provide much help, but you can do it the old-school way, with shader programs. I don’t know what the most efficient way of doing these calculations would be; there’s no Unity documentation on it. Unity is optimized for using the GPU for its primary purpose – rendering 3D scenes.
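
For reference, the shader-program route can be sketched as a render-texture “ping-pong”: keep the grid in a texture, run a full-screen shader that applies the rules per pixel, then swap buffers. This is only a sketch under assumptions — the `Custom/LifeStep` shader is hypothetical (something you’d write to sample a cell’s eight neighbours and output its next state), and the texture sizes stand in for gridX/gridY:

```csharp
// Sketch of the "old school" shader-program approach: the grid lives
// in a RenderTexture and a full-screen shader computes one generation
// per pixel, in parallel on the GPU.
using UnityEngine;

public class GpuLife : MonoBehaviour
{
    // Material using the hypothetical Custom/LifeStep shader,
    // assigned in the Inspector.
    public Material lifeMaterial;
    RenderTexture current, next;

    void Start()
    {
        // One texel per cell; 100x100 is a placeholder for gridX/gridY.
        current = new RenderTexture(100, 100, 0);
        next = new RenderTexture(100, 100, 0);
    }

    void Update()
    {
        // One simulation step: the shader reads `current`, applies the
        // life/death rules per pixel, and writes the result to `next`.
        Graphics.Blit(current, next, lifeMaterial);

        // Ping-pong: next frame reads this frame's output.
        RenderTexture tmp = current;
        current = next;
        next = tmp;
    }
}
```

The per-pixel shader would do the same eight neighbour lookups as a CPU loop, but executed for every cell simultaneously — which is exactly the “hundreds to thousands of independent operations” case mentioned above.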

Well, the math I need to do is the following loop in each Update()

for (int x = 0; x < gridX; x++)
{
    for (int y = 0; y < gridY; y++)
    {
        adjacentSquares = 0;

        if (grid[calcX(x - 1), calcY(y - 1)])
            adjacentSquares++;
        if (grid[calcX(x), calcY(y - 1)])
            adjacentSquares++;
        if (grid[calcX(x + 1), calcY(y - 1)])
            adjacentSquares++;

        if (grid[calcX(x - 1), calcY(y)])
            adjacentSquares++;
        if (grid[calcX(x + 1), calcY(y)])
            adjacentSquares++;

        if (grid[calcX(x - 1), calcY(y + 1)])
            adjacentSquares++;
        if (grid[calcX(x), calcY(y + 1)])
            adjacentSquares++;
        if (grid[calcX(x + 1), calcY(y + 1)])
            adjacentSquares++;

        newGrid[x, y] = IsCellAlive(adjacentSquares, grid[x, y]);
    }
}

return newGrid;

The loop currently runs 10,000 iterations, but I would like to drive it up to possibly 100,000 iterations.

It can be calculated in parallel. But to be honest, I have no idea if it’s worth doing this on the GPU.

EDIT: It looks like I’m out of luck :slight_smile:

I managed to find this PDF about a C# class that allows GPGPU processing. And they explicitly say that you can’t look up other array positions, due to the nature of GPU arrays. I can, however, use it in Start(), where I need to initialize my arrays.

The bottleneck in games is almost never the CPU. By loading more onto the GPU, you’re only making performance worse.

In mine it is :slight_smile: The CPU is at 100% on one core; the GPU isn’t even reaching 1% load.
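
If only one core is busy, there is also a middle ground worth considering before reaching for the GPU at all: splitting the outer loop across CPU cores. A minimal sketch, assuming your .NET profile provides System.Threading.Tasks, and reusing the grid, newGrid, gridX, gridY, calcX, calcY and IsCellAlive members from the code above:

```csharp
using System.Threading.Tasks;

bool[,] Step()
{
    // Each row is independent: every (x, y) writes only newGrid[x, y]
    // and reads only from grid, so rows can safely run on separate cores.
    Parallel.For(0, gridX, x =>
    {
        for (int y = 0; y < gridY; y++)
        {
            int adjacentSquares = 0;

            // The same eight neighbour lookups as the loop above,
            // collapsed into a dx/dy sweep that skips the cell itself.
            for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                    if ((dx != 0 || dy != 0) && grid[calcX(x + dx), calcY(y + dy)])
                        adjacentSquares++;

            newGrid[x, y] = IsCellAlive(adjacentSquares, grid[x, y]);
        }
    });
    return newGrid;
}
```

Note that the loop body touches no Unity API, so running it off the main thread should be safe; anything that does touch Unity objects still has to happen on the main thread.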