[Devlog] Turbo's Game of Life

Hello All! Johnny here from the Turbo Makes Games YouTube channel. I wanted to create this forum post as a means for discussion on my recreation of Conway's Game of Life using Unity ECS, aptly named Turbo's Game of Life.

As I began to develop this project, I noticed there were many changes and optimizations I could make, and a number of different approaches to the problems I came across. So I figured this project would serve as a great way not only to document the process of creating a more feature-complete project (rather than just a tech demo), but also to act as a test bed to try out different implementations and run performance comparisons between them.

So this forum post will be dedicated to just this project - I'll be documenting my progress in written form here and in video form on my YouTube channel. And I would love to hear your feedback on this project - what do you think I can do to make it better? What features would you like to see me implement? What are some different approaches to the problems you'd like to see performance comparisons on?

With this first forum post, I'll be going into detail on some of the problems I came across and how I went about solving them. Then towards the end, I'll talk about some of the plans for the next phase(s) of this project.

Here is the GitHub repo for the project: https://github.com/JohnnyTurbo/TurbosGameOfLife/tree/master/GameOfLiveV2-Project
You can download and play the game here: https://johnnyturbo.itch.io/turbos-game-of-life

And if video is more your thing, here is the video devlog that documents my process for creating this iteration of Turbo's Game of Life:

Conway's Game of Life
If you're not familiar with Conway's Game of Life, it is a cellular automaton created by John Conway in 1970. It is played out on a 2D grid, and each cell on the grid can either be alive or dead. When the simulation runs, each cell references its neighboring cells to determine whether it should be alive or dead in the next "generation."

The rules are as follows:

  • Any live cell with fewer than 2 live neighboring cells dies from underpopulation
  • Any live cell with 2 or 3 live neighboring cells continues to live through survival
  • Any live cell with greater than 3 live neighboring cells dies from overpopulation
  • Any dead cell with exactly 3 live neighboring cells becomes a live cell through reproduction
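The four rules above boil down to two conditions: a live cell survives with 2 or 3 live neighbors, and a dead cell is born with exactly 3. As a minimal, engine-independent sketch (plain C#, no ECS; the names `Step` and `CountAliveNeighbors` are just illustrative), one generation looks like this:

```csharp
// Plain C# reference sketch of one Game of Life generation on a 2D bool
// array. Cells outside the grid are treated as dead.
public static class LifeRules
{
    public static bool[,] Step(bool[,] grid)
    {
        int w = grid.GetLength(0), h = grid.GetLength(1);
        var next = new bool[w, h];
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            int alive = CountAliveNeighbors(grid, x, y);
            // Survival needs 2 or 3 live neighbors; birth needs exactly 3.
            next[x, y] = grid[x, y] ? (alive == 2 || alive == 3) : alive == 3;
        }
        return next;
    }

    static int CountAliveNeighbors(bool[,] grid, int x, int y)
    {
        int w = grid.GetLength(0), h = grid.GetLength(1), count = 0;
        for (int dx = -1; dx <= 1; dx++)
        for (int dy = -1; dy <= 1; dy++)
        {
            if (dx == 0 && dy == 0) continue; // skip the cell itself
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny >= 0 && ny < h && grid[nx, ny])
                count++;
        }
        return count;
    }
}
```

Note the double buffering: the next generation is written to a separate array so every cell's decision is based on the same, current generation.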

From these few simple rules, some incredible and beautiful patterns can emerge and it can be quite relaxing to just watch the simulation play out before you for some time.

And of course, because humans are awesome, people have engineered Conway's Game of Life to do some actually mind-blowing things. People have made digital clocks, computers, and even a Turing machine, which makes Conway's Game of Life itself Turing complete.

But the thing that really got me interested in Conway's Game of Life, and the main driver behind me wanting to create this with Unity ECS, is the fact that you can simulate Conway's Game of Life inside Conway's Game of Life.

That's right - in 2006, Brice Due created what is known as the OTCA metapixel, a 2048 x 2048 grid of cells that represents one cell in the meta-Game of Life. I originally learned about this phenomenon from this Veritasium video. Also, this video from Alan Zucconi goes more in depth on it.

Planning Phase
Planning for this project wasn't too much different than planning for any non-ECS project. I mostly just wrote out all the requirements and features for the project and wrote out a few general thoughts about how I might solve some of the problems.

I didn't get too caught up in how I was going to structure the specific data and systems I would be using as that ended up changing several times throughout development of this iteration of the project.

There was just one thing that I thought of during the planning phase which I wasn't able to implement on this iteration of the project. This has to do with the physical layout of data on the grid to optimize performance - I'll go into more detail on this in the "Plans Moving Forward" section.

Spawning The Grid
The first step was to spawn a grid of cells. I created a prefab for the cell using a gray quad, then instantiated one of those for every cell on the grid (determined by multiplying the grid height and width), then stored the results in a native array.

var newCellEntities = EntityManager.Instantiate(persistentGridData.CellPrefab, cellCount, Allocator.Temp);

Then using nested for loops, I just set the translation of each cell to the corresponding x, y position on the grid. Now I could effectively spawn grids of cells of any arbitrary size.
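Putting those two steps together looks roughly like the sketch below (Entities 0.x-era API; `persistentGridData` and `gridSize` are names from my project and config, so treat them as placeholders):

```csharp
// Sketch: instantiate one render prefab per cell, then position each one
// with nested loops. gridSize is assumed to be an int2 of (width, height).
int cellCount = gridSize.x * gridSize.y;
var newCellEntities = EntityManager.Instantiate(
    persistentGridData.CellPrefab, cellCount, Allocator.Temp);

int i = 0;
for (int x = 0; x < gridSize.x; x++)
{
    for (int y = 0; y < gridSize.y; y++)
    {
        // Place each quad at its (x, y) grid coordinate.
        EntityManager.SetComponentData(newCellEntities[i++],
            new Translation { Value = new float3(x, y, 0f) });
    }
}
newCellEntities.Dispose();
```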

After doing this, I took a look at my chunk utilization and saw that I could only fit 55 of these entities in a single chunk. Remember, this is just a quad renderer with no other data attached.

So at this point I decided to create two entities per cell, one would be a lightweight data entity and the other would be the render entity. The idea is that I can optimize the alive/dead calculations on the lightweight data entity as it just contains basic pertinent information on the entity (i.e. if it's alive and its grid coordinates). Once the simulation is complete, the render entity can just reference the associated data entity to determine if it should render the alive or dead state.

The data entity archetype is constructed through code so they don't end up with anything unnecessary such as transform components. They have the following data on them:

  • int2 for the grid coordinate position
  • bool for the current vital state
  • bool to determine if the vital state should change this generation
  • BlobAssetReference to all data and render entities (More detail on all these later)

There can be 560 of these data entities in a chunk - over 10x more than the render entities.
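Constructing the data-entity archetype in code looks something like this sketch - the component type names here just mirror the bullet list above and aren't engine types:

```csharp
// Sketch: a lean archetype with only the data the simulation needs.
// No Translation/Rotation/render components, so far more fit per chunk.
var dataArchetype = EntityManager.CreateArchetype(
    typeof(GridCoordinates),       // int2 grid position
    typeof(VitalState),            // bool: alive or dead
    typeof(ChangeVitalState),      // bool: flip state this generation?
    typeof(CellEntitiesReference)  // BlobAssetReference to all cells
);

// Create one data entity per cell in a single batch call.
var dataEntities = EntityManager.CreateEntity(
    dataArchetype, cellCount, Allocator.Temp);
```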

Implementing Rules of the Game of Life
Implementing the rules of the Game of Life was quite trivial, as they are so simple. To do this, I created a system that goes through each data entity in the grid and counts the number of neighboring cells that are alive. Then, based on the number of alive neighbors and the current vital state (alive or dead) of the entity, it determines whether the cell should change its vital state this generation.

if (cellData.IsAlive)
{
    if (aliveNeighbors < 2)
    {
        // Die from underpopulation
        changeVitalState.Value = true;
    }
    else if (aliveNeighbors > 3)
    {
        // Die from overpopulation
        changeVitalState.Value = true;
    }
}
else if (aliveNeighbors == 3)
{
    // Birth by reproduction
    changeVitalState.Value = true;
}

Note that at this point the actual vital state of the entity does not change, as other entities may still need to reference this generation's vital state to determine whether they should change theirs. So a flag is set now, then a separate system runs later to actually change the vital state.

I chose a flag over a tag component because of how frequently cells can change between states - I wanted to minimize structural changes and avoid moving entities back and forth between chunks. This is especially important for an idea I have in mind for the next iteration of this project (see the "Plans Moving Forward" section).
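The second pass that consumes the flag can then be a trivial, fully parallel job. A sketch (Entities 0.x `Entities.ForEach` style; `VitalState` and `ChangeVitalState` are my project's component names, not engine types):

```csharp
// Sketch: flip every flagged cell and reset its flag. No structural
// changes happen here, so this can be Burst-compiled and run in parallel.
Entities
    .ForEach((ref VitalState vitalState, ref ChangeVitalState change) =>
    {
        if (change.Value)
        {
            vitalState.IsAlive = !vitalState.IsAlive;
            change.Value = false; // reset for the next generation
        }
    }).ScheduleParallel();
```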

In the system where the vital state changes on the data entity, I also set the material on the render entity so it could display green if the cell was alive or gray if it was dead. Unfortunately, changing the material on an entity is a structural change and needs to run on the main thread, without Burst compiler optimizations.

During development, I wasn't yet aware of the material property overrides that would allow me to simply change the color of the cell - an operation that can be scheduled on parallel threads - thankfully many people in my Discord community and YouTube comments let me know about that one :)

By the end of development, I did come up with another solution that would allow me to change how these entities displayed that could be multithreaded, but I'll go over that in the "Scaling the Game of Life" section.

One thing that I skipped over was how I reference the neighboring cells from each cell. For this project, I've decided to use a Blob Asset as a way to look up data about the neighboring cells, which I'll go over next.

Blob Assets
For this project, I have one Blob Asset that is accessed quite heavily during simulation. The Blob Asset is a 2D array of a struct that contains a reference to the data entity and render entity for each cell on the grid. The x, y index in the array is mapped to the x, y position of these entities on the grid. As mentioned above, all data entities have a reference to this Blob Asset.

The Blob Asset is structured as follows: the base struct contains references to the data entity and render entity for a single cell on the grid

public struct CellEntities
{
    public Entity DataEntity;
    public Entity RendererEntity;
}

Next, there is a struct holding an array of CellEntities called CellEntitiesY - this represents a column of cells on the grid

public struct CellEntitiesY
{
    public BlobArray<CellEntities> Y;
}

Then, there is a struct holding an array of CellEntitiesY called CellEntitiesX, which holds all the columns of cells on the grid

public struct CellEntitiesX
{
    public BlobArray<CellEntitiesY> X;
}

The Blob Asset reference just points to the CellEntitiesX type

public struct CellEntitiesReference : IComponentData
{
    public BlobAssetReference<CellEntitiesX> Value;
}

As all cells on the grid have that reference, any cell on the grid can reference any other cell. The nice thing about the Blob Asset storing Entities is that during simulation the entities themselves are never created or destroyed, so the stored references remain valid - the Blob Asset stays effectively immutable even though the components on those entities can still change.

This means that the referenced data entities can change their vital state and the render entities can change color, and this doesn't break the Blob Asset. If you do this, however, you need to keep track on your own of what has read and write access to the data, to make sure you're not reading data that's currently being written to - the computer won't do this for you.

One inconvenience I came across with the 2D array I created was how ugly it is to access the data of an arbitrary cell on the grid. For example, to get a reference to the data entity at grid position (4, 5), I would need to type the following:

var dataEntity = cellEntitiesReference.Value.Value.X[4].Y[5].DataEntity;

It's really just a lot of typing of structural things that don't really matter, so the solution I came up with was to create custom indexers on my CellEntitiesReference component (the struct wrapping the BlobAssetReference) to cleanly access the underlying data. These basically just forward the x, y lookup to the blob data:

public readonly CellEntities this[int x, int y] => Value.Value.X[x].Y[y]; 

public readonly CellEntities this[int2 index] => Value.Value.X[index.x].Y[index.y];

Now I can access the data cleanly in one of two ways. I have both because sometimes I have the x and y values as separate ints and sometimes they are stored in an int2.

var dataEntity = cellEntitiesReference[4, 5].DataEntity;

var cellIndex = new int2(4, 5);
dataEntity = cellEntitiesReference[cellIndex].DataEntity;

I was quite happy with what I came up with for how I use these Blob Assets. Of course there are some things I might have done differently in hindsight, but I'll be talking about that in the "Plans Moving Forward" section. And because I enjoyed working with these Blob Assets so much, I created a dedicated video on how I used them in this project:

Using MonoBehaviors to Control the Game
At this time, I had the Game of Life working in Unity ECS, but I didn't have much control over it at runtime. If I wanted to change the grid size, I needed to set the values in an authoring component in the inspector, then manually set the size and position of the camera to show everything on screen. Also I didn't have any control over how fast the simulation was iterating.

The most important thing for me at first was to have control over the iteration of the simulation. To do this, I disabled auto creation of the main ProcessGameOfLifeSystem with the [DisableAutoCreation] attribute. Then I get a reference to that system from a MonoBehaviour so I can manually call the Update() for that system.

With some simple logic, I can Play, Pause, and advance the simulation by one tick at a time with a single button press. Additionally I added in a timer so I could have the simulation run at faster/slower rates.
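A rough sketch of that arrangement (Entities 0.x API; `ProcessGameOfLifeSystem` is my system, and the tick-rate logic here is illustrative):

```csharp
// Sketch: a MonoBehaviour that manually ticks a [DisableAutoCreation]
// system, giving Play/Pause/Step control and a variable simulation rate.
public class SimulationController : MonoBehaviour
{
    public float TickRate = 10f; // generations per second
    public bool IsPlaying;

    ProcessGameOfLifeSystem _lifeSystem;
    float _timer;

    void Start()
    {
        // Grab (and thereby create) the system from the default world.
        _lifeSystem = World.DefaultGameObjectInjectionWorld
            .GetOrCreateSystem<ProcessGameOfLifeSystem>();
    }

    void Update()
    {
        if (!IsPlaying) return;
        _timer += Time.deltaTime;
        if (_timer >= 1f / TickRate)
        {
            _timer = 0f;
            _lifeSystem.Update(); // manually advance one generation
        }
    }

    // Hooked up to a UI button for single-step while paused.
    public void StepOnce() => _lifeSystem.Update();
}
```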

The other big thing was to give me the ability to resize the grid during runtime. As you might expect, this is done by destroying the existing grid, then spawning a new one.

To destroy the existing grid, I put a special tag component on my game controller. I have a system that only runs in the presence of that DestroyGridTag, which destroys all data and render entities - the game controller has the Blob Asset Reference as well to iterate through all cells and destroy both entities at each cell. Finally the DestroyGridTag is removed from the game controller so the system doesn't run again.

To create a new grid, my MonoBehaviour helper class creates a new NewGridData component with information on the size of the new grid. The CreateGridSystem is then run, spawning all the data and render entities for each cell on the grid; then the NewGridData component is removed from the game controller so the system doesn't run again.
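The tag-gated destroy flow can be sketched like this (SystemBase-era API; the component names and the `gridSize` lookup are from my project, so adjust to taste):

```csharp
// Sketch: only runs while DestroyGridTag exists on the game controller.
public class DestroyGridSystem : SystemBase
{
    protected override void OnCreate()
    {
        RequireSingletonForUpdate<DestroyGridTag>();
    }

    protected override void OnUpdate()
    {
        var controller = GetSingletonEntity<DestroyGridTag>();
        var cellsRef = EntityManager
            .GetComponentData<CellEntitiesReference>(controller);

        // gridSize is assumed to be stored alongside the blob reference.
        for (int x = 0; x < gridSize.x; x++)
        for (int y = 0; y < gridSize.y; y++)
        {
            var cell = cellsRef[x, y];
            EntityManager.DestroyEntity(cell.DataEntity);
            EntityManager.DestroyEntity(cell.RendererEntity);
        }

        // Remove the tag so this system stops running.
        EntityManager.RemoveComponent<DestroyGridTag>(controller);
    }
}
```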

Lastly, I just made a basic camera controller script to resize the camera to show the full grid when a new one is created. The player can also zoom in/out with the scroll wheel and pan around the screen by clicking and dragging with the middle mouse button.

Scaling the Game of Life
Great, so I've got Conway's Game of Life up and running using Unity ECS and I have some basic control over it. So now I can just scale up to a 1000x1000 grid and everything will work flawlessly at a high framerate cuz DOTS right? You all know the answer - NOPE! It's never that easy

The biggest issue at first was rendering all those entities. I was using Hybrid Renderer V1 at this point; the number of batches was equal to the number of cells on the grid, and I was getting around 5 FPS on a 500x500 grid with no cells changing color. Unsurprisingly, the most expensive part in the profiler was the rendering system.

When I saw this I started to get a bit nervous, thinking I was going to need to create my own custom renderer - which in theory shouldn't be too difficult as I'm just rendering a bunch of quads, but my experience there is limited. I started going through the Hybrid Renderer docs to do some research for what I might try next and came across the Hybrid Renderer V2. So I decided to give it a shot, because what's the worst that could happen, it don't work? Well it already don't work good.

So I backed up my project and converted it to Hybrid Renderer V2 - that one change alone brought the performance of a 500x500 grid up to 60FPS in the editor. I also hadn't multithreaded any of my jobs at this point, so I changed them around to be multithreaded when they could, which brought the performance up to 200FPS on the 500x500 grid.

Now I should clarify: these performance numbers aren't really indicative of running a simulation across the whole grid. This was basically just me clicking a bunch of cells in a few localized areas on the grid to mark them as alive, then running the simulation. Even as the alive cells spread, 95% of the cells on the grid were still just staying dead.

So I came up with a system to randomize the state of all cells on the grid and then run the simulation across all of them. It works by entering a percentage chance that a cell should be alive, then iterating through every cell and marking it alive or dead accordingly. Check out my video on using Random in Unity ECS if you'd like.
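One way to sketch that randomizer so it can run in parallel is to derive a deterministic per-cell random from a run seed and the cell's coordinates (this assumes a Unity.Mathematics version with `Random.CreateFromIndex`; `aliveChance`, `seed`, and the component names are placeholders from my project):

```csharp
// Sketch: each cell gets its own RNG stream, so the job can be
// scheduled in parallel without sharing mutable Random state.
Entities
    .ForEach((ref VitalState vitalState, in GridCoordinates coords) =>
    {
        var rng = Unity.Mathematics.Random.CreateFromIndex(
            (uint)(seed + coords.Value.x * 7919 + coords.Value.y));
        // aliveChance is the 0-1 probability entered in the UI.
        vitalState.IsAlive = rng.NextFloat() < aliveChance;
    }).ScheduleParallel();
```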

Doing this highlighted the next bottleneck in my code: the system that changes the renderer on the entities. At a larger scale, when more entities needed to change color each frame, this system was taking up a lot of time. If you'll remember, my initial implementation was to change the material on the render entity, which could only run on the main thread, without Burst.

So the kind of hacky solution I came up with was to not change the color of the entities at all. Instead, when I create a new grid, I scale up a single quad of a dead cell to the size of the full grid to be used as a background. I then set the tiling properties of the material to give the appearance of individual cells.

Next, I spawn a render entity for each cell, just like before, but I spawn them behind this background quad. Then when I want to change a cell from a dead cell to an alive cell, I move the associated render entity in front of the background quad. And vice versa for changing cells from alive to dead.

Doing this meant that I could multithread moving the render entities in front and behind the background quad.
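The move itself is just a z-offset on the render entity's Translation. A sketch of that job - `DataEntityReference` is a hypothetical component I'm using here to stand in for however the render entity finds its data entity, and the z values are illustrative:

```csharp
// Sketch: pull each render entity in front of (or push it behind) the
// background quad based on its data entity's vital state. Only
// Translation changes, so this Bursts and schedules in parallel.
var vitalStateLookup = GetComponentDataFromEntity<VitalState>(true);

Entities
    .ForEach((ref Translation translation, in DataEntityReference dataRef) =>
    {
        bool isAlive = vitalStateLookup[dataRef.Value].IsAlive;
        translation.Value.z = isAlive ? -1f : 1f; // in front / behind
    })
    .WithReadOnly(vitalStateLookup)
    .ScheduleParallel();
```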

Again, at the time of development, I wasn't yet familiar with the material property overrides, so it will be interesting to do some performance comparisons between that and my current solution, among other things...

Plans Moving Forward
If you can't already tell by the fact that I've made two videos and this 3500+ word postmortem on this project, I've had a ton of fun making it and I've learned so much along the way. But this is far from the end of this project - I have many more features in mind to make it more interesting.

But most importantly, I've received a ton of feedback from the videos I've put out about suggestions of things to try and ways I can experiment with performance to make this perform even better than it does now at a much larger scale. Here are a few of the things I'd like to do, in no particular order:

  • Experimenting with physical layouts of the entities on the grid - Right now, all the entities spawn linearly in columns. On a small scale this is fine, because one chunk of data entities can fill up a few adjacent columns, making cache hits more likely when checking the vital state of adjacent cells. However, as the size of the grid grows beyond the chunk capacity for the data entities, a cell's neighbors may live in another chunk, making cache misses more likely when entities evaluate their neighbors. I can optimize by spawning chunks of entities in squares on the grid so only the entities on the perimeter of the square look outside the chunk.
  • Using tags vs. flags to track the vital state of a cell - This wouldn't be possible with the above solution as doing this would change the chunks the entity belongs to, but it will be useful data to test to see how performance is affected with many entities changing state each frame.
  • Performance comparison between moving render entities and changing them with material property overrides
  • Rendering cells with compute shaders
  • Simulating multiple "data generations" for each "render generation" - simulate X generations behind the scenes, then display the current state of the grid
  • Experimenting with what defines an entity - Right now I have two entities for each cell on the grid. Can I make it so one entity holds data for multiple cells, or even one entity for the whole grid?
  • Blob Asset optimization - Does using a 1D blob array perceptibly improve performance? Maybe each cell has a reference to a unique blob asset containing all neighboring cells. How would that compare to using a dynamic buffer?

And of course because this is a forum post, I really do want to hear from you all - what other performance comparisons would you like to see me run for this project? What are some of the features you want me to add? How can I improve upon what I have here?

Thank you so much for taking the time to read (or at least scroll through), and I hope you found it informative and enjoyable. I'll likely be posting updates to my YouTube channel first, followed by more thorough write-ups in the replies to this thread.

tl;dr - I had way too much fun with this simple project. I learned a lot about how to do (and not do) things in Unity ECS and there are so many cool features I can add and experiments I can run with this project going forward.


If you really want to scale this simulation up, there's a few things to keep in mind:
1) Minimize the amount of data you use
2) Minimize the amount of work you do
3) Minimize the amount of random accesses you do

There are two ways I can imagine approaching this problem. Both assume you want to eventually zoom out far enough where a single pixel on screen covers multiple cells.

The first is probably the faster and simpler approach, but it is also quite boring since it won't really utilize anything from Entities. It is more of a Jobs + Burst exercise.

Basically, you create a double-buffered 2D bit array and run your simulation on that. You also have HLOD tiles which might summarize 2048x2048 cells with a grayscale float, which you compute while you run your simulation using popcnt. After the simulation step, you pick the best HLOD with sub-pixel resolution and dump that to the GPU, where a compute shader bakes that down into screen pixels. And then you are done!
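The HLOD summarization step in particular is cheap if the cells are packed as bits. A sketch of reducing a tile of bit-packed rows to a grayscale value (`math.countbits` in Unity.Mathematics compiles down to popcnt under Burst; the function name and layout here are illustrative):

```csharp
// Sketch: summarize a tile of cells, stored as 64 cells per ulong,
// into a single grayscale value (fraction of alive cells).
static float SummarizeTile(NativeArray<ulong> tileRows)
{
    int aliveBits = 0;
    for (int i = 0; i < tileRows.Length; i++)
        aliveBits += math.countbits(tileRows[i]); // popcnt per 64 cells
    return aliveBits / (64f * tileRows.Length);
}
```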

The second approach probably isn't as fast, but in complete contrast to the first approach, it will force you to explore some advanced usages of Entities and the Hybrid Renderer.

The first step is to make an entity represent a 128x128 grid of cells. Why that? It is because you can represent it in a uint4x4. That can be uploaded as a material property on the GPU and using UVs and derivative functions, you can calculate the subset of bits mapped to a single pixel and average them.

The second step is getting your entities aligned. The Hybrid Renderer allows for up to 128 entities per chunk, but a safe goal would be to get 64 per chunk (there's a mechanism to limit how many entities with a particular component exist per chunk). This gives you 1024x1024 cells per chunk. If you are having trouble hitting this with all the Hybrid Renderer components, make sure you delete the Translation and Rotation components from the entities.

The third step is going to be getting chunks to reference their neighbors in the larger grid. The secret to that will be chunk components. You can either make two neighbors share a reference to a "border buffer entity" or just make them reference each other's ArchetypeChunk.

The last step is going to be making chunk components reference HLOD entities where you can summarize the chunk with a single grayscale value. If you set this up correctly, the Hybrid Renderer will automatically render the correct entities for you.

By the way, both of these approaches are really only effective if you write vectorized bitwise operations to do your simulation.

Regardless of whether you choose to take my suggestions, I wish you the best of luck on your adventure!


[quote=“DreamingImLatios”, post:2, topic: 861306]
If you really want to scale this simulation up, there’s a few things to keep in mind:

  1. Minimize the amount of data you use
  2. Minimize the amount of work you do
  3. Minimize the amount of random accesses you do
[/quote]

Think that’s a great way to sum up the mindset to have to really scale this up to the extremes.

Appreciate you for taking the time to detail out some ways to push this - those all sound like some really fun options to explore.

My current plan for the next iteration is to experiment a bit more with the one-entity-per-cell approach. I figure the results of that testing will be a bit more applicable to a broader range of projects - not only do I want this to be a well-oiled simulation of the Game of Life, but I also want it to provide some concrete metrics for people to gauge the relative performance of doing different things in their games.

But the iteration after that is when I plan on going real crazy with this with compute shaders, bit-level optimizations and the rest to see how far I can push this.

Lots of fun to be had soon!