Making of Time Ghost: Using DOTS ECS for more complex environments

By Mikko Alaluusua and Xuan Prada

In September 2024, Unity’s Demo team unveiled a new real-time cinematic demo, Time Ghost. It was created with the Unity 6 engine, which includes a number of features and technological capabilities that, when used together, enabled us to achieve a higher level of visual quality and complexity than ever before.

One of the ambitions we had when we started work on Time Ghost was to raise the bar on how large exterior environments can be built in Unity. This goal involves handling greater complexity, requires the necessary tooling to be in place, and puts higher pressure on the engine’s core graphics, lighting, and post-processing features, rendering, and performance. While we expected the graphics improvements in Unity 6 to cover most of these areas, tooling remained a challenge in need of a solution.

As game worlds become increasingly complex, efficient tooling to manage placement and rendering of natural assets becomes essential. We needed smart systems that ensured realistic distribution and variation of environmental assets. At the same time, rendering the immense amount of vegetation we had planned required technical solutions that are both scalable and highly performant at runtime.

In this post, we’ll share how we approached the environment work for one of the scenes in Time Ghost, and break down the workflows and tools we used.

Generating scattering data

As a first step, we import the meshes that constitute our environment base into our DCC software – Houdini, in our case. These can be lidar scans, drone scans, or terrains sculpted manually in Unity. In Houdini, we developed scattering analysis tools that automate asset placement based on landscape attributes. These tools help us design coherent and realistic environments with a lot of detail and variation. Using SpeedTree and standard photogrammetry workflows, we create high-quality vegetation prefabs and scatter them onto these meshes. The scattering systems allow for fine-tuning parameters such as randomness and density, ensuring a natural spread of vegetation and its variations.

The scattering data is exported from Houdini as a point cache (point cloud), using a slightly modified version of the Point Cache Exporter HDA that Unity offers as part of the VFX Toolbox on GitHub. The tool saves the position of each scattered instance along with its scale, orientation, color, age, and health attributes, and generates one point cache for each model used in the scatter.
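As a rough illustration of what the exporter produces, here is a Python sketch that groups scattered points per model and collects the attributes listed above. The record layout and attribute names are assumptions for illustration only; the real tool writes the VFX Toolbox point cache format.

```python
from collections import defaultdict

def build_point_caches(points):
    """Group scattered points by source model and collect the per-point
    attributes the exporter saves (position, scale, orientation, color,
    age, health). Illustrative sketch, not the actual .pcache writer."""
    caches = defaultdict(lambda: {a: [] for a in
        ("position", "scale", "orientation", "color", "age", "health")})
    for p in points:
        cache = caches[p["model"]]  # one cache per scattered model
        for attr in cache:
            cache[attr].append(p[attr])
    return dict(caches)
```

Each resulting table maps directly onto one point cache file per model, which is what the naming-convention lookup in Unity later relies on.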

Unity integration

The use of DOTS (Data-Oriented Technology Stack) and ECS (Entity Component System) allows us to create scenes with hundreds of thousands of entities without significant performance degradation. This is achieved through careful management of entity instantiation and resource allocation.

Each Houdini-exported point cloud asset is simply a collection of positions, orientations, scales, and potentially some extra data (age, health, color, etc.). We gather all the exported point clouds into a resource called PointCloudFromHoudiniAsset, which reads the source point clouds, finds the prefab to be used with a given point cloud via a naming convention, and builds an internal representation of the data. That representation is stored in the PointCloudFromHoudiniAsset and is then used by the baking process to spatially partition the points into tiles for faster streaming.
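The spatial partitioning step can be pictured as bucketing each point into a square ground-plane tile. This is a minimal Python sketch of the idea, not the actual baking code:

```python
import math
from collections import defaultdict

def partition_into_tiles(positions, tile_size):
    """Bucket point indices into square tiles keyed by integer (x, z)
    tile coordinates. A rough sketch of the spatial partitioning done
    during baking; the real system also splits data by scene section."""
    tiles = defaultdict(list)
    for i, (x, y, z) in enumerate(positions):
        key = (math.floor(x / tile_size), math.floor(z / tile_size))
        tiles[key].append(i)
    return dict(tiles)
```

Keying tiles by integer coordinates makes it cheap to look up which tile any point, or the viewer, falls into.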

The next step is to add an authoring component, ScatterPointCloudAuthoring, into an ECS subscene. It takes as input a PointCloudFromHoudiniAsset, along with parameters that control when to load and unload the data and how to subdivide the point cloud data.

In order to stream the data in and out efficiently, we subdivide the point cloud data into scene sections, which can be loaded and unloaded individually based on the distance to the viewer. The section size is controlled by the ScatterSceneSectionSize property on the authoring component.
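The distance-based decision might look like the following sketch. Using a larger unload distance than load distance adds hysteresis so sections near the boundary don't flicker in and out; that detail is our assumption for the sketch, not something the demo confirms.

```python
def update_loaded_sections(section_centers, viewer, load_dist, unload_dist, loaded):
    """Decide which scene sections to load/unload based on the 2D
    distance from the viewer to each section center. Sketch only."""
    to_load, to_unload = [], []
    for section, (cx, cz) in section_centers.items():
        d = ((cx - viewer[0]) ** 2 + (cz - viewer[1]) ** 2) ** 0.5
        if section not in loaded and d < load_dist:
            to_load.append(section)        # viewer got close enough
        elif section in loaded and d > unload_dist:
            to_unload.append(section)      # viewer moved far away
    return to_load, to_unload
```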

However, these scene sections are fairly large, and trying to instantiate everything in one go would cause a considerable spike on the CPU. One scene section can contain hundreds of thousands of points, so sections are further subdivided into smaller tiles called scatter tiles (controlled by the ScatterTileSize property on the authoring component), and the instantiation logic chooses which scatter tiles to instantiate next and which to unload based on loosely defined importance rules.
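A minimal sketch of budgeted, importance-ordered tile instantiation, assuming the simplest possible importance rule (distance to the viewer); the demo's actual rules are only loosely defined and not published:

```python
def pick_tiles_to_instantiate(tile_distances, budget):
    """Rank pending scatter tiles by a naive importance heuristic
    (closer to the viewer = more important) and take at most `budget`
    tiles this frame, spreading instantiation cost over several frames."""
    ranked = sorted(tile_distances, key=tile_distances.get)
    return ranked[:budget]
```

Capping the per-frame count is what turns the one-time CPU spike into a small, steady cost.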

Tile impostors

Even with the ability to stream instances in and out efficiently, we are still left with a huge number of instances to render. Many of these are fairly small – batches of grass, for example, that cover only a small portion of the screen when viewed from further away. So we also bake tile impostors out of some of the scattered instances. A tile impostor covers a certain area and mimics the look of the scattered instances in that area.

The data is sourced from the point cloud asset directly, because we want to be in control of what type of vegetation is included in the tiles. In our case, we are interested in grass assets only, and omit any trees that might be in the same location.

A tile impostor generator renders all the instances belonging to a tile from above, producing low-resolution textures containing approximate color, normal, and depth information per tile. On top of this, a number of the most important foliage types are selected and rendered into an atlas from the side and from above. This atlas is shared by all the tile impostors and is used to add detail to the low-resolution per-tile texture data. The generator also creates a mesh – a collection of quads that represents the tile – and the quads are oriented towards the viewer.

During runtime, we project both the per-tile low-resolution texture information and the more detailed but generic foliage atlas entries to the tile impostor mesh, producing an approximate look of the tile.

As the camera moves further from an area, the individual instances first switch to lower LOD levels and are finally replaced by the tile impostors. Tile impostors also have more than one LOD level, with the quad distribution becoming sparser as the camera moves further away.
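The distance-based switch between LOD levels and tile impostors can be sketched as a simple threshold walk (the threshold values in the test are made up for illustration):

```python
def select_representation(distance, lod_distances):
    """Return which representation to draw for an instance: a LOD index
    while within the LOD ranges, or 'tile_impostor' beyond the last one."""
    for lod, max_dist in enumerate(lod_distances):
        if distance < max_dist:
            return lod
    return "tile_impostor"
```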

Impostor system for large-scale efficiency

For very distant objects in the scene, we use octahedral impostors. This method displays simplified versions of objects that are far from the camera, enabling us to balance visual fidelity and performance. We created a simple tool to generate and integrate the impostors directly within Unity. It simplifies the workflow for artists, providing them with efficient methods to maintain high visual standards while optimizing performance.
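Octahedral impostors store many pre-rendered views of an object in one texture, addressed by folding the view direction onto a 2D square. Below is a sketch of the standard octahedral mapping; it is not the demo's actual shader code:

```python
def octahedral_encode(x, y, z):
    """Map a (non-zero) direction vector onto [0,1]^2 octahedral UVs,
    which is how an octahedral impostor picks the pre-rendered view
    to sample for the current camera angle."""
    s = abs(x) + abs(y) + abs(z)
    u, v = x / s, y / s
    if z < 0.0:  # fold the lower hemisphere onto the outer triangles
        u, v = ((1.0 - abs(v)) * (1.0 if u >= 0.0 else -1.0),
                (1.0 - abs(u)) * (1.0 if v >= 0.0 else -1.0))
    return u * 0.5 + 0.5, v * 0.5 + 0.5
```

At runtime the shader typically samples the nearest few views and blends them, so the impostor appears to rotate smoothly with the camera.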

Foliage control systems

Our foliage control system introduces sophisticated configuration settings that allow us to tweak environmental effects like wind. This includes adjustments to wind speed, variation, and frequency, ensuring that the animated elements of the environment are both realistic and performant.

The foliage shader receives the health and age attributes and uses these to create both a natural color variation and a more accurate wind interaction, where for example a dry plant is slightly stiffer and sways less than one that is green.
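A hedged sketch of how health could modulate sway, assuming a simple linear mapping and a made-up `min_factor` floor for fully dry plants; the actual shader math is not published:

```python
def sway_amplitude(base_amplitude, health, min_factor=0.4):
    """Scale wind sway by plant health: a dry plant (health near 0)
    is stiffer and sways less than a green one (health near 1).
    min_factor (assumed value) is the sway fraction at zero health."""
    health = min(max(health, 0.0), 1.0)
    return base_amplitude * (min_factor + (1.0 - min_factor) * health)
```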

Entities are designed to interact realistically with characters in the scene. For example, vegetation dynamically responds to the presence and movement of characters. The system uses a GPU-based approach to handle interactions like collisions with vegetation, and simplified spring physics simulates the effects of characters pushing through environmental elements.
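The simplified spring physics can be sketched as a damped spring acting on a per-plant bend offset: a character push displaces the plant, and the spring pulls it back to rest. This is a minimal CPU illustration of an effect the demo computes on the GPU; parameter values are arbitrary.

```python
def step_spring(offset, velocity, push, stiffness, damping, dt):
    """One semi-implicit Euler step of a damped spring on the bend
    offset. `push` is the external force from a passing character."""
    accel = push - stiffness * offset - damping * velocity
    velocity += accel * dt
    offset += velocity * dt
    return offset, velocity
```

With reasonable stiffness and damping, a displaced plant springs back and settles instead of oscillating forever.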

High-quality environment lighting

Adaptive Probe Volumes

To get high-quality lighting results, a scene will typically require different lighting setups in different areas. Tight spaces with lots of irregular surfaces for light to bounce off, and wide open plains where the lighting is more or less uniform, usually need different approaches.

Unity 6 includes the new Adaptive Probe Volumes feature, which automatically places light probes where they’re most needed.


Image caption: When visualizing the probes, it’s clear that the ones in the trench are much denser than those on the plain, capturing more detailed changes in the lighting.

Scenario Blending

Scenario Blending lets us bake different states of the lighting setups and blend between them.
This can be used to create separate lighting scenarios for different times of the day in the same environment. Changing the sun angle and ambient color is accomplished through real-time lighting, but the baked lighting scenarios now also allow for the indirect lighting to match.

What’s next?

The ability to seamlessly scatter and light massive amounts of natural assets – from intricate close-ups to expansive horizons – is crucial for today’s advanced digital environment creation. The ECS-based approach doesn’t just elevate the quality; it also provides a lot of flexibility in how the data is handled and interpreted, taking into account the performance needs of larger real-time 3D projects.

Download a sample

As promised, we are releasing two Unity projects from the Time Ghost demo on the Unity Asset Store – one with an environment scene, and one with the character.

To see the results of the pipeline described above for yourself, you can download Time Ghost: Environment. It includes a Unity 6 sample scene with one of the environments we created for Time Ghost.

The sample is meant to provide an illustration of the techniques and approaches we used, so that you can consider whether applying similar ideas can be relevant to your own productions.

In addition, you are welcome to use anything you find in this scene in your own projects, including commercial ones. All assets in the scene were created from scratch for this project, without the use of content libraries, to ensure that we have the legal right to release the scene to you.


Love this work, thanks! Quick question, we’ve been using the Unity hair package for a lot of this type of widespread grass in our games. It has built-in support for a lot of this (HLOD generation, physics interaction, streaming, point cloud importing, etc).

I know you all used the hair package elsewhere in the demo – any reason not to use it here? Just too many points?


Hey, just downloaded the demo. I imported the asset into a new, empty HDRP project but was unable to get it running. What is the recommended approach for this?


You didn’t use any of the terrain features like trees, details, etc?



I’m glad this got released, but I was left a bit disappointed.

All the CPU is clogged by the LOD system. What should run on the GPU runs on the CPU and takes pretty much all threads to do so. DOTS tries its best with the massive data that is thrown at it, but overall this should not be a job for DOTS.

On Windows a build reaches 60fps on a 4080 with DX12. The render thread is impressively fast, clocking in at just a bit over 4ms. Whoever made this happen, wow, I am seriously impressed!

But on Linux with Vulkan, the render thread takes 16+ms and it can barely reach 30fps. Vulkan is not a first-class citizen, I know, but this is a bit too much. The DX12 build emulated under Wine runs better! RenderLoop of 3.2ms!!! Yeah, I’m not a fan of excessive exclamation marks, but just re-read this. It’s bonkers.

Unity’s Vulkan implementation is absolutely terrible.


Doesn’t match my experience (on Windows):

dx11 by far the slowest (83ms frame)

dx12 (55ms)

while vulkan (48ms) is actually the fastest

I don’t see the very high RenderLoop timing in Windows/Vulkan either.

In your case I’d have other concerns: why does even Vulkan not reach more than 20fps? These are really low numbers, and I believe you have a 4070?


Hi,

I agree that scattering on the GPU would generally make more sense – the CPU approach is not the most efficient way to deal with large-scale foliage. Apart from LOD selection, the instantiation of the new grass batches also takes considerable time. While the actual instantiation and entity setup can be done off thread, copying the entities to the main world still carries a considerable cost on the main thread.
Also, the decision to bake the point clouds in Houdini will become a problem for a big terrain, since the baked data will be quite big. A procedural approach that scatters at runtime would be more feasible for a big world.

The rationale behind having the scattering on the CPU and relying on DOTS was partly to see how far we could push DOTS itself, rather than to implement a universal environment system within the project. For the cinematic, we don’t have a lot of logic running on the CPU and we are heavily GPU bound. As such, even though the scattering takes a considerable time slice on the CPU, we are still GPU limited, and removing the scattering cost from the CPU wouldn’t really give us more FPS. This is also the reason why the LOD selection (and some other parts of the scattering logic that do their work on the main thread) are naive and could be much more optimized.
In truth, all of the above is the responsibility of the future environment system in Unity, not something users should need to solve on their own in every project.


Correct, we just used the terrain mesh


Interesting idea! Some of the foliage in the cinematic is not grass but rather flowers and bushes where the hair simulation wouldn’t really work. But for grass specifically, this would have been an interesting experiment.

This reminds me of that Multiplayer demo that was released maybe 6 years ago? Really cool tech, but not productized or intended to be imported into an existing project, so you’re probably going to struggle to use any of it in your project unless you’re prepared to spend a lot of time picking it apart and porting it over. If the foliage system were at least integrated into Unity Terrain such that it could be imported as a package and just work, it would be a game changer for a lot of projects. But approaching it with no background or experience with the development of the demo, this looks like something I could spend weeks (or longer) getting to work. I just can’t justify the time for nice-looking grass at the moment.

Also a little disappointing that it didn’t include the other scenes (or a way to switch time of day/environment?) from the video – but I guess that was to keep the size on disk down.


Do you guys from the Unity Demo Team have additional resources on how you generated the point cloud in Houdini with the extra attributes such as scale, orientation, color, age, and health?

I’m interested in using the scattering system in the Time Ghost Environment demo to scatter my own vegetation assets, but to do that I’d have to first generate the point cloud, and I couldn’t find any text or video resources explaining how you created it.
I also use other software to create point clouds, so I’d need to know how this extra info is embedded in the file.

@UnityDemoTeam

I’m currently delving into the environment and I have a question. What’s really the point of using ECS for the vegetation? Wouldn’t it make more sense to prebake the point cloud data into a file for each section, then at runtime deserialize the files for the necessary sections and send those vertex attributes directly to the GPU without involving ECS at all?