By Mikko Alaluusua and Xuan Prada
In September 2024, Unity’s Demo team unveiled a new real-time cinematic demo, Time Ghost. It was created with the Unity 6 engine, which includes a number of features and technological capabilities that, when used together, enabled us to achieve a higher level of visual quality and complexity than ever before.
One of the ambitions we had when we started work on Time Ghost was to raise the bar on how large exterior environments can be built in Unity. This goal involves being able to handle greater complexity, requires the necessary tooling to be in place, and puts higher pressure on the engine’s core graphics, lighting and post-processing related features, rendering, and performance. While we expected the graphics improvements in Unity 6 to apply to most of these areas, tooling remained a challenge in need of a solution.
As game worlds become increasingly complex, efficient tooling to manage placement and rendering of natural assets becomes essential. We needed smart systems that ensured realistic distribution and variation of environmental assets. At the same time, rendering the immense amount of vegetation we had planned required technical solutions that are both scalable and highly performant at runtime.
In this post, we’ll share how we approached the environment work for one of the scenes in Time Ghost, and break down the workflows and tools we used.
Generating scattering data
As a first step, we import the meshes that constitute our environment base into our DCC software – Houdini, in our case. These can be lidar scans, drone scans, or terrains sculpted manually in Unity. In Houdini, we developed scattering analysis tools that automate asset placement based on landscape attributes. These tools help us design coherent and realistic environments with a lot of detail and variation. Using SpeedTree and standard photogrammetry workflows, we create high-quality vegetation prefabs and scatter them onto these meshes. The scattering systems allow for fine-tuning parameters for randomness and density, ensuring a natural spread of vegetation and its variations.
The scattering data is exported from Houdini as a point cache / point cloud, using a slightly modified version of the Point Cache Exporter HDA that Unity offers as part of the VFX Toolbox on GitHub. The tool saves the position of each scattered instance along with its scale, orientation, color, age, and health attributes. The exporter generates one point cache for each model used in the scatter.
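The per-point record and the one-cache-per-model grouping described above can be pictured with the following sketch. This is illustrative Python, not the HDA's actual schema: the field names and the `split_by_model` helper are our assumptions for clarity.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ScatterPoint:
    # One scattered instance: transform plus extra per-instance attributes.
    position: tuple      # (x, y, z) world position
    orientation: tuple   # quaternion (x, y, z, w)
    scale: float
    color: tuple         # (r, g, b)
    age: float           # 0..1, drives color variation
    health: float        # 0..1, drives stiffness/wind response

def split_by_model(points_with_model):
    """Group (model_name, ScatterPoint) pairs into one cache per model,
    mirroring the exporter's one-point-cache-per-model output."""
    caches = defaultdict(list)
    for model, point in points_with_model:
        caches[model].append(point)
    return dict(caches)
```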
Unity integration
The use of DOTS (Data-Oriented Technology Stack) and ECS (Entity Component System) allows us to create scenes with hundreds of thousands of entities without significant performance degradation. This is achieved through careful management of entity instantiation and resource allocation.
Each Houdini-exported point cloud asset is, at its core, just a collection of positions, orientations, scales, and potentially some extra data (age, health, color, etc.). We gather all the Houdini-exported point clouds into a resource called PointCloudFromHoudiniAsset, which reads the source point clouds, finds the prefab to use with each point cloud via a naming convention, and creates an internal representation of the data. This data is stored in the PointCloudFromHoudiniAsset and is then used by the baking process to spatially partition the points into tiles for faster streaming.
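A naming-convention lookup like the one above could be sketched as follows. The actual convention used by the demo is not documented here; the `_pcache` suffix rule in this Python sketch is purely an assumption for illustration.

```python
import re

def prefab_for_cache(cache_name, prefab_names):
    """Resolve a point cache to its prefab by naming convention.
    Assumption (not the demo's actual rule): the cache shares the
    prefab's base name, with a '_pcache' suffix appended."""
    base = re.sub(r"_pcache$", "", cache_name)
    return base if base in prefab_names else None
```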
The next step is to add an authoring component into an ECS subscene. The authoring component is called ScatterPointCloudAuthoring, which takes as input a PointCloudFromHoudiniAsset and some parameters that control when to load and unload the data, and how to subdivide the point cloud data.
In order to efficiently stream the data in and out, we subdivide the point cloud data into scene sections, which can be loaded and unloaded individually based on the distance to the viewer. The section size is controlled by the ScatterSceneSectionSize property in the authoring component.
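The partitioning and distance-based loading can be sketched like this. The grid mapping and the center-distance test are illustrative assumptions; the real system runs inside Unity's ECS scene-section streaming, not in Python.

```python
import math

def section_key(position, section_size):
    """Map a world-space point to its scene-section grid cell
    (an illustrative stand-in for ScatterSceneSectionSize partitioning)."""
    x, _, z = position
    return (math.floor(x / section_size), math.floor(z / section_size))

def sections_to_load(viewer_pos, section_keys, section_size, load_distance):
    """Return the sections whose centers lie within load_distance of the
    viewer; everything else would be unloaded."""
    vx, _, vz = viewer_pos
    loaded = set()
    for ix, iz in section_keys:
        cx = (ix + 0.5) * section_size
        cz = (iz + 0.5) * section_size
        if math.hypot(cx - vx, cz - vz) <= load_distance:
            loaded.add((ix, iz))
    return loaded
```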
However, these scene sections are fairly large, and trying to instantiate everything in one go would cause a considerable CPU spike. One scene section can contain hundreds of thousands of points, so these sections are further subdivided into smaller tiles called scatter tiles (controlled by the ScatterTileSize property in the authoring component), and the instantiation logic chooses which scatter tiles to instantiate next, and which to unload, based on loosely defined importance rules.
Tile impostors
Even with the ability to stream instances efficiently in and out, we are still left with a huge number of instances to render. Many of these instances are fairly small, e.g., batches of grass that cover a small portion of the screen when moving further away from them. So, we also bake tile impostors out of some of the scattered instances. A tile impostor covers a certain area and mimics the look of the scattered instances in that area.
The data is sourced from the point cloud asset directly, because we want to be in control of what type of vegetation is included in the tiles. In our case, we are interested in grass assets only, and omit any trees that might be in the same location.
A tile impostor generator renders all the instances belonging to a tile from above, producing low-resolution textures containing approximate color, normal, and depth information per tile. On top of this, a number of the most important foliage types are selected and rendered into an atlas from the side and from above. This atlas is shared by all the tile impostors and is used to produce the detail for the low-resolution per-tile texture data. The generator also creates a mesh, a collection of quads representing the tile; at runtime, the quads are oriented toward the viewer.
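The quad-mesh part of the generator can be pictured with the minimal sketch below. The regular grid layout, the flat heights, and the `quads_per_side` parameter are all assumptions; in the real generator, heights would come from the baked depth texture, and the billboarding toward the viewer happens at runtime.

```python
def tile_quad_centers(tile_origin, tile_size, quads_per_side):
    """Generate quad centers for one tile impostor: a regular grid across
    the tile footprint. Heights are flat here for illustration; the real
    generator would sample the baked per-tile depth texture."""
    ox, oz = tile_origin
    step = tile_size / quads_per_side
    centers = []
    for i in range(quads_per_side):
        for j in range(quads_per_side):
            centers.append((ox + (i + 0.5) * step, 0.0, oz + (j + 0.5) * step))
    return centers
```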
During runtime, we project both the per-tile low-resolution texture information and the more detailed but generic foliage atlas entries to the tile impostor mesh, producing an approximate look of the tile.
As the camera moves further from an area, the individual instances first switch to lower LOD levels and are finally replaced by the tile impostors. The tile impostors themselves have more than one LOD level, with the quad distribution becoming sparser as the camera moves further away.
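The distance-driven hand-off from detail LODs to tile impostors can be sketched as a simple threshold chain. The thresholds and the string labels are illustrative; Unity's LOD selection is actually driven by screen-space size, not raw distance.

```python
def pick_representation(distance, lod_ranges, impostor_start):
    """Choose how to draw an instance at a given camera distance:
    full-detail LODs first (lod_ranges lists each LOD's max distance,
    ascending), then the tile impostor beyond impostor_start. All
    thresholds here are illustrative assumptions."""
    if distance >= impostor_start:
        return "tile_impostor"
    for lod, max_dist in enumerate(lod_ranges):
        if distance < max_dist:
            return f"lod{lod}"
    return "tile_impostor"
```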
Impostor system for large-scale efficiency
For very distant objects in the scene, we use octahedral impostors. This method displays simplified versions of objects that are far from the camera, enabling us to balance visual fidelity and performance. We created a simple tool to generate and integrate the impostors directly within Unity. It simplifies the workflow for artists, giving them efficient ways to maintain high visual standards while optimizing performance.
Foliage control systems
Our foliage control system introduces sophisticated configuration settings that allow us to tweak environmental effects like wind. This includes adjustments to wind speed, variation, and frequency, ensuring that the animated elements of the environment are both realistic and performant.
The foliage shader receives the health and age attributes and uses them to create both natural color variation and more accurate wind interaction: for example, a dry plant is slightly stiffer and sways less than a green one.
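The health-driven stiffness could be modeled as below. The linear mapping and the 0.6 stiffness factor are invented for illustration; the actual shader's response curve is not specified in the post.

```python
def sway_amplitude(base_amplitude, health, dryness_stiffness=0.6):
    """Scale wind sway by plant health: a dry (low-health) plant is
    stiffer and sways less than a fully green one. The linear mapping
    and the 0.6 factor are illustrative assumptions, not the shader's
    actual curve."""
    stiffness = 1.0 - dryness_stiffness * (1.0 - health)
    return base_amplitude * stiffness
```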
Entities are designed to interact realistically with characters in the scene: for example, vegetation dynamically responds to a character's presence and movement. The system uses a GPU-based approach to handle these interactions, with simplified spring physics simulating the effect of characters pushing through environmental elements.
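One step of such a simplified spring might look like the sketch below: a character applies a push, and the damped spring lets the plant bend away and settle back. The constants and the semi-implicit Euler integration are assumptions on our part; the demo runs its version on the GPU in shader code.

```python
def step_spring(offset, velocity, push, stiffness, damping, dt):
    """One semi-implicit Euler step of a simplified damped spring that
    bends vegetation away from a character and lets it settle back.
    Constants and integration scheme are illustrative assumptions."""
    accel = push - stiffness * offset - damping * velocity
    velocity += accel * dt          # integrate velocity first...
    offset += velocity * dt         # ...then position (semi-implicit)
    return offset, velocity
```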
High-quality environment lighting
Adaptive Probe Volumes
To get high-quality lighting results, a scene will typically require different lighting setups in different areas. Tight spaces with lots of irregular surfaces for light to bounce off, and wide open plains where the lighting is more or less uniform, usually need different approaches.
Unity 6 includes the new Adaptive Probe Volumes feature, which automatically places light probes where they’re most needed.
Image caption: When visualizing the probes, it’s clear that the ones in the trench are much denser than those on the plain, capturing more detailed changes in the lighting.
Scenario Blending
Scenario Blending lets us bake different states of the lighting setups and blend between them.
This can be used to create separate lighting scenarios for different times of day in the same environment. Changing the sun angle and ambient color is handled by real-time lighting, and the baked lighting scenarios now allow the indirect lighting to match as well.
What’s next?
The ability to seamlessly scatter and light massive amounts of natural assets – from intricate close-ups to expansive horizons – is crucial for today’s advanced digital environment creation. The ECS-based approach doesn’t just elevate the quality; it also provides a lot of flexibility in how the data is handled and interpreted, taking into account the performance needs of larger real-time 3D projects.
Download a sample
As promised, we are releasing two Unity projects from the Time Ghost demo on the Unity Asset Store – one with an environment scene, and one with the character.
To see the results of the pipeline described above for yourself, you can download Time Ghost: Environment. It includes a Unity 6 sample scene with one of the environments we created for Time Ghost.
The sample is meant to illustrate the techniques and approaches we used, so that you can consider whether similar ideas are relevant to your own productions.
In addition, you are welcome to use anything you find in this scene in your own projects, including commercial ones. All assets in the scene were created from scratch for this project, without using content libraries, to ensure that we have the legal right to release the scene to you.