I know that a lot of people who have used them for some time will probably say they’re easy, but let’s be real here. This should be a built-in feature that is ready to use at the start of any project, regardless of render pipeline.
This is a basic, rudimentary tool that has been around since the GoldSrc engine in 1998.
So why does it require special steps to get working?
Actually they’re not “basic”.
GoldSrc used BSP-based levels built out of brushes, which simplified decal creation, because decals were originally polygonal particle-like objects cut out of level geometry. Basically, when a bullet hit the level, you needed to find the faces it hit, cut them into the shape of the decal, offset them from the underlying geometry, then apply the texture and add them to the list of tracked decals. They also did not really participate in lighting.
This worked fairly well in low-poly levels, but started causing issues with high-poly meshes, because you needed to consider what to do if you hit edges, if you hit walls at an angle, and so on. Decal glitches could be seen for a long time; they were common in Postal 2, for example.
Modern decal projectors try to deal with decals without cutting polygons, but that introduces other issues instead.
Really???
I had no idea it changed the level geometry.
Original decals were geometry. They did not CHANGE level geometry, but they were based off level geometry.
Meaning that to make a splash on a wall corner, you’d need to access the level geometry itself, find the affected area, duplicate it, then cut it to match the texture size. It was a pain.
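For illustration, here is a rough sketch of that cutting step in Python (my own simplification, not actual GoldSrc code): clip each hit triangle against the decal's bounding box using Sutherland-Hodgman clipping, and the decal UVs fall straight out of the position inside that box.

```python
# Sketch of geometry-based decal generation: clip a hit triangle
# against an axis-aligned decal box in decal space. Illustrative only.

def clip_poly(poly, normal, dist):
    """Sutherland-Hodgman clip: keep the side where dot(normal, p) <= dist."""
    out = []
    for i, a in enumerate(poly):
        b = poly[(i + 1) % len(poly)]
        da = sum(n * c for n, c in zip(normal, a)) - dist
        db = sum(n * c for n, c in zip(normal, b)) - dist
        if da <= 0:                      # a is on the kept side
            out.append(a)
        if (da <= 0) != (db <= 0):       # edge crosses the clip plane
            t = da / (da - db)
            out.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
    return out

def clip_to_decal_box(triangle, half_size=1.0):
    """Clip one triangle (in decal space) to the decal's bounding box."""
    planes = [(( 1, 0, 0), half_size), ((-1, 0, 0), half_size),
              (( 0, 1, 0), half_size), (( 0, -1, 0), half_size),
              (( 0, 0, 1), half_size), (( 0, 0, -1), half_size)]
    poly = list(triangle)
    for normal, dist in planes:
        poly = clip_poly(poly, normal, dist)
        if not poly:
            break                        # triangle entirely outside the decal
    return poly

def decal_uv(p, half_size=1.0):
    """Decal-space position maps directly to texture coordinates."""
    return ((p[0] + half_size) / (2 * half_size),
            (p[1] + half_size) / (2 * half_size))
```

In the engine, the clipped polygons would then also be offset slightly along the surface normal to avoid z-fighting with the underlying geometry, which is exactly where many of the edge and angle glitches come from.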
I actually wrote this sort of system before:
https://discussions.unity.com/t/651258/36
At some point someone got the idea to store decals in some sort of buffer instead of making them geometry, and this is the path both Unity and Unreal are currently following.
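The buffer-based idea can be sketched like this (a pure-Python illustration; the function names are mine, not any engine's API): for each shaded pixel, transform its world position into decal space, and if it lands inside the decal's box, blend the decal's albedo over the surface.

```python
# Sketch of projector/buffer-style decals: no geometry is cut; the decal
# is just a box, evaluated per pixel at shading time. Illustrative only.

def apply_decal(pixel_world_pos, surface_albedo, world_to_decal,
                sample_decal, blend=1.0):
    """Blend a decal over a pixel's albedo if the pixel is inside the box.

    world_to_decal: transforms a world position into decal space,
                    where the decal occupies [-0.5, 0.5]^3.
    sample_decal:   returns ((r, g, b), alpha) for a (u, v) coordinate.
    """
    x, y, z = world_to_decal(pixel_world_pos)
    if not (abs(x) <= 0.5 and abs(y) <= 0.5 and abs(z) <= 0.5):
        return surface_albedo            # pixel unaffected by this decal
    u, v = x + 0.5, y + 0.5              # decal-space position gives UVs
    decal_rgb, decal_alpha = sample_decal(u, v)
    a = decal_alpha * blend
    # Standard alpha blend of decal color over the surface color.
    return tuple(d * a + s * (1 - a) for d, s in zip(decal_rgb, surface_albedo))
```

This is why no level geometry is touched at all, and also why the other issues appear instead: the projection happens blindly per pixel, so decals can smear across unrelated surfaces behind the box unless you also check normals or depth.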
Because it’s notoriously hard to implement for all types of games without visual glitches.
Questions arise, such as:
- Do decals have to be static or dynamic?
- What rendering pipeline is being used?
- What is the target platform?
- Is your game low-poly or high-poly?
- Will quads be enough to render them without artifacts?
- Should they follow the shape of meshes?
- Should they be drawn on top, or intersect geometry?
- Should lighting affect them?
- How many decals should be supported simultaneously?
- Should they be mesh-based or texture-based?
… and so on
Then there’s the platform scalability issue. Mobile devices tend to run compute shaders poorly.
(E.g. for DBuffer / Deferred rendering)
If you look at the Built-in pipeline, it actually has a “decal” feature built in. It’s called “Projector”.
Problem is, it pretty much re-draws any object affected by N decals multiple times, which makes it an unscalable solution.
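A quick back-of-envelope (my numbers, purely illustrative) shows why that multiplies:

```python
# Each projector re-renders every mesh it touches, so the extra draw
# calls grow multiplicatively with decals, instead of staying at a
# roughly flat cost like a single DBuffer pass. Simplified model.
def projector_extra_draws(decals_per_object, affected_objects):
    return decals_per_object * affected_objects

# e.g. 50 bullet holes spread over 20 wall meshes:
# projector_extra_draws(50, 20) -> 1000 extra draws
```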
Then there’s URP, which uses a DBuffer approach (I think); it’s better, but it’s still a no-go for mobile.
HDRP is the best of the pipelines in this regard: DBuffer, plus it scales (decently) with the number of decals.
Mainly because it targets better hardware.
Technically, if all devices could run deferred rendering just fine, the DBuffer approach could be used everywhere. Though even then, there are some cases that it does not cover well.