The GitHub page leads to a version that is only compatible with the Standard Render Pipeline, and it would be quite cool if the same result could be achieved in HDRP.
The logic behind this is probably quite straight-forward. Render the Scene from the Perspective of the Enemy.
Then have a post processing effect, which checks the depth of each point on screen and overlaps it with the depth textures of the Enemy.
So for each pixel on the player camera screen →
get depth → get world position via camera.localToWorld → transform the point to the enemy's local space (camera.worldToLocal) → get the enemyCamera depth at that screen position → compare those two points; if the PlayerCam point is further away than the EnemyCam point, do not render it.
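The comparison above can be sketched on the CPU for a single point. This is only an illustration, not the actual effect: every name in it (EnemyCanSee, enemyDepth, the bias value) is made up, and it assumes the enemy depth map has already been copied into a readable array of linear eye-space depths.

```csharp
using UnityEngine;

public static class VisibilityTest
{
    // Hypothetical: "enemyDepth" is assumed to hold linear eye-space depth
    // per pixel, copied back from the enemy camera's depth texture.
    public static bool EnemyCanSee(Camera enemyCam, float[,] enemyDepth, Vector3 worldPoint)
    {
        // Project the point into the enemy camera's viewport
        // (x, y in 0-1, z = distance along the camera forward axis).
        Vector3 vp = enemyCam.WorldToViewportPoint(worldPoint);

        // Outside the frustum or behind the camera: not visible.
        if (vp.z < 0f || vp.x < 0f || vp.x > 1f || vp.y < 0f || vp.y > 1f)
            return false;

        int w = enemyDepth.GetLength(0);
        int h = enemyDepth.GetLength(1);
        int px = Mathf.Clamp((int)(vp.x * w), 0, w - 1);
        int py = Mathf.Clamp((int)(vp.y * h), 0, h - 1);

        // If the depth map records something closer than the point, the point
        // is occluded. The bias is the usual shadow-mapping fudge factor
        // against self-occlusion artifacts.
        const float bias = 0.05f;
        return vp.z <= enemyDepth[px, py] + bias;
    }
}
```

The shader version does the same per pixel, just with the matrices passed in as uniforms instead of calling WorldToViewportPoint.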
BUT, I have no idea how I can get only the depth texture from a camera. Rendering everything seems very performance-heavy (even when disabling almost everything in "CustomFrameSettings" on the camera and adjusting the clipping planes drastically: 40 m = enemy view distance = far clipping plane).
I tried adjusting a spotlight heavily but it didn’t work out. There is always some kind of falloff and walls are lit too, I only want “light” (color) on floors.
Any idea how I could pull off some kind of custom shadow mapping to achieve the effect?
Performance is key here, and I have no idea how I could achieve any good results on the CPU, since the world is 3D with steps, ramps and so on, so I can't just raycast on one plane and then generate a mesh.
Edit:
A Custom Pass allows rendering the depth only, as can be seen in the CustomPassExamples DepthCapture scene, but in terms of performance this still seems pretty horrible…
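For reference, a minimal sketch of such a depth-only custom pass. This is an assumption-heavy outline, not the DepthCapture example itself: the class and field names are invented, and the exact CustomPassUtils.RenderDepthFromCamera signature differs between HDRP versions, so check the API docs of your version.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;
using UnityEngine.Experimental.Rendering;

// Hypothetical depth-only pass: renders scene depth from the enemy camera
// into a small dedicated target instead of doing a full camera render.
class EnemyDepthPass : CustomPass
{
    public Camera enemyCam;             // the enemy's "eye" camera
    public LayerMask obstacleMask = ~0; // which layers can block the view
    RTHandle _depth;

    protected override void Setup(ScriptableRenderContext renderContext, CommandBuffer cmd)
    {
        // Low resolution on purpose; the view cone doesn't need full-res depth.
        _depth = RTHandles.Alloc(512, 512,
            depthBufferBits: DepthBits.Depth16,
            colorFormat: GraphicsFormat.R16_UNorm,
            name: "EnemyDepth");
    }

    protected override void Execute(CustomPassContext ctx)
    {
        if (enemyCam == null) return;
        // Depth-only render from another point of view; no lighting, no color.
        CustomPassUtils.RenderDepthFromCamera(ctx, enemyCam, _depth,
            ClearFlag.All, obstacleMask);
    }

    protected override void Cleanup() => _depth?.Release();
}
```

The resulting RTHandle could then be bound to the cone material with material.SetTexture.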
Maybe you could approach this as a 2D problem, if you just care about the floor? You could generate a 2D mask for the shapes that make up the obstacles (pillars etc.) and then use that mask to cast shadows. Perform the shadow rendering in a compute shader on the GPU side, so that you end up with a shadow texture which you could then somehow map back to your game world. With this kind of texture approach you could just project the shadows back vertically without caring about elevation. Of course this too can create many sorts of artifacts. I know this is a somewhat high-level, sketch-like idea, but I did some tests a few years ago when I tried to make a 2D lighting system without using any geometry, only textures.
Here's one screenshot of my old test. It renders 1-16 light sources with a compute shader and generates the texture shadows from a mask it is given (the "environment" you see in my screenshot). In this shot there are just two, the green and blue ones.
If my game had a flat floor, this would most likely work.
With a flat floor, I could just do raycasting and mesh generation using the Job System, only recalculating when needed; that would probably satisfy my performance requirements.
Unfortunately that’s not the case, we have many different heights, connected with stairs, ramps and elevators.
And the player should know when there is a blind spot right below an enemy he can use to hide.
I'd be interested to know a bit more about this. The depth capture example only renders the objects in your scene from another point of view into the depth buffer and nothing more, so you should get the same performance as the depth prepass of HDRP, or as adding a shadow-casting spot light to your scene.
I downloaded the newest version of it just now and the frame drop is only 10 fps (not tested in a build), which seems fine.
The old one had a frame drop of ~35 fps. Without the custom pass and the special two cameras, just adding a normal camera to the given example scene, I had ~70 fps; switching back, it dropped to ~30 fps.
Guess I’ll just try it, but this seems quite above my skill level.
Hurdle Nr1: Accessing scene normals from within the decal shader (or any shader).
Something like the SceneDepth node, just for normals (a "SceneNormals" node), doesn't exist. I know the camera renders a normal texture (which is needed for AO and contact shadows), but I have absolutely no idea how I could get it from within a Shader Graph.
Use case: only drawing on the floor, not on walls (the pillar).
Hurdle Nr2: The “custom shadow mapping”.
Transforming the depth texture values into something that can be compared to screen depth, which is what brought me here in the first place.
I have the texture, rendered from the view of the enemy, and it is linked to the shader. I know that it probably has something to do with the transform matrix of the enemy camera, which I can pass to the shader via code. But I have no idea how to convert a pixel in the depth texture to a 3D point / Vector3.
And how to go about it so I get the correct point, because I am viewing the object (the view cone mesh, just a decal projector cube) from the position of the player.
So for each pixel the player sees, I have to do some transforming to see if the enemy can see the same point in space (Vector3).
Use case: not drawing the view cone where the enemy can’t see (because his view is blocked by an object)
Ok, after I had no luck getting the normal vector, I took on "Hurdle Nr2" first.
For each pixel, getting the world position, then using the VP (View Projection) Matrix to get screen-coordinates and depth.
I am looking through the eyes of the enemy in this picture, setting a gradient for X values (Y looks the same, just vertical).
Green should be a very small stripe at the very left of the screen, purple too, on the right side of the screen.
Turns out, the projection matrix is not working. I can replace it with the identity matrix and the result is exactly the same.
Here is my code for setting the matrix:
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

[ExecuteAlways]
public class SetMatrix : MonoBehaviour
{
    [SerializeField] DecalProjector _renderer;
    [SerializeField] Camera _cam;

    // will be set to true when disabled in the inspector
    // used to see if the code is executing
    [SerializeField] bool _checkActive;

    // expose matrices in inspector
    [SerializeField] Matrix4x4 _currentMatrix;
    [SerializeField] Matrix4x4 V;
    [SerializeField] Matrix4x4 P;
    [SerializeField] Matrix4x4 VP;

    void Update()
    {
        if (!_renderer || !_cam) return;

        V = _cam.worldToCameraMatrix;
        P = _cam.projectionMatrix;
        P = GL.GetGPUProjectionMatrix(P, false);
        //Matrix4x4 P = Matrix4x4.identity;
        VP = P * V; // view projection matrix
        _currentMatrix = VP;

        _renderer.material.SetMatrix("_EnemyCamViewMatrix", V);
        _renderer.material.SetMatrix("_EnemyCamProjectionMatrix", P);

        _checkActive = true;
    }
}
As I couldn't really debug what's happening in the shader code, I wrote the entire thing, with all matrix multiplications, in C# and got it working within a few hours, including some debug drawing.
Now all I want to do is the exact same thing in shader code, but MultiplyPoint() does not exist there. Its documentation says: "MultiplyPoint is slower, but can handle projective transformations as well."
It seems like Matrix4x4 * Vector4 does not return the same results when dealing with perspective matrices, so in shader code the perspective matrix multiplication does not work on its own.
The missing piece: dividing by "w".
If you multiply by the VP matrix (from MVP; the model matrix is not needed, as the position is already in world space), you have to divide the result by "w", or in Shader Graph by the "A" component.
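As a minimal C# illustration (the class and function names here are mine), this is all that MultiplyPoint adds over a plain matrix-vector multiply, and the same division is the step the shader needs:

```csharp
using UnityEngine;

public static class ProjectionMath
{
    // Hypothetical helper: the equivalent of Matrix4x4.MultiplyPoint for a
    // view-projection matrix. Multiply as a Vector4 with w = 1, then divide
    // by the resulting w. In Shader Graph, that last division is the
    // "divide by A" step; without it, perspective projection comes out wrong.
    public static Vector3 WorldToNdc(Matrix4x4 vp, Vector3 worldPos)
    {
        Vector4 clip = vp * new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);
        return new Vector3(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w);
    }
}
```

For an orthographic matrix w stays 1, which is why the missing division only shows up once a perspective projection is involved.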
Now the only thing left is excluding the vertical faces - so I need world normals for each screen position.
And then checking the performance when 5-10 of these vision cones are visible at the same time.
If your colliders are set up correctly, you can just use raycasts:
1) The first raycast is just ScreenPointToRay() to get the world point under the mouse.
2) Then search through all enemies and find those who could theoretically see the point (within radius, and Vector3.Angle(transform.forward, floorHitPoint - eyeTransform.position) < viewAngle).
3) Then, for all those enemies, just cast a ray from the hit point towards their eyes, or vice versa.
If your colliders are not set up correctly, you still do everything up to point 2). But then (I don't know if that's possible) you could recreate the inverse projection matrix from the shader (without the division, as Unity handles that for you, so just Matrix4x4.MultiplyPoint, if I remember correctly) and then check the pixel in the depth texture; the red channel should tell you how far away the enemy can see there, in 0-1, where 1 is the camera near plane and 0 is the far plane, I think.
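The collider-based version of those steps could look roughly like this; a sketch under the assumption that enemy eye transforms are tracked in a list, with all names (BlindSpotProbe, enemyEyes, viewAngle) invented for illustration:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BlindSpotProbe : MonoBehaviour
{
    [SerializeField] Camera playerCam;
    [SerializeField] List<Transform> enemyEyes; // hypothetical registry of enemy eye transforms
    [SerializeField] float viewAngle = 45f;     // half-angle of the vision cone
    [SerializeField] float viewDistance = 40f;

    // Returns true if any enemy has line of sight to the point under the mouse.
    public bool PointUnderMouseIsSeen()
    {
        // 1) ScreenPointToRay to get the world point under the mouse.
        Ray mouseRay = playerCam.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(mouseRay, out RaycastHit floorHit)) return false;

        foreach (Transform eye in enemyEyes)
        {
            Vector3 toPoint = floorHit.point - eye.position;

            // 2) Cheap cone test first: distance, then angle.
            if (toPoint.magnitude > viewDistance) continue;
            if (Vector3.Angle(eye.forward, toPoint) > viewAngle) continue;

            // 3) Line of sight: cast from the eye towards the point and check
            // that nothing closer blocks it (small epsilon for the surface itself).
            if (Physics.Raycast(eye.position, toPoint.normalized, out RaycastHit hit, viewDistance)
                && hit.distance >= toPoint.magnitude - 0.05f)
                return true;
        }
        return false;
    }
}
```

Doing the distance check before the angle check is deliberate: magnitude is cheaper than Vector3.Angle, so most enemies are rejected early.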
That’s quite an interesting topic.
Awesome work guys:)
I'm also playing around with the above-mentioned package, and now I am trying to draw an outline around the view cones. But I couldn't work it out so far.
My approaches:
Tried to draw a second cone (slightly bigger than the original) and put it "under" the original cone, so that only the outlines of the bigger cone are visible, which look like the outlines of the original cone.
→ The problems with this solution: I can't scale the whole cone up congruently, and the resulting cone starts from the same point as the original.
I tried to implement an edge detection algorithm (like a Sobel filter): I "printed" the resulting image of the LOS mask to a RenderTexture and then tried to calculate the edges of the triangle on the RenderTexture with the Sobel filter.
I also thought about using some edge detection math to calculate the points on the edges based on their vector directions, but I don't know exactly which variables of the LOS Mask shader to use for this calculation.
Has anyone here tried something similar and made it work?
Do you want an outline around objects inside the cone or just a border around the cone?
The border should be quite simple, just add the math for it to the view-cone-shader (and a color field of course).
To draw an outline around objects, you should probably do this from C# and not on the GPU. Someone above asked how to detect objects inside the view cone, just use this, then apply the outline.
Or do you need something different? Then please elaborate (a picture would help a lot in this case)
This would be the desired outcome, so I "just" need a border around the cone.
But here is where I have trouble: I don't understand the mathematical operations for the outline calculation in this case.
Shader "Hidden/Line Of Sight Mask"
{
    CGINCLUDE
    #include "/LOSInclude.cginc"

    // Samplers
    uniform sampler2D _SourceDepthTex;
    uniform sampler2D _CameraDepthNormalsTexture;

    // For fast world space reconstruction
    uniform float4x4 _FrustumRays;
    uniform float4x4 _FrustumOrigins;
    uniform float4x4 _SourceWorldProj;
    uniform float4x4 _WorldToCameraMatrix;
    uniform float4 _SourceInfo; // xyz = source position, w = source far plane
    uniform float4 _ColorMask;
    uniform float4 _Settings; // x = distance fade, y = edge fade, z = min variance, w = backface fade
    uniform float4 _Flags; // x = clamp out of bound pixels, y = include / exclude out of bound pixels, z = invert mask, w = exclude backfaces
    uniform float4 _MainTex_TexelSize;

    v2f_img_ray Vert(appdata_img v)
    {
        v2f_img_ray o;
        int index = v.vertex.z;
        v.vertex.z = 0.0f;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.uv = v.texcoord.xy;

        #if UNITY_UV_STARTS_AT_TOP
        if (_MainTex_TexelSize.y < 0)
            o.uv.y = 1 - o.uv.y;
        #endif

        o.interpolatedRay = _FrustumRays[index];
        o.interpolatedRay.w = index;
        o.interpolatedOrigin = _FrustumOrigins[index];
        o.interpolatedOrigin.w = index;
        return o;
    }

    float CalculateBackfaceFade(float4 pixelWorldPos, float3 pixelViewNormals)
    {
        float3 directionWorld = normalize(pixelWorldPos - _SourceInfo.xyz);
        float3 directionView = mul((float3x3)_WorldToCameraMatrix, directionWorld);
        float backfaceFade = dot(directionView, pixelViewNormals);
        backfaceFade = smoothstep(0, -_Settings.w, backfaceFade);
        return backfaceFade;
    }

    float CalculateVisibility(float4 pixelWorldPos, float3 pixelViewNormals)
    {
        // Calculate distance to source in range [0 - far plane]
        float sourceDistance = distance(pixelWorldPos.xyz, _SourceInfo.xyz);

        // Convert world space to LOS cam depth texture UV's
        float4 sourcePos = mul(_SourceWorldProj, pixelWorldPos);
        float3 sourceNDC = sourcePos.xyz / sourcePos.w;

        // Clip pixels outside of source
        clip(max(min(sourcePos.w, 1 - abs(sourceNDC.x)), _Flags.z - 0.5));

        // Convert from NDC to UV
        float2 sourceUV = sourceNDC.xy;
        sourceUV *= 0.5f;
        sourceUV += 0.5f;

        // VSM
        float2 moments = tex2D(_SourceDepthTex, sourceUV).rg;
        float visible = ChebyshevUpperBound(moments, sourceDistance, _Settings.z);

        // Backface Fade
        float backfaceFade = CalculateBackfaceFade(pixelWorldPos, pixelViewNormals);
        visible *= lerp(1, backfaceFade, _Flags.w);

        // Handle vertical out of bound pixels
        visible += _Flags.x * _Flags.y * (1 - step(abs(sourceNDC.y), 1.0));
        visible = saturate(visible);

        // Ignore pixels behind source
        visible *= step(-sourcePos.w, 0);

        // Calculate fading
        float edgeFade = CalculateFade(abs(sourceNDC.x), _Settings.y);
        float distanceFade = CalculateFade(sourceDistance / _SourceInfo.w, _Settings.x);

        // Apply fading
        visible *= distanceFade;
        visible *= edgeFade;

        return visible;
    }

    float4 GenerateMask(float visible)
    {
        // Invert visibility if needed
        if (_Flags.z > 0.0)
        {
            visible = 1 - visible;
        }

        // Apply mask color
        float4 mainColor = visible * _ColorMask;
        return mainColor;
    }

    half4 Frag(v2f_img_ray i) : COLOR
    {
        float4 normalDepth = SampleAndDecodeDepthNormal(_CameraDepthNormalsTexture, i.uv);
        float4 positionWorld = DepthToWorldPosition(normalDepth.w, i.interpolatedRay, i.interpolatedOrigin);
        float visible = CalculateVisibility(positionWorld, normalDepth.xyz);
        return GenerateMask(visible);
    }
    ENDCG

    SubShader
    {
        Pass
        {
            ZTest Always
            ZWrite Off
            Cull Off
            Blend One One
            Fog { Mode off }

            CGPROGRAM
            #pragma vertex Vert
            #pragma fragment Frag
            #pragma fragmentoption ARB_precision_hint_nicest
            #pragma exclude_renderers flash
            #pragma target 3.0
            ENDCG
        }
    }
    Fallback off
}
This is how the cone is drawn in the mentioned project.
Would it be possible to take the resulting value of GenerateMask(visible) and calculate the outline from there? If yes, what would be the best approach to solve this?
Sorry if these questions seem dumb, but I am really struggling with this.
You are generating the cone based on the camera; in my original shader (made in Shader Graph), IIRC I had to set an "angle" value that had to match the camera cone.
I don't know if there is a better way to do this, and probably there are quite a few, but I would write the view-space position into the vertices, so you have values from 0-1 there, representing the position on your enemy camera. Draw an outline where this value is 0-0.1 or 0.9-1.
Then the only remaining thing is the arc at the end of your cone, and I have no idea how you calculate it. I don't see any "discard(vecMagnitude > 50)", so I don't understand how the arc is even created. But there I would do the same, just drawing pixels where the value is 50-50.1.
Everything else seems way more complicated: getting neighbour pixels (which isn't really possible, or at least very expensive), doing it in post processing (which would probably also draw an outline behind your enemies or anything else blocking the cone), or drawing it based on stencil masks. All of that seems way more complicated than passing one more float (or two, if the arc is based on the camera pitch angle; then it should be the view-space Y coordinate) to your vertices.
I’m not a shader expert at all, I’m just messing around from time to time. But yea, that’s how I would approach it.
If one had to add two layers of cones (far and near zones), would we have to use two CCTVs and then figure out the player’s position relative to these two cameras OR is there a better way like using a circular texture of two layers? Also, is there a simple way to shade the shadow in a different color or does it require complex knowledge of shaders?
Should be doable in the (decal) shader. You already have the world position of every pixel, so all you need to do is add if (distance(pixelPos, camPos) < farDistance) and then use style 1 or style 2.
That’s what shaders are all about, right? Defining the shading (reaction to light, including shadows) of surfaces.
I think I've used an Unlit Decal Shader. Unlit shaders don't receive any light information AFAIK, so you'd have to switch the shader type (just create a new one of the correct type and copy/paste all nodes) and then react to shadows and lights however you want.
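The C# side of the two-zone idea could be as simple as pushing the two distances to the decal material. A sketch, where the property names (_NearZoneDistance, _FarZoneDistance) are invented and must match whatever properties the Shader Graph actually exposes:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

[ExecuteAlways]
public class ConeZones : MonoBehaviour
{
    [SerializeField] DecalProjector _projector;
    [SerializeField] float _nearDistance = 10f;
    [SerializeField] float _farDistance = 40f;

    void Update()
    {
        if (!_projector || !_projector.material) return;
        // The shader then compares distance(pixelWorldPos, camPos) against
        // these and picks the near or far zone style per pixel.
        _projector.material.SetFloat("_NearZoneDistance", _nearDistance);
        _projector.material.SetFloat("_FarZoneDistance", _farDistance);
    }
}
```

With this, one camera and one depth texture are enough for both zones; only the shading differs per pixel.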
Would this be possible on URP? Awesome work btw, very interesting. You don’t know how hard it was to find this, I can’t find anything else that comes close but it’s exactly what I need.
As URP supports decals, it should be possible on URP, yes. Back in May 2021 this wasn't the case. Now the main difference between HDRP and URP should be the component that gets the depth texture from the enemy's view. Everything else should be much the same.