Hello everyone. Can someone help me with writing a simple fog-of-war shader? I have an array of integers describing which teams can see each cell (visionArray[x + y * width] is the vision of the teams at (x, y); 0 - no team, 1 - first team, 2 - second team, 3 - both teams). So I need to use that array as a StructuredBuffer in my shader, but for some reason it doesn't work…
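On the shader side the idea is roughly this (a simplified sketch, not my full shader):

// Filled from C# (e.g. via material.SetBuffer); each element is a bit mask of teams:
// 0 - no team, 1 - first team, 2 - second team, 3 - both teams.
StructuredBuffer<int> _VisionMap;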
Here is my code for SettingBuffer:
Why do you have 249999 instead of 250000 in EndWrite? It should match if you copy a 250000-element array, and only differ if you copy a different amount of data.
What are you doing here:
int x = int(floor(i.vertex.x));
int y = int(floor(i.vertex.z));
int visInfo = _VisionMap[x + y];
Where is your width multiplier here (x + y * width)? And do your vertex positions actually match the fog cells if you do it like that (no negative positions relative to the pivot, etc.)? Otherwise these rows make no sense.
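If the positions really are fog-cell coordinates, the lookup should look roughly like this (just a sketch; _MapWidth is an example of an extra value you would need to pass in from C#):

int _MapWidth; // example: width of the vision grid in cells

fixed4 frag(v2f i) : SV_Target
{
    // only makes sense if these really are fog-cell coordinates (see the caveat above)
    int x = int(floor(i.vertex.x));
    int y = int(floor(i.vertex.z));
    int visInfo = _VisionMap[x + y * _MapWidth]; // note the width multiplier
    return fixed4(visInfo != 0 ? 1 : 0, 0, 0, 1); // debug: red where any team has vision
}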
As a side note - you don’t need extern here.
Oh, that was just a misunderstanding of the function's parameter. Corrected to 250000, thanks.
2. Well, the width got deleted during my testing attempts, but it should be there, you are right (and I should have put it back before posting this thread).
What about the positions… Yeah, you are right. Here is the corrected version of frag (it is still not working - the texture is all black, so visInfo == 0 everywhere; the original array definitely contains values other than zero):
There is a division by two because the actual size of the physical map is 1000x1000, but the visual map is only 500x500 for better performance.
Also, there are such strange return-s because I am still in the process of figuring out how to enable transparency in my shader, so for now it just changes the red value. Once I understand transparency I will try to remove those if-s, because as I understand it, if-s are extremely bad for shader performance, and I will also try to add some blur to the texture (still thinking about how exactly).
What do you see with this? I mean your plane (?) with this shader - is it completely black? And for test purposes make alpha 1 (the fourth element of fixed4).
And put these rows between Tags and Pass
Blend SrcAlpha OneMinusSrcAlpha
Cull Off Lighting Off ZWrite Off
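So the overall SubShader layout would be roughly this (just a sketch, the tags are an example):

SubShader
{
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }

    // render state for the whole SubShader
    Blend SrcAlpha OneMinusSrcAlpha
    Cull Off Lighting Off ZWrite Off

    Pass
    {
        // your CGPROGRAM / HLSLPROGRAM block stays here
    }
}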
var visionMap = SystemAPI.GetSingletonBuffer<VisionMapBuffer>();
int limit = 0;
int limitMax = 500;
for (int i = 0; i < 250000; i++)
{
    visionMapCopied[i] = visionMap[i];
    if (visionMapCopied[i] != 0 && limit < limitMax)
    {
        Debug.Log(visionMapCopied[i]);
        limit++;
    }
}
So there are indeed values other than zero.
Also, alpha = 1 didn't change anything.
Also, maybe it is useful info, but I don't have a plane, I have a cube. It is a 3D game and I don't want the player to look under the plane (even if I restrict the minimum height of the camera, it would still be visible because of the 2D nature of a plane).
The texture is not needed here, I suppose, as it is not used in the frag function, but I haven't touched it for now. I didn't touch it at all, if I am not mistaken - everything is at its defaults.
Well, then it's even worse in this case, because you are using the vertex position in clip space. You should pass it in object space or in world space. (In the case of object space your scale won't be taken into account, and you'll need to pass it in manually and do the calculation yourself.)
If so, then there is some mess with the positions - on the Material I can see this:
But there is nothing I can see in the scene…
And for this one, I am not sure I understand why. Earlier, for debugging, I spawned cubes over the map, which was filled by the same algorithm of converting WorldPosition to a one-dimensional array, and they were spawned correctly. Now I have redone this and it is still correct:
(I have one unit of the blue team in the center with a big vision radius and 2 small units of the red team nearby with small vision. Green means both teams can see that tile.)
(Here is the algorithm for filling that map, if needed. It uses pretty much the same way of getting idx in the array…):
const int ORIG_MAP_SIZE = 1000;
const int VIS_MAP_SIZE = 500;
const int ORIG_TO_VIS_MAP_RATIO = 2;
//...
for (int x = (-visionChars.radius) - visionChars.radius % ORIG_TO_VIS_MAP_RATIO; x <= visionChars.radius; x += ORIG_TO_VIS_MAP_RATIO)
{
    for (int y = (-visionChars.radius) - visionChars.radius % ORIG_TO_VIS_MAP_RATIO; y <= visionChars.radius; y += ORIG_TO_VIS_MAP_RATIO)
    {
        curpointline = new float2(x, y);
        if (math.length(curpointline) > visionChars.radius)
            continue;

        point = math.floor(localtoworld.Position.xz + curpointline + VIS_MAP_SIZE);
        int idx = (int)(point.x / ORIG_TO_VIS_MAP_RATIO + math.floor(point.y / ORIG_TO_VIS_MAP_RATIO) * VIS_MAP_SIZE);
        if (idx >= 0 && idx < visionMap.Length)
            visionMap[idx] |= team.teamInd;
    }
}
No, that's incorrect too. Just convert it to world space and pass it NOT in vertex (because vertex needs to stay in clip space for SV_POSITION) but in an extra field, e.g. vertexWS, added to your v2f. No need to multiply by anything if you already pass world-space coordinates.
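Something along these lines (a sketch with example names; UnityObjectToClipPos / unity_ObjectToWorld are the built-in pipeline helpers):

struct v2f
{
    float4 vertex   : SV_POSITION; // stays in clip space for the rasterizer
    float3 vertexWS : TEXCOORD0;   // world-space position, example name
};

v2f vert(appdata_base v) // appdata_base, or whatever input struct you already have
{
    v2f o;
    o.vertex   = UnityObjectToClipPos(v.vertex);
    o.vertexWS = mul(unity_ObjectToWorld, v.vertex).xyz;
    return o;
}

And then in frag read i.vertexWS.x / i.vertexWS.z instead of i.vertex.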
Well, now I can see that cube… but there are still no changes coming from the _VisionMap… So maybe I am getting the vertexWS wrong?
Sorry if it is a silly question…
Also, isn't it a problem that I am using #include "UnityCG.cginc" in the HLSL shader? Only now did I realize that it was there the whole time instead of #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl" and #include "Packages/com.unity.render-pipelines.core/ShaderLibrary/UnityInstancing.hlsl".
If you are using SRP you shouldn't use CG includes.
And in the case of SRP you have other utility functions, e.g. TransformObjectToWorld.
And for clip space (as you already have world space from the row above) you can use TransformWorldToHClip.
Also, to make it SRP Batcher compatible you should use the UnityPerMaterial constant buffer.
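Put together, a minimal URP-style skeleton would look roughly like this (a sketch; _BaseColor and the world-to-cell math at the end are just examples, adjust them to your own layout):

HLSLPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

// material properties go here so the SRP Batcher can handle them
CBUFFER_START(UnityPerMaterial)
    float4 _BaseColor; // example property
CBUFFER_END

// resources like buffers live outside the constant buffer
StructuredBuffer<int> _VisionMap;

struct Attributes { float3 positionOS : POSITION; };
struct Varyings
{
    float4 positionCS : SV_POSITION;
    float3 positionWS : TEXCOORD0;
};

Varyings vert(Attributes v)
{
    Varyings o;
    o.positionWS = TransformObjectToWorld(v.positionOS);
    o.positionCS = TransformWorldToHClip(o.positionWS);
    return o;
}

half4 frag(Varyings i) : SV_Target
{
    // Example mapping of world XZ to the 500x500 vision grid; the *0.5 and +250
    // mirror the +VIS_MAP_SIZE and /ORIG_TO_VIS_MAP_RATIO from your C# fill code.
    int x = (int)floor(i.positionWS.x * 0.5 + 250.0);
    int y = (int)floor(i.positionWS.z * 0.5 + 250.0);
    int visInfo = _VisionMap[clamp(x, 0, 499) + clamp(y, 0, 499) * 500];
    return half4(visInfo != 0 ? 1.0 : 0.0, 0.0, 0.0, 1.0); // debug: red where any team sees the cell
}
ENDHLSL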
Well, I've changed the methods and added the UnityPerMaterial constant buffer, so now it is SRP Batcher compatible.
But there are still no changes coming from the _VisionMap… Maybe the mistake is in SetBuffer or somewhere around there? Because it just seems like the data from the original array is not transferred to the StructuredBuffer…
(Here is the shader code, if needed. But I suppose now it is finally right):