At 10:40 in the video there is an almost full-screen distortion effect.
I can run Shadow Gun smoothly on my iPhone 4S, even with chain explosions!
I tried to write my own.
I implemented OnRenderImage() in C# and used Graphics.Blit() to process the image with a custom fragment shader.
But my build on the same iPhone 4S is very slow (it added nearly 85 ms of extra CPU time and 8 ms of GPU time);
fps dropped from 60 to 10. I even tried just outputting the screen texture without any distortion calculation in the fragment shader: no increase in fps (CPU bound).
So is OnRenderImage() not a good way to do image effects on mobile?
How did Shadow Gun do all of its distortion effects?
They must have a fast way to get the current screen image in a shader.
Page 32 - Screen deformation FX.
They mentioned that rather than doing the distortion in the fragment shader, we should make a grid and deform UVs in the vertex shader, but that is a GPU-side optimization, and currently I am CPU bound.
So is there a fast way to get the current screen texture in a shader?
If the image is not updated every frame / is delayed by a few frames, will that help?
I found out that changing the MSAA quality setting from 8x to 0 solves the problem!
So how do you create a screen mesh with, let's say, 1k vertices and render it back to the framebuffer after the mesh has been rendered with the image effect?
And the steps are:
1. Render everything in the scene to a RenderTexture.
2. Render a plane using Graphics.DrawMeshNow(), with the framebuffer as the target.
In step 2, I use a 30x25 mesh made in code (generated in Start()), so that I can write clip-coordinate vertex positions directly into the mesh, which the vertex shader uses directly (no MVP transform needed).
I do everything inside OnPreRender() & OnPostRender(), not OnRenderImage()
Hi Colin, I’m currently trying to replicate the same effect, but I’m kind of stuck. Could you share your effect, if you’re able, or elaborate a bit more on the process? Thank you!
Hello, I will write everything you need to know to make this effect here.
In the first frame, the effect does the following (C#):
- prepare an MxN plane mesh whose object-space positions run from -1 to 1
This plane replaces what Graphics.Blit() does: by calling Graphics.DrawMeshNow() instead, we get more vertices to animate UVs with in the vertex shader, which cannot be done using Graphics.Blit().
public int width = 130;
public int height = 125;

protected override Mesh createMesh()
{
    Mesh m = new Mesh();

    //assign vertices
    Vector3[] vertices = new Vector3[width * height];
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
        {
            vertices[x + width * y] = new Vector3(x / (float)(width - 1) - 0.5f, y / (float)(height - 1) - 0.5f); //remap pos from [0,1] to [-0.5,0.5]
            vertices[x + width * y] *= 2; //from [-0.5,0.5] to [-1,1], so it can be treated as a clip-space vertex pos in the vertex shader directly
        }

    //assign triangles
    int[] triangles = new int[(width - 1) * (height - 1) * 6];
    for (int x = 0; x < width - 1; x++)
        for (int y = 0; y < height - 1; y++)
        {
            //clockwise triangle A
            triangles[x * 6 + (width - 1) * 6 * y + 0] = x + width * y;             //bottom-left vert
            triangles[x * 6 + (width - 1) * 6 * y + 1] = x + width * (y + 1);       //top-left vert
            triangles[x * 6 + (width - 1) * 6 * y + 2] = (x + 1) + width * (y + 1); //top-right vert
            //clockwise triangle B
            triangles[x * 6 + (width - 1) * 6 * y + 3] = triangles[x * 6 + (width - 1) * 6 * y + 0]; //bottom-left vert
            triangles[x * 6 + (width - 1) * 6 * y + 4] = triangles[x * 6 + (width - 1) * 6 * y + 2]; //top-right vert
            triangles[x * 6 + (width - 1) * 6 * y + 5] = triangles[x * 6 + (width - 1) * 6 * y + 0] + 1; //bottom-right vert
        }

    m.vertices = vertices;
    m.triangles = triangles;
    return m;
}
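The index arithmetic above is easy to get wrong, so it can help to sanity-check the same layout outside Unity. Here is a small Python sketch of it (a hypothetical helper, not part of the Unity code); it rebuilds the vertex and triangle arrays with the same formulas and counts them:

```python
# Sanity-check of the grid-mesh index math, redone in plain Python.
def build_grid(width, height):
    # vertex index = x + width * y; positions remapped from [0,1] to [-1,1]
    vertices = [((x / (width - 1)) * 2 - 1, (y / (height - 1)) * 2 - 1)
                for y in range(height) for x in range(width)]
    triangles = []
    for y in range(height - 1):
        for x in range(width - 1):
            bl = x + width * y               # bottom-left
            tl = x + width * (y + 1)         # top-left
            tr = (x + 1) + width * (y + 1)   # top-right
            br = bl + 1                      # bottom-right
            triangles += [bl, tl, tr,        # clockwise triangle A
                          bl, tr, br]        # clockwise triangle B
    return vertices, triangles

verts, tris = build_grid(30, 25)
print(len(verts))       # 750 vertices (30 * 25)
print(len(tris) // 3)   # 1392 triangles ((30-1) * (25-1) * 2)
```

Every quad of the grid contributes exactly two triangles, and every index stays inside the vertex array, which is what the C# version relies on.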
Then in every frame, we do the following:
1.Render everything the camera sees into a RenderTexture.
RenderTexture mainRT;

void OnPreRender()
{
    mainRT = RenderTexture.GetTemporary(Screen.width, Screen.height, 16);
    GetComponent<Camera>().targetTexture = mainRT; //render to the RenderTexture, not the framebuffer
}
2. Render our MxN plane mesh (which we prepared earlier) to the framebuffer using a post-process material.
public Material distortionMaterial;
public Mesh screenMesh;
public int passNum = 0;

void OnPostRender()
{
    GetComponent<Camera>().targetTexture = null; //now render to the framebuffer
    RenderTexture.active = null; //must be set before SetPass()
    //tell the post-process material to use this RenderTexture as its main texture input
    distortionMaterial.mainTexture = mainRT;
    distortionMaterial.SetPass(passNum); //define which pass of the shader to use
    //only the first param (mesh) is used by our shader; you can enter anything for the other params
    Graphics.DrawMeshNow(screenMesh, Vector3.zero, Quaternion.identity);
    RenderTexture.ReleaseTemporary(mainRT);
}
At this stage, we need a special shader that does the correct transform so that the mesh always fills the screen.
But how? Usually a vertex shader does the MVP transform with the MVP matrix supplied by Unity, which converts the mesh from object space to clip space. In this case, however, the clip-space result we need is already stored in the mesh itself (the mesh we prepared in the first frame),
so we just treat the object-space vertex position as the clip-space position!
We also need to calculate the UV used to sample the RenderTexture from step 1:
our vertex positions are in [-1,1] and the UV we need is in [0,1], so a simple transform gives us a valid UV from the vertex position.
//vertex shader code
v2f vert (appdata v)
{
    v2f o;
    //o.vertex = mul(UNITY_MATRIX_MVP, v.vertex); //no need: we precalculated the clip-space pos in C# and stored it in the mesh (as object-space pos)
    o.vertex = v.vertex; //use the object-space input as the clip-space pos directly
    o.projUV = o.vertex.xy * 0.5 + 0.5; //reuse the clip-space pos as UV, remapped from [-1,1] to [0,1], since tex2D() expects a UV inside [0,1]
    return o;
}
The final step is to edit the UV in the vertex shader while a shockwave exists.
Imagine that whenever the user clicks the screen, a virtual ring appears and expands.
For any vertex close enough to this ring, we distort its UV.
You can use any creative implementation for distorting the UV; if you are not sure how, I can explain it more.
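To make "close enough to the ring" concrete, here is one possible version of the per-vertex math, sketched in Python rather than shader code so it can be checked by hand. All the parameter names (ring_center, ring_radius, ring_width, strength) are made up for this sketch; in the real shader they would be material properties that C# updates every frame as the ring expands:

```python
import math

def distort_uv(uv, ring_center, ring_radius, ring_width, strength):
    """Push a vertex's UV radially away from an expanding ring.
    Hypothetical sketch of one possible distortion, not Shadow Gun's exact math."""
    dx = uv[0] - ring_center[0]
    dy = uv[1] - ring_center[1]
    dist = math.sqrt(dx * dx + dy * dy)
    # falloff is 1 exactly on the ring, fading linearly to 0 at ring_width away from it
    falloff = max(0.0, 1.0 - abs(dist - ring_radius) / ring_width)
    if dist < 1e-6:
        return uv  # vertex sits on the ring center; no radial direction to push along
    # offset along the radial direction, scaled by the falloff
    k = strength * falloff / dist
    return (uv[0] + dx * k, uv[1] + dy * k)

print(distort_uv((0.7, 0.5), (0.5, 0.5), 0.2, 0.1, 0.05))   # on the ring: pushed outward
print(distort_uv((0.95, 0.5), (0.5, 0.5), 0.2, 0.1, 0.05))  # far from the ring: unchanged
```

In the vertex shader this would run per vertex on o.projUV, which is exactly why the grid mesh needs enough vertices: the distortion is only as smooth as the grid is dense.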
I saw some if/else and cos() in your shader code; if you are targeting low-end devices, try to avoid them if possible.
But in this case they are only in the vertex shader, so I guess it is OK!
Any low-frequency post process, like:
- intensity & vignette (calculate the color in the vertex shader, multiply in the fragment shader)
- chromatic aberration (calculate 2 extra offset UVs in the vertex shader, then just do independent texture reads in the fragment shader)
can also be done at the vertex level, while the fragment shader only does the sampling.
See if you need those effects; intensity & vignette may be almost free as post processes.
Since the main cost of this solution is rendering the scene to the RenderTexture and the final blit back to the framebuffer,
the vertex shader code really doesn't affect performance that much.
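As an illustration of the vignette case above, the per-vertex factor could look like the following sketch (again in Python for clarity; the name vignette_factor and the strength parameter are made up, and the real version would live in the vertex shader with the fragment shader just multiplying the sampled color by the interpolated value):

```python
def vignette_factor(uv, strength=0.5):
    """Darkening factor: 1.0 at the screen center, smaller toward the corners.
    Hypothetical sketch; 'strength' is a made-up tuning parameter."""
    dx = uv[0] - 0.5
    dy = uv[1] - 0.5
    dist_sq = dx * dx + dy * dy               # 0 at the center, 0.5 at a corner
    return max(0.0, 1.0 - strength * dist_sq * 2.0)

print(vignette_factor((0.5, 0.5)))  # center of the screen: 1.0 (no darkening)
print(vignette_factor((0.0, 0.0)))  # corner: darkest
```

Because this varies slowly across the screen, interpolating it from a 30x25 grid of vertices is visually indistinguishable from computing it per pixel, which is what makes it nearly free here.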
Maikel, you can just download the Shadow Gun project from the Asset Store (free) and find the shaders and scripts there. It also includes the other effects they used (which are really nice).
I was going to say that Madfinger Games released this project to the Asset Store; it's the exact scene seen in these videos. It has all of the shaders, code, and art, everything in the scene. I especially liked the vertex-deforming shader for the flags blowing in the breeze.
Thanks, after sorting through Madfinger's script for screen distortion and what @colin299 suggested, I was able to implement the screen distortion effect. Thank you very much to both of you.
Although there is one bug, which I cross-checked against Madfinger's original scene.
Here is how to reproduce it:
Load the main menu > click the Play button > you are redirected to "screen distort" > click the Screen Distort button > you will only see a blurred image with no screen distortion.
If you run "Screen distort" directly, the screen distortion effect runs properly.
I also submitted a bug report a few days back but haven't got any response (yet).