Not yet anything mind-blowing visually…
As I understand it, the initial aim is to get to where Unity is right now; then the really cool stuff can follow. Check out this for more info: GDC - Rendering -17 - Google Slides
Hi Tim. A question:
What are these blocks, actually? I am a bit confused by the idea of appending (injecting, à la CommandBuffer) features to the pipeline, writing my own block, without a code change. Is that possible? Can I see that somewhere in the examples?
Is the skinning shader process not included?
That would be so awesome!!
What is this “skinning shader” that you guys are talking about?
Based on these threads, https://forum.unity3d.com/threads/gpu-skinning-vs-your-custom-vertex-shader.255866/ and https://forum.unity3d.com/threads/cpu-gpu-skinning.59565/#post-1691091, and this document I found online, http://www.novosad.ch/files/mps.pdf, it’s basically a way of applying the bone weights of an animation to the vertices of a mesh.
Apparently it’s done by the CPU, but it can also be done by the GPU? I didn’t know that.
It’s already been in Unity since… uhh, I don’t remember when they actually added it.
Since Unity 4.2, almost 4 years ago.
Yes and no. Some callbacks will be issued (those internal to Unity that you can’t mimic in a render pipeline), and some need to be manually issued. We have a test that shows this here; you can see what we currently expect:
https://hastebin.com/dehegitunu.cs
We are internally talking about moving away from sending ANY callbacks; if you want to send OnBecameVisible or other things like that, then you’ll need to manually issue them after a cull. This would allow us to massively increase performance, due to not having to call into C# unless really needed.
This is something we had a big internal discussion about this week: manual cull results. This would mean that instead of specifying a camera and doing a cull, you could specify a list of objects / lights up front and just do a cull on them (or do no cull and just pass it through). This is on our list.
They are the ‘broad phase’ rendering operations. Some are things like “Perform Culling”, others are things like “DrawRenderers”. On top of this you can build your own from normal “Command buffers”.
Take a look at this:
https://github.com/Unity-Technologies/ScriptableRenderLoop/blob/master/Assets/ScriptableRenderPipeline/BasicRenderPipeline/BasicRenderPipeline.cs
A block that does culling:
CullingParameters cullingParams;
if (!CullResults.GetCullingParameters(camera, out cullingParams))
    continue;
CullResults cull = CullResults.Cull(ref cullingParams, context);
A block that does rendering:
// Draw opaque objects using BasicPass shader pass
var settings = new DrawRendererSettings(cull, camera, new ShaderPassName("BasicPass"));
settings.sorting.flags = SortFlags.CommonOpaque;
settings.inputFilter.SetQueuesOpaque();
context.DrawRenderers(ref settings);
Something more advanced from the HDPipe (allocate and set a render target):
var cmd = new CommandBuffer { name = "" };
cmd.GetTemporaryRT(m_VelocityBuffer, w, h, 0, FilterMode.Point, Builtin.RenderLoop.GetVelocityBufferFormat(), Builtin.RenderLoop.GetVelocityBufferReadWrite());
cmd.SetRenderTarget(m_VelocityBufferRT, m_CameraDepthStencilBufferRT);
renderContext.ExecuteCommandBuffer(cmd);
cmd.Dispose();
Hi.
I’m a student at a university who has been watching this project from the shadows for a while, and now this summer on weekends I think I would like to try building a couple of SRPs for the experience. I have two projects in mind that would greatly benefit from custom pipelines. One of them is a game in which different scenes use different visual styles with different effects, weather, etc. The other is an FPS somewhat similar to Splatoon with a “paint the world” effect (which I got a prototype of working in 5.6 using particles and texture arrays and a fake lightmap, though the mechanics are different and I would eventually want to add a full fluid simulation). So if you don’t mind, I have quite a few questions on things I have run into so far.
First, what exactly is the purpose of using a factory system and producing pipeline instances? I couldn’t really figure out how that is useful, and some of the Unity pipelines don’t seem to bother with it (BasicRenderPipeline just uses static functions, and the mobile deferred pipeline just passes rendering back to the asset). While I really like the idea of splitting run-time data from serialized data, Unity automatically creates and deletes pipeline instances whenever something is changed in the asset from the editor, which means any run-time data registered into the instance from script during Start() would get broken if an artist decided to modify the shadow settings during play mode. And I’m not sold on the idea of generating a new pipeline instance every frame to handle dynamic events. What am I missing here?
Second is just a nitpick, but why is GetCullingParameters in CullResults rather than a method called GetParametersFromCamera in CullingParameters?
Third, what exactly are your plans for managing lightmaps? Will we have access to controlling when Enlighten would perform a meta pass after requesting a renderer update? Will we be able to store our own custom texture transform matrices in renderer components that work for multiple lightmaps and other kinds of world-space maps? For example, when I built the FPS prototype, I had to create a baked black directional light and specify a small lightmap resolution to get the entire baked lightmap into a single lightmap atlas, so that I could use the baked lightmap UVs to index my texture array. But in the future I would love to be able to just automatically have Unity pack the baked lightmap uvs to fill a single atlas, and then after painting the particles to the texture array, update global illumination on either the CPU or GPU (not sure which will be easier/more performant) using either Enlighten or my own system, and then draw the scene. Are there any plans to make something like this feasible?
Fourth, regarding the discussion of callbacks within the render pipeline, I imagine it would be something like this:
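Something along these lines (IVisibilityCallback and the dispatcher below are entirely made up, and I’m assuming the visible renderers can somehow be pulled out of the cull results — just to illustrate the idea):
using System.Collections.Generic;
using UnityEngine;

// Made-up interface - nothing like this exists in the current API.
public interface IVisibilityCallback
{
    void OnPipelineBecameVisible();
    void OnPipelineBecameInvisible();
}

public static class VisibilityDispatcher
{
    static readonly HashSet<Renderer> s_VisibleLastFrame = new HashSet<Renderer>();

    // The pipeline would call this once per camera, right after culling,
    // passing whatever renderers survived the cull.
    public static void Dispatch(IEnumerable<Renderer> visibleRenderers)
    {
        var visibleNow = new HashSet<Renderer>(visibleRenderers);

        // Newly visible this frame
        foreach (var r in visibleNow)
            if (!s_VisibleLastFrame.Contains(r))
                foreach (var cb in r.GetComponents<IVisibilityCallback>())
                    cb.OnPipelineBecameVisible();

        // No longer visible this frame
        foreach (var r in s_VisibleLastFrame)
            if (r != null && !visibleNow.Contains(r))
                foreach (var cb in r.GetComponents<IVisibilityCallback>())
                    cb.OnPipelineBecameInvisible();

        s_VisibleLastFrame.Clear();
        s_VisibleLastFrame.UnionWith(visibleNow);
    }
}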
Would this be an efficient approach compared to how Unity is currently doing things?
Fifth, will we be able to customize CameraEvents to attach command buffers to?
Sixth, will we be able to define our own render queue enumerations instead of simply using “Opaque”, “Transparent”, etc.? More specifically, will we have an easy way to specify them from shaders and such?
Seventh, how would the following use case be possible in SRPs (assuming it is possible)?
I have a particular type of enemy whose body is emitting fire. However, I want full control over the style of the fire, so I write a compute shader that takes in the deformed mesh (hopefully from GPU skinning) and outputs mesh data for the fire for that frame. I have over 100 of these fire enemies in my scene, but only about 10 of them will be on screen at once, and I only need to update the fire when the enemy is on screen. I want to render the enemy mesh during the opaque pass. Then in the transparent pass I want to run the compute shader on only the visible enemies and then draw the fire indirectly. I then want to use a similar technique for smoke enemies, water enemies, etc. The only way I can imagine doing this would be to use a custom callback that sends out a reference to a command buffer to fill to all the objects after culling objects on a specific render queue (hence why I asked about custom render queue enumerations).
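To make that concrete, here’s very roughly what I picture the per-object hook filling in (the FillCommandBuffer entry point is made up, and the kernel/buffer names are just placeholders; DispatchCompute and DrawProceduralIndirect are existing CommandBuffer calls):
using UnityEngine;
using UnityEngine.Rendering;

public class FireEmitter : MonoBehaviour
{
    public ComputeShader fireSim;       // builds this frame's fire geometry from the skinned mesh
    public Material fireMaterial;       // transparent fire shader
    public ComputeBuffer fireVertices;  // output of the compute pass
    public ComputeBuffer drawArgs;      // indirect draw arguments (vertex count etc.)

    // Hypothetical hook: the pipeline would call this only for instances that
    // survived culling on my custom "fire" render queue.
    public void FillCommandBuffer(CommandBuffer cmd)
    {
        int kernel = fireSim.FindKernel("CSMain");
        cmd.SetComputeBufferParam(fireSim, kernel, "_FireVertices", fireVertices);
        cmd.DispatchCompute(fireSim, kernel, 64, 1, 1);
        cmd.DrawProceduralIndirect(transform.localToWorldMatrix, fireMaterial, 0,
            MeshTopology.Triangles, drawArgs);
    }
}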
Eighth, I’m noticing in some of the examples the use of an AdditionalLightData script. I have a sinking feeling this is going to lead to a lot of artist frustration, as an artist could easily forget to add this script when creating a light. It could also lead to confusion when changing the light component’s parameters has no effect, because the actual data lies within the AdditionalLightData script. For example, maybe I wanted the light intensity to be calculated from a lightbulb type and wattage so that I could simulate a sketchy electrical system. Would it be possible to get a minimal version of a light (and probes and maybe even cameras) that we could inherit from that has the normal MonoBehaviour messages (or at least the important ones)? Maybe this minimal class would only contain info for culling (like a bounding box and such that would be hidden from the editor)? And then provide some way to print a warning when a user adds the regular Unity light? (This might already be possible. I’m not very good at editor scripting.) Are there alternative solutions far superior to this idea?
Ninth, can we get callbacks for when a pipeline asset gets assigned in the editor to that particular pipeline (as well as a callback for when a pipeline gets removed)?
Tenth, which shader variables does SetupCameraProperties actually set up?
And finally, is it a good idea for me to be trying to build my own SRPs this early? Am I asking too many questions?
I really like where Scriptable Render Pipelines are going. Aside from the things above, everything is really intuitive. It is easy to cull what you want to cull. I have full customization over shadows, light falloffs, and styles, the ability to do crazy interactions between multiple lights and cameras, the ability to apply filtering to the skybox by drawing it first and then running compute shaders whose results I can use for other shading effects, and all sorts of stuff.
And things for the most part just work. The easiest evidence anyone can try is to Debug.Log the order the cameras get passed in the array. You’ll find it is sorted by the cameras’ depth values, just as one would expect!
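(For anyone curious, a check as small as this inside the pipeline’s Render method shows it:)
foreach (var camera in cameras)
    Debug.Log(camera.name + " depth: " + camera.depth);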
Lots of questions
There is a separation here between runtime data and configuration data. The Asset is used to configure settings for the pipeline, and the runtime version is an instance of the created pipeline. The idea is that the runtime version can cache things like RenderTextures, Shaders, Materials and similar. This is important because it’s possible to have more than one pipe of the same type active at once. For example, a number of our tools instantiate an instance of the current pipe with ‘debug settings’ for rendering (material window, look dev, scene view). If these shared the same instance as used by the game view, then each time a render happened all the render textures would need to be resized for the view. This stops that happening, as each context owns its own instance. You can still have ‘runtime’ settings embedded in the instance, and changing these won’t recreate the instance; it’s only settings on the asset that do this. Recreation is a heavy operation and should never happen every frame.
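A minimal sketch of that split, using roughly the experimental class names as they stand today (these may well still change):
using UnityEngine;
using UnityEngine.Experimental.Rendering;

// Serialized configuration - editing this in the inspector destroys and recreates the instance.
public class MyPipelineAsset : RenderPipelineAsset
{
    public int shadowMapResolution = 2048;

    protected override IRenderPipeline InternalCreatePipeline()
    {
        return new MyPipelineInstance(this);
    }
}

// Runtime instance - owns cached resources, one per context (game view, scene view, etc.).
public class MyPipelineInstance : RenderPipeline
{
    readonly MyPipelineAsset m_Asset;
    readonly RenderTexture m_ShadowMap;

    public MyPipelineInstance(MyPipelineAsset asset)
    {
        m_Asset = asset;
        m_ShadowMap = new RenderTexture(asset.shadowMapResolution, asset.shadowMapResolution, 16);
    }

    public override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        base.Render(context, cameras);
        // ... cull and draw using the cached m_ShadowMap ...
        context.Submit();
    }
}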
We are already changing this
Our plan is for Unity to expose a number of lightmapping modes, and the pipeline you write should support a subset of these. The pipeline advertises what modes it supports and the lighting UI changes to only show the supported modes. We are not currently going to support custom lightmapping / scripting the lightmapper.
Currently we are going to keep this list opaque and have callbacks like “SendVisibilityChangedCallbacks(cullResults)”; this is for performance reasons. I would really, really not want to use SendMessage for this.
CameraEvents don’t exist in vanilla SRP as you have access to everything when you write one. You could add support for camera events into your SRP but it will mean a lot more work.
The queue is just a number, and rather arbitrary. You don’t need to use the given names; you can set up your own names / numbers if you want. In DrawRendererSettings you can specify the queue ranges to draw.
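For example, a fragment in the same spirit as the blocks above (the pass name and queue numbers are arbitrary, and the exact min/max fields on the input filter may not match the current drop):
// Shader side: a custom queue value via an offset from a built-in name
// ("Geometry" is 2000, so this lands on 2115):
//     Tags { "Queue" = "Geometry+115" }
// or set it per material from script:
material.renderQueue = 2115;

// Pipeline side: draw only that custom range.
var settings = new DrawRendererSettings(cull, camera, new ShaderPassName("MoonlitPass"));
settings.sorting.flags = SortFlags.CommonOpaque;
settings.inputFilter.renderQueueMin = 2100;
settings.inputFilter.renderQueueMax = 2130;
context.DrawRenderers(ref settings);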
I need to think about this one a little bit.
We have some changes coming that allow you to write a custom Light / Camera inspector for your SRP. In this you can show additional light settings in the normal light inspector and hide any normal settings that don’t make sense for your SRP.
What’s the use case?
Tenth, which shader variables does SetupCameraProperties actually set up?
We will eventually be removing this in favour of more fine-grained control; it calls deep into the Unity engine and sets up a bunch of stuff in many places. What I’m basically saying is that we don’t have an easily visible list atm. But it looks like this:
void Camera::SetCameraShaderProps(ShaderPassContext& passContext, const CameraRenderingParams& params)
{
    float overrideTime = -1.0f;
#if UNITY_EDITOR
    if (m_State.m_AnimateMaterials)
        overrideTime = m_State.m_AnimateMaterialsTime;
#endif // if UNITY_EDITOR
    ShaderLab::UpdateGlobalShaderProperties(overrideTime);

    GfxDevice& device = GetGfxDevice();
    BuiltinShaderParamValues& shaderParams = device.GetBuiltinParamValues();

    shaderParams.SetVectorParam(kShaderVecWorldSpaceCameraPos, Vector4f(params.worldPosition, 0.0f));

    Matrix4x4f worldToCamera;
    Matrix4x4f cameraToWorld;
    CalculateMatrixShaderProps(params.matView, worldToCamera, cameraToWorld);
    shaderParams.SetMatrixParam(kShaderMatWorldToCamera, worldToCamera);
    shaderParams.SetMatrixParam(kShaderMatCameraToWorld, cameraToWorld);

    // Get the matrix to use for cubemap reflections.
    // It's camera to world matrix; rotation only, and mirrored on Y.
    worldToCamera.SetPosition(Vector3f::zero); // clear translation
    Matrix4x4f invertY;
    invertY.SetScale(Vector3f(1, -1, 1));
    Matrix4x4f reflMat;
    MultiplyMatrices4x4(&worldToCamera, &invertY, &reflMat);
    passContext.properties.SetMatrix(kSLPropReflection, reflMat);

    // Camera clipping planes
    SetClippingPlaneShaderProps();

    const float projNear = GetProjectionNear();
    const float projFar = GetProjectionFar();
    const float invNear = (projNear == 0.0f) ? 1.0f : 1.0f / projNear;
    const float invFar = (projFar == 0.0f) ? 1.0f : 1.0f / projFar;
    shaderParams.SetVectorParam(kShaderVecProjectionParams, Vector4f(device.GetInvertProjectionMatrix() ? -1.0f : 1.0f, projNear, projFar, invFar));

    Rectf view = GetScreenViewportRect();
    shaderParams.SetVectorParam(kShaderVecScreenParams, Vector4f(view.width, view.height, 1.0f + 1.0f / view.width, 1.0f + 1.0f / view.height));

    // But as depth component textures on OpenGL always return in 0..1 range (as in D3D), we have to use
    // the same constants for both D3D and OpenGL here.
    double zc0, zc1;
    // OpenGL would be this:
    // zc0 = (1.0 - projFar / projNear) / 2.0;
    // zc1 = (1.0 + projFar / projNear) / 2.0;
    // D3D is this:
    zc0 = 1.0 - projFar * invNear;
    zc1 = projFar * invNear;
    Vector4f v = Vector4f(zc0, zc1, zc0 * invFar, zc1 * invFar);
    if (GetGraphicsCaps().usesReverseZ)
    {
        v.y += v.x;
        v.x = -v.x;
        v.w += v.z;
        v.z = -v.z;
    }
    shaderParams.SetVectorParam(kShaderVecZBufferParams, v);

    // Ortho params
    Vector4f orthoParams;
    const bool isPerspective = params.matProj.IsPerspective();
    orthoParams.x = m_State.m_OrthographicSize * m_State.m_Aspect;
    orthoParams.y = m_State.m_OrthographicSize;
    orthoParams.z = 0.0f;
    orthoParams.w = isPerspective ? 0.0f : 1.0f;
    shaderParams.SetVectorParam(kShaderVecOrthoParams, orthoParams);

    // Camera projection matrices
    Matrix4x4f invProjMatrix;
    InvertMatrix4x4_Full(params.matProj.GetPtr(), invProjMatrix.GetPtr());
    shaderParams.SetMatrixParam(kShaderMatCameraProjection, params.matProj);
    shaderParams.SetMatrixParam(kShaderMatCameraInvProjection, invProjMatrix);

#if GFX_SUPPORTS_SINGLE_PASS_STEREO
    // Set stereo matrices to make shaders with UNITY_SINGLE_PASS_STEREO enabled work in mono
    // View and projection are handled by the device
    device.SetStereoMatrix(kMonoOrStereoscopicEyeMono, kShaderMatCameraInvProjection, invProjMatrix);
    device.SetStereoMatrix(kMonoOrStereoscopicEyeMono, kShaderMatWorldToCamera, worldToCamera);
    device.SetStereoMatrix(kMonoOrStereoscopicEyeMono, kShaderMatCameraToWorld, cameraToWorld);
#endif
}
void setup()
{
    if (m_State.m_UsingHDR)
        passContext.keywords.Enable(keywords::kHDROn);
    else
        passContext.keywords.Disable(keywords::kHDROn);

    GraphicsHelper::SetWorldViewAndProjection(device, NULL, &params.matView, &params.matProj);
    SetCameraShaderProps(passContext, params);
}
Was hoping to play around with this but i’m stuck at the first hurdle and don’t seem able to download a fully working version of SRL.
I’m not familiar with GitHub, so I just use their GitHub Desktop software to deal with it. Unfortunately this means that when cloning it will download the latest version (master). I then tried reverting back to the 156fb11 commit, but it told me there were merge conflicts that had to be resolved first and, yeah, no idea what to do about that. I’m not even sure whether the conflict refers to the current local version or to reverting to the 156fb11 version.
I then tried downloading the 156fb11 zip and the PostProcessingStack V2. Unfortunately something is messed up and the PPS just spews out errors until I disable the component. Meanwhile, in some scenes (e.g. LDRenderPipelineVikingVillage) I have a missing prefab, but no idea what it was or whether it’s important. So even when I have a ‘working version’ it’s unclear whether this really is working or whether things are broken/missing intentionally.
Can anyone provide a clear series of command lines for doing the GitHub stuff, assuming that the results provide a working version of SRL for a specific Unity version?
So with regard to pushing updates and experimental builds of SRL, I would say:
My suggestion then is that Unity periodically deploy a working version of the project for a specific Unity beta/alpha version that is available to the public. Though I dislike having multiple beta/alpha installs, I’d rather do that and be able to get in and start playing with SRL immediately than the current situation, where I have nothing or spend most of my time trying to get a working version.
As for SRLs themselves, it’s too early for any real comments, but I am somewhat confused by the pipeline asset apparently needing to be assigned by hand in the Graphics settings’ ScriptableRenderLoopSettings property. This would appear to be an awful decision, not least because if I hadn’t happened to have read a GUI popup in the GDC2017 demo I would never have known to swap pipeline assets, and I would have been very stuck/confused as to why it wasn’t working or why I just had a black screen.
Are there any plans to change this?
Why can’t it be changed via script at runtime? Or if it can, shouldn’t these SRL test demos be doing so, instead of the tester trying to work out which pipeline asset goes with which demo?
Hi Noisecrime,
Can anyone provide a clear series of command lines for doing the GitHub stuff, assuming that the results provide a working version of SRL for a specific Unity version?
From the command line, issuing the following should be enough:
git clone https://github.com/Unity-Technologies/ScriptableRenderLoop
git checkout unity-2017.1b5 (or whatever is the latest tag)
git submodule update --init --recursive
From what you’re reporting, the problems you’re having with missing prefabs and errors in the PostProcessing are due to not having the submodules checked out. The above command lines will do everything for you. Also, it’s important to match the tag with the correct Unity version. You can run the git tag command to see all available tags.
I’ll take a look at upgrading the GitHub instructions page to be more informative.
My suggestion then is that Unity periodically deploy a working version of the project for a specific Unity beta/alpha version that is available to the public.
Thanks for the feedback. There’s currently a plan to deploy SRP in a more elegant way. @Tim-C knows more about it.
As for SRLs themselves, it’s too early for any real comments, but I am somewhat confused by the pipeline asset apparently needing to be assigned by hand in the Graphics settings’ ScriptableRenderLoopSettings property. This would appear to be an awful decision, not least because if I hadn’t happened to have read a GUI popup in the GDC2017 demo I would never have known to swap pipeline assets, and I would have been very stuck/confused as to why it wasn’t working or why I just had a black screen.
Are there any plans to change this?
Why can’t it be changed via script at runtime? Or if it can, shouldn’t these SRL test demos be doing so, instead of the tester trying to work out which pipeline asset goes with which demo?
The SRP can be assigned either through the inspector interface or by script (in the GitHub project the test scenes have a script that changes the pipeline per scene). I guess most of the confusion comes from the fact that it’s not explicit which scenes in the project should work with each pipe, and the fact that, as of now, if no pass is valid the SRP won’t render anything.
For the scenes, we can improve this by making the pipeline configuration more explicit/automatic. As for the pipeline rendering nothing when unmatched, IIRC there’s a plan to fall back to an error shader, similar to what happens when no pass is suitable in legacy.
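For reference, switching by script is only a couple of lines along these lines (the asset path here is made up, and the namespaces for the pipeline asset type have moved around between drops):
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Experimental.Rendering;

public class PipelineSwitcher : MonoBehaviour
{
    void OnEnable()
    {
        // Load whichever pipeline asset this scene expects and make it the active one.
        var asset = Resources.Load<RenderPipelineAsset>("LDRenderPipeline");
        GraphicsSettings.renderPipelineAsset = asset;
    }
}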
Best,
Felipe
Oh wow! I totally was not expecting such a detailed response! This answered a lot of my questions. Embedding custom data right into the inspectors of lights and cameras sounds amazing! That actually makes things like implementing custom camera events and such safer too.
You don’t need to use the given names; you can set up your own names / numbers if you want.
So how could I write code like this in my shader’s subshader block:
Tags { "Queue" = "OpaqueMoonlit" }
Where somewhere else in my C# code I define OpaqueMoonlit to be equal to 2115?
In addition, how would I change the dropdown options in the “Render Queue” setting in the material inspector?
Right now, per-object configuration is still a bit lacking, both for the example I gave previously and for light and probe sorting. If I were on low-end mobile (which I usually never am) and wanted to sort lights either by intensity, by distance, or both (taking into account a custom falloff equation), I would not be able to. I feel like either a new variant of command buffers is needed, or command buffers need to be attachable to materials (with some way to access per-instance data), or both, or some other more intuitive solution.
I’m kinda curious whether, and if so how, surface shaders could be incorporated into SRP. They were nice for the black-box render pipelines, and they might be nice for when people start sharing their custom pipelines with each other, but probably not the highest priority.
Anyways, I think I’m going to hold off on building my render pipeline until either some new information on how to do things or a new iteration of the API arrives. In the meantime I think I’m going to work on building the non-SRP aspects of some games that could really take advantage of SRPs.
Thanks again for all the information! It’s exciting stuff!
Hi @phil_lira @Tim-C I have been following this awesome feature since the beginning and I must say that it is an incredible and fantastic initiative.
I would like to know if there is any ETA (Unity version, weeks, months?) for this and whether it can be used in a production environment. I am mainly talking about the LD pipe.
Thanks a lot!
Tim-C responded earlier in this thread about LD pipe’s stability:
In reality what that means is that until the HDPipe, LDpipe, VR features are polished the experimental tag will remain on the feature. That being said, the LDPipeline is getting really stable and at a good place in 2017.1 to start using. I foresee that in 2017.2 it will be usable for a large number of mobile games, and even if the experimental tag remains on SRP as a whole, this pipe will be past that state.
So it looks like in 2017.1 (July) it will be usable for some, and in 2017.2 (November?) usable for most.
Thanks for the answer @scvnathan. Seems that I missed that post.
I am also wondering if we can add properties to lights in SRL.
Is there also any information about the deferred rendering path? @scvnathan @phil_lira @Tim-C