I’m a very experienced game dev but very new to Unity. I’ve just purchased an iPhone license, and for this it looks awesome.
I’m potentially looking at this for other projects, particularly PC. I would be interested in porting some of my XNA/DirectX projects over to Unity as a proof of concept. XNA provides a very thin but flexible API to the programmer (and no content creation tools), whereas Unity provides great content tools and a higher-level interface to the graphics.
I implemented a deferred rendering engine for XNA, and was able to create real-time volumetric light shafts using it. I’m slightly concerned that I won’t have this flexibility in Unity (for PC), and perhaps not in the Pro version either. This was quite a complicated effect to write, requiring multiple shaders and passes.
The short question is, can I use/get access to Multiple Render Targets? It is a given that I have a graphics card that supports this function.
I’m guessing that I’d need the source code license. How much more expensive is this??
…and just to reiterate, I bought Unity for iPhone development and I’m pretty excited about the time it’s going to save me, so great work on that front.
Right now, neither MRTs nor floating point render targets are exposed in Unity. Why? Because there haven’t been many requests for them. Most folks here make games for a casual-ish audience that usually doesn’t have the hardware to use them anyway. But if enough people want MRTs and/or floating point render targets, we’ll have to add them, I guess.
Doing a deferred renderer without those two could be hard. Some forms of deferred rendering (e.g. Wolfgang Engel’s Light Pre-Pass renderer) should be doable, though.
Perhaps consider this the first request. I know that you are moving into the PC market, which is great, but I predict that there will be many requests at this level. You only have to look at XNA, which is aimed at hobbyists too. Despite being quite a thin API there really aren’t too many restrictions, and there are plenty of examples out there of people implementing the latest graphical techniques in XNA. You could do a lot worse than look at how MS have designed XNA. Dare I suggest that you build on top of XNA? Your content tools married to their API would be an ideal world for me (I enjoy working in C#). Before you cry heresy, it would be an easy way for you to get onto the Xbox.
You didn’t really answer the technical side of my question re: the source code license. Are you saying that even if I had the source code (and perhaps that still means an API and DLLs) I still couldn’t use FP buffers and MRTs? If it’s not cost-prohibitive I would rather go this route than wait for a new release.
I do sympathize that the minute you start exposing functionality, especially when it’s not part of the cross-platform subset, design becomes trickier.
Heh, give this thread a look to see why Unity cannot (or will not) be ported to use XNA: http://forum.unity3d.com/viewtopic.php?t=17106
But, you do realize all scripting is in C# (or JS or Boo – your choice), so you don’t have to leave it :-).
If you did have the source license (which means all the actual source, not a compiled .dll with header files) you could definitely go ahead and add MRT support to the engine. The source code license is very expensive though (I’ve heard $50,000). The only reason you’d need the source code for MRT specifically is that support for it really has to be built into the graphics engine. Most other things you’d like to add can be done in C#, or via a plugin (like wrapping a 3rd-party C++ library).
I just wonder how people are planning on using MRTs. With MRTs, it often only makes sense if you write all your shaders to understand them (after all, if you’re rendering into multiple targets, you probably want different stuff in each). But some cards (e.g. the Intel GMA 950 or the whole GeForce FX range) might not support MRTs at all. So would you write two rendering paths for everything, or not support those hardware configurations, or… ?
I’m flying a little blind here as I’ve done pretty much no Unity programming, but why not use XNA as a template? It has a very straightforward interface, and I see no reason to implement different rendering pipelines.
Perhaps have one function to query the number of render targets. This could be part of a broader device-capabilities query, a la DirectX. By default, all devices support one render target.
public void SetRenderTarget (int renderTargetIndex, RenderTarget2D renderTarget)
My graphics card supports 8 render targets, but the supported shader model is also a determining factor (Shader Model 3.0 only supports 4 max).
SetRenderTarget (0, null);
This unbinds render target 0, which resets output to the backbuffer by default.
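For reference, the full flow being described looks roughly like this against the XNA 3.x API (a sketch only: `device`, `width` and `height` are assumed to be in scope, and the G-buffer draw calls are elided):

```csharp
// Create two render targets for a G-buffer pass (XNA 3.x, SM3.0 hardware assumed).
RenderTarget2D colorRT  = new RenderTarget2D(device, width, height, 1, SurfaceFormat.Color);
RenderTarget2D normalRT = new RenderTarget2D(device, width, height, 1, SurfaceFormat.Color);

device.SetRenderTarget(0, colorRT);   // bind slot 0
device.SetRenderTarget(1, normalRT);  // bind slot 1
// ... draw geometry; the pixel shader writes to COLOR0 and COLOR1 ...
device.SetRenderTarget(0, null);      // unbind: output returns to the backbuffer
device.SetRenderTarget(1, null);

// Resolve the targets into textures for later passes.
Texture2D albedo  = colorRT.GetTexture();
Texture2D normals = normalRT.GetTexture();
```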
In HLSL, accessing the render targets in the pixel shader is straightforward: you write to multiple colour registers.
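A minimal sketch of what that looks like: the pixel shader returns a struct with one member per colour register, and each member lands in the correspondingly numbered render target (sampler and input layout here are assumptions for illustration):

```hlsl
sampler DiffuseSampler : register(s0);

// One output per bound render target.
struct PSOutput
{
    float4 Albedo : COLOR0;  // written to render target 0
    float4 Normal : COLOR1;  // written to render target 1
};

PSOutput MrtPS(float2 uv : TEXCOORD0)
{
    PSOutput o;
    o.Albedo = tex2D(DiffuseSampler, uv);
    o.Normal = float4(0.5, 0.5, 1.0, 1.0); // placeholder encoded normal
    return o;
}
```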
We already have the workings of multiple rendering paths for different hardware. Most of it is kind of hacky at the moment, but we’re moving in that direction.
One case I would’ve preferred to use MRTs is in Jetpack Brontosaurus. I would’ve much rather just written custom shaders that had multiple outputs than putting together a complex layer/camera system that the artists had to fight with. Most of our shaders were custom made anyway. “Drop in your living world texture here and your death world texture here and that’s it” makes much more sense to the artists. Plus it would have helped with the draw call issue of having 3 different “worlds” rendering at the same time.
Just seems to me that Camera.RenderWithShader + MRTs would be a powerful combination.
And another request, FP buffers being my priority over MRTs; please guys, don’t wait until there is a small army of people breaking down the door! It’s just a texture format that needs to not be disallowed.
Another vote for MRTs and FP textures here. I just assumed they were on their way, because it’s difficult or inefficient to do modern effects like HDR and SSDO without them. And is HDR even modern any more?
As far as compatibility goes, I don’t think people are worried about supporting older chipsets. Unity already supports those, and people who want to write fallback pipelines can do so. The issue here is that Unity doesn’t fully support the new ones.
Floating point render texture support is coming in Unity 2.6. There’s no built-in way to “just enable HDR”, but it will at least be technically possible to do it.
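In script, that might look something like the sketch below. The names follow Unity’s current `RenderTexture` API; whether 2.6 exposes exactly these members, and what the floating point format enum will be called, are assumptions on my part:

```csharp
// Hedged sketch: render a camera into a 16-bit-per-channel FP buffer.
RenderTexture hdrRT = new RenderTexture(Screen.width, Screen.height, 24);
hdrRT.format = RenderTextureFormat.ARGBHalf; // assumed FP format name
camera.targetTexture = hdrRT;                // camera now renders into the FP buffer
// ... later, feed hdrRT through a tone-mapping pass to the screen ...
```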