I’m getting really close to making the leap to Unity, and wanted to verify that a couple of features I need for the game I have in mind are supported.
Does Unity allow you to attach custom shaders to particles? Even better, does Unity allow you to attach custom appearances to particles beyond just meshes?
I would assume the screen space transforms that billboard the particles are handled in the vertex shader, so it’s unlikely, but would Unity provide the hooks to create a new billboard shader?
Is the render pipeline configurable enough that I could swap render targets, render a batch of particles to a new target, and later in the pipeline reference that render target in another shader?
Appreciate the input on this. I’ve got the Mac store open as I write this, and just want to make sure I don’t spend a lot of money for nothing.
Of course. We ship half a dozen or so built-in shaders for particles (additive, multiplicative, smooth additive, …), but you can write and use any shader on them.
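For illustration, here’s a minimal C# sketch of hooking a custom shader up to a particle renderer from script. The shader name "Particles/MyCustomShader" is hypothetical; the Renderer/Material calls are the generic ones and should apply whether the component is the legacy Particle Renderer or a newer particle component.

```csharp
using UnityEngine;

// Sketch: assign a material using a custom shader to whatever
// renderer component the particle object carries.
public class ParticleShaderSetup : MonoBehaviour
{
    void Start()
    {
        var particleRenderer = GetComponent<Renderer>();
        // "Particles/MyCustomShader" is a hypothetical shader name.
        var shader = Shader.Find("Particles/MyCustomShader");
        if (shader != null)
            particleRenderer.material = new Material(shader);
    }
}
```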
Not really. Particles in Unity are quads or stretched quads (depending on options in the Particle Renderer). If you want something else, you’ll have to implement it yourself using the procedural mesh interface.
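A hedged sketch of what that procedural mesh route looks like: build a quad by hand and hand it to a MeshFilter. A custom particle appearance would generate one such piece of geometry per particle each frame.

```csharp
using UnityEngine;

// Sketch: one hand-built quad via the procedural Mesh interface.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class ProceduralQuad : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh();
        mesh.vertices = new Vector3[] {
            new Vector3(-0.5f, -0.5f, 0f),
            new Vector3( 0.5f, -0.5f, 0f),
            new Vector3( 0.5f,  0.5f, 0f),
            new Vector3(-0.5f,  0.5f, 0f)
        };
        mesh.uv = new Vector2[] {
            new Vector2(0f, 0f), new Vector2(1f, 0f),
            new Vector2(1f, 1f), new Vector2(0f, 1f)
        };
        mesh.triangles = new int[] { 0, 2, 1, 0, 3, 2 };
        mesh.RecalculateNormals();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```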
Particles are oriented towards the camera on the CPU. The reason is that we generally have to run on very old machines as well, where vertex shaders are not available (this is not an issue on Direct3D, but when using OpenGL, vertex programs are not available everywhere). Of course, you can use a vertex shader on the particles that transforms the vertices into something other than a billboard.
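For reference, the same camera-facing orientation the particle code does on the CPU, expressed for a single quad: copying the camera’s rotation keeps the quad parallel to the view plane.

```csharp
using UnityEngine;

// Sketch: CPU billboarding for one quad. Matching the camera's
// rotation keeps the quad facing the view plane every frame.
public class Billboard : MonoBehaviour
{
    void LateUpdate()
    {
        transform.rotation = Camera.main.transform.rotation;
    }
}
```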
Yes. There are several ways to do it; the easiest is probably creating a new camera that renders into your render target, and setting up its culling mask so that it renders only the objects you want. Then you’ve got your render texture. Alternatively, you could take a more code-driven route and use the functionality in the RenderTexture class.
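A rough sketch of that camera-based approach, assuming a Pro license (render textures) and with layer 8 as an arbitrary choice for the particle layer:

```csharp
using UnityEngine;

// Sketch: point an extra camera at the particle layer and have it
// render into a RenderTexture that other shaders can sample.
public class ParticleRTSetup : MonoBehaviour
{
    public Camera particleCamera;   // the extra camera
    public RenderTexture target;

    void Start()
    {
        if (target == null)
            target = new RenderTexture(256, 256, 16);
        particleCamera.cullingMask = 1 << 8;  // render only layer 8
        particleCamera.targetTexture = target;
        // Another material can now sample `target` like any texture,
        // e.g. someMaterial.SetTexture("_MainTex", target);
    }
}
```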
In most systems I’ve seen, the particles get batched into the rendering pipeline during the update phase, and the renderer basically treats them (and all other surfaces in the same render pass) equivalently. That has made it difficult to take a generic particle system and force a batch of particles into a custom render pass, so that you can switch render targets temporarily and then switch back to the main target when you’re done.
The use of a second camera has potential, although it complicates things somewhat if there’s an assumption that the particles are drawn somewhere else in the world (outside the main game camera’s frustum), because the game I’d like to make uses the particles in the main simulation. That in itself may indicate I should simply create a custom object class with particle-like draw behaviours, which should trivialize the render target swap, since it could presumably just be coded into the shader code’s draw function.
In fact, I think in retrospect that’s likely the answer in this case, since I have no idea if Unity’s particles are capable of interacting with the simulation by colliding with other game objects…
Thanks Aras…
Tz.
(I think the support on this board may be the single best reason for wanting to move to Unity… great community here!)
Not sure why that would complicate it. You can have as many cameras in Unity as you like, and of course you can move one from a script to, for example, frame a specific particle system. What exactly do you want to do?
You can just attach a World Particle Collider component. Then all particles will perform collision detection against all colliders in the scene.
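Doing the same from script might look like the sketch below. WorldParticleCollider is the class behind the component named above (it belongs to the legacy particle system this thread discusses, so exact property names may differ by Unity version).

```csharp
using UnityEngine;

// Hedged sketch: enable particle collision from code via the legacy
// World Particle Collider component.
public class ParticleCollisionSetup : MonoBehaviour
{
    void Start()
    {
        var collider = gameObject.AddComponent<WorldParticleCollider>();
        collider.bounceFactor = 0.5f;   // energy kept on bounce
        collider.collidesWith = ~0;     // collide with every layer
    }
}
```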
Yes, that is true in Unity as well. But read below.
They don’t have to be somewhere else. Any object in Unity can be assigned a Layer, and components like Cameras, Lights or Projectors can be set to render/illuminate only specific layers.
So in your case you’d assign some layer to your particles, set up a camera that renders only that layer, and you’re set.
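In script form, that setup might look like this sketch; "Particles" is assumed to be a user-defined layer name in the project’s layer settings.

```csharp
using UnityEngine;

// Sketch: put the particle object on its own layer, have the extra
// camera render only that layer, and hide it from the main camera.
public class LayerSetup : MonoBehaviour
{
    public Camera particleCamera;
    public GameObject particleObject;

    void Start()
    {
        int layer = LayerMask.NameToLayer("Particles");
        particleObject.layer = layer;
        particleCamera.cullingMask = 1 << layer;   // only that layer
        Camera.main.cullingMask &= ~(1 << layer);  // hide from main
    }
}
```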
If you want to dig much deeper into the rendering pipeline, there’s the OnRenderObject function; there it is possible to change the active render target, render something, and revert to the previous render target. Some of Unity’s terrain code is done in C# right now, and it uses this function to update vegetation impostor textures.
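A hedged sketch of that pattern: save the active render target, draw into your own, and restore. The `impostorTexture`, `impostorMaterial`, and `impostorMesh` fields are hypothetical; the save/restore pattern is the point.

```csharp
using UnityEngine;

// Sketch: swap render targets inside OnRenderObject, draw, revert.
public class ImpostorUpdater : MonoBehaviour
{
    public RenderTexture impostorTexture;
    public Material impostorMaterial;
    public Mesh impostorMesh;

    void OnRenderObject()
    {
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = impostorTexture;
        GL.Clear(true, true, Color.clear);
        impostorMaterial.SetPass(0);
        Graphics.DrawMeshNow(impostorMesh, Matrix4x4.identity);
        RenderTexture.active = previous;   // back to the main target
    }
}
```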
That sounds like an ideal solution… so the objects can all participate in the simulation, but be masked to various camera layers… nice. I presume you can choose the order those layers are rendered in, which solves the problem nicely. =)
Thanks for all the help, you’ve got another convert; I ordered my Mac today and will be ordering Unity Pro sometime late next week when it gets here…
Yes. Cameras have a depth property that determines the order in which they are rendered. Additionally, all cameras that render into render textures are drawn before cameras that render to the screen.
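A tiny sketch of that ordering rule: lower depth draws first. A render-texture camera would draw before the main camera anyway, but setting depth makes the intent explicit.

```csharp
using UnityEngine;

// Sketch: make the render-texture camera draw before the main one.
public class CameraOrder : MonoBehaviour
{
    public Camera particleCamera;   // renders into a RenderTexture

    void Start()
    {
        particleCamera.depth = -1;  // drawn before...
        Camera.main.depth = 0;      // ...the main camera
    }
}
```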