OnWillRenderObject() problems.

So I’ve been trying to update my script that makes an object look at the current camera, from working with only one camera to working with multiple cameras rendering that object (for split screens, render-to-texture, and things like that).

According to the documentation, the logical thing to do would be to use the OnWillRenderObject() function, which should be called for each object that is about to be rendered, once per camera, just before the object is rendered.
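Roughly, the kind of script in question looks like this (a minimal sketch; the class name and the use of Camera.current and LookAt are my illustration of the approach, not the exact original code):

```csharp
using UnityEngine;

// Attached to the object that should face whichever camera is rendering it.
// Requires a Renderer on the same GameObject, since OnWillRenderObject is
// only sent to visible renderers.
public class LookAtRenderingCamera : MonoBehaviour
{
    void OnWillRenderObject()
    {
        // Camera.current should be the camera that is about to render this object.
        Camera cam = Camera.current;
        if (cam == null)
            return;

        // Orient the object toward that camera.
        transform.LookAt(cam.transform);
    }
}
```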

But this function only seems to work with the editor camera :face_with_spiral_eyes:
The object never orients itself toward any game camera using this approach.

Is editing the object’s rotation at this stage simply “too late” in the pipeline?
How can we make it work?

Bump, since I don’t know of any other way to get support from the Unity team.

Is this a known bug? When should we expect it to be fixed? Should we go ahead and implement something like a “Camera Render” observer on our own to remedy this problem?

You didn’t mention half the relevant information, like which Unity version you’re using, whether you have Pro, etc.
I ask because some things were removed in U3, so if your concept relies on “what used to work in U2”, that might be the reason. (See the documentation in either case.)

If there’s no automatic function, you might try creating a Cam1 layer and a Cam2 layer. Make Cam1 not render the Cam2 layer and vice versa.

Copy your camera-facing object into both Cam1 and Cam2 layers, and have each copy look at the appropriate camera.
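A rough sketch of that setup (the layer names Cam1/Cam2 are assumed to already exist in the Tag Manager, and the prefab field is hypothetical):

```csharp
using UnityEngine;

public class SplitScreenLayerSetup : MonoBehaviour
{
    public Camera camera1;
    public Camera camera2;
    public GameObject facingObjectPrefab; // the camera-facing object to duplicate

    void Start()
    {
        int cam1Layer = LayerMask.NameToLayer("Cam1");
        int cam2Layer = LayerMask.NameToLayer("Cam2");

        // Each camera renders everything except the other camera's layer.
        camera1.cullingMask &= ~(1 << cam2Layer);
        camera2.cullingMask &= ~(1 << cam1Layer);

        // One copy per camera, each on its own layer; each copy would then
        // carry a script that does transform.LookAt on its own camera.
        GameObject copy1 = (GameObject)Instantiate(facingObjectPrefab);
        copy1.layer = cam1Layer;
        GameObject copy2 = (GameObject)Instantiate(facingObjectPrefab);
        copy2.layer = cam2Layer;
    }
}
```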

dreamora:

I am using Unity 3.0 Pro. I’m not relying on “what used to work in U2”; I rely on the documentation only, and according to it this feature should work.

Vicenti:
This approach is extremely hard to maintain and manage. And what if I need to support four cameras for a 4-player split screen?
It also means objects would literally have to be duplicated four times, which sounds like a waste of resources, especially on consoles.

A probably better solution (even though it would just implement the feature Unity should have had) is to do something along these lines (sketched below):

CameraObserver - an object that registers for and receives camera callbacks.
GameCamera - represents a camera in the game and sends BeforeRendering events to each CameraObserver.
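A minimal sketch of how that could look (the class names follow the proposal above; the registration details and the use of OnPreRender, which Unity sends to scripts attached to a camera just before that camera renders, are my assumptions):

```csharp
using System.Collections.Generic;
using UnityEngine;

// CameraObserver: attach to anything that wants a callback before each camera renders.
public class CameraObserver : MonoBehaviour
{
    void OnEnable()  { GameCamera.Register(this); }
    void OnDisable() { GameCamera.Unregister(this); }

    // Called by GameCamera just before the given camera renders the scene.
    public virtual void BeforeRendering(Camera cam)
    {
        // Example use: billboard toward the camera that is about to render.
        transform.LookAt(cam.transform);
    }
}

// GameCamera: attach to each in-game camera; it forwards the BeforeRendering event.
public class GameCamera : MonoBehaviour
{
    static readonly List<CameraObserver> observers = new List<CameraObserver>();

    public static void Register(CameraObserver observer)   { observers.Add(observer); }
    public static void Unregister(CameraObserver observer) { observers.Remove(observer); }

    void OnPreRender()
    {
        // OnPreRender fires once per render of the camera on this GameObject.
        Camera cam = GetComponent<Camera>();
        for (int i = 0; i < observers.Count; i++)
            observers[i].BeforeRendering(cam);
    }
}
```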

This is extremely useful for things that need to be aligned with the camera, like 3D text, health bars, effect plates, etc.

This is probably still less efficient than OnWillRenderObject, though, because that function should only be called for objects that are actually VISIBLE to the camera. With my solution, the logic would run for every registered object in the world, not just the visible ones.
This could be remedied by implementing our own frustum culling (even if it’s rough), along the lines of the sketch below.
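For example, a very rough per-camera visibility test could be built on GeometryUtility (the helper below is my sketch; CalculateFrustumPlanes and TestPlanesAABB are standard Unity calls):

```csharp
using UnityEngine;

public static class RoughCulling
{
    // Returns true if the renderer's bounding box intersects the camera's frustum.
    // The observer's BeforeRendering logic could early-out when this returns false.
    public static bool IsVisibleTo(Camera cam, Renderer target)
    {
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(cam);
        return GeometryUtility.TestPlanesAABB(frustum, target.bounds);
    }
}
```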
The problem is that this would be reinventing the wheel, and I would rather have the Unity guys fix it. On the other hand, we can’t wait too long; we will need a split-screen demonstration pretty soon. Bummer.

Bump?

I am not entirely sure whether this is a support forum or not anymore.
If it is, aren’t we supposed to be getting help from the Unity team?
It’s been more than a week since the original post.

We have invested a lot in buying our licenses, and having close to absolutely no support is really not cool.
Or is there another support channel that the Unity team provides that I don’t know of?

This is a community board where people help you with problems if they can and want to.
If you need support on something that technically does not work, I would recommend emailing support and filing a bug report if something does not behave as described in the U3 documentation for your platform (each platform basically has its own set of documentation).