To try to make this question a little more understandable…
Look in front of yourself. Recognize what you can and cannot see without moving your head or eyes.
Realize you cannot see anything beyond your peripheral vision. Nothing behind you is rendered in your view (barring reflections…).
Now imagine that you are able to step out of your body, three paces back, and hover a similar distance in the air behind yourself, so you are observing yourself. The only limitation is that you CANNOT see anything your body’s eyes cannot see.
—If this was too hard to imagine, then just picture this instead.
You are in a pitch-black room with a flashlight that acts more like a spotlight than a floodlight. Now put your perspective in a 3rd-person view over your shoulder.
This is essentially the effect I’m trying to create, WITHOUT using shadows and lights to do it. I want to render an FPS view from a 3rd-person camera.
Has anyone seen an example of a game that does this, or does anyone know a good way to achieve it?
Currently I am using depth of field on my 3rd-person camera to approximate the peripheral-vision part, but I’m not able to achieve the culling and occlusion parts: my main camera, which renders the world, sees everything behind my player and thus doesn’t cull it, and the 3rd-person camera can also see over walls, so it doesn’t occlude anything either. I thought I could just put a quad in front of the camera with an alpha texture cutting out the areas my player is allowed to see, to simulate his view frustum, but that still doesn’t fix my culling and occlusion problem.
The only possible workaround I can imagine for this is one of two things.
Either I do raycasting from the location of my player’s eyes into their view frustum and just turn on the mesh renderers of anything that gets hit, so the player can see what he sees, or I do something similar using a collider instead. Obviously this approach is taxing and not as smooth an implementation as I’d like.
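Roughly, the raycasting version I have in mind looks like this sketch, where the eyeCamera reference, the grid resolution, and the max distance are all placeholder assumptions:

```csharp
using UnityEngine;

// Sketch of the raycast idea: sample the eye camera's frustum through
// viewport space and enable the renderer of whatever each ray hits.
public class FrustumRaycastVisibility : MonoBehaviour
{
    public Camera eyeCamera;      // camera placed at the player's eyes
    public int samplesX = 32;     // horizontal ray count across the viewport
    public int samplesY = 18;     // vertical ray count across the viewport
    public float maxDistance = 100f;

    void Update()
    {
        for (int y = 0; y < samplesY; y++)
        {
            for (int x = 0; x < samplesX; x++)
            {
                // Viewport coordinates run 0..1 across the camera's view,
                // so the rays cover the whole frustum.
                Vector3 vp = new Vector3(x / (samplesX - 1f), y / (samplesY - 1f), 0f);
                Ray ray = eyeCamera.ViewportPointToRay(vp);

                if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
                {
                    var mr = hit.collider.GetComponent<MeshRenderer>();
                    if (mr != null)
                        mr.enabled = true; // mark as "seen"
                }
            }
        }
    }
}
```

(Something would also have to turn renderers back off once they leave the view; this only handles the “turn on what gets hit” half.)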
And for those who are curious as to why I want to achieve this: I want to make a game that doesn’t give the player any sort of omniscient features (minus the overhead view), and that instead relies heavily on player focus and attention.
Sounds like you need a mask in the shape of the culling frustum that you test depth against, and maybe something similar to the shadow-mapping technique: take a depth image from a camera (instead of from a light’s perspective), then, when rendering from the new camera, project each point back into the old camera’s view and compare its depth to the stored depth along the old camera’s view vector. If the stored depth is closer, the point is occluded from the old camera, so don’t show it.
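A rough sketch of the C# side of that, assuming a custom shader does the actual per-pixel comparison (the global property names _EyeDepthTex and _EyeVP are made up for illustration):

```csharp
using UnityEngine;

// Render a depth image from the "old" camera (the player's eyes) and expose
// it, plus the matrix that projects world points into that camera's view,
// to every shader. The hide/show comparison itself lives in a shader.
public class EyeDepthMask : MonoBehaviour
{
    public Camera eyeCamera;   // the "old" camera, at the player's eyes
    RenderTexture depthTex;

    void Start()
    {
        depthTex = new RenderTexture(1024, 1024, 24, RenderTextureFormat.Depth);
        eyeCamera.targetTexture = depthTex; // this camera now renders into a depth texture
    }

    void LateUpdate()
    {
        // World -> eye-camera clip space, so a shader can look up the stored
        // depth along the old camera's view vector. (Some platforms need
        // GL.GetGPUProjectionMatrix applied to the projection here.)
        Matrix4x4 vp = eyeCamera.projectionMatrix * eyeCamera.worldToCameraMatrix;
        Shader.SetGlobalMatrix("_EyeVP", vp);
        Shader.SetGlobalTexture("_EyeDepthTex", depthTex);
        // Shader side: project the fragment with _EyeVP, sample _EyeDepthTex,
        // and if the stored depth is closer than the fragment, hide the fragment.
    }
}
```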
I do this now for a game I am working on. I have two cameras in my scene.
One is active, used for culling and rendering the background.
The second is disabled, and only used to get a camera projection.
I have a script write the second camera’s MVP matrix to a global shader matrix.
For my foreground object, the shader assigned to it internally renders it using both the first and second cameras’ transforms.
I see something like this working for you, maybe, except all your objects would use it. That’s if your camera views are not too different.
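The matrix-publishing script is basically a one-liner. A minimal sketch, where the global property name _SecondCamVP is an assumption (the model half of MVP comes from each object’s own transform in the shader):

```csharp
using UnityEngine;

// Writes this (disabled) camera's view-projection matrix to a global shader
// property every frame, so any material can render "from" this camera.
[RequireComponent(typeof(Camera))]
public class PublishCameraMatrix : MonoBehaviour
{
    void LateUpdate()
    {
        Camera cam = GetComponent<Camera>();
        Matrix4x4 viewProj = cam.projectionMatrix * cam.worldToCameraMatrix;
        Shader.SetGlobalMatrix("_SecondCamVP", viewProj);
    }
}
```

Note that disabling the Camera component stops it from rendering, but its matrices stay valid, so a script like this still works.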
Second, you could manually get the camera frustum from your first camera and manually enable/disable your game objects based on it.
And then just render normally with your second camera.
Again, how similar the cameras are will have an effect.
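A minimal sketch of that second approach, using Unity’s GeometryUtility helpers (the flat renderer list and the per-frame scan are simplifications):

```csharp
using UnityEngine;

// Test each object's bounds against the first camera's frustum planes and
// toggle its renderer, then let the second camera render normally.
public class ManualFrustumCull : MonoBehaviour
{
    public Camera firstCamera;   // the player's "eye" camera
    public Renderer[] targets;   // objects to show/hide

    void Update()
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(firstCamera);
        foreach (Renderer r in targets)
            r.enabled = GeometryUtility.TestPlanesAABB(planes, r.bounds);
    }
}
```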
Both of these would let you see more than the first camera could see before. Say the second camera looks down: it would draw what’s behind it. But it wouldn’t draw any more objects. If you want there to be empty spaces behind the objects, then you might need to do something like neoshaman mentioned.
A quick test would be to put a shadow-casting light aligned with your first camera.
Then, when rendering with your second camera, anything in shadow could be considered invisible.
You might be able to adapt this with Unity’s shadow system and layers.
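A sketch of that quick test, keeping a shadow-casting spot light glued to the first camera (matching spotAngle to fieldOfView only roughly fits the frustum, since fieldOfView is the vertical angle):

```csharp
using UnityEngine;

// Keep a spot light (with shadows enabled) aligned with the first camera so
// its shadows mark everything that camera cannot see.
public class CameraAlignedLight : MonoBehaviour
{
    public Camera firstCamera;
    public Light spot;   // spot light with shadows turned on

    void LateUpdate()
    {
        spot.transform.SetPositionAndRotation(
            firstCamera.transform.position,
            firstCamera.transform.rotation);
        spot.spotAngle = firstCamera.fieldOfView; // rough match only
    }
}
```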
Or just do your own depth map from your first camera and do the shadow/depth comparisons when rendering from the second.
[Update]
While I have not found a smooth and hassle-free way to accomplish what I wanted, I have found a rather innovative way of getting the deed done. I decided to make a scalable radar using raycasts that flood the viewport in all directions of the frustum, and I animated them so that they sweep just like a radar. Currently my code is working like a charm, but it isn’t easily drag-and-droppable onto any camera one desires without the hassle of placing child objects for vector references. I’m going to find a way to do all that automatically through code, so that the component can be attached to any camera without the need to do anything special.
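For anyone curious, a stripped-down sketch of the sweep without the child-object references, using ViewportPointToRay so it can be dropped onto any camera (ray counts, column count, and distance are placeholder values):

```csharp
using UnityEngine;

// "Radar" visibility sweep: each frame, cast one vertical column of rays
// through the camera's viewport, then advance the column, so the rays
// sweep across the frustum like a radar.
[RequireComponent(typeof(Camera))]
public class RadarSweep : MonoBehaviour
{
    public int raysPerColumn = 24;
    public int columns = 64;          // horizontal resolution of the sweep
    public float maxDistance = 100f;
    int currentColumn;

    void Update()
    {
        Camera cam = GetComponent<Camera>();
        float x = currentColumn / (columns - 1f);

        for (int i = 0; i < raysPerColumn; i++)
        {
            float y = i / (raysPerColumn - 1f);
            Ray ray = cam.ViewportPointToRay(new Vector3(x, y, 0f));
            if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
            {
                var mr = hit.collider.GetComponent<MeshRenderer>();
                if (mr != null) mr.enabled = true;
            }
        }

        currentColumn = (currentColumn + 1) % columns; // advance the sweep
    }
}
```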
I can’t say that will be the most efficient or accurate way, but if it works for you, then have at it.
If you’re going that way, you can look at other Physics methods like OverlapBox, which can query all the colliders in an area in one pass instead of casting multiple rays.
This will also keep you from missing smaller objects.
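A minimal sketch of the OverlapBox version, with the box only crudely fitted to the camera’s view (a real fit to the frustum would be tighter):

```csharp
using UnityEngine;

// One physics query that returns every collider inside a box roughly
// covering the view, instead of many individual rays.
public class FrustumOverlapQuery : MonoBehaviour
{
    public Camera eyeCamera;
    public float depth = 50f;   // how far ahead of the camera to search

    void Update()
    {
        Transform t = eyeCamera.transform;
        Vector3 center = t.position + t.forward * (depth * 0.5f);
        Vector3 halfExtents = Vector3.one * (depth * 0.5f);

        Collider[] hits = Physics.OverlapBox(center, halfExtents, t.rotation);
        foreach (Collider c in hits)
        {
            var mr = c.GetComponent<MeshRenderer>();
            if (mr != null) mr.enabled = true;
        }
    }
}
```

Keep in mind it is just an “inside the box” test; it knows nothing about what is behind what.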
I’m down for trying another way if it’s within my grasp to comprehend how to implement it. That, and if it will accomplish my goal, of course.
The ideas mentioned by others above went completely over my head, so I tried to find information with screenshots showing exactly what the methods accomplish, but alas, I found more hay than needles.
Neoshaman mentioned something about depth images. The only thing I know about depth and cameras is the render order: draw one thing on top of the other on the screen at render time. I tried this but couldn’t get an acceptable result. The closest thing I managed required an oblique projection and two cameras rendering the scene, one with a 0.01 near / 0.3 far plane and the other with a 0.3 near / 1000 far plane, with my player situated right at the 0.3 mark between the two. A third camera was used to render my player sandwiched between these two cameras to give the illusion. Problem is, getting too close to walls caused some quirky issues, with corners rendering on my player or my player rendering through the corners of walls.
Not sure if that was what Neoshaman was suggesting with the depth stuff or not. I’m still quite new to this stuff.
Daxiongmao, does an OverlapBox work essentially like a normal collider’s OnCollisionEnter method and just store every single hit inside a Collider array? If so, how would I cross-reference that list with a list of all the objects within my frustum that are not behind other objects? I don’t want to go rendering objects behind walls.
I forgot about you wanting to not draw things behind others.
Basically, OverlapBox will find all the colliders inside of it and then give you a list.
So this might not work as well here, because you want to hide things.
Like I said before, if your way is working and you are not having performance issues, move on and come back to it if you do.
So what you are doing now is basically all CPU-based.
What neoshaman mentioned, and how shadow maps work, is kind of similar, but it is all done on the GPU. It’s kind of like you are raytracing once for every pixel and storing the distance to the closest object (from the light’s point of view).
But you don’t know what object it was. So when you draw your normal objects, you just draw them all and check to see which is closer.
If the value in the shadow map is closer, then the thing you’re drawing now is behind it. So it’s in shadow, or in your case, hidden.
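In C# terms, the per-pixel test amounts to something like this conceptual sketch (in practice it runs in a shader, and the epsilon is an assumed small bias to avoid self-occlusion artifacts):

```csharp
public static class DepthMaskTest
{
    // storedDepth: distance recorded in the depth map along this line of sight.
    // fragmentDepth: distance of the thing being drawn now, in the same space.
    public static bool Hidden(float storedDepth, float fragmentDepth,
                              float epsilon = 0.001f)
    {
        // Something nearer already occupies this line of sight, so the
        // fragment is occluded (in shadow, or in this case hidden).
        return fragmentDepth > storedDepth + epsilon;
    }
}
```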
I would say move on, and later, if your current method becomes an issue, come back and investigate this method.