OK, so on my 3D sphere map (shown here), all of my stations, big ships, etc. are being rendered twice: once by the gameworld camera and once by the radar camera. My question is: because everything is rendered twice, is it using pretty much the same amount of resources as having two separate objects?
I would like to use the same mesh but with a different shader for the radar. I doubt there is much of a performance difference between those options anyway, but I was curious.
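To make the setup concrete, here's roughly what I mean, sketched in three.js-style TypeScript purely as an illustration (my project may not match this; names like worldCam, radarCam, and radarMaterial are made up):

```typescript
import * as THREE from 'three';

// One scene, one copy of each mesh in memory; two cameras draw it.
const scene = new THREE.Scene();
scene.add(new THREE.DirectionalLight(0xffffff, 1));

const ship = new THREE.Mesh(
  new THREE.BoxGeometry(2, 1, 4),
  new THREE.MeshStandardMaterial({ color: 0x888888 })
);
scene.add(ship);

const worldCam = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 2000);
const radarCam = new THREE.PerspectiveCamera(60, 1, 0.1, 50000);

// Flat green "blip" look for the radar pass.
const radarMaterial = new THREE.MeshBasicMaterial({ color: 0x00ff00, wireframe: true });

const renderer = new THREE.WebGLRenderer();
renderer.setSize(1280, 720);
renderer.autoClear = false;

function renderFrame(): void {
  renderer.clear();

  // Main view: normal materials, full screen.
  renderer.setViewport(0, 0, 1280, 720);
  renderer.render(scene, worldCam);

  // Radar inset: the SAME meshes, one override material for everything.
  // The geometry isn't duplicated; only the draw calls happen a second time.
  renderer.setViewport(1060, 500, 200, 200);
  scene.overrideMaterial = radarMaterial;
  renderer.render(scene, radarCam);
  scene.overrideMaterial = null;
}
```

The way I picture it, the mesh data lives on the GPU once and only the per-frame draw work doubles, but I'd like confirmation that that's how it actually works.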
Second, in my game you can travel extremely far from objects, and one of the small issues I was having is Z-fighting. I was wondering how resource-taxing it would be to have one camera render close objects and a second camera render far-away objects (kind of like an LOD system without the detail swapping).
For example, let's say we have a camera with a near clip plane of 0.1 and a far plane of 2,000, and another camera with a near of 2,000 and a far of 50,000. Neil said that then you would be rendering everything twice. What I didn't understand is: if one camera has a near clip of 2,000, how is it rendering anything closer than that, since it doesn't see it? This also isn't a huge deal, because I have worked out OK numbers for a single camera; a near-to-far ratio of 1:20,000 seems to work well. Once again, I'm just curious.
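Here's a sketch of what I understood the two-camera idea to be (again in three.js-style TypeScript, just for illustration; nearCam and farCam are hypothetical names):

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();
const renderer = new THREE.WebGLRenderer();
renderer.autoClear = false; // we manage clears ourselves

// Same position and orientation, but split depth ranges.
const nearCam = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 2000);
const farCam  = new THREE.PerspectiveCamera(60, 16 / 9, 2000, 50000);

function renderFrame(): void {
  // Keep the far camera glued to the near one.
  farCam.position.copy(nearCam.position);
  farCam.quaternion.copy(nearCam.quaternion);

  renderer.clear();

  // Pass 1: distant objects. Anything closer than 2,000 is clipped by
  // this camera's near plane, so it never reaches the screen...
  renderer.render(scene, farCam);

  // ...but as far as I can tell, the engine still has to walk the scene
  // and frustum-cull every object for BOTH passes, which I assume is
  // what "rendering everything twice" meant.
  renderer.clearDepth(); // throw away far-range depth values
  renderer.render(scene, nearCam);
}
```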
Third, when using a multi-camera LOD system, can it really speed things up that much? It seems to me you're trading fewer polygons rendered at once for more objects in the scene, which I know can be resource-heavy. So how much of a performance gain can there really be?
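For reference, this is the version of a multi-camera LOD that I have in mind, using layers so each camera only sees one detail level (sketched in three.js-style TypeScript; highPoly and lowPoly are made-up stand-ins):

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Full-detail mesh on layer 0, low-poly stand-in on layer 1.
const highPoly = new THREE.Mesh(
  new THREE.SphereGeometry(10, 64, 64),
  new THREE.MeshNormalMaterial()
);
highPoly.layers.set(0);

const lowPoly = new THREE.Mesh(
  new THREE.SphereGeometry(10, 8, 8),
  new THREE.MeshNormalMaterial()
);
lowPoly.layers.set(1);
lowPoly.position.copy(highPoly.position);

scene.add(highPoly, lowPoly);

// Near camera sees only layer 0, far camera only layer 1.
const nearCam = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 2000);
nearCam.layers.set(0);

const farCam = new THREE.PerspectiveCamera(60, 16 / 9, 2000, 50000);
farCam.layers.set(1);

// The trade-off I'm asking about: far objects cost far fewer triangles,
// but every object now exists twice for the engine to cull and batch.
```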
Thanks a lot,
Bill