I’m at a bit of a crossroads here. Let me explain. We are making a virtual trainer for electrical connectors in a helicopter. There are hundreds of components that use hundreds of different types of connectors (different pin counts, layouts, etc.). The trainer has two parts: the actual 3D model, where the student removes the wire connector from the component plug, and a 2D window overlay view where the student actually tests/probes the connectors.
Because there could be over a hundred pins on a single connector, we use a higher-res image in the 2D window to show the connector face and let the student probe the pins with a multimeter. For the 2D window, I am using the original 256 map, which has a single connector face on it.
In the 3D world we are using a much lower-res version of the connector: a 1024 map with dozens and dozens of connector faces on it. Each face can be clearly seen, but is not clear enough to be blown up for probing.
So my question is: is it better to continue with the flow I am using, or would it be better to use the higher-res single-connector-face textures throughout the 3D world and get rid of the 1024 multi-connector texture?
My reasoning: the 2D window only ever shows two connectors at a time, while the 3D world could have dozens of connectors showing at any given time, so my current workflow should be better for overall game resources. The programmer thought that since the higher-res images are in the world anyway (even though they are culled until he calls them up in the window view), the impact on performance might be the same. My thought was that if the student has disconnected dozens of connectors, the game would then need to call dozens of individual maps to render all those different connectors, versus just calling one or two maps to render the same scene.
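To put rough numbers on the comparison (assuming uncompressed RGBA at 4 bytes per pixel, no mipmaps, and an illustrative count of 30 connectors on screen — all of these are assumptions for the sake of the estimate, not measured values):

```python
# Back-of-envelope texture memory, assuming uncompressed RGBA (4 bytes/pixel)
# and ignoring mipmaps and compression -- illustrative numbers only.

def texture_bytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel

atlas = texture_bytes(1024, 1024)       # one shared multi-connector atlas
single_face = texture_bytes(256, 256)   # one per-connector face texture

print(atlas // 1024, "KB for the 1024 atlas")        # 4096 KB
print(single_face // 1024, "KB per 256 face")        # 256 KB
print(30 * single_face // 1024, "KB for 30 faces")   # 7680 KB
```

So a single shared atlas is 16 individual 256 faces' worth of memory; once more than 16 distinct faces are visible at once, the atlas comes out ahead (before even counting the cost of binding many separate textures).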
We haven’t had a chance to test it, but before we go and redo hours’ worth of work for the sake of a test, we thought we could pose the question to the community and see what the consensus is.
It should be pretty obvious that higher-resolution textures will have a larger memory footprint. Bear in mind that this relationship is quadratic, not linear: doubling a square texture’s resolution will quadruple its size in memory.
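A quick sketch of that scaling, assuming uncompressed RGBA (4 bytes per pixel); the mipmap option just illustrates the rule of thumb that a full mip chain adds roughly one third more memory:

```python
# Memory for a square RGBA texture. A full mipmap chain (each level half
# the side length of the last) adds roughly 1/3 on top of the base level.

def texture_bytes(side, bytes_per_pixel=4, mipmaps=False):
    base = side * side * bytes_per_pixel
    if not mipmaps:
        return base
    total, s = 0, side
    while s >= 1:                      # sum every mip level down to 1x1
        total += s * s * bytes_per_pixel
        s //= 2
    return total

for side in (256, 512, 1024, 2048):
    print(side, texture_bytes(side))   # each doubling quadruples the bytes
```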
As far as I understand it, larger textures introduce two main concerns:
They take longer to load
They require more memory
Presumably all of those textures will be loaded into memory at some point, regardless of whether or not they’re currently displayed. In that sense, using fewer textures may actually reduce your memory footprint.
If you’re rendering on a GPU with lots of memory to cache your textures, I wouldn’t anticipate any major framerate concern with texture size until you hit a point where the cache runs out of space. That would be a Bad Thing. Swapping cache is extremely expensive in general, will probably scale at least linearly per swap, and will swap more often as your memory starvation problems get worse.
If you’re rendering on integrated graphics, the simplest explanation is that you’ll have about the same problem but a much smaller cache. Adding insult to injury, you are probably sharing both memory and memory bus directly with the CPU, which makes all of the above problems even worse.
Probably someone who specializes in these things could give you a much better explanation, but that’s what I’ve got for now.
All textures in the scene are loaded into general memory. BUT, each texture currently being displayed is also loaded into texture memory (on the graphics card). This is generally the bottleneck. In Unity: press Play, select “Stats” and look at UsedTextures. You’ll see that it goes up and down wildly as you move around.
If you look at some tiny 4x4-pixel object using a 4 MB texture, you’ll see UsedTextures jump by 4 MB. When you look away, you’ll see it drop back down.
So, what you are doing now is mostly correct. A used texture is something that wasn’t viewport or backface culled – the high-res images hanging around off-camera aren’t causing a problem. You’ve basically invented LOD (Level of Detail), which is exactly that trick – swap in your own lower-res texture when you know it will look fine.
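The LOD idea can be sketched as picking a texture variant from the object’s on-screen size. The threshold and the texture names here are made up for illustration; a real engine would do this per-mesh with its own LOD machinery:

```python
# Sketch of texture LOD selection: choose a variant based on how many
# pixels tall the object is on screen. Thresholds/names are hypothetical.

def pick_texture(screen_height_px, variants):
    """variants: list of (min_pixels, texture_name), sorted high to low."""
    for min_px, name in variants:
        if screen_height_px >= min_px:
            return name
    return variants[-1][1]   # fall back to the lowest-detail variant

connector_lods = [
    (512, "connector_face_256"),    # close-up / 2d probing view
    (0,   "connector_atlas_1024"),  # normal 3d-world viewing distance
]

print(pick_texture(600, connector_lods))  # connector_face_256
print(pick_texture(40,  connector_lods))  # connector_atlas_1024
```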
What happens is standard operating-system stuff. The graphics card would love to have all textures (and models) for the frame loaded into its own memory. It can’t actually read a texture from main memory. Instead, it “bumps” a current texture and loads the new one. With luck (or planning), it bumped a texture it had already used this frame, but maybe not. So, it has to reload it and bump something else. Worst case, you manage to accidentally always “bump” the next texture you were going to use (known as thrashing).
So, as “texture amount used in one frame” goes up to the size of graphics card memory, no frame rate change. As soon as you go above that, you could get a sudden, large frame-rate drop. Then it just gets gradually worse as you add more.
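That cliff can be modeled with a toy LRU cache standing in for graphics-card memory (a simplification — real drivers use their own eviction policies, and the “capacity in textures” is made up for illustration):

```python
# Toy model of texture "bumping": an LRU cache of fixed capacity,
# counting texture loads as a frame's working set grows past capacity.

from collections import OrderedDict

def loads_per_frame(frame_textures, capacity):
    cache = OrderedDict()
    loads = 0
    for tex in frame_textures:
        if tex in cache:
            cache.move_to_end(tex)       # cache hit: mark as recently used
        else:
            loads += 1                   # cache miss: upload the texture
            cache[tex] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # bump least-recently-used texture

    return loads

frame = list(range(8)) * 2  # 8 textures, each needed twice per frame
print(loads_per_frame(frame, 8))  # 8: everything fits, only first-touch loads
print(loads_per_frame(frame, 7))  # 16: one texture too many -> every use misses
```

Note the worst case: with capacity just one short of the working set and textures used in the same order each pass, LRU evicts exactly the texture needed next, so every single use becomes a reload — the sudden drop described above.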