I want to test the limits of my iPhone 4, so I planned to suggest to our character modeler that he make a 5k-poly character with 2048x2048 diffuse, normal, and specular maps. We would then keep adding the character to the scene until the frame rate dropped below 30 fps. The game will have a maximum of five characters on screen at one time and is set inside a cabin.
If the game cannot sustain a frame rate above 30 fps with that setup, we would ask him to reduce the character's poly count by 10% and try again. My question is whether reducing the poly count would require new texture maps, or whether the already-created maps could be reused for testing purposes.
If not, what would you suggest as the best, most efficient way to approach this?
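The loop I had in mind is simple enough to script. A rough C# sketch (the class name and characterPrefab field are just placeholders, and smoothed deltaTime stands in for real profiling):

    using UnityEngine;

    public class SpawnUntilSlow : MonoBehaviour
    {
        public GameObject characterPrefab; // placeholder: the test character, or just a capsule
        public float targetFps = 30f;

        float smoothedDelta = 1f / 60f;
        int spawned;

        void Update()
        {
            // Exponentially smooth frame time so one hitch doesn't end the test.
            smoothedDelta = Mathf.Lerp(smoothedDelta, Time.deltaTime, 0.05f);
            float fps = 1f / Mathf.Max(smoothedDelta, 0.0001f);

            if (fps > targetFps)
            {
                // Offset each copy so they don't all z-fight in one spot.
                Vector3 pos = new Vector3(spawned * 0.5f, 0f, 0f);
                Instantiate(characterPrefab, pos, Quaternion.identity);
                spawned++;
            }
            else
            {
                Debug.Log("Dropped below " + targetFps + " fps at " + spawned + " characters");
                enabled = false;
            }
        }
    }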
The texture maps could be reused. However, the model would need to be reskinned, and it’s a pain in the arse. Easily the worst and hardest part of modelling, at least for me.
Lowering the number of polygons can usually be done automatically (depending on the software) using some sort of “mesh optimization” utility. However, it can really mess up the structure of the model, make it so it doesn’t bend right, etc. You can also increase the number of polygons (like a “mesh smooth”), but usually this won’t really make the model look much better. Models have to be designed a certain way for them to look and animate right. I would recommend asking your artist to make the model with as few polygons as possible while still having it look and animate correctly. I would also recommend lowering the texture size to 1024x1024 at most. It’s not like you’re going to see much detail on that little screen anyway.
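And you don’t need new source art to test at 1024: the import settings can do the downsizing for you. A minimal editor sketch (the menu name is made up) that caps a selected texture at 1024 while leaving the 2048 file untouched:

    using UnityEditor;
    using UnityEngine;

    public static class CapTextureSize
    {
        [MenuItem("Tools/Cap Selected Texture At 1024")] // hypothetical menu entry
        static void Cap()
        {
            string path = AssetDatabase.GetAssetPath(Selection.activeObject);
            TextureImporter importer = AssetImporter.GetAtPath(path) as TextureImporter;
            if (importer == null) return;

            importer.maxTextureSize = 1024;   // the 2048 source art stays untouched
            AssetDatabase.ImportAsset(path);  // reimport with the new cap
        }
    }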
Something like this can’t be answered in the abstract. The UV map could make good use of the texture, or every vertex could land on the same texel! How close is the model to the screen? I’ve seen questions like this from various people; you’re not alone in not understanding that a certain size doesn’t have “a look”. But it’s imperative you acquire that understanding. Based on this thread, I can’t imagine you’re going to get to work with the same character modeler more than once, or keep the one you have, unless the salary is huge. You need to learn how to prototype better. What’s the point of modeling over and over? Any mesh with the same number of vertices, covering the same number of pixels, will give you the same performance – just use capsules or boxes! Redoing a model to test performance will only lead to wasted time and burnout.
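If you want the vertex count to match the planned character too, a dummy grid mesh takes minutes to generate. A rough sketch (the 70x72 grid is just an arbitrary way to land near 5k vertices):

    using UnityEngine;

    public class DummyMesh : MonoBehaviour
    {
        public int rows = 70, cols = 72; // ~5040 vertices, close to a 5k character

        void Start()
        {
            // Lay the vertices out as a flat grid.
            Vector3[] verts = new Vector3[rows * cols];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    verts[r * cols + c] = new Vector3(c * 0.02f, r * 0.02f, 0f);

            // Two triangles per grid cell.
            int[] tris = new int[(rows - 1) * (cols - 1) * 6];
            int t = 0;
            for (int r = 0; r < rows - 1; r++)
                for (int c = 0; c < cols - 1; c++)
                {
                    int i = r * cols + c;
                    tris[t++] = i; tris[t++] = i + cols; tris[t++] = i + 1;
                    tris[t++] = i + 1; tris[t++] = i + cols; tris[t++] = i + cols + 1;
                }

            Mesh m = new Mesh();
            m.vertices = verts;
            m.triangles = tris;
            m.RecalculateNormals();
            gameObject.AddComponent<MeshFilter>().mesh = m;
            gameObject.AddComponent<MeshRenderer>();
        }
    }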
You’re right. I do need to learn how to prototype better, which is why I’m here asking these questions. The character modelers I’m working with haven’t done any work yet, but I understand the hazard of wasted time and burnout and thought I would post here asking for the best approach.
I thought it was better to post here and learn from your experience before having them work on a single thing. From your response, I feel like I made the right decision.
This setup could probably run at 60 fps by itself. The real issues are how many bones you’re using and how complex the shaders are.
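If bones turn out to be the bottleneck, you can clamp bone influences per vertex without re-rigging anything. A quick sketch using Unity’s quality settings:

    using UnityEngine;

    public class ClampSkinning : MonoBehaviour
    {
        void Start()
        {
            // Project-wide cap: blend at most two bones per vertex.
            QualitySettings.blendWeights = BlendWeights.TwoBones;

            // Or per renderer, overriding the global setting.
            SkinnedMeshRenderer smr = GetComponent<SkinnedMeshRenderer>();
            if (smr != null)
                smr.quality = SkinQuality.Bone2;
        }
    }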
If it helps you decide: all our textures are 2048x2048 minimum, and all our source models are a few million polys, because the artist uses Topogun for the low-poly versions.
Perhaps 3.6 will bring the automatic LOD stuff in, meaning we can really tweak those speeds. Unfortunately, I don’t think Jessy support will be available in Unity until at least 6.0.
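In the meantime, nothing stops you from faking LOD by hand. A minimal distance-swap sketch (the field names and the 10 m threshold are placeholders):

    using UnityEngine;

    public class ManualLOD : MonoBehaviour
    {
        public Renderer highPoly;  // assign both in the Inspector
        public Renderer lowPoly;
        public float switchDistance = 10f; // placeholder threshold

        void Update()
        {
            // Swap renderers based on distance to the main camera.
            float d = Vector3.Distance(Camera.main.transform.position, transform.position);
            bool near = d < switchDistance;
            highPoly.enabled = near;
            lowPoly.enabled = !near;
        }
    }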
Are you talking about your side-scrolling game, hippo? How and why would you do that? Wouldn’t that detail be invisible even on a Retina display iPad? The character probably only takes up a quarter of the screen, which would be around 1024x1024, and the polygons would be so dense they wouldn’t be visible anyway.
This is the picture I look at when deciding on texture sizes.
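If you don’t have the picture handy, the back-of-the-envelope version is easy to script. This sketch assumes “a quarter of the screen” means a quarter of the screen area, i.e. about half the screen height:

    using UnityEngine;

    public class TexelCheck : MonoBehaviour
    {
        void Start()
        {
            int screenHeight = 960;                // iPhone 4 Retina, portrait
            int pixelsCovered = screenHeight / 2;  // quarter of the area ~ half the height
            int texSize = Mathf.NextPowerOfTwo(pixelsCovered); // 480 -> 512
            Debug.Log("Character spans ~" + pixelsCovered + " px; nearest POT up: " + texSize);
            // Even doubling that for UV waste lands at 1024, not 2048.
        }
    }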
Creating a character from planes, using a side view and a front view, is I find the best way. You can make a high-poly model and then retopologize it in a suitable program, but I find it’s more work to make the model twice; if you use planes, you’re basically retopologizing it as you go.
As long as you know proper topology, that is. These are just my opinions. I usually base my texture sizes on what’s going to be seen most and what the game is going to be doing. We are making a web-player FPS with textures using a mix of 512, 1024, and 4096. Some might find that overkill, but it’s because the game is small, the levels are small, and not a lot will be going on. If you’re making a full-fledged game with more going on than we’re putting out, you might want to look into lowering texture sizes to save FPS and VRAM. For my personal project I haven’t used anything above 512 yet, and my lowest is 50x50.
I don’t think it’s possible for a graphics chip to actually use a 50x50 texture. It goes 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768… that should do it for a few years. I’m not quite sure, but I think it’s something to do with their general architecture and the number of transistors used.
Most likely Unity would convert it to 64 or 32, but again I’m not quite sure.
READ: This is wrong; my later post explains the actual problem.
Just looked it up, and it’s down to mip mapping: when a non-power-of-two texture is halved for each mip level, problems arise, e.g. 50/2 = 25, then 25/2 = 12.5, meaning some distortion and extra processing is needed to stretch or shrink it to a whole number, whereas 64/2 = 32, 32/2 = 16, 16/2 = 8, 8/2 = 4, 4/2 = 2, 2/2 = 1, and no problems arise. In fact, I may do some tests with Android builds on OpenGL ES 1.x, as apparently non-power-of-two textures are only supported from 2.x.
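The halving chains are easy to print out if you want to see the difference (just an illustration of the division, not how the GPU actually rounds):

    using UnityEngine;

    public class MipChain : MonoBehaviour
    {
        void Start()
        {
            foreach (int size in new int[] { 64, 50 })
            {
                float s = size;
                string chain = s.ToString();
                while (s > 1f)
                {
                    s /= 2f;
                    chain += " -> " + s; // 50 -> 25 -> 12.5: fractional texels from step two
                }
                Debug.Log(chain); // 64's chain ends cleanly at 1
            }
        }
    }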
I’m not saying I’m making the textures 50x50 in Unity; I create them in Photoshop at 50x50 pixels. Unity then scales them to the nearest size it supports. My guess is it converts them to 512 automatically, but then I just go and replace that with the lowest res.
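For what it’s worth, you can tell Unity explicitly what to do with non-power-of-two sources instead of leaving it to guess. A minimal editor sketch (the menu name is made up):

    using UnityEditor;
    using UnityEngine;

    public static class FixNpot
    {
        [MenuItem("Tools/Scale Selected NPOT Texture To Nearest POT")] // hypothetical menu entry
        static void Fix()
        {
            string path = AssetDatabase.GetAssetPath(Selection.activeObject);
            TextureImporter importer = AssetImporter.GetAtPath(path) as TextureImporter;
            if (importer == null) return;

            importer.npotScale = TextureImporterNPOTScale.ToNearest; // 50x50 -> 64x64
            AssetDatabase.ImportAsset(path);
        }
    }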