I’ve started looking into optimization, and the advice commonly given is to optimize for the target platform.
Two things bother me about that. First, it leaves me with no idea what good profiling data for what I’m doing might look like. Second, it suggests a time-to-benefit trade-off: that it doesn’t matter how inefficient a project is as long as it works well for the minimum planned requirements.
Even so, I still need a frame of reference to recognize problems in a data set, because only chasing the tallest graph lines in the profiler near the end of a production cycle probably isn’t a great practice.
So I’ve started building and running tests on deliberately unoptimized code.
I have 2 builds running different implementations of terminal-style windows, each intentionally left poorly optimized. Each build has 18 chat boxes, and each box updates its text from a looping string array of size 32 once per regular Update and once per FixedUpdate. Both windows are built about the same otherwise, with a working scroll event to iterate through the array; a rough sketch of one chat box is just below.
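This is only my reconstruction of what I described above, with made-up names (ChatBox, label, lines), not code pulled from the actual project:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of one chat box: a looping string array of size 32, advanced once per
// regular Update and once per FixedUpdate, with a scroll wheel to step through it.
public class ChatBox : MonoBehaviour
{
    [SerializeField] private Text label;               // the UI Text on the canvas
    private readonly string[] lines = new string[32];  // looping string buffer
    private int lineIndex;

    void Awake()
    {
        for (int i = 0; i < lines.Length; i++)
            lines[i] = "line " + i;                     // dummy content for the test
    }

    void Update()
    {
        Advance();                                      // once per regular update

        // scroll event iterates through the array
        float scroll = Input.mouseScrollDelta.y;
        if (scroll > 0f)      lineIndex = (lineIndex + 1) % lines.Length;
        else if (scroll < 0f) lineIndex = (lineIndex - 1 + lines.Length) % lines.Length;
    }

    void FixedUpdate()
    {
        Advance();                                      // and once per fixed update
    }

    void Advance()
    {
        lineIndex = (lineIndex + 1) % lines.Length;
        label.text = lines[lineIndex];                  // setting .text dirties the canvas
    }
}
```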
Solution 1:
Each frame concatenates 10 strings from the array together in a for loop and outputs the result to 1 text box on the UI canvas (sketch after the numbers below).
CPU PlayerUpdate pulled 16.5 ms
Garbage collector created 36 KB per frame
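For context, Solution 1’s per-frame work is roughly the sketch below (hypothetical names again, not the project’s actual code). My assumption is that the += inside the loop accounts for most of that 36 KB per frame, since every string + allocates a new string:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of Solution 1: build one combined string per frame, push it into a single Text.
public class ConcatChatBox : MonoBehaviour
{
    [SerializeField] private Text label;                // the single text box on the canvas
    private readonly string[] lines = new string[32];
    private int head;

    void Awake()
    {
        for (int i = 0; i < lines.Length; i++)
            lines[i] = "line " + i;
    }

    void Update()
    {
        head = (head + 1) % lines.Length;

        string combined = "";
        for (int i = 0; i < 10; i++)                    // add 10 strings together
            combined += lines[(head + i) % lines.Length] + "\n";  // every += allocates a new string

        label.text = combined;                          // the single Text rebuilds its mesh each frame
    }
}
```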
Solution 2:
Rotates 15 vertically stacked text boxes in the hierarchy as their texts are updated, producing the same on-screen effect as Solution 1 (sketch after the numbers below).
CPU PlayerUpdate pulled 12.5 ms
Garbage collector created 1.4 KB per frame.
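Solution 2 is roughly the sketch below (same caveat about made-up names). Assigning strings that already exist to the stacked Text components shouldn’t allocate anything new, which would line up with the much lower GC figure:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of Solution 2: 15 vertically stacked Text components, each showing one line;
// the lines rotate through them instead of being concatenated into one string.
public class StackedChatBox : MonoBehaviour
{
    [SerializeField] private Text[] rows = new Text[15]; // stacked in the hierarchy
    private readonly string[] lines = new string[32];
    private int head;

    void Awake()
    {
        for (int i = 0; i < lines.Length; i++)
            lines[i] = "line " + i;
    }

    void Update()
    {
        head = (head + 1) % lines.Length;

        // assigning cached strings doesn't allocate; it just dirties each Text for a redraw
        for (int i = 0; i < rows.Length; i++)
            rows[i].text = lines[(head + i) % lines.Length];
    }
}
```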
For comparison, a blank new-project startup scene scored 7 ms in the CPU section.
I’m not advocating that one method is better than the other. But the current trade-off is that Solution 2 has about 250 more UI draws per frame than Solution 1, and still performed much better.
I don’t know how close 18 updating chat boxes gets to simulating the UI of a completed game, but based on those few numbers, how would they stack up? Light, heavy? Is 36 KB of garbage per frame pretty high? Or is the test just not mature enough to say?