Same as the question's title: does an Object's scale matter for memory efficiency and game performance?
For example: a Road object with scale Vector3(10000, 10000, 10) vs. a Road object with scale Vector3(100, 100, 0.1).
There really shouldn't be a performance issue, but there can be a few render-quality issues. Those scales are stored as 32-bit floats no matter what the values are: a 32-bit float takes up 32 bits of memory and executes at the speed of a 32-bit float regardless of the value it holds. Now for the possible quality issues.
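A quick way to see that the stored size doesn't depend on the value (a Python sketch using the standard `struct` module, where `"f"` means a 32-bit IEEE float):

```python
import struct

# Pack two very different scale values as 32-bit IEEE floats.
big = struct.pack("f", 10000.0)   # extreme scale value
small = struct.pack("f", 0.1)     # modest scale value

# Both occupy exactly 4 bytes (32 bits), regardless of magnitude.
print(len(big), len(small))  # 4 4
```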
Most game engines use 32-bit floating-point values (a.k.a. single-precision floats, or just floats) for performance reasons: they take up less space, they're faster to load, you can fit more of them inside a SIMD instruction, you can cache more of them at once, etc., etc. A lot of scientific computing, on the other hand, uses 64-bit floating-point values (a.k.a. double-precision floats, or just doubles). A double has about twice as many significant digits and a larger exponent range. Did I forget to mention? Floats and doubles are stored in a form of binary scientific notation (a sign, an exponent, and a mantissa, all in base 2). So a value like 0.1, which has no exact binary representation, is actually stored as roughly 0.100000001 in a float and roughly 0.1000000000000000055 in a double.
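You can see both precisions from Python (its built-in `float` is a 64-bit double; round-tripping through `struct`'s `"f"` format squeezes a value down to a 32-bit float):

```python
import struct
from decimal import Decimal

# Round-trip 0.1 through a 32-bit float: what comes back is the
# nearest value a float can actually represent.
as_float32 = struct.unpack("f", struct.pack("f", 0.1))[0]
print(as_float32)   # 0.10000000149011612

# Decimal reveals the exact value hiding inside the 64-bit double 0.1.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```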
Due to the limited precision of floats, it's a lot easier to get gaps between the seams of objects when you start using extreme scales and/or translations. HDRP has an option (Camera-Relative Rendering) to reposition GameObjects during rendering so that the world-space origin sits at the camera's position. That way, an object 50,000 units away from the world-space origin has fewer position-based rendering errors when the camera is close by (which is also when those errors are most noticeable).
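Here's a sketch of why that matters: the gap between one representable 32-bit float and the next grows with the magnitude of the value (the helper below is an illustration, reinterpreting the float's bit pattern via `struct`):

```python
import struct

def float32_ulp(x):
    """Distance from x to the next representable 32-bit float above it (x > 0)."""
    bits = struct.unpack("I", struct.pack("f", x))[0]
    nxt = struct.unpack("f", struct.pack("I", bits + 1))[0]
    return nxt - struct.unpack("f", struct.pack("f", x))[0]

# Near the origin, a float can resolve positions very finely...
print(float32_ulp(1.0))      # ~1.19e-07
# ...but 50,000 units out, the smallest representable step is ~0.004 units.
print(float32_ulp(50000.0))  # 0.00390625
```

So at 50,000 units from the origin, anything smaller than about 4 millimeters (if a unit is a meter) simply can't be represented, which is exactly the kind of error camera-relative rendering hides.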
The next issue with extreme scales is mesh density (or the lack thereof). When the points that form a triangle are stretched so far apart that they go off screen in opposite directions, you'll get some strange rendering artifacts. For instance, shadow-map creation can get distorted when the render pipeline flattens shadow casters along the light's normal direction to stop them from clipping out of existence while the shadow map is created. Or a texture gets sampled at an odd UV position because the rasterizer uses a faster interpolation algorithm to spread the UVs from vertices to fragments. I've actually seen rendered markers get cut short when I decided to just use a quad and stretch it really far (via scaling) because I didn't think it was worth the trouble of creating a strip of quads to form a straight line.
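As a sketch of that last fix, here's one way to generate a strip of quads instead of one stretched quad (a hypothetical helper; the vertex layout, axis choices, and segment count are all assumptions):

```python
def quad_strip(length, width, segments):
    """Return (vertices, triangle indices) for a flat strip split into
    segments, so no single triangle spans the whole length."""
    verts = []
    tris = []
    for i in range(segments + 1):
        x = length * i / segments
        verts.append((x, 0.0, -width / 2))  # one edge of the strip
        verts.append((x, 0.0,  width / 2))  # the other edge
    for i in range(segments):
        a = 2 * i
        # Two triangles per quad segment.
        tris.append((a, a + 1, a + 2))
        tris.append((a + 1, a + 3, a + 2))
    return verts, tris

verts, tris = quad_strip(length=1000.0, width=2.0, segments=50)
print(len(verts), len(tris))  # 102 100
```

Each triangle now covers only a small slice of the line, so per-triangle interpolation and shadow flattening operate over a sane screen area.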
The best art assets have a one-to-one scale for their intended purpose. And if you scale them up by one (or five) hundred, there shouldn't be an issue unless it's a really low-poly model that the camera gets really close to (triangles should never be so large that they span the entire screen). UI elements can break this "don't make the mesh too sparse for its size" rule, but a lot of care goes into UI systems to make sure they look good and render properly.