That doesn’t make much sense (not to me at least).
Setting aside other practical issues, if you run a simulation, then scale everything up by a factor of 10 and run it again (with everything adjusted accordingly, including camera parameters and so on), there should be no observable difference in behavior, correct? In other words, in the context of a computer simulation there’s no direct correlation between realism and scale (which your statement implies there is).
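To make that concrete, here’s a minimal sketch (with made-up numbers) of the ‘scale everything by 10’ thought experiment: a simple explicit-Euler projectile simulation run once at a base scale and once with every length-bearing quantity (positions, velocities, and gravity, whose units are length/time²) multiplied by the same factor. The two runs produce trajectories that are identical up to that factor, so nothing in the simulation itself can tell you which scale is the ‘real’ one.

```python
def simulate(pos, vel, gravity, dt, steps):
    """Explicit-Euler projectile integration; returns the final position."""
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        vy += gravity * dt
        x += vx * dt
        y += vy * dt
    return x, y

s = 10.0  # arbitrary scale factor

# Base run, then the same run with all length-bearing quantities scaled by s.
base   = simulate((0.0, 0.0), (3.0, 4.0),         -9.81,     1.0 / 60.0, 120)
scaled = simulate((0.0, 0.0), (3.0 * s, 4.0 * s), -9.81 * s, 1.0 / 60.0, 120)

# The scaled run is just the base run magnified by s (up to float rounding).
print(abs(scaled[0] - base[0] * s) < 1e-9)  # True
print(abs(scaled[1] - base[1] * s) < 1e-9)  # True
```

The comparison uses a tolerance rather than exact equality only because floating-point addition isn’t associative; mathematically the two trajectories are identical up to the factor s.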
Consider, for example, a physics-based game in which the actors are mile-long space vessels, and for which the environment includes moons and asteroids and so forth. Using SI units is going to give you very large values for size, mass, and applied forces, which isn’t so great from a numerical standpoint. What you’re more likely to do in a case like this is scale everything down by at least a couple of orders of magnitude. In other words, ‘inventing your own scale’ is exactly what you should do in this case.
Consider also that SI units are somewhat arbitrary already. This is sort of a fanciful example, but consider the case of an alien civilization in some other galaxy that, like us, has developed equations describing Newtonian physics. What are the chances of them using the same units we do? Pretty small, probably. Maybe for them, length is measured in units that would be 4.78 meters for us, and time is measured in units that would be 0.32 seconds for us. In other words, relative to us at least, they’ve ‘invented their own scale’. And yet, their description of Newtonian physics would be no less ‘realistic’ than ours.
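A quick check of the alien-units idea, using the hypothetical conversion factors above: converting the inputs of a simple free-fall calculation into ‘alien’ units and computing there gives the same answer as computing in SI and converting the result. The equations don’t care which units you picked.

```python
# Hypothetical conversion factors from the example above:
L = 4.78  # meters per alien length unit
T = 0.32  # seconds per alien time unit

def fall_distance(g, t):
    """Distance fallen from rest under constant acceleration: d = g*t^2 / 2."""
    return 0.5 * g * t * t

g_si = 9.81  # m/s^2
t_si = 2.0   # seconds

d_si = fall_distance(g_si, t_si)            # result in meters
g_alien = g_si * (T * T) / L                # alien-length / alien-time^2
d_alien = fall_distance(g_alien, t_si / T)  # result in alien length units

# Same physics, different numbers: the alien answer is just d_si / L.
print(abs(d_alien - d_si / L) < 1e-12)  # True
```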
(Actually, now that I think about it, that example isn’t really even necessary, given that humans themselves have used a variety of different units and measuring systems over the course of their collective history.)
When working with computer-simulated physics (especially in games), the primary concerns are going to be numerical rather than a matter of using ‘realistic units’, IMO; that is, staying in the numerical ‘sweet spot’ as far as the simulator is concerned, and avoiding overflow, precision loss, and other pitfalls of floating-point representations.
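Here’s a small illustration (using NumPy) of why the numeric range matters more than the choice of units: the gap between adjacent 32-bit floats grows with magnitude, so a simulation whose coordinates sit near 10^7 can only resolve steps of roughly one unit.

```python
import numpy as np

# np.spacing(x) gives the distance from x to the next representable float.
for magnitude in (1.0, 1e3, 1e7):
    gap = np.spacing(np.float32(magnitude))
    print(f"{magnitude:>10.0e}: smallest representable step = {gap:.3g}")

# Near 1.0, a float32 resolves steps of ~1e-7; near 1e7 the best it can
# do is steps of ~1, so fine detail is simply gone at that magnitude.
# This is the ‘sweet spot’ argument for rescaling the world.
```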
Now, I’m not an expert on physics simulations or on the internal workings of PhysX, so I’m certainly open to being proven wrong. But at this point at least, it’s not clear to me why ‘inventing your own scale’ in the context of computer-simulated physics would necessarily have any impact on realism.