# World space to pixel space accuracy

I'm working on a 2D game viewed through an orthographic camera. However, with the setup I have so far, the final output is not as pixel-perfect as the numbers I've entered. Here is an example: I have a grid where cubes are 1.1 units apart from each other, and every few cubes there is a larger, undesired gap between them. Since objects exist in world space and are projected through the camera, Unity eventually has to round each position to the nearest pixel.

My question is: has anyone found a good camera-to-pixel setup for the iPhone 4S resolution of 960x640, so that a unit in Unity maps more exactly to the pixels on screen?

My apologies if this didn't come out clearly, and thanks in advance!

You could just set up your orthographic camera to have the desired height by editing Size in the Inspector. For an orthographic camera, Size is half the height of the camera's view in world units. If you wanted your camera to be 640 "pixels" tall, you'd set Size to 320 and specify all your objects' sizes in pixels.
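If you'd rather set this from code than in the Inspector, a minimal sketch looks like the following (the component name `PixelPerfectCamera` is just an illustrative choice; `Camera.orthographicSize` and `Camera.pixelHeight` are the real Unity APIs):

```csharp
using UnityEngine;

// Attach to the camera. Sets Size so that 1 world unit == 1 screen pixel:
// a 640-pixel-tall screen gives orthographicSize = 320.
public class PixelPerfectCamera : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.orthographic = true;
        cam.orthographicSize = cam.pixelHeight / 2f;
    }
}
```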

I find that solution cumbersome, so instead I just calculate the pixel ratio:

`float pixelRatio = (camera.orthographicSize * 2) / camera.pixelHeight; // world units per pixel`

Now you can use this ratio to convert pixels to world units to properly size your objects:

`someObject.transform.localScale = Vector3.one * (pixelRatio * 64); // scale a 1-unit quad to 64 pixels on screen`
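Putting it together, here is a sketch of a component that sizes an object in pixels and also snaps its position to the pixel grid, which addresses the uneven gaps from rounding. The component name `PixelSized` and the field `widthInPixels` are hypothetical; the code assumes it is attached to a 1x1-unit quad or sprite:

```csharp
using UnityEngine;

// Sizes the object in screen pixels and snaps its position to the
// pixel grid so that spacing between objects stays even.
public class PixelSized : MonoBehaviour
{
    public int widthInPixels = 64;

    void Start()
    {
        Camera cam = Camera.main;
        // World units per pixel, as computed above.
        float pixelRatio = (cam.orthographicSize * 2f) / cam.pixelHeight;

        // Scale a 1x1-unit object to the desired on-screen pixel size.
        transform.localScale = new Vector3(pixelRatio * widthInPixels,
                                           pixelRatio * widthInPixels, 1f);

        // Snap position to the nearest pixel boundary.
        Vector3 p = transform.position;
        p.x = Mathf.Round(p.x / pixelRatio) * pixelRatio;
        p.y = Mathf.Round(p.y / pixelRatio) * pixelRatio;
        transform.position = p;
    }
}
```

Snapping each object's position this way means a 1.1-unit spacing rounds consistently, instead of accumulating a full extra pixel every few cubes.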