So I'm creating a procedural Mesh (a Plane) for a 2D app. The app runs at 640x480 pixels and uses an orthographic camera. The Vector3s used to create a Mesh are floating-point coordinates in Unity world space, but I want to specify my Mesh dimensions in pixels, and I haven't yet figured out a reliable conversion between the two. Unity's scaling is still a mystery to me. :)
For example: if I create a 1-unit square Mesh (with vertices at (0,0,0), (1,0,0), (1,1,0), (0,1,0)), it looks like it fills about 40 pixels of the screen; further experimentation put it closer to 36 pixels. So if I wanted a Plane/Mesh that filled the top-left quadrant of the screen, I'd use points like (8.8, 6.6, 0).
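The arithmetic behind that guess, sketched in Python for clarity (the ~36 pixels-per-unit figure is just my on-screen measurement, not a Unity constant):

```python
# Rough conversion using the empirically measured scale
# (~36 screen pixels per world unit -- a measurement, not a Unity constant).
PIXELS_PER_UNIT = 36.0

def pixels_to_units(pixels):
    """Convert a pixel length to Unity world units at the measured scale."""
    return pixels / PIXELS_PER_UNIT

# The top-left quadrant of a 640x480 screen is 320x240 pixels:
quadrant_width_units = pixels_to_units(320)   # roughly 8.9 units
quadrant_height_units = pixels_to_units(240)  # roughly 6.7 units
```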
That might work, but I'd prefer a reliable algorithm to Programming by Coincidence... So is there a way to work backwards from a set of pixel dimensions to their floating-point world coordinates?
Update: Andrew's answer worked well, and looking at his sample mesh project prompted me to change a few other things in my project. For instance, I was programmatically creating a GameObject to assign all the mesh Components to, whereas he just created an empty GameObject and dragged/dropped the script onto it. I'm still not thinking the Unity Way. :) Second, I had hard-coded the numbers 320x240 (just for testing), which, for reasons I don't understand, made the mesh far too large; when I switched to Screen.height / 2 (as Andrew's code uses), it sized things correctly.
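For anyone who lands here later, the relationship that makes this deterministic (rather than measured): an orthographic camera renders a vertical span of 2 * orthographicSize world units into the screen's height in pixels. A minimal sketch of that math in Python (the function name is mine, not a Unity API):

```python
def pixels_to_world_units(pixels, screen_height_px, ortho_size):
    # An orthographic camera maps a vertical span of 2 * ortho_size
    # world units onto screen_height_px pixels, so:
    world_units_per_pixel = (2.0 * ortho_size) / screen_height_px
    return pixels * world_units_per_pixel

# With orthographicSize = Screen.height / 2 (i.e. 240 on a 480-pixel-tall
# view, as in Andrew's code), one world unit maps to exactly one pixel:
print(pixels_to_world_units(100, 480, 240))  # -> 100.0

# With an orthographicSize of 5 (Unity's default) on a 480-pixel screen,
# one world unit covers 48 pixels:
print(pixels_to_world_units(48, 480, 5))
```

This also explains why hard-coding 320x240 misbehaves: the conversion depends on the camera's orthographicSize and the actual screen height, so querying Screen.height at runtime keeps the two in sync.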