The answers given are pretty good, just thought I'd throw in a few keywords that might help you search for more information.
DrawCalls - every time a model is drawn to the screen you have to issue a draw call to the GPU. Unfortunately it's not as simple as 1 call per model; it's more like 1 call per material within your model. So if you have a single model, but with 5 sub-meshes, that will be 5 draw calls. Alas, that is not the end of it: if you're using per-pixel lighting you also get a draw call per light pass, and if you have dynamic reflections (or similar effects) you'll be rendering the scene at least twice, potentially doubling the draw calls.
Something that can help here is material batching, or rather combining models/meshes that share the same material and are spatially local.
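As a rough illustration of the combining idea, here's a sketch of a Unity script built around Mesh.CombineMeshes (the class name and the assumption that every child shares a single material are mine; Unity's built-in static batching may already cover you, so treat this as a fallback):

```
using UnityEngine;

// Attach to an empty parent whose children all share the same material.
// Combines the children's meshes into one so they render in a single draw call.
public class MeshCombiner : MonoBehaviour
{
    void Start()
    {
        MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
        CombineInstance[] combine = new CombineInstance[filters.Length];

        for (int i = 0; i < filters.Length; i++)
        {
            combine[i].mesh = filters[i].sharedMesh;
            // Bake each child into this parent's local space.
            combine[i].transform = transform.worldToLocalMatrix *
                                   filters[i].transform.localToWorldMatrix;
            filters[i].gameObject.SetActive(false); // hide the originals
        }

        // Note: a single Mesh is limited to ~65k vertices by default,
        // so combine per area rather than the whole level at once.
        Mesh combined = new Mesh();
        combined.CombineMeshes(combine); // mergeSubMeshes defaults to true

        gameObject.AddComponent<MeshFilter>().sharedMesh = combined;
        gameObject.AddComponent<MeshRenderer>().sharedMaterial =
            filters[0].GetComponent<MeshRenderer>().sharedMaterial;
    }
}
```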
FillRate - this is the number of pixels that can be drawn to the screen per second. This is pretty hard to pin down as you'll never reach the manufacturer's stated values, at least not with typical usage. The main thing to be aware of here is to minimise overdraw, especially with regard to transparent textures, though I guess these days multi-pass shaders can burn up your fillrate too.
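For a sense of scale: 1280 x 720 at 60 fps with an average overdraw of 3 already works out to roughly 1280 × 720 × 60 × 3 ≈ 166 million pixels per second, before any multi-pass shaders get involved.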
An easy test to see if you are fillrate limited is to run the game windowed at a small size, check the framerate, then increase the window size, checking the framerate each time. If you reach a window size where the fps drops off dramatically, you've hit the fillrate limit.
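If you want to automate that test, something along these lines works - a sketch only; the class name and test sizes are arbitrary, and you'd run it in a windowed standalone build:

```
using System.Collections;
using UnityEngine;

// Crude fill-rate probe: step through window sizes and log an average FPS at each.
// A sharp drop at larger sizes suggests you are fill-rate (overdraw) limited.
public class FillrateProbe : MonoBehaviour
{
    int[] widths  = { 640, 960, 1280, 1600, 1920 };
    int[] heights = { 360, 540,  720,  900, 1080 };

    IEnumerator Start()
    {
        for (int i = 0; i < widths.Length; i++)
        {
            Screen.SetResolution(widths[i], heights[i], false); // windowed
            yield return new WaitForSeconds(1f);                // let it settle

            int frames = 0;
            float start = Time.realtimeSinceStartup;
            while (Time.realtimeSinceStartup - start < 3f)
            {
                frames++;
                yield return null;
            }
            float fps = frames / (Time.realtimeSinceStartup - start);
            Debug.Log(widths[i] + "x" + heights[i] + ": " + fps.ToString("F1") + " fps");
        }
    }
}
```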
Bandwidth Limited - probably not worth mentioning these days since most if not all persistent data is stored on the GPU, though you could still find ways of saturating the bandwidth (e.g. uploading too much data to the card in a frame), but I would have thought it unlikely.
Finally there really is only one way to 'know' this and that is to test on your baseline machine. However, what you can do is try to ensure that models are created with a view to easily increasing/decreasing the polygon count. I'd not advise simply running an automated LOD pass over them - it has to be thought out a little, so talk to the artists and see what they can do. Doing this will allow you to tweak your polygon count during the project.
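If the artists do end up hand-authoring a couple of detail levels per asset, wiring them into Unity's LODGroup component is cheap enough - roughly along these lines (a sketch; the fields and thresholds are placeholders, not a recommendation):

```
using UnityEngine;

// Sketch: hook artist-authored detail levels into a LODGroup at startup.
public class ArtistLODSetup : MonoBehaviour
{
    public Renderer highDetail;  // the full-polygon mesh
    public Renderer lowDetail;   // the artist's reduced version

    void Start()
    {
        var group = gameObject.AddComponent<LODGroup>();
        var lods = new LOD[]
        {
            // Use the high-detail mesh while the object covers > 40% of screen height.
            new LOD(0.4f,  new[] { highDetail }),
            // Use the low-detail mesh down to 5% of screen height, then cull.
            new LOD(0.05f, new[] { lowDetail }),
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```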
In terms of budgeting it may be more helpful to think in percentages, e.g. the environment gets 50% of your polygon budget, characters get 25%, effects and other stuff get the last 25%, etc. Then you can start breaking those numbers down further. For example your lead character might have twice as many polygons as enemy characters, but you might want 10 enemies on screen at once, so that works out to roughly 2% of the total per enemy and 4-5% for the lead character.
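To make that arithmetic concrete, here's the same split written out (a sketch using the example numbers above; the names are mine):

```
using UnityEngine;

// Back-of-the-envelope polygon budget split, just to show the arithmetic.
public class PolygonBudget : MonoBehaviour
{
    void Start()
    {
        int totalBudget     = 100000;                     // whole-scene polygon budget
        int characterBudget = (int)(totalBudget * 0.25f); // 25% reserved for characters
        int enemyCount      = 10;
        int leadWeight      = 2;                          // lead counts as two enemies

        int shares     = enemyCount + leadWeight;         // 12 equal shares of the 25%
        int perEnemy   = characterBudget / shares;        // ~2,083 polys (~2% of total)
        int leadBudget = perEnemy * leadWeight;           // ~4,166 polys (~4-5% of total)

        Debug.Log("Per enemy: " + perEnemy + ", lead character: " + leadBudget);
    }
}
```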
Once you've got that, you can easily mock up a simulation of those assets, plugging in different max polygon counts - e.g. a 100k limit, so 50k for the environment, 25k for characters, etc. Create very rough models that emulate the dimensions and polygon counts, place those in Unity and see how it runs. If it's too slow, reduce your max polygon count; if it's too fast, increase it.
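One way to churn out those stand-ins without hand-modelling them is to generate flat grids with roughly the right triangle count and footprint - something like this (a sketch; the class and parameter names are mine):

```
using UnityEngine;

// Rough stand-in mesh: a flat grid with approximately the requested triangle
// count, sized to the asset's intended footprint. Purely for budget testing.
public static class PlaceholderMesh
{
    public static Mesh Create(int targetTriangles, float width, float depth)
    {
        // A grid of q x q quads gives 2*q*q triangles.
        int q = Mathf.Max(1, Mathf.RoundToInt(Mathf.Sqrt(targetTriangles / 2f)));

        var verts = new Vector3[(q + 1) * (q + 1)];
        var tris  = new int[q * q * 6];

        for (int z = 0, i = 0; z <= q; z++)
            for (int x = 0; x <= q; x++, i++)
                verts[i] = new Vector3(width * x / q, 0f, depth * z / q);

        for (int z = 0, t = 0; z < q; z++)
            for (int x = 0; x < q; x++, t += 6)
            {
                int i = z * (q + 1) + x;
                tris[t]     = i;
                tris[t + 1] = i + q + 1;
                tris[t + 2] = i + 1;
                tris[t + 3] = i + 1;
                tris[t + 4] = i + q + 1;
                tris[t + 5] = i + q + 2;
            }

        var mesh = new Mesh { vertices = verts, triangles = tris };
        mesh.RecalculateNormals();
        return mesh;
    }
}
```

You'd assign the returned Mesh to a MeshFilter on a placeholder object, give it whatever shader the real asset will use, and scale the counts up or down until the baseline machine starts to struggle.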
Obviously don't make the environment a single model - it's likely to be made from 10s of models, so split the mock-up up the same way. It's pretty vital to apply appropriate shaders to the placeholder models when checking fillrate/overdraw (in the editor there is a handy option to draw the scene with a special shader that helps to visualise how much overdraw is going on).