Mosso doesn’t let you run server-side software (aside from PHP, ASP.NET and MySQL). SMS is a Java application you need to install, configure and run.
EC2 and Mosso are very different clouds. EC2 is basically a very large bank of Xen virtual machines; when you turn on an “instance” you are booting a disk image (called an AMI) into a Xen virtual machine. You can then ssh into your virtual machine and apt-get whatever, but there are lots of pre-built AMIs with various software already installed, so often you won’t need to do anything but configure. You can build your own AMIs and store them on S3, and you can have persistent storage for your instances on S3 as well. You have to manage all of this through command-line tools. (Huge hint – there’s a Firefox plugin that lets you do everything through a GUI. It’s an absolute must-have.) Command-line tools mean there’s an API, so you can build your own controls to start and stop instances (look at http://tarzan-aws.com/).
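To give a feel for what “build your own controls” looks like, here’s a rough sketch in Java using the AWS SDK for Java v2 (my pick purely for illustration – the Tarzan toolkit mentioned above is PHP). The AMI ID, region and instance type below are placeholders, not anything specific I’m recommending.

    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.ec2.model.InstanceType;
    import software.amazon.awssdk.services.ec2.model.RunInstancesRequest;
    import software.amazon.awssdk.services.ec2.model.RunInstancesResponse;
    import software.amazon.awssdk.services.ec2.model.TerminateInstancesRequest;

    public class InstanceControl {
        public static void main(String[] args) {
            // One client per region; the region here is just a placeholder.
            Ec2Client ec2 = Ec2Client.builder().region(Region.US_EAST_1).build();

            // Boot one instance from an AMI (placeholder image ID and type).
            RunInstancesResponse started = ec2.runInstances(RunInstancesRequest.builder()
                    .imageId("ami-xxxxxxxx")
                    .instanceType(InstanceType.M1_SMALL)
                    .minCount(1)
                    .maxCount(1)
                    .build());
            String instanceId = started.instances().get(0).instanceId();
            System.out.println("Started " + instanceId);

            // ...run your batch job, then shut it down so the hourly meter stops.
            ec2.terminateInstances(TerminateInstancesRequest.builder()
                    .instanceIds(instanceId)
                    .build());
            ec2.close();
        }
    }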
Mosso’s cloud is much simpler – a large bank of web servers (some Linux, some Windows) in front of a very large SAN, and behind a smart load balancer that knows how to direct the .php stuff to Linux servers and the .aspx stuff to Windows servers. You store your files on the SAN (via FTP). You have to use Mosso’s name servers, because that’s where the real magic happens. There’s no API because there’s nothing to manage; web requests go in, data goes out.
I use both, but for different things. Mosso is my fantastically-scaling never-need-to-think-about web server. EC2 is my personal playground for when I’d like to have 50 servers for some task, like a large video encoding batch, or when I’ve got thousands of live users to stream video to.
You can handle thousands of live SMS users on a single EC2 instance. (I’ve done over a thousand concurrent live chat and video streams on a single EC2 “large” instance, running a Wowza Pro AMI.) I wouldn’t worry about solving the >1000 concurrent users problem until you’ve got a working game that’s delivering >100 concurrent users.
Scaling EC2 is no different than scaling a bunch of servers in your own rack. After all, they’re just a bunch of (virtual) servers; you still need to handle the hard stuff, like synchronization of data between servers. Terracotta is one solution for this (for Java-based server apps only): it handles synchronizing objects and data structures between servers. I’ve read about it working for SMS and Red5, and it seems to be the real deal, although I’ve never tried it myself.
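To give a feel for the Terracotta (DSO-style) approach: your server code just uses an ordinary Java object as shared state, and you declare it as a clustered “root” in tc-config.xml so every JVM in the cluster sees the same instance. This is a minimal sketch with made-up class and field names – again, I haven’t run it myself.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SharedGameState {
        // Declared as a Terracotta "root" in tc-config.xml, so every server
        // started with the Terracotta client sees the same map.
        private static final Map<String, String> playerPositions =
                new ConcurrentHashMap<String, String>();

        public static void update(String playerId, String position) {
            // Plain Java mutation; Terracotta replicates the change to the
            // other JVMs in the cluster.
            playerPositions.put(playerId, position);
        }

        public static String lookup(String playerId) {
            return playerPositions.get(playerId);
        }
    }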
You could use Mosso as a Unity server if you polled a web-based API for updates, and stored the data in a MySQL backend. In fact, Mosso would be perfect for that. But most homemade MMOs I’ve heard of use socket servers (or even UDP, which you’ll never get from Mosso) for the quickest update times.
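The real client would be C# inside Unity and the server side would be PHP + MySQL on Mosso, but just to show the poll-the-web-API pattern, here’s a sketch in Java against a hypothetical updates.php endpoint (the URL and polling interval are made up):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class PollLoop {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint: a PHP script on Mosso that reads the latest
            // game state out of MySQL and returns it (JSON, XML, whatever).
            URL url = new URL("http://example.com/updates.php?since=0");
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // apply the update to your game state here
                }
                in.close();
                conn.disconnect();
                Thread.sleep(1000); // poll once a second; tune to taste
            }
        }
    }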
Mosso is $100 per month. I'm trying to find my Amazon bill for that large event... I can't find it, but as I recall it was super cheap. (A "small" server instance is $0.10 per hour the server runs. Run it 24x7 for a month (roughly 720 hours × $0.10) and your charge is about $72, not including bandwidth charges. You can play with the numbers here: http://calculator.s3.amazonaws.com/calc5.html)
Also, Mosso now owns SliceHost, which is basically the same business as EC2 (virtual machines of various sizes) and the pricing is better. They don’t have all the nifty pre-built AMIs, I don’t know if there is an API, and I don’t know how large their infrastructure is, but it’s an option worth knowing about.
At its simplest, you have [EC2 server running SMS] ↔ [Unity client]. Just a server you turned on at EC2 running SMS, and the Unity client connects to it.
Once you put Terracotta in the mix, I assume you’d have as many SMS servers as you’d like, plus another server instance that acts as a load balancer. (If you do something simple like round-robining through a list of running instances via PHP, you could avoid this – there’s a sketch of that idea below.)
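The PHP version would only be a few lines; here’s the same round-robin idea sketched in Java, with placeholder hostnames standing in for your running SMS instances:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RoundRobin {
        // Placeholder hostnames for your running SMS instances.
        private static final List<String> servers = List.of(
                "ec2-xx-xx-xx-01.compute-1.amazonaws.com",
                "ec2-xx-xx-xx-02.compute-1.amazonaws.com");
        private static final AtomicInteger next = new AtomicInteger();

        // Each call hands out the next server in the list, wrapping around.
        public static String pick() {
            int i = Math.floorMod(next.getAndIncrement(), servers.size());
            return servers.get(i);
        }
    }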
Also, all you potential MMO guys take note: there has been one time I know of when Amazon briefly shut down all of the EC2 instances. But that’s better uptime than the datacenter at One Wilshire, now that I think about it.