I’ve been doing a lot of work lately on behalf of clients who are looking to deploy applications to virtual data centers. When I first started working as a technology consultant in the ’90s, it was a given that if you wanted to have a web application, you had to buy a bunch of servers and rent out a cabinet in a data center somewhere. Now the notion of spending all that money on hardware that will go obsolete in a few years seems like insanity most of the time (particularly when the size of the audience for your application is completely unknown). When I first started a hosted web service in 1999 people judged us by how many servers my company owned. Now when I tell them “we don’t own any servers at all” they nod knowingly.
It’s great that we have more cost-efficient virtualization options today than we had ten years ago. Unfortunately, though, virtualization is a disruptive technology, which means that there are incumbents (including hardware and software vendors) who are digging in rather than adapting their technology and business models to the new realities of the marketplace. If the economy of 2008 forces IT managers to put more emphasis on cost containment, as the recession of the early 1990s did, the pressure to do more with virtualization will only increase.
Microsoft in particular is far behind the curve on virtualization — maybe not with respect to its own virtualization products, but with respect to its one-CPU/one-license business model for its software products and operating systems. Rather than seeing virtualization as an opportunity to charge a lucrative toll for dialtone computing, Microsoft’s first response was to make their license terms more restrictive (by preventing users from deploying Windows Vista to desktop virtual machines unless they pay for the bloated, expensive version). I know that they’ve relaxed this restriction for Vista in the last year, but that doesn’t matter to me since Vista will not likely touch any computer I own for at least a year or two, if ever. (When I do my own development work today or do .NET demos at VSLive, I run Windows Server 2003 in a virtual machine under Parallels on my Mac.)
But the desktop is neither here nor there. I’m more concerned about the way that Microsoft is missing the boat with respect to server licensing. They are getting their butts kicked by LAMP, particularly here in Silicon Valley, but also internationally, and their poor virtualization story is a big reason for this.
We’re currently working on an out-of-band innovation project for one of the world’s largest companies. The company went with us specifically because they wanted an outsider’s perspective on what they were trying to do, but also because they didn’t want to deal with a bunch of bureaucracy and institutional hurdles to shipping the application.
We could have written the application in .NET (in which I am a world-renowned expert) or in PHP (which I am just now coming up to speed on). Paying for Windows Server licenses would not have been an issue for this client. So given that, you’d think that the free operating system would be at a huge disadvantage (particularly since the developer who’s going to write this application is operating outside of his area of expertise).
But we came to the conclusion that it would be more efficacious to write and deploy the application using Ubuntu and LAMP mainly because the story with Windows virtualization is so totally ridiculous. After looking around, it does not appear that there is a product out there that supports Windows Server on a virtualized basis in a manner similar to Amazon EC2 (giving us the ability to spin up many instances of a cloned server configuration quickly) or Slicehost (which also provides virtual hosting based on several different Linux operating system configurations at low cost — less than EC2, actually — and can spin up a new server instance within a minute or two).
None of these hosting options really work with Windows because of Windows’ licensing model.
What would Windows support for an EC2-style virtualization product look like? Microsoft would provide a preconfigured machine image of Windows Server for use on EC2. Users would be able to spin up one or more instances of the server in less than a minute (no waiting for 40 minutes for the operating system to copy its files onto the virtual machine and no running Windows Update five times with a reboot each time). Just providing something like that would be a huge benefit in and of itself.
But — and this is the important part — you’d be able to pay for your operating system license by the minute instead of by the CPU. If you do the math (take a $400 license for the web server edition of Windows Server, divide by the 8,760 hours in a year, and spread that over the three-to-five-year amortization period typical of an operating system product), it should be possible for Microsoft to offer a bare-bones utility computing edition of Windows Server through Amazon EC2 for no more than one or two cents an hour. If users wanted other stuff (like a version of SQL Server that doesn’t artificially throttle the amount of memory you can use), they could pay another few cents per hour. (Or you could just run MySQL, which works quite well on Windows.)
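The arithmetic above works out in a few lines. (The $400 license price and the three-to-five-year amortization window are the figures from the paragraph; everything else is straightforward division.)

```python
# Back-of-the-envelope check of the per-hour licensing math.
# Assumes a $400 Windows Server (web edition) license amortized
# over a typical 3-to-5-year useful life, as stated above.
LICENSE_COST = 400.0      # dollars
HOURS_PER_YEAR = 8_760

def hourly_cost(amortization_years):
    """Dollars per hour when the license is spread over its useful life."""
    return LICENSE_COST / (HOURS_PER_YEAR * amortization_years)

for years in (3, 5):
    print(f"{years}-year amortization: {hourly_cost(years) * 100:.2f} cents/hour")
# 3-year amortization: 1.52 cents/hour
# 5-year amortization: 0.91 cents/hour
```

Either way, the raw license cost lands comfortably under the two-cents-an-hour mark, which is the point: the obstacle isn’t the price, it’s the per-CPU licensing model.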
I should reiterate that this is not a matter of “your operating system costs money” versus “their operating system costs nothing”. Businesses are always willing to pay more when the value is there. This is really a matter of who is more of a pain in the ass to do business with. Or not do business with, as the case may be.
Update: Read/Write Web suggested that Microsoft is at work on its own utility computing initiative, called Red Dog. I have high hopes, but my expectation is that it will be similar to Microsoft’s other online offerings — 80% of the feature set that people need combined with enough franchise-protecting restrictions that it won’t be a serious threat to EC2. It would be much easier to buy into the whole thing if Microsoft just found a way to virtualize its software licensing and left the hosting to others.