Last month, I predicted that VDI would be just a niche play as the cloud matures. Yesterday, Brian Madden posted a dramatically different perspective on the extent to which VDI will penetrate computing.
The perspective was not his own, but he thought it interesting enough to write about. The problem, though, is that while the observations are reasonable, the conclusions are awful.
Let's look at specific examples.
First, the post notes that computing is changing rapidly, and of course I agree. More apps are moving to the cloud for reasons of simplicity and portability. The apps that will be left behind are rich applications that require local execution. The problem with VDI in this scenario is that you get the worst of both worlds: neither the simplicity of a cloud app nor the functionality of a local app. It just doesn’t make sense to take your fat desktop and stick it in the cloud (except in niche scenarios), and VDI will only become more cumbersome as the cloud matures.
A second observation in the post invokes Moore’s law: as servers become better and cheaper, the cost of VDI will drop. That might be true if users kept running the same applications, but that’s not how computing works. Applications will continue to expand and consume the additional server capacity, negating any savings from Moore’s law.
Third, the post goes on to describe deployment models. The primary pain point that desktop virtualization solves is desktop deployment and centralized management. With a client-based solution, IT can provision an additional VM simply by publishing an HTML link and sending an email. That’s cheaper, faster, and more resilient than provisioning additional boxes in the datacenter, as you do with VDI.
The post also completely ignores some fundamental issues with VDI. For example, a defining characteristic of VDI is the pooling of resources in the datacenter, but the downside to pooling is that you are magnifying the risks and complexity of desktops—the classic “eggs in one basket” problem. With VDI, you are taking inherently resilient, distributed desktops and turning them into a highly concentrated system that is vulnerable to malfunction. With VDI, if the system goes down, all your desktops go down. A related problem is that IT has to over-provision to prepare for peak capacity (e.g., 9:00 a.m. on a Monday). But group behavior is difficult to predict, and your “over-provisioning” may prove inadequate anyway.
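To make that capacity point concrete, here is a quick back-of-envelope simulation. This is my own illustration with made-up numbers (1,000 users, a 3-minute logon load, a 20% headroom factor), not anything from Madden's post: it just shows how the Monday-morning logon storm varies from day to day, so a fixed over-provisioning margin sized from the average peak can still fall short.

```python
# Hypothetical sketch: sizing a VDI pool for the Monday 9:00 a.m. logon storm.
# How tightly users pile in varies from morning to morning, so provisioning
# against the *average* peak plus a fixed margin can still be exceeded.
import random
import statistics

USERS = 1000          # hypothetical headcount
LOGON_MINUTES = 3     # assumed duration each logon loads the hosts
HEADROOM = 1.2        # provision 20% above the average observed peak

def peak_logons_in_flight() -> int:
    """Simulate one morning: users log on in a burst around 9:00 whose
    tightness varies day to day; return the worst one-minute count of
    logons in progress."""
    spread = random.uniform(3, 10)   # some mornings everyone piles in at once
    starts = [max(0, int(random.gauss(15, spread))) for _ in range(USERS)]
    in_flight = [0] * 120            # one slot per minute after 8:45
    for start in starts:
        for minute in range(start, min(start + LOGON_MINUTES, 120)):
            in_flight[minute] += 1
    return max(in_flight)

peaks = [peak_logons_in_flight() for _ in range(200)]
provisioned = int(statistics.mean(peaks) * HEADROOM)

print(f"average peak: {statistics.mean(peaks):.0f} concurrent logons")
print(f"provisioned for: {provisioned} (average peak x {HEADROOM})")
print(f"worst morning: {max(peaks)}")
print(f"mornings exceeding provisioned capacity: "
      f"{sum(p > provisioned for p in peaks)} / {len(peaks)}")
```

Run it a few times and you will see mornings that blow past the provisioned capacity, which is exactly the guesswork IT is left with when desktops are concentrated in the datacenter.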
Finally, the post fails to address Madden’s own Offline Paradox. The lack of offline capability is at the core of VDI’s shortcomings. There are many times when a user finds himself without an Internet connection, such as on a plane or when the connection simply goes down. With VDI, users without a connection cannot access their virtual desktops at all. This is a key area where a client-based approach excels.
What do you think the future holds? Will virtual desktops live in the datacenter or on the host machine?
What a breath of fresh air. You are spot on. The conclusion problem is most likely due to siloed thinking (to a carpenter, every solution requires a hammer or a saw). Few people want to step back and look at the larger problem space (users need offline usability, network considerations, IT capacity, etc.). You are nicely defining the intersection of BYOC and cloud computing. I look forward to reading more.