Wednesday, June 24, 2009

User Personalization: The New Frontier in Desktop Virtualization

When companies give laptops to their employees, they set various access policies. Some companies let employees do whatever they want on these laptops, such as install software, while others lock the laptops down so employees can only do work-related tasks. I know people who have to carry two laptops when they travel, one corporate and one personal, so they can do personal things on the road.

When companies set these policies, they are forced to trade off employee productivity and flexibility against data security and manageability. The more flexibility IT gives users, the harder the laptops are to manage. How much time does IT really want to spend re-imaging laptops because users frequently mess them up? At the same time, users are working longer hours from remote locations, so IT needs to keep them happy and productive. And if something does go wrong, users do not want to wait days before they can try out software that might help their work.

Now companies are planning to implement desktop virtualization, especially VDI, to reduce the cost of supporting laptops and desktops. What many have not realized is that they are simply trading one problem for another: instead of managing physical machines, they will be managing a lot more virtual machine images.

When the MokaFive founding team was at Stanford looking at desktop virtualization trends, we realized that a solution had to satisfy both the needs of the IT department and the needs of end users. If it only makes one side happy, it will never be successful.

This is why we introduced a feature called User Personalization in our new version 2.0. A new policy lets IT control whether or not an end user can personalize or customize the virtual machine image. Even when IT allows the user to change Windows settings, install applications, and so on, IT still controls the base image: IT can update it, add new applications, etc. The changes in the base image and the customizations made by the user are merged when the user boots the image. If the user messes something up, he just reverts his changes and goes back to the base image. IT does not need to be involved at all.
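
To make the idea concrete, here is a toy sketch of the merge-on-boot and revert-to-base behavior. It is purely illustrative (a few lines of Python, not our actual implementation), and the class and field names are made up:

```python
# Toy sketch of merge-on-boot / revert-to-base semantics.
# All names here are hypothetical illustrations, not MokaFive APIs.

class VirtualDesktop:
    def __init__(self, base_image: dict):
        self.base_image = base_image      # managed by IT (apps, settings)
        self.user_layer = {}              # the user's personalizations

    def personalize(self, key: str, value: str) -> None:
        """User installs an app or changes a setting; it lands in the user layer."""
        self.user_layer[key] = value

    def boot(self) -> dict:
        """At boot, the user layer is overlaid on the IT-managed base image."""
        merged = dict(self.base_image)    # start from the golden image
        merged.update(self.user_layer)    # user changes win on conflict
        return merged

    def revert(self) -> None:
        """If the user messes up, discard the user layer; the base image is untouched."""
        self.user_layer.clear()


desktop = VirtualDesktop(base_image={"os": "Windows XP", "office": "2003"})
desktop.personalize("custom_app", "1.0")
print(desktop.boot())   # base image plus the user's app
desktop.revert()
print(desktop.boot())   # back to the base image, no IT involvement needed
```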

By combining the User Personalization feature with the Targeting feature, we are providing a very powerful image management solution for desktop virtualization deployments. With v2.0, you can have a single Windows XP image targeted to different groups with different policies, giving one group more flexibility and another less. You can work on the next image release while everyone stays productive, and when the release is ready, a few clicks in the management console update everyone.
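
Conceptually, targeting boils down to a small mapping like the one sketched below. The group names and policy fields are hypothetical, and this is an illustration of the idea rather than our actual configuration format:

```python
# One golden image, many groups, different policies.
# Group names and policy fields are made up for illustration.

GOLDEN_IMAGE = "winxp-corp-v2.0"

POLICIES = {
    "engineering": {"allow_personalization": True,  "allow_usb_boot": True},
    "finance":     {"allow_personalization": False, "allow_usb_boot": False},
}

def assignment_for(group: str) -> dict:
    """Every group gets the same image; only the policy differs."""
    return {"image": GOLDEN_IMAGE, "policy": POLICIES[group]}

def release_update(new_image: str) -> None:
    """When the next image release is ready, one change updates every group."""
    global GOLDEN_IMAGE
    GOLDEN_IMAGE = new_image

print(assignment_for("engineering"))
print(assignment_for("finance"))
release_update("winxp-corp-v2.1")
print(assignment_for("finance"))  # picks up the new release automatically
```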

MokaFive 2.0 now available!

It's a big week for the team here at MokaFive. As many of you know, MokaFive started as a project at Stanford to use virtual machines to make desktops easier to manage. A little over a year ago we released our 1.0 product, which was a hosted solution. We got a lot of positive feedback from customers about 1.0, but the number one customer request was the capability to run the MokaFive service in-house.

So this week, we announced 2.0! The 2.0 technology comes in the form of an enterprise-ready Desktop-as-a-Solution platform: MokaFive Suite. With 2.0, you can install and run MokaFive Suite in your data center, behind your firewall, and serve your end-point clients without depending on us. You can integrate with all your existing management solutions. You are in total control.

Enterprise computing is reaching a turning point as desktop virtualization technologies mature, answering business challenges that haven't been solved before. Our customers are innovating with the quick-to-deploy MokaFive Suite, taking advantage of mass user customization and policy-based security. We will share details of how different businesses (across healthcare, legal, finance, and other professional services) are solving their challenges, and the benefits they have achieved.

We have also added some exciting new capabilities on the client side. For a long time, we have been working behind the scenes on a new layering technology that dramatically changes the way you can deploy and manage virtual desktops. It allows you to separate the machine into layers that can be independently managed and updated. Among other things, you can now allow users to install their own applications while you still manage and update a single golden image. That's right: if you set the policy to allow it, your users can install any applications (even kernel drivers) and they will persist across rejuvenation and updates. This removes one of the biggest barriers to widespread adoption of desktop virtualization, and MokaFive is the only solution on the market today that gives you that ability.
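
If you want a mental model for the layering, the toy sketch below captures the essentials: reads fall through from the user layer to the base layer, writes land in the user layer, and swapping in a new golden image leaves the user layer alone. Again, this is an illustration, not our on-disk format:

```python
# Rough sketch of the layering idea (not MokaFive's actual disk format).

class LayeredDisk:
    def __init__(self, base: dict):
        self.base = base        # golden image, managed and updated by IT
        self.user = {}          # user-installed apps, settings, even drivers

    def read(self, path: str):
        # The user layer shadows the base layer, copy-on-write style.
        if path in self.user:
            return self.user[path]
        return self.base.get(path)

    def write(self, path: str, data):
        # All user writes go to the user layer; the base stays pristine.
        self.user[path] = data

    def update_base(self, new_base: dict):
        # IT pushes a new golden image; the user layer persists across the update.
        self.base = new_base


disk = LayeredDisk(base={"C:/Windows/system32": "XP SP2"})
disk.write("C:/Program Files/MyTool", "user-installed")
disk.update_base({"C:/Windows/system32": "XP SP3"})     # rejuvenation / update
print(disk.read("C:/Program Files/MyTool"))             # still there
print(disk.read("C:/Windows/system32"))                 # updated by IT
```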

We also redesigned the client, and it now uses a new disk format. The new format is much more reliable and performs better, especially if you are running a LivePC off a USB flash drive that can be removed without warning. Users running v1.x LivePCs will need to migrate manually to 2.0 to take advantage of the new features. The migration process is straightforward: export the LivePC with the v1.x Creator and then import it using the v2.0 Creator. Here is a detailed document to walk you through it [link]. Let us know if you need help.

Stay in touch with the MokaFive blog for information and perspective on the concerns and trends related to the modernization of enterprise desktop computing. We invite your feedback, questions and opinions - starting with the announcement of MokaFive Suite.

If you're on Twitter, follow us at http://twitter.com/MokaFive

Thursday, June 18, 2009

Virtualization: Hype vs Reality, Part II

And we’re back with Part II of our series on what is hype and what is reality when it comes to virtualization. Here is our perspective on two more topics that are getting a lot of play lately:

Reality: Virtualization is cheaper than traditional infrastructure

Obviously a major issue for anyone in any industry right now is cost. People are looking to save money in every way they can. In light of this, many vendors in the virtualization space are chiming in about how their products and solutions can help you cut costs. Is this true? For the most part yes. BUT it’s not quite so black and white, so let’s break it down.

A desktop virtualization deployment can save you money in terms of management and support, but it can also end up more expensive than a traditional deployment – it all depends on how you approach it.

Right off the bat, a VDI deployment costs 20-30% more from a CAPEX perspective than a traditional physical desktop deployment. However, once it's up and running, the deployment can make your IT infrastructure much easier and cheaper to manage in the long run, *if* you do it right.

  • For instance, if you go with the model of creating one image for each user, the deployment will not scale, and storing and managing the virtual desktops will end up just as expensive as a traditional physical desktop deployment, if not more so.

  • To achieve real cost savings and recoup the extra CAPEX, you need to leverage virtualization to make IT more scalable. What does this mean? Creating one golden image that goes out to all of your users, or to large groups of users. A rough back-of-the-envelope comparison is sketched below.
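
Here is the kind of back-of-the-envelope math we mean. Only the 20-30% CAPEX premium comes from the discussion above; every other number is an assumption you should replace with your own:

```python
# Back-of-the-envelope cost comparison. Only the ~25% CAPEX premium comes from
# the post; every other figure below is an illustrative assumption.

USERS = 1000
TRADITIONAL_CAPEX_PER_USER = 800          # assumed cost of a physical desktop
VDI_CAPEX_PREMIUM = 1.25                  # "20-30% more" up front
MGMT_COST_PER_IMAGE_PER_YEAR = 300        # assumed cost per managed image
YEARS = 3

def vdi_cost(images_managed: int) -> float:
    capex = USERS * TRADITIONAL_CAPEX_PER_USER * VDI_CAPEX_PREMIUM
    opex = images_managed * MGMT_COST_PER_IMAGE_PER_YEAR * YEARS
    return capex + opex

traditional = (USERS * TRADITIONAL_CAPEX_PER_USER
               + USERS * MGMT_COST_PER_IMAGE_PER_YEAR * YEARS)

per_user_images = vdi_cost(images_managed=USERS)   # one image per user
golden_images = vdi_cost(images_managed=5)         # a handful of golden images

print(f"Traditional (3 yr):      ${traditional:,.0f}")
print(f"VDI, one image per user: ${per_user_images:,.0f}")
print(f"VDI, golden images:      ${golden_images:,.0f}")
```

With made-up numbers like these, the one-image-per-user model never recovers the CAPEX premium, while the golden-image model comes out well ahead.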

Hype: Remote display is the way to go

Remote display can work well, but only in very specific environments.

  • Remote desktop is fast over LAN, but slow over thin pipes.

  • Gigabit LAN to the server can achieve speeds nearly indistinguishable from local execution but at a much higher cost.

  • Low-latency WAN connections can get acceptable performance, but only if users are not running any graphically intensive applications.

  • High-latency WAN connections are not viable for real-world usage.

Basically, what it comes down to is that there are fundamental limits to the interactive performance of applications across low-bandwidth, high-latency links, and we are getting close to those limits. Remote desktop simply does not work well over wireless networks or on laptops.

So what do we recommend? For applications that require a high refresh rate or rich graphics, a locally executed solution is always going to offer the best performance, no matter what. Remote execution will always face speed-of-light limitations, even with the fastest of connections.
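
As a rough illustration of why latency and bandwidth dominate, the sketch below estimates the response time of a single interaction (one round trip plus streaming a screen update back) for a few link profiles. The link numbers, the 50 KB update size, and the ~100 ms "feels instant" threshold are all assumptions, not measurements:

```python
# Rough estimate of remote-display responsiveness under different links.
# Every figure here is an illustrative assumption.

LINKS = {
    # name:              (round-trip latency in s, bandwidth in bits/s)
    "Gigabit LAN":       (0.001, 1e9),
    "Low-latency WAN":   (0.030, 10e6),
    "High-latency WAN":  (0.200, 2e6),
    "3G wireless":       (0.300, 1e6),
}

SCREEN_UPDATE_BITS = 50e3 * 8    # assume a ~50 KB compressed screen update
THRESHOLD_S = 0.100              # responses under ~100 ms feel instantaneous

for name, (rtt, bandwidth) in LINKS.items():
    # One round trip for the input event, plus time to stream the update back.
    response = rtt + SCREEN_UPDATE_BITS / bandwidth
    verdict = "ok" if response <= THRESHOLD_S else "sluggish"
    print(f"{name:16s} ~{response * 1000:6.1f} ms per interaction -> {verdict}")
```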

Wednesday, June 3, 2009

Virtualization: Hype vs Reality

These days there is a lot of talk about virtualization, desktop and otherwise. With all this chatter it is understandable that people are confused about what is what. We thought it would be useful to sort through some of it and separate the hype from the reality.

Hype: Virtualization as the hot new "it" technology

There is a huge amount of hype around virtualization, and it is being positioned as a brand-new, cutting-edge technology that will solve all your IT needs. But virtualization is far from new. It has been around a very long time, in computer time at least. Virtualization is a core concept in computer systems and has been in use since at least the days of the IBM mainframes. The remote desktop model of centralized execution is a throwback to the 1970s, with dumb terminals connecting to the big mainframe in the back room. (Take the old IBM literature, change the names, and you could pass it off as a VDI architecture diagram.) As we develop new technologies and approaches, desktop virtualization has evolved and become more sophisticated, and thus more useful, today providing a real solution to serious computing needs.

Hype: Virtualization provides poor performance

The second common myth we keep hearing about virtualization is that it is slow. People think that using virtualization implies a negative performance impact. The truth is a bit more complicated.

Virtualization adds a level of indirection, which implies some overhead. The two primary considerations for system performance are CPU (processing) overhead and IO overhead, so it makes sense to consider them separately:

  • CPU overhead: With modern virtual machine monitors running on modern CPUs, the CPU overhead is insubstantial. Some operations can be slower with virtualization (for example, system calls or page table manipulation), but modern VMMs are now generally able to work around these issues, leveraging techniques like dynamic recompilation and paravirtualization. Intel and AMD have also added hardware virtualization support in their recent CPUs. It depends on the particular workload, but the CPU overhead from virtualization is typically a few percent at most.

  • IO overhead: IO intensive applications can see a bigger performance hit due to virtualization because the extra indirection can be more costly. However, IO performance hits can often be reduced or eliminated by tuning the system.

While factors like these need to be taken into consideration to get optimum performance when using desktop virtualization, there are other advantages that offer immediate performance benefits:

  • Virtualization enables performance optimizations at a different level. The extra level of indirection inherent in virtualization can be used to improve overall system performance by optimizing at a whole-system level. For example, virtualization allows you to share hardware resources and quickly adjust based on demand, leading to better overall system performance.

  • The VMM can even use compression and caching to improve the IO performance beyond its native performance levels. We've seen numerous examples of applications that run faster under virtualization due to these effects.

  • A virtual machine can actually boot faster than a physical machine because the load order is predictable and the VMM can rearrange the blocks.

  • On the server side, it is easy to migrate virtual machines, or quickly launch new ones to handle changes in load.

  • With desktop virtualization, you can boot from a golden image every time, eliminating slowdown from Windows rot. Also, because you can rejuvenate the system image, you don't need to run virus scans of the system image. Using anti-virus software typically slows the machine more than virtualization does.

Basically, running on a VMM is like running on a different computer architecture. If you take an application that was tuned for one architecture and run it on another, sometimes you will take a performance hit, but through tweaking and tuning you can usually erase the deficit. Virtualization is no different. A virtualized architecture also opens a bunch of new possibilities that can improve performance.
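
If you want to check the "few percent" claim against your own workload, a simple harness like the one below, run once on the physical machine and once inside the VM, gives a first-order comparison. The workload sizes are arbitrary assumptions:

```python
# Minimal harness for comparing CPU- and IO-bound performance with and without
# virtualization: run it once natively and once inside the VM, then compare.

import os
import time

def cpu_workload(iterations: int = 5_000_000) -> float:
    """Pure computation; should show only a few percent of overhead in a VM."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

def io_workload(path: str = "vm_io_test.bin", megabytes: int = 256) -> float:
    """Sequential write plus fsync; more sensitive to the extra IO indirection."""
    buf = os.urandom(1024 * 1024)          # 1 MB buffer, built outside the timed loop
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(megabytes):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

if __name__ == "__main__":
    print(f"CPU workload: {cpu_workload():.2f} s")
    print(f"IO workload:  {io_workload():.2f} s")
```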

Hype: Virtualization uses less energy

Virtualization actually adds overhead, so cycle-for-cycle it will usually consume more power rather than less. BUT power savings with virtualization ARE possible, leading to greater energy efficiency. Here’s how:
  • Consolidating many old, underutilized servers into a single server can save a lot of energy.

  • Power savings can also be achieved simply by moving to newer, more energy efficient machines.

There are also other variables that affect whether or not implementing desktop virtualization will save you power. If you are moving desktops into the data center, it depends on machine utilization: if you have many desktop machines sitting idle most of the time, consolidating them will use less power, but if the endpoint machines are already fairly well utilized, moving them will add to your data center's power draw. Implementing power-saving modes on desktops is one thing you can do to move towards power savings in any environment.
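
To see how quickly the consolidation math adds up, here is a quick sketch. Every wattage and server count below is an assumption for the sake of the example:

```python
# Illustrative consolidation math; every figure here is an assumed example value.

OLD_SERVERS = 10
OLD_SERVER_IDLE_WATTS = 250        # old, underutilized boxes draw power even when idle
NEW_HOST_WATTS = 500               # one newer, more efficient consolidated host
HOURS_PER_YEAR = 24 * 365

before_kwh = OLD_SERVERS * OLD_SERVER_IDLE_WATTS * HOURS_PER_YEAR / 1000
after_kwh = NEW_HOST_WATTS * HOURS_PER_YEAR / 1000

print(f"Before consolidation: {before_kwh:,.0f} kWh/year")
print(f"After consolidation:  {after_kwh:,.0f} kWh/year")
print(f"Savings:              {before_kwh - after_kwh:,.0f} kWh/year "
      f"({1 - after_kwh / before_kwh:.0%})")
```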

Really, what it comes down to is that power and cooling in the data center are very expensive, regardless of your architecture. You have to provision your data center to handle worst-case peak load, but since most loads vary greatly, you will most likely end up either massively over-provisioning or risking unacceptable performance and downtime during peak periods. That's a reality – virtualization or not.

Stay tuned - we'll be debunking some more of the myths surrounding virtualization in coming posts. This is obviously an important topic and one that is hot on everyone's minds right now. For more thoughts and another perspective, check out Scott Key's recent post on virtualization.info.