Wednesday, November 18, 2009

Virtual Layers: A Retrospective

Since we introduced our Layering solution as part of our 2.0 product in June, a lot of people have been talking about Virtual Layers. (Check out Gabe Knuth's "What is Layering, and why does it matter?" post for a good overview.) Now that the feature has been in customers' hands for a few months, I have a better perspective on what works well with layered management (and what doesn't), and why.

Reaction to Layering

The key realization behind layering is that different people are responsible for different parts of the desktop, and you want to manage the pieces independently. Layering provides a very easy way to combine different managed pieces into a single cohesive environment.

The reaction after we released layering was very positive. Everyone I've talked to loves the story and wants to move to a world where they can manage their desktop using layers. Since we released our product in June, a number of other companies have started talking about layering and management using layers. Some have even announced products in this space. However, MokaFive is still, five months later, the only layering solution that is commercially available. On the customer front, layering has generated a huge amount of interest in MokaFive and is one of our key differentiators.

While we've had great success with customers using layering, some people are skeptical about whether layered management can actually work in their real-world environment. They've heard claims like this before with other technologies and have been disappointed.

Layering and Application Virtualization

The desire to manage independent pieces is not new, and there have been products that have tried to solve this problem for a very long time. Application virtualization is one such technique. Application virtualization allows you to wrap up an application and all its dependencies into a single bundle that can be easily managed. At least, that's the promise.

Most of our customers and prospects have played with application virtualization, and some are deploying it today for some of their applications. But everyone who has used application virtualization in real scenarios is aware of its limitations.

First, you as the admin need to sequence the applications, which is easy to do wrong and very, very, very hard to do right. You need expert knowledge of the application's internal structure, as well as expert knowledge of Windows and of the ins and outs of the particular packaging format. Sequencing is a black art, and it takes a long time to learn how to do it well.

The second problem with application virtualization is that it is a really, really hard problem, so no one has solved it completely. To completely virtualize an application, you need to correctly emulate every operating system component that the application interacts with. This is an exceedingly difficult problem, especially when the OS changes and applications rely on undocumented behavior and bugs. A perfectly compatible application virtualization solution would require a reimplementation of most OS components. (Once you do that, you don't even need Windows anymore. For those who are familiar with WINE on Linux, WINE is application virtualization. It works with many applications but has some serious restrictions.) In fact, in the best case for the very best application virtualization products, the compatibility rate is only 90%. That leaves 10% of applications that cannot be virtualized.

The third issue with using application virtualization is that it provides isolation, which is not always what you want. When you package an application, it becomes independent of the rest of the system, so you can run it in a wider variety of environments. However, that also limits interaction with the rest of the system. When you try to paste your Visio diagram into your Word document and those applications are separately virtualized, it doesn't work because the applications are isolated. You have to do a bunch of work to explicitly open conduits between the virtualized applications, and you have to do this for all applications. This goes against the whole point of having a single cohesive environment where everything can interact.

The reason I bring up application virtualization is that, because its limitations are fairly well understood, people assume layering has the same ones. But the way we did layering avoids these problems.

Why Does Layering Work?

One of the key differences in our layering solution is that a layer is tied to a particular base image. That means we don't need to solve the problem of getting the same layer to work in every environment; we just need to make it work in the one base image it was installed on, plus any delta updates to that base image. We don't need to know the dependencies between applications to bundle them up or verify they exist; we don't need to normalize paths or registry keys to work on different base OS configurations; and we don't need to worry about compatibility between different base images. So we avoid a huge set of problems.

Layering is trying to solve a different problem than traditional application virtualization. Application virtualization is about isolating applications so they can be delivered easily and will run the same way in different environments. Layering is in many ways the opposite of isolation - it is about combining pieces to form a single cohesive whole, while maintaining the ability to manage each layer individually.

What about delta updates? Couldn't they break layering because the base image changes? Theoretically yes, but we have not run into this problem in customer deployments. The reason is that most updates do not need to modify higher layers. This makes intuitive sense: installing a hotfix or service pack usually doesn't search your hard drive for other programs and files and modify them. Programs that do variations of this (e.g. an upgrade to a new version that converts data files to a new format) require some special consideration when used in a layered management environment.

The other issue that can arise during layering is when the layers have conflicting updates. Say you have applications that both the user and administrator can update. When they install conflicting versions, whose version wins? The layering engine is flexible enough to allow you to specify this at a fine-grained level, but it can lead to a confusing user experience as some user changes will persist while others will not. The solution is to be structured in terms of who is responsible for what. For example, the administrator controls a core set of applications, operating system files, and VPN, while the user can keep their own personalizations and applications but cannot change the components controlled by the administrator. With some planning, you can avoid conflicts between the layers. We provide some best practices for our customers that have worked well in avoiding conflicts.

There are two basic properties that ensure layered management works correctly. The first is that changing lower layers should not require changes to the data on higher layers; if it does, we may need to account for dependencies. The second is that you need to specify a policy for conflicting updates between layers, and some care has to be taken so that the policy operates in an intuitive way while still preserving the benefits of updates and rejuvenation. We have a set of recommended layering configurations that avoid the problems while maximizing the benefits, and our customers have been using them with great success so far.
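
To make those two properties concrete, here is a minimal sketch of how reads might resolve across a stack of layers and how a conflict policy could decide which layer wins. This is an illustration only, not MokaFive's actual engine; the layer contents and the "admin wins for admin-controlled paths" policy are assumptions for the example.

    # Illustrative sketch of layered resolution -- not the actual MokaFive engine.
    # A "layer" maps virtual paths to contents; higher layers sit above lower ones.
    BASE  = {r"C:\Windows\system32\kernel32.dll": "base v1",
             r"C:\Program Files\CRM\crm.exe": "CRM 3.0"}
    ADMIN = {r"C:\Program Files\CRM\crm.exe": "CRM 3.1"}        # IT-pushed update
    USER  = {r"C:\Users\alice\Documents\notes.txt": "my notes",
             r"C:\Program Files\CRM\crm.exe": "CRM 3.0 tweak"}  # conflicting user change

    # Stack order: index 0 is the lowest layer, the last entry is the highest.
    stack = [("base", BASE), ("admin", ADMIN), ("user", USER)]

    # Assumed conflict policy: for paths under admin control, the admin layer wins
    # even though the user layer sits above it.
    ADMIN_CONTROLLED = (r"C:\Program Files\CRM",)

    def resolve(path):
        """Return (winning_layer, contents) for a path, applying the conflict policy."""
        if path.startswith(ADMIN_CONTROLLED) and path in ADMIN:
            return "admin", ADMIN[path]
        for name, layer in reversed(stack):        # otherwise the topmost layer wins
            if path in layer:
                return name, layer[path]
        raise FileNotFoundError(path)

    print(resolve(r"C:\Program Files\CRM\crm.exe"))        # ('admin', 'CRM 3.1')
    print(resolve(r"C:\Users\alice\Documents\notes.txt"))  # ('user', 'my notes')

With a structure like this, updating the base or admin layer never rewrites the user layer (the first property), and the precedence rule makes the outcome of a conflict predictable (the second property).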

Looking Forward

Layering has been a big success for MokaFive and continues to be one of our key technology differentiators. We designed the layering solution with specific goals in mind, so we avoided most of the downsides of other approaches. Now that we've been through actual customer deployments, we have a good idea of how to use layering most effectively for management.

We are continuing to push ahead on layering, using our experience with deployments to improve the product and working on some great new features that take advantage of the layering engine. There are some exciting new features we are beta-testing today for release next year that are going to raise the stakes on layering. Stay tuned!

John Whaley, CTO & Founder

Tuesday, October 6, 2009

Deploying XenApp Within a Virtual Desktop: The Why's & How's

We posted recently about the possibilities of creating a flexible, yet secure virtual desktop - with the right management tools. If you're familiar with the benefits of virtualization and have already invested in application virtualization technology, now you can go a step further to get security and centralized management of the entire desktop image. Deploying a well-managed virtual desktop will secure your users' operating systems and simplify management of the entire environment, without sacrificing the ability to let users customize, run offline, or work on the go. The secret with virtual desktops is to "manage centrally, execute locally."

Deploying a flexible virtual desktop management solution is a relatively quick and simple process (a few hours to set things up), but can save IT administrators an impressive 60% on desktop support and management costs. Support costs drop as users can simply restart their virtual desktop to "rejuvenate" the original clean-state image with all of its applications and settings, wiping away any malware picked up along the way.


For instance - if you're already using XenApp, here are the basic steps:

1) Make sure your XenApp Server is configured to talk to your Domain Controller - this is important because you will be targeting apps based on users


2) Use MokaFive’s Creator to build a base LivePC image


3) Now that you have created a base image, you can install other applications, including XenApp applications, on top of it


4) From MokaFive's management console, target this LivePC to any users you choose to receive it


5) In the XenApp Access Management Console, set a number of policies for how you want XenApp to launch and run (from the server, locally, online/offline, etc.).


For step-by-step instructions, check out this document.

Friday, September 4, 2009

MokaFiveTV is Live

Curious to know more about desktop virtualization? The concept of managing virtual desktops with layers? Want to learn the backstory on how MokaFive came to be?

To satisfy your curiosity and learn about innovations in desktop computing, we invite you to check out the MokaFiveTV channel on YouTube. Our co-founder and CTO John Whaley, along with co-founder and current Stanford CS professor Monica Lam, as well as other experts from the team, will give you an inside look at what it's all about.

For starters, you'll find the following video episodes:
MokaFive: A brief background
Virtual Layers - Inside View
At the Whiteboard - Layering
Virtual Layers - Application Management

For an even more intimate view, follow John Whaley and MokaFive on Twitter.

Thursday, August 13, 2009

Building a Flexible (yet secure) Desktop Solution

Many of you are already familiar with server virtualization and VDI, and you might be looking for a virtualization approach that makes it easier to secure and control desktop environments in a wider range of scenarios, e.g. bring-your-own-laptop, offline access, remote employees, etc.

In this post, I am going to describe one way to build a desktop environment which provides a lot of flexibility to users but still lets IT maintain control.

Consider an alternative approach to VDI: instead of running the VM image on a centralized server, the VM runs locally on the end-user's machine. The mantra is: "Manage centrally, execute locally." This new model provides a great platform for IT organizations to customize solutions to fit their needs.

First, let's look at application virtualization and how it might fit in. Some of you may be familiar with it - the big names in this space are Microsoft App-V (formerly SoftGrid), VMware ThinApp (formerly Thinstall), and Citrix XenApp. In this model, an application is packaged up in a bundle and the user runs the application from this bundle in a sandbox environment on their unmanaged desktop. This is a great solution for delivering single applications to users because the application does not need to be installed manually on the local machine and it is delivered on-demand to the user. IT does not get involved in managing the OS and data on the machine.

However, not managing the OS or the rest of the desktop leaves the computing environment vulnerable. In most cases, it's imperative to properly manage the environment to make sure the computer doesn't crash due to a missing security patch and that business data is not left unsecured. This is one of the reasons the desktop virtualization approach of "manage centrally, execute locally" really shines: it makes it easy to secure the environment around a virtualized application.

Let's walk through the MokaFive layered approach to managing a virtual desktop, from bottom to top:

  • On the lowest layer is the host PC or Mac operating system. The MokaFive Player, which downloads and runs the virtual "LivePC" desktop, runs on either platform, so you don't need to worry about cross-platform support in your solution. We take care of that.
  • The layer above the host platform is the MokaFive Player and the hypervisor. This layer lets you manage the layers above it and control various security settings. IT can control what the user can or cannot do in the virtual desktop by setting policies in the central, Web-based management console.
  • Then there is the base OS that runs inside the virtual machine (Windows XP or Vista).
  • You can also install any corporate applications, e.g. Outlook, Word, a CRM or ERP application, etc. Together with the base OS, this can be your standard base virtual desktop image for your company, or one base image for a specific department. By default, the base image is locked down by the MokaFive Player so it can't be tampered with.
  • The top layer is where the magic happens. You can deploy additional applications to your users, based on their needs, using application virtualization technology. You can have users run applications from the server or have the applications streamed down to the virtual desktop, depending on performance needs. If you have an existing virtualized application installation, the MokaFive solution fits right in.
  • Also on the top layer are the user-installed applications. Using the personalization feature built into the MokaFive 2.0 technology, you can allow users to install their own applications on top of your standard managed image. You give your users the flexibility to do whatever they want on the top layer while you maintain control of the lower layers.
We have tried this configuration with XenApp, App-V, and ThinApp. I think it provides the most flexibility to both IT and end users while IT still maintains control.
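
As a rough way to picture the stack described above, here is a simple data model; the layer names, owners, and flags are only illustrative, not the product's internal representation.

    # Rough data model of the stack described above (illustrative only).
    LAYER_STACK = [
        {"layer": "host OS (Windows or Mac)",                  "managed_by": "user/IT", "inside_vm": False},
        {"layer": "MokaFive Player + hypervisor",              "managed_by": "IT",      "inside_vm": False},
        {"layer": "base OS image (Windows XP or Vista)",       "managed_by": "IT",      "inside_vm": True},
        {"layer": "corporate apps (Outlook, CRM, ...)",        "managed_by": "IT",      "inside_vm": True},
        {"layer": "virtualized apps (XenApp, App-V, ThinApp)", "managed_by": "IT",      "inside_vm": True},
        {"layer": "user-installed apps + personalization",     "managed_by": "user",    "inside_vm": True},
    ]

    # IT controls everything except the top layer; the user keeps the top layer
    # across reboots and base-image updates.
    for entry in LAYER_STACK:
        print(f'{entry["layer"]:<50} managed by {entry["managed_by"]}')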

Thursday, July 30, 2009

Back from Briforum

Back from last week’s Briforum conference in Chicago, and I must say it was an enriching experience. How many people can say they rode to dinner in a stretch Hummer with 30 other geeks?

The amazing thing about Briforum was the content-rich discussions. Unlike other shows I’ve been to, Briforum was a super concentration of the leading practitioners and researchers in the virtual desktop space. Moreover, there was a candid, open sharing of ideas -- and a sense that no one has this figured out, so let’s all push it ahead together. Very refreshing.

In particular, I really enjoyed Ruben Spruijt (Twitter, blog) and Shawn Bass’ (Twitter, blog) overview on application and desktop delivery solutions. Impressively, in 75 minutes, it covered the full expanse of virtual desktops and applications. I left with a much better sense of the “lay of the land”.


For MokaFive's part, John Whaley (Twitter) and I held a session on the huge promise of layering in virtual desktops and its ability to greatly improve the manageability of VMs for IT on the one hand and the customizability of VMs for users on the other. We had a terrific turnout and many great questions from the audience, even though we started at 5:30pm and lasted way into the dinner hour. A testament to the dedication and fortitude of Briforum attendees!



Update: John just posted a short video demo where he gives an inside view of our layering technology. This is a "short and sweet" clip of what we showed during Briforum.



For more photos of BriForum, check out Brian Madden's Flickr stream and the fan section of MokaFive's page on Facebook.

Burt Toma

Wednesday, July 29, 2009

Tachibana teams up with MokaFive in Japan

Today we announced Tachibana Eletech Co. Ltd as a MokaFive distributor in Japan. With nearly 90 years under its belt, Tachibana is one of the most respected and well-established Japanese technology firms and will be a solid partner to bring MokaFive solutions to market.

Security and cost are two of the most important factors influencing the deal, according to Masao Hamamura, operating officer, Information & Communication Systems at Tachibana. His statement for our press release today was, "Security is a top concern for our enterprise clients, and MokaFive provides the best solution to secure desktop environments in a wide variety of secure remote computing scenarios. Another major factor that makes MokaFive Suite platform superior to competitive products is that it does not require our customers to invest in new hardware."

This is MokaFive's first major international channel partner announcement, and with MokaFive Suite available as an easy-to-install platform, building out the channel network will be a primary focus for us. In addition to consulting, reselling, and developing systems using MokaFive, Tachibana will also use MokaFive technology to develop new products to extend its thin-client business unit, TC Cube.

Wednesday, July 15, 2009

USB drive testing at MokaFive

The USB flash memory drive market has been changing very fast. A few years ago, a couple hundred dollars would get you a slow 4GB drive. Now, the same capacity costs less than $20. However, not all drives perform the same, and we have tested a lot of them at MokaFive.

We test USB drives for performance and reliability and have found that more expensive drives are not necessarily faster or more reliable. Although USB drives look simple, there is a lot of science behind building a good drive, and it's not easy for the average consumer to get product specs without digging through reviews or actually running tests firsthand. We found that drives from the same vendor may perform differently, since performance depends on the flash memory and the USB controller that the drives use. Our CTO, John Whaley, discussed the nuances of USB drive performance with Robert Scheier from Computerworld last summer. Here are a couple of excerpts from the article:
The single biggest factor in USB drive performance is whether it contains one of two types of memory: SLC (single-level cell) or MLC (multilevel cell). SLC stores one bit, and MLC stores two bits of data in each memory cell. SLC is twice as fast as MLC, says [Pat] Wilkison [vice president of marketing and business development at STEC Inc.], with maximum read speeds of about 14 MB/sec. and write speeds of about 10-12MB/sec. Not surprisingly, almost all current USB flash drives are built using MLC memory, since SLC costs about twice as much as MLC.
Users would see the greatest performance difference between SLC and MLC if they were performing many operations involving small files, rather than relatively few read/write operations on larger files, says John Whaley, principal engineer at MokaFive Inc., whose virtualization software makes it possible for virtual machines to be stored on USB flash drives.

Besides basic manual testing and normal day-to-day use, we also set up automated tests to run on these drives. We test our client software on them and we also run various performance benchmarks. We want to make sure they perform well when our customers run their virtual desktops off the drives. We also test for reliability by plugging and unplugging the drives continuously.

Here is a photo of a device we put together to automate testing.

In addition to the standard benchmarks of MB written and read per second, we have some custom tools that more accurately measure how a drive will perform when running MokaFive LivePCs. These tests give us two "scores," one of which correlates to speed and one to responsiveness. Read and write operations to a drive can have varying delays. We've found that measuring how many of these delays take more than 100 ms gives us a good idea of how responsive the drive will feel when running a LivePC. We've found that sometimes even drives with higher overall read and write speeds will have more delays, and that when running a LivePC, such a drive will feel more sluggish to the user than a drive with fewer delayed operations.
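
The idea behind the responsiveness measurement can be approximated in a few lines of code: write small blocks to the drive, time each operation, and count how many exceed 100 ms. This is only a simplified stand-in for our internal tools; the block size, operation count, and drive path are placeholder assumptions.

    import os, time

    def responsiveness_probe(path, ops=500, block_size=4096, threshold_s=0.100):
        """Write small blocks to `path`, timing each write; count delays over the threshold.

        Simplified stand-in for our internal benchmark -- block size, op count,
        and sync flags are placeholder choices.
        """
        buf = os.urandom(block_size)
        flags = os.O_WRONLY | os.O_CREAT | getattr(os, "O_BINARY", 0) | getattr(os, "O_SYNC", 0)
        fd = os.open(path, flags)
        slow = 0
        try:
            for _ in range(ops):
                start = time.perf_counter()
                os.write(fd, buf)
                if time.perf_counter() - start > threshold_s:
                    slow += 1
        finally:
            os.close(fd)
            os.remove(path)
        return slow

    # Example: probe a file on the mounted USB drive (the drive letter is hypothetical).
    delays = responsiveness_probe(r"E:\mokafive_probe.bin")
    print(f"{delays} write operations took longer than 100 ms")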

At MokaFive, we've also developed some technology to make running LivePCs on USB drives faster, more robust, more user-friendly, and more secure. We also run our tests on this software to measure the improvements we're able to make. As an example, a particular drive got a "speed" score of 8.2 and a "responsiveness" score of 6.7. When running our USB drive acceleration software, the same drive got a "speed" score of 9.4 and a "responsiveness" score of 8.6.

A couple of years ago, we even custom-made our own drives so we could control their quality.


The following table shows some of the results from our latest round of testing:

USB drive            MaxWrite (higher is better)   MaxRead (higher is better)   MokaFive rating (higher is better)
OCZ Throttle         22506                         35035                        9.4
Patriot Xporter XT   5798                          30393                        6.8
SuperTalent 16GB     7821                          18104                        6.2



We have more results available upon request. And if you are planning to deploy your virtual desktop using USB drives and need some recommendations, please feel free to contact us. I am sure our experience in USB flash memory drives will save you a lot of time and money.

Friday, July 10, 2009

The Basics of Image Management: Targeting & Policy Control of Virtual Desktops

In the world of server virtualization, one of the challenges is to manage all those virtual machine images. An enterprise may have a couple hundred servers but they may have a couple THOUSAND virtual machine images to manage. The reason for having so many images is that IT needs to support different OSes, different software stacks and different applications, etc. There are image management tools for server virtualization that sell for thousands of dollars just to keep track of the images.

If an enterprise is going to virtualize its desktops, image management may become an even bigger problem. Is IT going to set up one VM image per user? Probably not. But what if different users need different applications or different access control policies? Is IT going to set up one VM image per difference? Maybe. But should they?

There is a better approach - achieved through two management concepts related to targeting and policy control.

The idea behind targeting is to allow an IT admin to "target" a particular version of an image to a particular group of users along with a unique set of access control policies. For example, Group A will use Image X and their policies are set so that they cannot paste copied data outside of the VM.

For the same Image X, IT can target it to Group B with a different set of policies. Targeting makes this possible with just a few clicks in the management software and doesn't require the creation or cloning of any images.

Different groups of users can also be targeted with different versions of the same image. This is great for quickly and easily testing changes to an image without a lot of fuss. For example, if the IT admin adds an application to an image but wants only a few people to test it before it is released to everyone, he or she can target the updated version to a smaller group while everyone else stays on the current version.

Once everything is tested, the admin can switch the updated version to become the release version, and everyone automatically gets the update. It's important that only the changes to the image be sent out to users, so that users don't have to download the entire image again. When just the differential is sent, the image can be updated in the background without disrupting the user at all, saving a lot of time and bandwidth.

Another way to simplify image management is to make sure that when IT updates an image, all the access policy settings remain unchanged, avoiding major headaches and time sinks. For instance, suppose IT has Image X targeted to Group A with Policy 1 and the same Image X targeted to Group B with Policy 2. When IT releases an update to the image, both groups get the update while keeping their respective policy settings.
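
A minimal way to picture targeting is as an assignment table keyed by group, where the image version and the policies live side by side; releasing an update only touches the version field, so each group's policies survive. The group names, version numbers, and policy flags below are invented for illustration and are not the actual MokaFive schema.

    # Illustrative targeting table -- not the actual MokaFive schema.
    assignments = {
        "Group A": {"image": "Image X", "version": 7, "policy": {"paste_outside_vm": False}},
        "Group B": {"image": "Image X", "version": 7, "policy": {"paste_outside_vm": True}},
        "Pilot":   {"image": "Image X", "version": 8, "policy": {"paste_outside_vm": True}},  # testers
    }

    def release_update(image, new_version):
        """Promote a new image version; policies are untouched, so each group keeps its settings."""
        for target in assignments.values():
            if target["image"] == image:
                target["version"] = new_version

    release_update("Image X", 8)
    print(assignments["Group A"])  # version bumped to 8, paste policy still False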

Of course, MokaFive's 2.0 technology and the MokaFive Suite incorporate the sophisticated image management functionality of targeting and persistent policy settings.

Here is a video that demonstrates these management features. Take a look and let us know if you have any feedback.

Monday, July 6, 2009

v2.2 is available!

You may be wondering why we are releasing 2.2 so quickly. Didn't we just announce 2.0?

Good question. The truth is, v2.0 has been available to our existing v1.7 customers for over a month. They have been using it and giving us a lot of valuable feedback. Last week, we announced v2.0 publicly along with a new Website. Now anyone can purchase the MokaFive Suite v2.0.

Today, we released v2.2. It includes a few bug fixes plus the following new features:
  • RSA SecurID support
  • Co-branding support
  • Using Amazon S3 as the Primary Image Store
If you haven't checked out our v2.x product yet, please contact us to schedule a demo or a trial.

Wednesday, July 1, 2009

The nitty gritty: Brian Madden notes MokaFive has the first "real" layering product

Brian Madden has just written up an article on our 2.0 product titled "The first “real” layering product? MokaFive’s new v2.0 looks pretty good!" and I would like to follow up on it by providing a few additional details.
As Brian mentions in his article, our technology transparently separates the system state, application state, user-installed applications, and user data and preferences. To end users, a Windows XP LivePC works just the same way as their regular Windows laptop.

An important point is that we allow the IT administrator to customize and control the policies. An IT admin can enable or disable user-installed applications from the centralized console and can also edit the layering policy file to determine what the system should or should not preserve. For example, IT can choose to approve the use of MSN instant messenger but not Yahoo IM. The admin can edit the layering policy file to preserve only the settings and files related to MSN IM. If the user downloads and installs Yahoo IM on the corporate LivePC, it will be wiped clean when the user reboots the LivePC.
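
As a rough illustration of what such a preservation rule does (the actual MokaFive policy file format is not shown here, and these paths are assumptions), think of it as a whitelist of locations to keep across rejuvenation; anything the user writes outside those locations is discarded at reboot:

    # Hypothetical illustration of a preservation whitelist -- not the actual policy file syntax.
    PRESERVE = (
        r"C:\Program Files\MSN Messenger",
        r"C:\Documents and Settings\%USER%\Application Data\Microsoft\MSN Messenger",
    )

    def kept_after_rejuvenation(user_changes, user="alice"):
        """Return only the user changes whose paths fall under a preserved prefix."""
        prefixes = tuple(p.replace("%USER%", user) for p in PRESERVE)
        return [path for path in user_changes if path.startswith(prefixes)]

    changes = [
        r"C:\Program Files\MSN Messenger\msnmsgr.exe",            # preserved
        r"C:\Program Files\Yahoo!\Messenger\YahooMessenger.exe",  # not preserved -> wiped at reboot
    ]
    print(kept_after_rejuvenation(changes))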

There is a misconception that layering is easy. When scrutinized, it is clear that it's quite a complex proposition. Our first implementation three years ago was similar to other offerings today. Originally we used junction points to redirect the Windows "Documents and Settings" folder to a separate disk and then preserved that disk. However, we learned that this does not work for all applications and system preferences. Sometimes applications or the system need to put data in locations other than the user folder. That's why we spent time developing this advanced system for dealing with the sophisticated requirements of the broad variety of applications a user could choose to install.

The layering technology is cool but what makes it very powerful is the flexibility it provides to the IT admin to customize for your organization's needs. In the upcoming weeks, we will be writing more articles on how to configure and deploy Windows XP LivePC using this new feature.

Wednesday, June 24, 2009

User Personalization: The New Frontier in Desktop Virtualization

When companies give laptops to their employees, they set various access policies. Some companies let their employees do what they want on these laptops, like install software, while other companies lock down their laptops so employees can only do work-related tasks. I know people who have to carry two laptops, one corporate and one personal, while they travel so they can do personal things on the road.

When companies set these policies, they are forced to make a trade-off between employee productivity and flexibility on the one hand and data security and manageability on the other. The more flexible IT is toward the user, the harder it is to manage the laptops. How much time does IT really want to spend re-imaging laptops because users frequently mess them up? At the same time, users are working longer hours from remote locations, so IT needs to make sure they are happy and productive. And if something goes wrong, users do not want to have to wait days before they can try out software that might help their work.

Now companies are planning to implement desktop virtualization, especially VDI, to reduce their cost in supporting laptops and desktops. However, they have not realized that they are just avoiding one problem by introducing another. Instead of managing physical machines, they will be managing a lot more virtual machine images.

When the MokaFive founding team was at Stanford looking at desktop virtualization trends, we realized that we needed to make sure the solution satisfied both the needs of the IT department as well as the needs of the end users. If the solution only makes one side happy, it will never be successful.

This is why we introduced a feature called User Personalization in our new version 2.0. A new policy allows IT to control whether an end user can personalize or customize the virtual machine image. Even if IT allows the user to change various Windows settings, install applications, and so on, IT still controls the base image: IT can update it, add new applications, etc. The changes in the base image and the customizations made by the user are merged when the user boots up the image. If the user messes something up, he just needs to revert his changes and go back to the base image. IT does not need to be involved at all.

By combining the User Personalization feature with the Targeting feature, we provide a very powerful image management solution for desktop virtualization deployments. With v2.0, you can have one single Windows XP image targeted to different groups with different policies. You can give one group more flexibility and another less. You can work on the next image release while everyone stays productive. When the release is ready, a few clicks in the management console get everyone updated.

MokaFive 2.0 now available!

It's a big week for the team here at MokaFive. As many of you know, MokaFive started as a project at Stanford to use virtual machines to make it easier to manage desktops. A little over a year ago we released our 1.0 product, which was a hosted solution. We got a lot of positive feedback from customers about our 1.0 product, but the number one customer request was the capability to run the MokaFive service in-house.

So this week, we announced 2.0! The 2.0 technology comes in the form of an enterprise-ready Desktop-as-a-Solution platform: MokaFive Suite. With 2.0, you can install and run MokaFive Suite in your data center, behind your firewall, and serve your end-point clients without depending on us. You can integrate with all your existing management solutions. You are in total control.

Enterprise computing is reaching a turning point as desktop virtualization technologies mature, answering business challenges that haven't been solved before. Our customers are innovating with the quick-to-deploy MokaFive Suite, taking advantage of mass user customization and policy-based security. We will share details of how different businesses (across healthcare, legal, finance, and other professional services) are solving their challenges - and the benefits they have achieved.

We have also added some exciting new capabilities on the client side. For a long time, we have been working behind the scenes on a new layering technology that dramatically changes the way you can deploy and manage virtual desktops. This technology allows you to separate the machine into layers that can be independently managed and updated. Among other things, you can now allow users to install their own applications, while you still manage and update a single golden image. That's right, your users can install any applications (even kernel drivers) and they will persist across rejuvenation and updates, as long as you set the policy to allow them to do so. This is a major breakthrough that removes one of the biggest barriers to widespread adoption of desktop virtualization, and MokaFive is the only solution on the market today that gives you that ability.

We also re-designed the client and it is now using a new disk format. The new format is much more reliable and has better performance, especially if you are running a LivePC off of a USB flash drive that can be suddenly removed. Users running v1.x LivePCs will need to manually migrate to 2.0 to take advantage of the new features. The migration process is pretty straightforward. Export the LivePC with v1.x Creator and then import it using the v2.0 Creator. Here is a detailed document to walk you through it [link]. Let us know if you need help.

Stay in touch with the MokaFive blog for information and perspective on the concerns and trends related to the modernization of enterprise desktop computing. We invite your feedback, questions and opinions - starting with the announcement of MokaFive Suite.

If you're on Twitter, follow us at http://twitter.com/MokaFive

Thursday, June 18, 2009

Virtualization: Hype vs Reality, Part II

And we're back with Part II of our series on what is hype and what is reality when it comes to virtualization. Here is our perspective on two more topics that are getting a lot of play lately:

Reality: Virtualization is cheaper than other traditional infrastructures


Obviously a major issue for anyone in any industry right now is cost. People are looking to save money in every way they can. In light of this, many vendors in the virtualization space are chiming in about how their products and solutions can help you cut costs. Is this true? For the most part yes. BUT it’s not quite so black and white, so let’s break it down.

A desktop virtualization deployment can save you money in terms of management and support, but it can potentially be more expensive than a traditional deployment - it all depends on how you approach it.

Right off the bat a VDI deployment costs 20-30% more from a CAPEX perspective than a traditional physical desktop deployment. However, once it’s up and running the deployment can make it much easier and cheaper to manage your IT infrastructure in the long run, *if* you do it right.

  • For instance, if you go with the model of creating one image for each user, the deployment will not scale and it will end up being just as expensive to store and manage the virtual desktops as it would in a traditional physical desktop deployment, if not more expensive.

  • To achieve real cost savings and make up the CAPEX expenditure, you need to leverage virtualization to make IT more scalable. What does this mean? Creating one golden image that goes out to all of your users or to large groups of users.

Hype: Remote display is the way to go

Remote display can work well, but only in very specific environments.

  • Remote desktop is fast over LAN, but slow over thin pipes.

  • Gigabit LAN to the server can achieve speeds nearly indistinguishable from local execution but at a much higher cost.

  • Low-latency WAN connections can get acceptable performance, but only if users are not running any graphically intensive applications.

  • High-latency WAN connections are not viable for real-world usage.

Basically what it comes down to is that there are fundamental limitations to the interactive performance of applications across low bandwidth and high latency links, and we are getting close to the limit. Remote desktop just does not work well over wireless networks or on laptops.

So what do we recommend? For applications that require a high refresh rate or rich graphics, a locally executed solution is always going to offer the best performance, no matter what. Remote execution will always face speed of light limitations even with the fastest of connections.
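
To put a number on the speed-of-light point, here is a rough back-of-the-envelope calculation; the distance and the assumption of one round trip per interaction are ours, not measurements.

    # Back-of-the-envelope latency estimate (distance and overheads are assumptions).
    distance_km       = 3000      # roughly a cross-country path, one way
    light_in_fiber_km = 200_000   # light travels ~200,000 km/s in fiber (~2/3 of c)

    rtt_ms = 2 * distance_km / light_in_fiber_km * 1000
    print(f"Physics-only round trip: {rtt_ms:.0f} ms")  # ~30 ms before any processing

    # Every click-to-pixel interaction over that link pays at least this round trip,
    # and real deployments add routing, queuing, and encode/decode time on top of it.
    print(f"Upper bound on acknowledged interactions: {1000 / rtt_ms:.0f} per second")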

Wednesday, June 3, 2009

Virtualization: Hype vs Reality

These days there is a lot of talk about virtualization, desktop and otherwise. With all this chatter it is understandable that people are confused about what is what. We thought it would be useful to sort through some of it and separate the hype from the reality.

Hype: Virtualization as the hot new, “it” technology

There is a huge amount of hype around virtualization, and it is being positioned as a brand-new cutting edge technology to solve all your IT needs. But virtualization is far from a new thing. It's been around a very long time, in computer time at least. Virtualization is a core concept in computer systems and has been in use since at least the days of the IBM Mainframes. The remote desktop model of centralized execution is a throwback to the 1970s with dumb terminals connecting to the big mainframe in the back room. (Take the old IBM literature, change the names and you could pass it off as a VDI architecture diagram.) As we develop new technologies and approaches, desktop virtualization has evolved and become more sophisticated, and thus more useful – providing us today a real solution to serious computing needs.

Hype: Virtualization provides poor performance

The second common myth we keep hearing about virtualization is that it is slow. People think that using virtualization implies a negative performance impact. The truth is a bit more complicated.

Virtualization adds a level of indirection, which implies some kind of overhead. The two primary considerations for system performance are CPU (processing) overhead and IO overhead. It makes sense to separate these considerations:

  • CPU overhead: With modern virtual machine monitors running on modern CPUs, the CPU overhead is insubstantial. Some operations can be slower with virtualization (for example, system calls or page table manipulation), but modern VMMs are now generally able to work around these issues, leveraging techniques like dynamic recompilation and paravirtualization. Intel and AMD have also added hardware virtualization support in their recent CPUs. It depends on the particular workload, but the CPU overhead from virtualization is typically a few percent at most.

  • IO overhead: IO intensive applications can see a bigger performance hit due to virtualization because the extra indirection can be more costly. However, IO performance hits can often be reduced or eliminated by tuning the system.

While factors like these need to be taken into consideration to get optimum performance when using desktop virtualization, there are other advantages that offer immediate performance benefits:
  • Virtualization enables performance optimizations at a different level. The extra level of indirection inherent in virtualization can be used to improve overall system performance by optimizing at a whole-system level. For example, virtualization allows you to share hardware resources and quickly adjust based on demand, leading to better overall system performance.

  • The VMM can even use compression and caching to improve the IO performance beyond its native performance levels. We've seen numerous examples of applications that run faster under virtualization due to these effects.

  • A virtual machine can actually boot faster than a physical machine because the load order is predictable and the VMM can rearrange the blocks.

  • On the server side, it is easy to migrate VMs, or quickly launch new ones, to handle changes in load.

  • With desktop virtualization, you can boot from a golden image every time, eliminating slowdown from Windows rot. Also, because you can rejuvenate the system image, you don't need to run virus scans of the system image. Using anti-virus software typically slows the machine more than virtualization does.
Basically, running on a VMM is like running on a different computer architecture. If you take an application that was tuned for one architecture and run it on another, sometimes you will take a performance hit, but through tweaking and tuning you can usually erase the deficit. Virtualization is no different. A virtualized architecture also opens a bunch of new possibilities that can improve performance.

Hype: Virtualization uses less energy

Virtualization actually adds overhead, so cycle-for-cycle it will usually consume more power rather than less. BUT power savings with virtualization ARE possible, leading to greater energy efficiency. Here’s how:
  • Consolidating many old, underutilized servers into a single server can save a lot of energy.

  • Power savings can also be achieved simply by moving to newer, more energy efficient machines.
There are also other variables that affect whether or not implementing desktop virtualization will save you power. If you are moving desktops into the data center, it depends on machine utilization: if you have many desktop machines sitting idle all the time, you will use less power, but if the endpoint machines are already fairly well utilized, the move will impact your power usage more. Implementing power-saving modes on desktops is one thing you can do to move toward power savings in any environment.

Really what it comes down to is that power and cooling in the data center are very expensive, regardless of your architecture. It is necessary to provision your data center to handle worst-case scenarios of peak load, but since most loads vary greatly, you are most likely going to end up either massively over-provisioning or risking unacceptable performance and downtime during peak periods. That's a reality - virtualization or not.

Stay tuned - we'll be debunking some more of the myths surrounding virtualization in coming posts. This is obviously an important topic and one that is hot on everyone's minds right now. For more thoughts and another perspective, check out Scott Key's recent post on virtualization.info.

Thursday, April 9, 2009

MokaFive is a finalist for RSA Conference 2009 "Most Innovative Company"


MokaFive was recently named as a top ten finalist for the RSA Conference 2009 "Most Innovative Company" award.  Thanks to all the users who voted for us!  We will be at the RSA Conference here in San Francisco vying for the title.  Come by and see us at the Innovation Sandbox on Monday, April 20th.  We will be showing off the latest MokaFive product with some cool technology demonstrations, like instant recovery from zero-day infection (like the Conficker worm) without losing data, keylogger protection so you can compute securely on a potentially insecure host, and secure remote kill so you can disable LivePC images and wipe data remotely.

Monday, January 5, 2009

Desktop Virtualization short video on ZDNet

Here's a nice short (<3 min) whiteboard presentation I did on desktop virtualization for ZDNet.  If you want a super high-level view of desktop virtualization that contrasts different approaches, this provides a good "executive overview".  Let me know what you think.