After we introduced our Layering solution as part of our 2.0 product in June, a lot of people have been talking about Virtual Layers. (Check out Gabe Knuth's "What is Layering, and why does it matter?" post for a good overview.) Now that the feature has been in customers' hands for a few months, I have a better perspective on what works well with layered management (and what doesn't), and why.
Reaction to Layering
The key realization behind layering is that different people are responsible for different parts of the desktop, and you want to manage the pieces independently. Layering provides a very easy way to combine different managed pieces into a single cohesive environment.
The reaction after we released layering was very positive. Everyone I've talked to loves the story and wants to move to a world where they can manage their desktops using layers. Since we released our product in June, a number of other companies have started talking about layering and layer-based management. Some have even announced products in this space. However, MokaFive is still, five months later, the only layering solution that is commercially available. On the customer front, layering has generated a huge amount of interest in MokaFive and is one of our key differentiators.
While we've had great success with customers using layering, some people are skeptical about whether layered management can actually work in their real-world environment. They've heard claims like this before with other technologies and have been disappointed.
Layering and Application Virtualization
The desire to manage independent pieces is not new, and there have been products that have tried to solve this problem for a very long time. Application virtualization is one such technique. Application virtualization allows you to wrap up an application and all its dependencies into a single bundle that can be easily managed. At least, that's the promise.
Most of our customers and prospects have played with application virtualization, and some are deploying it today for some of their applications. But everyone who has used application virtualization in real scenarios is aware of its limitations.
First, you as the admin need to sequence the applications, which is easy to do wrong and very, very hard to do right. You need expert knowledge of the application's internal structure, of Windows itself, and of the ins and outs of the particular packaging format. Sequencing is a black art, and it takes a long time to learn how to do it well.
The second problem with application virtualization is that it is a really, really hard problem, and no one has solved it completely. To completely virtualize an application, you need to correctly emulate every operating system component that the application interacts with. This is an exceedingly difficult problem, especially since the OS changes over time and applications rely on undocumented behavior and bugs. A perfectly compatible application virtualization solution would require a reimplementation of most OS components. (Once you do that, you don't even need Windows anymore. For those who are familiar with WINE on Linux, WINE is application virtualization. It works with many applications but has some serious restrictions.) In fact, even the very best application virtualization products top out at around 90% compatibility. That leaves 10% of applications that cannot be virtualized.
The third issue with using application virtualization is that it provides isolation, which is not always what you want. When you package an application, it becomes independent of the rest of the system, so you can run it in a wider variety of environments. However, that also limits interaction with the rest of the system. When you try to paste your Visio diagram into your Word document and those applications are separately virtualized, it doesn't work because the applications are isolated. You have to do a bunch of work to explicitly open conduits between the virtualized applications, and you have to do this for all applications. This goes against the whole point of having a single cohesive environment where everything can interact.
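To see why isolation breaks this kind of interaction, consider a toy model where each virtualized application gets its own private view of the registry. This is a deliberately simplified sketch, not how any particular product works; real application virtualization engines hook the actual Windows registry and COM machinery, and all the names here are made up.

```python
# Toy model of why isolated virtualized apps can't interact.
# Illustrative only -- real products hook the Windows registry and COM.

system_registry = {}   # what a natively installed app would register into

class VirtualizedApp:
    def __init__(self, name):
        self.name = name
        self.private_registry = {}   # each bundle gets its own isolated view

    def register_com_server(self, clsid):
        # Registration lands in the private view, invisible to other apps.
        self.private_registry[clsid] = self.name

    def lookup_com_server(self, clsid):
        # Lookup sees the private view plus the real system, but NOT
        # another app's private view -- that is the isolation boundary.
        return self.private_registry.get(clsid) or system_registry.get(clsid)

visio = VirtualizedApp("Visio")
word = VirtualizedApp("Word")
visio.register_com_server("{Visio.Drawing}")

# Word tries to embed the Visio object during paste, and fails:
print(word.lookup_com_server("{Visio.Drawing}"))   # None -> paste breaks
```

Word's lookup fails because Visio's registration exists only inside Visio's private view; "opening a conduit" between two virtualized applications means punching a hole through exactly this boundary.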
The reason I bring up application virtualization is that its limitations are fairly well understood, and people assume layering works the same way. But the way we did layering avoids these problems.
Why Does Layering Work?
One of the key differences of our layering solution is that a layer is tied to a particular base image. That means we don't need to solve the problem of getting the same layer to work in all environments; we just need to make it work in the one base image it was installed on, plus any delta updates to that base image. This means we don't need to know the dependencies between applications to bundle them up or verify they exist, we don't need to normalize paths or registry keys to work on different base OS configurations, and we don't need to worry about compatibility between different base images. So we avoid a huge set of problems.
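To make the composition model concrete, here is a minimal sketch of how reads might resolve through a stack of layers sitting over a single base image. This is purely illustrative and not MokaFive's actual engine; the layer names and file paths are hypothetical.

```python
# Minimal sketch of layered file lookup -- NOT MokaFive's actual engine.
# Models the core idea: a stack of layers over one base image, where
# reads resolve top-down and each layer is managed separately.

class Layer:
    def __init__(self, name, files):
        self.name = name      # e.g. "base", "it-apps", "user"
        self.files = files    # path -> contents

def resolve(path, layers):
    """Return the contents of `path` as the composed desktop sees it.

    `layers` is ordered from highest priority (user) down to the base image.
    """
    for layer in layers:
        if path in layer.files:
            return layer.files[path]   # first hit wins
    raise FileNotFoundError(path)

base = Layer("base", {r"C:\Windows\notepad.exe": "v1"})
it_apps = Layer("it-apps", {r"C:\Apps\vpn.exe": "corp-vpn"})
user = Layer("user", {r"C:\Users\me\prefs.ini": "dark-mode"})

stack = [user, it_apps, base]                    # highest priority first
print(resolve(r"C:\Apps\vpn.exe", stack))        # served from the IT layer

# A delta update to the base image doesn't touch the higher layers at all:
base.files[r"C:\Windows\notepad.exe"] = "v2"
print(resolve(r"C:\Windows\notepad.exe", stack)) # picks up v2 automatically
```

Note how swapping in an updated base requires no changes to the layers above it, which is exactly the property that delta updates rely on.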
Layering is trying to solve a different problem than traditional application virtualization. Application virtualization is about isolating applications so they can be delivered easily and will run the same way in different environments. Layering is in many ways the opposite of isolation - it is about combining pieces to form a single cohesive whole, while maintaining the ability to manage each layer individually.
What about delta updates? Couldn't those break layering, since the base image changes? Theoretically yes, but we have not run into this problem in customer deployments. The reason is that most updates do not need to modify higher layers. This makes intuitive sense: installing a hotfix or service pack usually doesn't search your hard drive for other programs and files and modify them. Programs that do some variation of this (e.g., an upgrade to a new version that converts data files to a new format) require special consideration in a layered management environment.
The other issue that can arise with layering is conflicting updates between layers. Say you have applications that both the user and the administrator can update. When they install conflicting versions, whose version wins? The layering engine is flexible enough to let you specify this at a fine-grained level, but that can lead to a confusing user experience, as some user changes will persist while others will not. The solution is to be structured about who is responsible for what. For example, the administrator controls a core set of applications, operating system files, and the VPN, while the user can keep their own personalizations and applications but cannot change the components controlled by the administrator. With some planning, you can avoid conflicts between the layers. We provide some best practices for our customers that have worked well in avoiding conflicts.
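As a rough sketch of what a fine-grained conflict policy might look like, the snippet below decides the winning layer per path. The patterns and rules are hypothetical examples of the division of responsibility described above, not our actual defaults or configuration format.

```python
# Illustrative per-path conflict policy -- hypothetical rules, not
# MokaFive's actual configuration format.

import fnmatch

# Ordered rules: the first matching pattern decides which layer wins
# when both the admin layer and the user layer have changed a path.
POLICY = [
    (r"C:\Windows\*", "admin"),   # OS files: admin always wins
    (r"C:\Apps\*",    "admin"),   # core IT-managed applications
    (r"C:\Users\*",   "user"),    # personalization persists
]
DEFAULT_WINNER = "admin"          # conservative fallback

def winner(path):
    """Decide which layer's version of `path` survives a conflict."""
    for pattern, who in POLICY:
        if fnmatch.fnmatch(path, pattern):
            return who
    return DEFAULT_WINNER

print(winner(r"C:\Windows\system32\drivers\vpn.sys"))  # admin
print(winner(r"C:\Users\me\AppData\chat-client.exe"))  # user
```

The point of an ordered, per-path scheme like this is that the outcome of any conflict is predictable in advance, which is what keeps the user experience from becoming confusing.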
There are two basic properties that ensure layered management works correctly. The first is that changing lower layers should not require changes to the data on higher layers; if it does, we need to account for those dependencies. The second is that you need to specify a policy for conflicting updates between layers, and some care has to be taken in how you specify that policy so that it operates in an intuitive way while still gaining the benefits of updates and rejuvenation. We have a set of recommended layering configurations that avoid the problems while maximizing the benefits, and our customers have been using them with great success so far.
Looking Forward
Layering has been a big success for MokaFive and continues to be one of our key technology differentiators. We designed the layering solution with specific goals in mind, so we avoided most of the downsides of other approaches. Now that we've been through actual customer deployments, we have a good idea of how to use layering most effectively for management.
We are continuing to push ahead on layering, using our experience with deployments to improve the product and working on some great new features that take advantage of the layering engine. There are some exciting new features we are beta-testing today for release next year that are going to raise the stakes on layering. Stay tuned!