In my previous post, Process Virtualization … Can it Help?, I discussed a design carried out here at Kernel Drivers. In this post, I will describe that design in more detail, as well as an alternative design in which both the user mode and kernel mode code run within a scaled-down hypervisor.
I’ll begin by covering the approaches to isolation through virtualization and the issues that arise when running under a custom hypervisor. I’ll then compare these 2 approaches and the advantages and pitfalls of each. To start the discussion, I’ll go over some, or more precisely 2, of the various models of virtualization and how they operate, in general. I will not be covering the details of implementing a custom hypervisor to support these frameworks, since there is plenty of information about the basics on the web, as well as the voluminous Intel documentation on the subject.
The end goal of Process Isolation is to prevent, or at least control, the modifications that a process or process group can make to the local system. The idea behind a virtualization approach, then, is to run the user mode code in a hypervisor environment, isolating it from the host system. Given this concept, we have 2 choices for running the kernel mode portion of the process: the kernel can run in the same hypervisor environment, or requests can be shipped off to the native kernel running on the host system. Both approaches have their own advantages as well as problems to surmount. Let’s start with the approach of running the user mode code in a hypervisor environment along with the kernel mode code.
This approach, with the kernel running within the hypervisor environment, does have its advantages. One advantage is a single virtual address space, so nothing needs to be translated or captured when going back and forth between user and kernel mode. The hypervisor would be a scaled-down version of what you would encounter with such implementations as Microsoft’s Hyper-V, reduced to supporting a single process or set of associated processes. In this model, access to resources such as disk storage and memory must be managed by the hypervisor, either through a shared model or a separate, shadowed model. As well, the host kernel is unaware of the process running within this hypervisor, so additional work must be done to ensure the process interacts as expected with the host system, assuming this is required.
An alternative to the completely enclosed process group is to share the native kernel on the host with the isolated process group. In this model, the host kernel is aware of the running process, or at least it is easier to make the host kernel aware of the isolated process. One disadvantage is the separate virtual address space: complications arise in situations where the kernel calls directly back into the user mode VA space, such as during thread creation, among other scenarios. As well, disk resources are shared between the isolated process and the host system, which introduces additional complications in ensuring isolation. But this model does offer a clean line for implementing such an isolation barrier: the system calls made into kernel mode.
The 2 models described above are just that: 2 models in a world of possibilities offered by virtualization technology. There are other possibilities, such as those implemented in Blue Pill or DeepSafe-like designs, but the 2 described here offer a clean approach to isolating specific processes or applications within a host system, comparable to the non-VT approach described in my series on Security Through Process Isolation. My next post will dig deeper into some of the problems encountered with each design, as well as its advantages. Stay tuned …