Security through Process Isolation … What does it mean?
Over the next few months, I’ll be posting several entries on Process Isolation covering the various aspects of what is required, the problems you will encounter, and how to get around them … mostly.
In this first post, I’ll cover some of the main sub-systems needed to implement a successful Process Isolation framework, as well as start to dive into the requirements from both kernel and user mode processing. But most importantly, I’ll describe the motivation behind implementing a Security Through Isolation framework. OK, let’s get going …
For this initial discussion, let’s consider an internet browser such as IE or Chrome. With these processes, a user’s system is typically infected by visiting a malicious web site that downloads some sort of code to execute on the local system. This could be anything from updating the browser’s startup URL to running malware that exploits the recently disclosed BadUSB vulnerability. In any case, the browser is the context in which the initial exploit executes, and if the actions of this process can be isolated, the exploit can be contained. In these terms, isolation entails ensuring that any data written to disk, such as files or registry data, is not accessible from a non-isolated process on the system, such as a system service. It also entails ensuring that the browser process cannot open a non-isolated process and manipulate that process’s memory or modify code running in it. All of this can be handled through a solid Process Isolation framework, but getting it right is far from trivial …
So before we begin let’s cover some general concepts and assumptions that will be made throughout these posts.
In order to fully isolate a given process, call it P1, you cannot isolate just P1’s accesses. In general, any process launched on Windows will create a set of child processes as well as pass work items off to other processes such as system services. Unless all accesses from all child processes are isolated as well, the process will quickly crash due to inconsistent data. Thus we won’t consider isolating a single process but rather a Process Group, or PG, which consists of a set of N processes. In addition to isolating a PG, decisions need to be made on such things as:
- Persistence of the modified content. Will the design maintain the shadow store for all modified content or will it be cleared at the termination of the PG?
- How will the design maintain the modified content? Will it create a shadow store for file system and registry modifications or possibly maintain the data in memory?
- How do objects such as Named Pipes, Sockets, Handle inheritance and IPC get handled?
For the discussion at hand, the design will incorporate a shadow name space for file system and registry modifications; it will obfuscate Named Pipe and socket accesses; Handle inheritance will be filtered through documented APIs; and IPC will be handled through custom ACEs on the DACL.
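The PG notion above, where every child spawned by a member process automatically joins the group, can be sketched in miniature as user-mode C. The table and callback names here are illustrative only; a real implementation would track membership in the kernel via a process-creation callback such as PsSetCreateProcessNotifyRoutineEx:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PG 64

/* Hypothetical PG membership table: the PIDs currently being isolated. */
static unsigned long pg_pids[MAX_PG];
static size_t pg_count;

static int pg_contains(unsigned long pid)
{
    for (size_t i = 0; i < pg_count; i++)
        if (pg_pids[i] == pid)
            return 1;
    return 0;
}

/* Invoked on process creation: a child of an isolated parent joins the PG,
 * so its accesses are redirected by the same framework. */
static void on_process_create(unsigned long parent, unsigned long child)
{
    if (pg_contains(parent) && pg_count < MAX_PG)
        pg_pids[pg_count++] = child;
}
```

With this shape, launching the browser seeds the table with one PID, and every descendant inherits membership transitively.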
The first aspect of Process Isolation to cover is file system name space isolation. Typically this is achieved through the use of a Microsoft File System Mini-Filter driver, which will grow into a layered file system driver because of file object ownership. The base paradigm for the name space isolation is to design a Copy-On-Write (CoW) filter driver that allows read-only access to underlying objects within the name space until they are modified. Once modified, an object is migrated to a shadow store location where all future accesses to the object are performed, leaving the original file untouched. Note that some files cannot be handled in this manner; these include DLLs maintained on the “Known DLLs” list as well as other system modules, but more on this later.
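The core of the shadow-store redirection is a path mapping from the base name space into the shadow name space. A minimal sketch of that mapping follows; the `SNS_ROOT` prefix is a hypothetical per-PG shadow root, not a real Windows path:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical shadow-store root for one PG; a real filter would build this
 * from a per-volume, per-group configuration. */
#define SNS_ROOT "\\ShadowStore\\PG1"

/* Map a base-name-space path to its shadow-name-space equivalent.
 * Returns the number of characters written, or -1 on overflow. */
static int map_to_sns(const char *bns_path, char *out, size_t cap)
{
    int n = snprintf(out, cap, "%s%s", SNS_ROOT, bns_path);
    return (n < 0 || (size_t)n >= cap) ? -1 : n;
}
```

Reads of unmodified files pass straight through to the original path; once a file is migrated, the filter reopens it under the mapped shadow path and all subsequent I/O targets that copy.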
This approach requires the file system filter driver to maintain potentially 3 instances of a given object:
- The top level file object reference which is exposed to the caller and will be owned by the filter driver
- An instance which points to the original object and will be called the Base File Object, or BFO
- An instance which points to the modified version of the object and will be called the Shadow File Object, or SFO
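The three instances above might be modeled with a per-object context like the following; the struct and field names are illustrative, not from any Microsoft API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical per-object context a CoW filter might track. */
typedef enum { NS_BASE, NS_SHADOW } ns_location;

typedef struct isolated_object {
    void *top_level_fo;  /* file object exposed to the caller, owned by the filter */
    void *bfo;           /* Base File Object: the original, unmodified file */
    void *sfo;           /* Shadow File Object: the migrated copy */
    ns_location where;   /* which name space currently backs the object */
} isolated_object;

/* Resolve which underlying instance should service an I/O request. */
static void *backing_object(const isolated_object *obj)
{
    return obj->where == NS_SHADOW ? obj->sfo : obj->bfo;
}
```

Every request arriving on the top-level file object is re-targeted at whichever backing instance is live, which is exactly the simplification discussed next.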
In general, the design can get away with the following simplifications which will be followed throughout this discussion:
- Objects which are files will only require 2 instances at any time: the top level file object exposed to the caller, and a pointer to either the BFO or the SFO, but not both. This simplification holds because a given file resides either in the Base Name Space (BNS) or, once migrated, within the Shadow Name Space (SNS); there is no need to handle both for a file instance.
- Objects which are directories may require all 3 instances. In the case where a given directory is modified, the design would not want to migrate all of the content of the BNS to the SNS for performance reasons. Therefore the design can maintain both pointers if the directory object has been migrated. If the directory object is created within the PG then only the SNS instance would need to be maintained because there is nothing in the BNS to merge for the directory object.
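Keeping both pointers for a migrated directory matters most during enumeration, where the BNS and SNS listings must be presented to the caller as one directory. A minimal sketch of that merge, assuming both listings arrive sorted and that an SNS entry shadows a BNS entry of the same name:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Merge a directory's BNS and SNS listings for enumeration; a real filter
 * would do this incrementally while servicing directory-query requests. */
static size_t merge_listing(const char **bns, size_t nb,
                            const char **sns, size_t ns,
                            const char **out)
{
    size_t i = 0, j = 0, k = 0;
    while (i < nb && j < ns) {
        int c = strcmp(bns[i], sns[j]);
        if (c < 0)      out[k++] = bns[i++];
        else if (c > 0) out[k++] = sns[j++];
        else { out[k++] = sns[j++]; i++; }  /* shadow copy wins */
    }
    while (i < nb) out[k++] = bns[i++];
    while (j < ns) out[k++] = sns[j++];
    return k;
}
```

Note this sketch ignores deletions; a real design also needs tombstone entries in the SNS so a file deleted inside the PG does not reappear from the BNS.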
For files, the CoW processing will fall into 2 major groupings.
- The first group contains those modifications where the CoW can be performed in-line with the operation. These include operations such as changing a file’s metadata or performing cached IO on the file. In these cases, resources have not been previously acquired, as they are for a paging write operation, and therefore the file system filter can perform the CoW synchronously with the operation. Of course, optimizations can be applied so the caller’s request is not blocked for long periods of time; for example, migrating first only the portion of the data being modified, whether metadata or real data, so the new data can be overlaid on the file and the initial operation completed.
- The second group is the set of modifications where the file is modified while resources are already held, or scenarios where the routine that pre-acquires the resources cannot easily be unwound when failures occur.
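The first, in-line group can be modeled in miniature as follows. The `cow_file` type, the fixed-size buffer, and the migration flag are all hypothetical stand-ins for the filter’s real shadow-store plumbing; the point is only the ordering, migrate on first modification, then apply every write to the shadow copy:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of group-one (in-line) CoW. */
typedef struct {
    char data[64];
    int  migrated;  /* 0 = still backed by the BNS, 1 = backed by the SNS */
} cow_file;

static void cow_write(cow_file *shadow, const char *base_contents,
                      size_t off, const char *buf, size_t len)
{
    if (!shadow->migrated) {
        /* First modification: copy the base contents into the shadow store. */
        strncpy(shadow->data, base_contents, sizeof shadow->data - 1);
        shadow->migrated = 1;
    }
    /* All writes, now and later, land in the shadow copy only. */
    memcpy(shadow->data + off, buf, len);
}
```

The base contents are never touched, which is the invariant the whole framework is built on.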
For directories, migration is relatively straightforward. The triggers are cases where a file is created in, deleted from, or migrated to a given directory. In these cases, the branch leading to the directory is created within the SNS and the target file processed. Of course, there are cases where directory metadata, or possibly an Alternate Data Stream, is updated and the directory itself must be migrated to ensure the BNS is not modified.
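Creating the branch leading to a migrated target amounts to walking the BNS path and materializing each ancestor directory under the shadow root. A sketch, where `ensure_dir` is a hypothetical stand-in for whatever create-directory primitive the real filter uses:

```c
#include <assert.h>
#include <stddef.h>

static int branch_count;  /* counts shadow directories "created" by the stub */

static void ensure_dir(const char *path)
{
    (void)path;       /* stub: a real filter would create path in the SNS */
    branch_count++;
}

/* Walk a BNS path such as \a\b\c\f.txt and create each ancestor
 * directory (\a, then \a\b, then \a\b\c) in the shadow name space. */
static void create_sns_branch(const char *bns_path)
{
    char buf[260];
    for (size_t i = 0; bns_path[i] != '\0' && i < sizeof buf - 1; i++) {
        buf[i] = bns_path[i];
        if (i > 0 && bns_path[i] == '\\') {  /* component boundary */
            buf[i] = '\0';
            ensure_dir(buf);
            buf[i] = '\\';
        }
    }
}
```

Only the branch is created, not the sibling contents of each directory; that is the performance simplification noted earlier for migrated directories.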
We’ll get more into these specific cases and the implementation details later, but for now the basics have been outlined or at least mentioned for isolating a PG’s file system aspects.