Security Anti-Pattern: Status Quo Encapsulation
First, a clarification: in my last post I said that least privilege is the ultimate goal of most MAC advocates, but that isn't entirely true; I accidentally ignored a large portion of MAC advocates (mostly those that predate me!). There are several different models which are commonly implemented and thought to be correct. In the government sector that model is Multi Level Security (MLS), which is in no way least privilege, but that is another topic altogether. In the civilian sector, status quo encapsulation is a popular model to implement, and it is what I'll be talking about today.
An anti-pattern is a commonly reinvented bad solution to a problem. Security has plenty of these; The Six Dumbest Ideas in Computer Security outlines several that are fairly common, but I'm going to go into detail about one in particular that several semi-popular security mechanisms adopt. Status quo encapsulation is a phrase I started using a while back (I've never seen it used elsewhere; in fact, a Google search for “Status quo encapsulation” gives zero results right now). It basically means taking a system and using a security mechanism to encapsulate whatever the system is already doing. Security mechanisms such as AppArmor and grsecurity that tout easy-to-use 'learning mode'-style policy writing encourage this way of writing policy and in many ways give a false impression of the level of security afforded by such systems.
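To make that concrete, here is a rough sketch of what the learning-mode workflow looks like with AppArmor's aa-genprof/aa-logprof tools. The daemon name, paths, and rules below are made up for illustration; the point is the shape of the result: a profile that is essentially a transcript of whatever the program happened to do while being watched.

    # Hypothetical learning-mode session; program and paths are illustrative.
    $ aa-genprof /usr/sbin/somedaemon    # put the program into learning (complain) mode,
                                         # exercise it, then let the tool turn the logged
                                         # accesses into profile rules

    # The generated profile records what the daemon did, not what it should be allowed to do.
    /usr/sbin/somedaemon {
      #include <abstractions/base>
      /etc/somedaemon/*.conf       r,
      /var/log/somedaemon.log      w,
      /var/spool/somedaemon/**     rw,
    }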
There are several problems with this. First, like I said in my last post, applications aren't engineered with security in mind. Merely taking the accesses that an application attempts and enumerating them in a security policy will not necessarily give you a useful policy. In some applications the problem is poor architectural decisions. For example, Exim, a popular mail transport agent (SMTP server), runs as a monolithic process, giving the same process access to both the SMTP network port and the mail directories in users' homes. This means that if a vulnerability exists, an attacker can directly access your users' mail; no privilege escalation or further attacks are necessary. Sometimes the implementation decisions are poor. For example, it isn't uncommon for administrators to use something like mod_auth_shadow to let Apache clients authenticate with their system password. While this might seem like a good idea (centralized authentication, fewer passwords for users, etc.), the implementation allows the Apache process to directly read the /etc/shadow file. So if there is a vulnerability in Apache, or more likely in any web application running on Apache, the attacker will have your system's entire shadow file. Strictly speaking, no access control policy can mitigate problems like these. Application developers and system administrators alike must be aware of the architectural and implementation decisions being made in order to have any hope of a secure system. In security models that encapsulate what the system already does, these problems are hidden behind a security policy that provides no protection from these attack vectors.
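To see how status quo encapsulation interacts with a decision like mod_auth_shadow, imagine running a learning-mode tool against that Apache setup. The fragment below is illustrative AppArmor-style syntax, not output from a real run, but it shows the issue: the shadow read is observed, so it gets a rule, and the bad implementation decision is now blessed by the policy.

    # Illustrative fragment of a 'learned' profile for Apache with mod_auth_shadow.
    /usr/sbin/apache2 {
      /var/www/**       r,
      /etc/shadow       r,   # learning mode saw this read, so the policy permits it
    }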
The second problem is that the so-called 'learning mode' policies can't be written while the system is idle and disconnected from networks (and untrusted users). The policies can't represent the kinds of interactions that will be present on the system in production without being written on machines that are in production. The most obvious problem here is that the policies are being written on a system that is not in a known good state. A short story to illustrate the problem: back in my IT days during college, the guys who did Windows installs on workstations would do a stock install of Windows while the machine was connected to the network and then install service packs, virus scanners, etc. afterwards. There was an RPC worm for Windows (I can't remember which one) that would infect the machines before they even finished booting up for the first time; the machines were never in a known good state and couldn't even have a virus scanner installed before they had been exploited. The parallel for security policies is that once the machine is connected to the network and exposed to untrusted users, there is no way of knowing whether the machine has already been exploited before the security policy goes active. Without inspecting the policy closely, which is very difficult on such complex systems, the security policy ends up hiding the exploit and any access the exploit gained beyond normal system access. Even worse, the policy will continue to allow such exploits even after it becomes active.
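The same mechanics apply to a compromise that happens before or during learning. The fragment below is again hypothetical, but nothing in a learned profile distinguishes rules generated by legitimate behavior from rules generated by an intruder's activity; they all just look like observed access.

    # Hypothetical profile fragment learned on an already-compromised host.
    /usr/sbin/somedaemon {
      /var/spool/somedaemon/**     rw,
      /tmp/.hidden/**              rw,   # files dropped by the intruder, dutifully recorded
      /usr/bin/perl                ix,   # interpreter for the attacker's script, now whitelisted
    }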
The last problem I'm going to talk about today has ramifications for both the security and the functionality of the system. Without static analysis of the system, which is basically automated (or manual) analysis of the source code, it is impossible to see what interactions are possible. Status quo encapsulation only shows what access happens on the running system and ignores the possible access. Functionally, it is very difficult to exercise everything an application might do while it is running. I've heard more than one person complain that they were unable to access some folders in their mailbox after using learning mode to write a policy. Why is this? While they were in learning mode they accessed only some of their mail folders. Once the policy was done, the folders they had not accessed were inaccessible because the policy didn't know about them. This is a very common problem, since exercising the full range of possible access on a system is difficult and error prone (accidentally accessing too much, etc.). The only solution is to manually inspect and fix the policies, but this loses the only benefit of such systems, ease of use, while retaining all of the downsides. The security ramifications are closely related but often ignored. Most applications have error paths which aren't exercised very often on a normally running system, and some of these error paths are security related. Consider an app that fires off a warning to syslog or tries to email the admin when something bad happens, like an exploit attempt. If those error paths were not exercised during policy learning, the admin may never receive the warning. While not as serious as the prior issues, it is still important to recognize deficiencies like these.
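The mail folder complaint maps onto something like the following (illustrative only, assuming a Maildir-style layout and an IMAP server such as Dovecot): only the folders that happened to be opened during learning made it into the profile, so everything else is denied once the profile is enforced, and the unexercised 'warn the admin' path can be blocked the same way.

    # Illustrative: rules learned while the user only ever opened the inbox and Sent.
    /usr/lib/dovecot/imap {
      owner /home/*/Maildir/cur/**       rw,
      owner /home/*/Maildir/.Sent/**     rw,
      # No rules for .Archive, .Work, etc., so opening those folders now fails.
      # Likewise, an error path that tries to run sendmail to alert the admin
      # fails if it was never triggered during learning.
    }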
As I mentioned before, there are many different security models that may fit your needs. Probably one of the most important today is the SELinux targeted policy, which specifically targets network daemons and lets local users do whatever they want. This model has the specific goal of confining applications that are exposed to untrusted network resources and preventing them from harming the rest of the system. It is a lightweight model in that most of the system can run relatively unaffected. Status quo encapsulation, however, is essentially the security model for people who don't know or don't have specific security goals and instead want to rely on their system already being correctly configured and secure, using MAC as a stopgap to prevent additional access from being granted. This lack of real security goals, in addition to the issues outlined above, makes this type of model, and the mechanisms that implement and encourage it, harmful for security.
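For contrast, the scope of the targeted model is easy to see on a stock SELinux system; the output below is trimmed and approximate, but the idea is that ordinary user sessions run unconfined while the network-facing daemon is locked into its own domain.

    $ id -Z                                   # an ordinary login shell is unconfined
    unconfined_u:unconfined_r:unconfined_t:s0
    $ ps -eZ | grep httpd                     # the web server runs in its own confined domain
    system_u:system_r:httpd_t:s0    2412 ?  00:00:01 httpd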