Joshua Brindle

How to Win At Security

Web browsers, security and Google Chrome

Securing web browsers has always been a little tricky. With so many web applications in use today, including corporate intranet sites, webmail, and other services holding confidential or proprietary information, it is always a bit troublesome that web browsers essentially run in a single security domain. The last thing I want is for a teller at my bank to visit some site that ends up reading bank info from another tab.

There have been several improvements in the web browsing space, though. Microsoft Internet Explorer has protected mode, but that doesn't use system-based access control to enforce separation of internal web pages from external ones, for example. On Linux we've started using nspluginwrapper to load plugins (Flash, whatever) into a separate process. This is particularly nice on SELinux systems, since we can transition those plugins into a domain that can't do much, such as read files in home directories or access the network. Dan Walsh has a nice writeup about this on his blog.
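As a rough sketch, the policy for that kind of plugin confinement might look like the following. The type and domain names here are made up for illustration; the actual Fedora policy uses its own names and interface macros.

```
# Hypothetical SELinux policy sketch for confining a browser plugin.
# All type and domain names are illustrative.

type nsplugin_t;          # domain the plugin process runs in
type nsplugin_exec_t;     # label on the plugin wrapper binary

# When the browser domain (mozilla_t) executes a file labeled
# nsplugin_exec_t, automatically transition into the confined
# nsplugin_t domain.
domain_auto_trans(mozilla_t, nsplugin_exec_t, nsplugin_t)

# The point is what we leave out: no rules like the following are
# granted, so the plugin cannot read home directories or use the
# network even if it is exploited.
#   allow nsplugin_t user_home_t:file read;
#   allow nsplugin_t self:tcp_socket create;
```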

That still doesn't separate sites of differing security domains (my bank, joke site my friend sent in an email, the company sharepoint server, etc) into separate processes that cannot interact with each other.

I had a customer once that actually augmented Firefox to be a multi-level browser. This was a Trusted Solaris solution and it really didn't address the problem: all of the sites were still inside the same browser process, and the browser had only been augmented to try to keep that data separate. Something that used process and domain separation would be better. If we trusted the web browser not to leak data, none of this would be necessary!

The best we can hope for today is manually separating browsers. I've blogged in the past about using network access controls in SELinux to ensure an “internal” browser can't browse the internet, and an “internet” browser can't browse into the intranet. This requires user intervention to understand and keep track of multiple browsers, hardly an elegant solution. Surely there is a better way.
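For the curious, that kind of network separation can be expressed with SELinux's SECMARK support. A minimal sketch, with made-up type names and an example 10.0.0.0/8 intranet range:

```
# Hypothetical sketch: iptables attaches SELinux labels to outbound
# packets, then policy only lets each browser domain send to the
# matching packet type. Type names and the address range are
# illustrative.
#
# iptables rules that label the packets:
#   iptables -t mangle -A OUTPUT -d 10.0.0.0/8 \
#       -j SECMARK --selctx system_u:object_r:intranet_packet_t:s0
#   iptables -t mangle -A OUTPUT ! -d 10.0.0.0/8 \
#       -j SECMARK --selctx system_u:object_r:internet_packet_t:s0

type intranet_packet_t;
type internet_packet_t;

# The "internal" browser may only exchange intranet-labeled packets,
# and the "internet" browser only internet-labeled ones.
allow internal_browser_t intranet_packet_t:packet { send recv };
allow internet_browser_t internet_packet_t:packet { send recv };
```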

Now comes Google Chrome, a shiny (haha) new browser that has some great ideas. Google also published an interesting set of comics that describe some of the ideas and features.

The ones I found most interesting: each tab is rendered in a separate process, plugins run in a separate process, and JavaScript runs in its own virtual machine. This means tabs shouldn't be able to get data from other tabs (now I don't have to worry about crazy scary Myspace pages reading my bank account number).

There are a couple of things to worry about. First, they claim plugins are poorly written and therefore must have access to all tabs (which is particularly scary given the Flash vulnerabilities of late). The ideal solution is a plugin process per rendering process; this would keep plugins from interacting with each other and with other rendering processes. They claim this is a long-term goal and that they will work with plugin writers to make it easier; we can only hope.

Second, and much more worrisome, is the claim on slide 29 that they know their sandboxing works because they wrote it. Wrong! If we trusted the applications to begin with, we'd have no need for additional access control.

Now all this brings me to my main point. Granted, Chrome is only available for Windows at the moment, but hopefully it'll be available on Linux before long. And then we might have something interesting to work on. Different security domains for different sites? That would be great. Different domains for plugins? Yes! SELinux enforced sandboxes?

So here is the idea: we label sites by DNS name or IP address and have Chrome execute the rendering processes in different domains. *.tresys.com would run in internal_website_t and not be able to send data to the internet! My bank's site would run in bank_website_t and only be able to send data to my bank's address. Even if I have some sort of browser or plugin exploit going on, it won't matter: data can only be sent to the appropriate place (this is the beauty of mandatory access control; even a broken application can't do anything bad). This should work because Chrome already creates a new rendering process when you jump from one site to another. If I go to Facebook and then to Myspace in the same tab, a new process is created for Myspace.

I'd like to go so far as to put the JavaScript VM in another process as well, since it is executing dynamically generated code; otherwise we'll have to give the rendering process execheap and execstack, not good permissions to give something already susceptible to vulnerabilities.
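A minimal sketch of what such per-site policy could look like, with entirely hypothetical type names and the packet labeling left to something like SECMARK:

```
# Hypothetical per-site SELinux domains for Chrome renderers.
# Nothing like this exists today; every name is illustrative.

type internal_website_t;    # renderer for *.tresys.com pages
type bank_website_t;        # renderer for the bank's site
type intranet_packet_t;     # packets to/from the intranet
type bank_packet_t;         # packets to/from the bank's addresses

# Each renderer domain may only exchange packets with its own site:
allow internal_website_t intranet_packet_t:packet { send recv };
allow bank_website_t bank_packet_t:packet { send recv };

# No rule grants bank_website_t access to intranet_packet_t (or vice
# versa), so even a fully compromised renderer can only talk to the
# one place its label permits.
```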

What is the net result of this? It is like the manual separation I and others have talked about before, but from the user's perspective it is seamless. Tabs in the same browser running in different security domains without the user's knowledge, seamless mandatory security on web browsing, I can't wait!

If this should happen to reach any of the Chrome developers, I'd love to talk to you about the possibilities. Combining this browser (which is excellent, BTW; I'm using it to write this blog post) with the mandatory separation afforded by SELinux would make an incredibly powerful platform for securing web applications.