Sandboxing for multi-tenant applications

If you are building a SAAS application it naturally supports multiple tenants; if you are building a PAAS platform it may well do too. Multitenancy may even go all the way down: maybe you are building a SAAS application on a PAAS platform on IAAS. Most of the writing on sandboxing covers desktop applications or browser sandboxing, and most of the cloud writing seems to be about database issues, so I thought it would be helpful to write a survey from a more cloud-oriented point of view. I also did not find an overview of all the solutions in one place for comparison. Note that I am not a security professional, although I have the right kind of devious thought process and I am fairly good at finding security holes in applications, so you should take professional advice. I am also only going to cover Linux systems here; there is enough to cover without going further afield. The solutions are similar on other platforms, but of course the differences are important.


Data segregation and access control are also important topics, but I am not going to cover them much in this post, as they are largely orthogonal. If your code is not secure, you can be pretty sure there is a risk of your access controls being subverted, especially if the application is monolithic. There are interesting issues in how data segregation is managed and how the data is stored.

What are the threats we are trying to mitigate? If you are building a PAAS platform, something like Heroku (which quite a lot of people seem to be doing after Heroku was bought by Salesforce for $212m), then your entire business model is running untrusted code from people you don’t really know. Their code may have intentional security holes, or accidental ones they may not even know about, and they may be looking to attack you. One recent example is the case of PHPFog, who had their entire infrastructure taken over.

If you are developing SAAS applications you may be less worried: after all, you just have to develop a secure application, surely? It turns out to be a bit more complex than that. First, many more complex applications do allow user code to run, or want to allow this for customisation; at this point your application starts to become a platform. Aside from that, most applications process some form of external data that could have security issues. There have been widespread security holes in most media processing code (PDF, zlib, images and so on), causing buffer overflows and arbitrary code execution. In addition, your application code could itself have bugs that allow remote code execution, or disclosure of data belonging to the system's tenants. As a service provider your reputation relies on protecting your users' data.

So the basic idea is, of course, that you build your system out of components based on their risk and requirements, following the least privilege principle: you try to put each component in a sandboxed environment where it can do as little as possible. Obviously you do not have to sandbox at all, or you can choose a model with known risks, but in an increasingly hostile world it is at least worth knowing what the better options are and how they could be built.

Virtualization

Virtualization is a key tool for user isolation, keeping tenants off real hardware and in a self-contained environment which just looks like a computer. Of course you need an appropriate firewall as well, as there will be network access, and you probably don’t want it to be indiscriminate, just locked down to what is necessary to provide the service. The main issue is that if you are running on a virtualized service such as Amazon EC2, there is a limit on the smallest VM you can have, which sets a floor on your charges and profitability for small users of the service; this may or may not be a big issue for your application.

Pros: good isolation, as the guest cannot see anything else on the computer. Cons: heavyweight, as each instance needs a kernel and at least a skeleton bootable OS plus a fair memory overhead; not nestable with any performance (you have to use UML), so no use if you already run in a virtual environment; careful network firewalling is necessary, as you can't just pass sockets for communication, for example.

Interpreters

Running code purely under an interpreter seems a very safe option, and indeed it is if you deal with a few risk factors. First, you need to make sure that the language has no libraries that can load or execute unsafe code. Some languages (like Lua) make this easier than others. Always whitelist features, never blacklist them. For real security you want everything running in the interpreter, otherwise you may find yourself calling, say, a native image library with security issues. Obviously there is a big performance hit, so this works best for smaller pieces of code, for places where none of the other isolation methods are appropriate, or as a temporary measure before introducing more sandboxing. Once you introduce, say, a JIT compiler you probably need to isolate the code anyway, as a JIT compiler has to be able to make writeable memory executable, which makes attacks much easier.
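
As a sketch of what whitelisting can look like, here is a minimal example of embedding Lua from C (assuming Lua 5.2 or later, linked with -llua): instead of luaL_openlibs, which loads everything including os and io, it opens only a few libraries and then strips the code-loading functions out of the base library. The set of allowed libraries here is illustrative, not a vetted policy.

```c
/* Minimal sketch: embed Lua with a whitelist of libraries rather
 * than calling luaL_openlibs(), which would load os, io, etc. */
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

int main(void) {
    lua_State *L = luaL_newstate();

    /* Whitelist: only the base, string, table and math libraries. */
    luaL_requiref(L, "_G", luaopen_base, 1);       lua_pop(L, 1);
    luaL_requiref(L, "string", luaopen_string, 1); lua_pop(L, 1);
    luaL_requiref(L, "table", luaopen_table, 1);   lua_pop(L, 1);
    luaL_requiref(L, "math", luaopen_math, 1);     lua_pop(L, 1);

    /* Even the base library contains functions that load code, so
     * remove those explicitly. */
    const char *banned[] = { "dofile", "loadfile", "load", "loadstring", NULL };
    for (int i = 0; banned[i]; i++) {
        lua_pushnil(L);
        lua_setglobal(L, banned[i]);
    }

    /* Run the (untrusted) tenant script. */
    if (luaL_dostring(L, "return string.rep('x', 3)") != LUA_OK)
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}
```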

Pros: very secure in the right situation. Cons: performance; non-interpreted code that is called may have security flaws that need sandboxing.

Managed code (Java and .Net)

These bytecode JIT compilers with their own sandboxes are a practical hybrid between interpreters and native code validation. They are, however, complex systems, and Java in particular has had a number of vulnerabilities (for example CVE-2008-5353, to pick one at random), as has .Net (e.g. MS10-077), so they cannot be considered a complete security solution.

Pros: vendor support, widely tested. Cons: complex environments increase risk; it is unclear how fine-grained the access controls are.

Native code validation

The Google Native Client browser plugin (NaCl) uses a code validator to check the binary code. There are some restrictions on the code to make it checkable (code generation, such as a JIT, has a special interface, and there are alignment constraints and other details, necessitating a modified toolchain). This is, of course, a similar approach to that used for Java bytecode, but applied to the more difficult problem of native x86 machine code. The sandbox provides an interface layer, somewhat like the operating system's system call layer, but more restricted. In addition, this whole set of code is wrapped in an outer sandbox as well, using chroot and PID namespaces.

Pros: supports very general code with little slowdown. Cons: needs code to be targeted at it; aimed at computationally intensive code rather than IO-based code; not easily portable, as the security model depends on architectural features, although it is portable between operating systems; risk of incorrect validation.

Chroot plus

The Unix chroot call on its own, which isolates a process into a part of the filesystem, is not very secure: it is easy to get out of it using the ptrace system call on another process that is outside the chroot. Running each process as a different user helps here, as then the process will not have another process it can attach to with ptrace, since it will not have permission. There are some potential race conditions in setting the new user that may cause issues. An example of this type of sandbox done well is Plash.
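
As an illustration of the details that have to be right, here is a minimal sketch of the chroot-and-drop-privileges pattern; the jail path, user and group IDs and worker binary are made up, and a sandbox like Plash does a great deal more than this.

```c
/* Sketch: chroot into a jail, then fully drop root privileges.
 * Must be started as root; paths and IDs are examples only. */
#include <grp.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    const char *jail = "/var/sandbox/jail";  /* hypothetical jail directory */
    uid_t uid = 20001;                       /* hypothetical unprivileged user */
    gid_t gid = 20001;

    if (chdir(jail) != 0 || chroot(jail) != 0) {
        perror("chroot");
        exit(1);
    }
    /* Drop supplementary groups, then the gid, then the uid, in that
     * order, so we never keep root's groups after losing root. */
    if (setgroups(0, NULL) != 0 || setgid(gid) != 0 || setuid(uid) != 0) {
        perror("drop privileges");
        exit(1);
    }
    /* Paranoia: regaining root must now fail. */
    if (setuid(0) == 0) {
        fprintf(stderr, "privilege drop failed\n");
        exit(1);
    }
    execl("/worker", "worker", (char *)NULL); /* binary inside the jail */
    perror("execl");
    return 1;
}
```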

Pros: portable across Unix versions. Cons: hard to do securely; cannot restrict network access.

LXC

Linux containers, unlike the equivalents in BSD and Solaris, are really a set of namespacing tools for different aspects of the system, set when a process calls clone. They can be viewed both as a better set of tools for chroot-style isolation and as a same-kernel virtualisation model; the tools support running either a whole system with startup scripts and so on, or just a single process. Because the support has been developed incrementally, some of it is new, and if you want to run a whole system I recommend using something very recent: Ubuntu 11.04 works nicely and has an lxcguest package, but I had odd issues with 10.10 not being fully isolated, although these might have been configuration problems. If you use it for single process isolation it is more straightforward, as you can have a much more minimal environment. Currently the items that can be namespaced are: process IDs, so your process cannot see or ptrace anything outside the container; the file system, so it has its own mounts; the network, so it just sees a virtual network that can be firewalled as appropriate, or can be given a physical network adaptor exclusively; UTS, so the process sees its own hostname; and IPC, for the SysV IPC namespace. Coming soon is the addition of the user namespace, so that new containers can be created by non-root users.
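
As a rough sketch of what single-process isolation looks like at the system call level (the LXC tools drive all of this for you), the following uses clone with the namespace flags; a real setup would also remount /proc, configure the virtual network interface and so on, and it needs root on kernels without the user namespace.

```c
/* Sketch: start a shell in fresh PID, mount, network, UTS and IPC
 * namespaces using clone() directly. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg) {
    (void)arg;
    /* Inside the new namespaces: we are PID 1 with our own hostname. */
    printf("in container: pid=%d\n", (int)getpid());
    sethostname("sandbox", 7);
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return 1;
}

int main(void) {
    int flags = CLONE_NEWPID | CLONE_NEWNS | CLONE_NEWNET |
                CLONE_NEWUTS | CLONE_NEWIPC | SIGCHLD;
    pid_t pid = clone(child, child_stack + sizeof(child_stack), flags, NULL);
    if (pid < 0) {
        perror("clone");
        exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```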

Pros: process isolation done properly; allows controlled network isolation; still allows passing of file descriptors unlike virtualisation. Cons: only supported on newer Linux versions for some of the features.

Chroot/container hybrids

The current default Chrome Linux sandbox uses a mix of chroot and container calls, for maximum compatibility with common distributions; in fact it will work without the container calls, but with reduced security. It uses chroot for filesystem isolation and a PID namespace to isolate processes, and disables ptrace attachment with prctl (which is not a complete mitigation, as it is reversible).
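
For reference, here is the kind of prctl call involved, assuming the flag in question is PR_SET_DUMPABLE; as noted above it is reversible, so it is only one extra layer rather than a sandbox on its own.

```c
/* Fragment: clearing the "dumpable" flag stops other processes of
 * the same user attaching with ptrace, until something sets it back. */
#include <stdio.h>
#include <sys/prctl.h>

int harden(void) {
    if (prctl(PR_SET_DUMPABLE, 0) != 0) {
        perror("prctl(PR_SET_DUMPABLE)");
        return -1;
    }
    return 0;
}
```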

Pros: more compatibility, upgrades chroot model towards a container. Cons: network access unrestricted.

Seccomp

Seccomp is a very restricted security sandbox that has shipped with the Linux kernel for quite a while and is enabled using the prctl system call. Once enabled, it allows only four system calls: read, write, _exit and sigreturn. This is very restrictive, as the process cannot even allocate memory, so it is rarely used. It also turned out to have a bug on 64-bit machines that allowed some other system calls. However, Google did produce another sandbox for Chrome based on it, using a very restricted helper thread to perform memory allocations and other system calls on behalf of the sandboxed code. The helper is quite complex, as it runs in the same process as the hostile code, and it is not clear that this particular solution works for general purposes, but similar approaches could be suitable for some problems.
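
Here is a minimal sketch of entering the original strict mode; the helper-thread machinery that Chrome layers on top of this is far more involved.

```c
/* Sketch: enter seccomp strict mode. After the prctl call only read,
 * write, _exit and sigreturn are permitted; anything else gets the
 * process killed, so all resources must be acquired beforehand. */
#define _GNU_SOURCE
#include <linux/seccomp.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }
    /* Allowed: write(2) to an already-open descriptor. */
    const char msg[] = "hello from the sandbox\n";
    write(1, msg, strlen(msg));

    /* glibc's _exit() calls exit_group(2), which strict mode does not
     * allow, so make the raw exit(2) call instead. */
    syscall(SYS_exit, 0);
    return 0; /* not reached */
}
```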

Pros: small kernel whitelist with restricted additions. Cons: complex and architecture dependent code running in a difficult environment; may not be suited for all uses.

Ptrace

The ptrace system call, used for debugging, can also be used to sandbox a process, as it can intercept system calls. However it is beset with race conditions and other problems, and a hostile process can circumvent it. There do not seem to be any fixes at the moment.
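
For illustration, here is a minimal x86-64 sketch of syscall interception with ptrace; the point where a real sandbox would decide to allow or deny each call is exactly where the race conditions mentioned above bite.

```c
/* Sketch: trace a child and print the number of every system call it
 * makes (stops occur at both syscall entry and exit). x86-64 only. */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/reg.h>   /* ORIG_RAX */
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl("/bin/true", "true", (char *)NULL);
        _exit(1);
    }
    int status;
    waitpid(pid, &status, 0);  /* initial stop after exec */
    while (!WIFEXITED(status)) {
        long nr = ptrace(PTRACE_PEEKUSER, pid,
                         (void *)(sizeof(long) * ORIG_RAX), NULL);
        printf("syscall %ld\n", nr);
        ptrace(PTRACE_SYSCALL, pid, NULL, NULL);  /* run to next stop */
        waitpid(pid, &status, 0);
    }
    return 0;
}
```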

Pros: portable. Cons: not reliable.

SELinux

SELinux mandatory access control, designed by the NSA, is a complex but very powerful set of access controls for processes. The big advantage from a sandboxing point of view is that the controls are enforced by the kernel and are very fine-grained, covering access to particular ports, files, sockets and system calls. Items such as files can be relabelled as they are processed, so, for example, you could withhold user access to files until they have been validated or virus checked. SELinux adoption has been slow, with Red Hat the first to really push it into their distribution, gradually being followed by others, but many users simply disable it when it causes issues. Most distributions use the “targeted” policy, which only puts external-facing daemons in a controlled state and lets normal users do everything they could do before, but gradually more types of policy are being added, such as a user sandbox for running untrusted code. There is an extensive reference policy, on which the distributions base their policies, which is a good starting point for detailed customisation. It is also possible to push SELinux controls into applications, such as Postgres, and to use it to carry user validation through an application.
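
As a small illustration of pushing a process into a confined domain from code, here is a sketch using libselinux (link with -lselinux); the sandbox_t domain and the worker path are placeholders that depend entirely on the policy you have loaded.

```c
/* Sketch: ask the kernel to run the next exec'd program in a
 * confined SELinux domain. The context string is a placeholder. */
#include <selinux/selinux.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (is_selinux_enabled() != 1) {
        fprintf(stderr, "SELinux is not enabled\n");
        return 1;
    }
    /* Everything exec'd from now on runs in the confined domain. */
    if (setexeccon("system_u:system_r:sandbox_t:s0") != 0) {
        perror("setexeccon");
        return 1;
    }
    execl("/usr/local/bin/worker", "worker", (char *)NULL); /* example path */
    perror("execl");
    return 1;
}
```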

Pros: fine-grained controls, kernel mediated; encourages modular architectures; encourages a security as code model. Cons: not installed everywhere; complex; another system description language; best suited to very modular architectures; needs to be maintained with code, or people may just disable it to make applications work; some performance hit, estimated at 7% but obviously very application dependent.

Mitigation techniques

I have included these, although they are not a whole sandbox in themselves, because security is layered and they can be used to increase security in a sandbox that has some residual risks. There are a lot of potential techniques here, so I won’t cover them all. Address space layout randomisation (ASLR) is one, making it harder for an attacker to know where the parts of the executable they need to call to create an exploit are located. This requires position-independent code (PIC, or a position-independent executable for the main binary), and has some default support in Linux, but more is available, for example in the PaX project. Another option, also supported by PaX, is to remove the ability to make writeable memory areas executable, which makes it impossible to inject new executable code into a process at runtime; this ability is, however, required by JIT compilers, something that caused issues with JavaScript when the restriction was introduced recently in iOS. Another area is stack buffer overflow prevention, for which there is now gcc support. These policies have rarely been used when compiling entire Linux distributions, with the notable exception of Hardened Gentoo, although they can also be applied to individual applications.
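
To make the JIT point concrete, here is a small x86-64 sketch of the write-then-execute pattern that a W^X policy such as PaX's is designed to refuse.

```c
/* Sketch: what a JIT does: map writable memory, emit code into it,
 * then flip it to executable. Under a strict W^X policy the
 * mprotect() call (or a writable+executable mapping) is denied. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* mov eax, 42 ; ret */
    unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    unsigned char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    memcpy(buf, code, sizeof(code));

    /* The step a W^X policy forbids: making written memory runnable. */
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }
    int (*fn)(void) = (int (*)(void))buf;
    printf("jitted function returned %d\n", fn());
    return 0;
}
```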

Pros: adds more protection at little cost. Cons: only a mitigation, not a sandbox; some binaries need these capabilities for valid reasons.

Conclusions

As is probably clear from this brief summary, security is not just a simple compiler flag: it is a complex design process with a lot of work to do. It is an architectural issue to a large extent, as the more self-contained your units are, the easier it is to apply the least privilege principle, since for operating system controls the process is the unit of privilege (remember the Unix philosophy). Off-the-shelf tooling and testing are fairly limited, and debugging can be more difficult, so the overall cost of security is not negligible. On the other hand, the cost of not implementing security is very high, particularly in the case of SAAS platforms, where the industry is held to a very high standard.

Which methods should you choose? For a SAAS or PAAS multitenant platform, where the base OS is entirely under your control, some combination of LXC, SELinux and the other mitigation techniques seems to be a clear winner. This can be built up incrementally, starting with either LXC or SELinux as a base and then adding more protections and more fine-grained separation of processes as things move on.



I am currently available for employment opportunities, so if you are looking for someone in architecture, operations or development who is interested in issues like this, get in touch.
