Private computation on public clouds
Public clouds offer valuable services at the expense of privacy. Because the cloud provider controls the privileged software on its machines (the operating system and the hypervisor), it has access to the secrets processed by the applications it hosts. As a result, users must either trust public clouds or avoid them. Recently, hardware manufacturers have extended CPU designs to provide trusted execution environments (TEEs). Hardware ensures that the data inside a TEE can be accessed only by the code inside that TEE, protecting secrets from all software that the provider controls. However, TEEs on their own do not provide meaningful security for many applications. In practice, many applications are proprietary or make use of accelerators such as GPUs. Code inside the TEE has access to user secrets and the freedom to communicate them to the outside world; users cannot vet proprietary code to ensure it does not exercise that freedom, whether accidentally or intentionally. GPUs are not controlled by the CPU directly but by drivers under the cloud provider's control, making it trivial for the provider to extract secrets that the user offloads to a GPU for processing. GPU TEEs can prevent unauthorized access to GPU memory, but communication with the GPU can still leak information. We demonstrate system designs that leverage existing (CPU) and proposed (GPU) TEEs to protect users' data even when the application code colludes with the cloud provider to steal it, and even when the user offloads parts of the application to GPUs.