The 5 Levels of Secure Hardware

11.21.2024 | Georgios Konstantopoulos

Achieving programmable cryptography is one of the most important problems of the next decade for the next generation of intelligent and safe experiences on the web and beyond. We think that getting there will require secure hardware that gives guarantees about the integrity and confidentiality of the computation running on it.

We see 5 levels on the path to secure hardware:

  • Level 1: Allows building basic applications like oracles or bridges. The developer experience is not great, but the performance is acceptable for these applications. Security is based on proprietary supply chains.
  • Level 2: The performance is slightly worse, but the developer experience is better, allowing more expressive applications such as social media account delegation like Teleport. No security improvements.
  • Level 3: Great developer experience, near-native performance, supports GPUs. Allows testing the limits of secure hardware with applications like private or verifiable ML inference. No security improvements. We are here: developers can build most of the exciting things that “endgame” programmable cryptography enables today.
  • Level 4: Security improves by having an open manufacturing process of secure hardware, while developer experience and performance stay constant. Allows us to scale the benefits of programmable cryptography safely, without relying on proprietary manufacturers.
  • Level 5: Security improves by having heterogeneous open secure hardware connected to each other for redundancy. We can reliably use secure hardware at global scale for things like voting or handling sensitive medical data.

Our main takeaway from this analysis: we can build great applications with secure hardware today; the stack is ready for developers, and performance is good. To make things more secure, we’ll need innovation at the hardware layer.

Programmable cryptography enables fun, safe, and intelligent experiences.

Cryptography is a powerful tool that lets us create fun, safe, and powerful environments for computation, communication, and browsing the web. Programmable cryptography will allow us to do more with our data. Here are some examples:

  • The next generation of large language models will be trained on large clusters over user data such as chat histories and medical data without violating users’ privacy. We will know which model was used to produce every inference result, and have full provenance over the authenticity of content.
  • People will be able to coordinate and grant permissions over their accounts to their friends or to strangers on the internet. This is one of the most exciting things to me. As an example, Teleport recently allowed anyone with a magic link to tweet from the Twitter account that issued it, as long as the tweet followed certain rules. If you thought Twitch Plays Pokémon was fun, imagine a group chat controlling 100 MrBeast-level TikTok accounts at the same time.
  • People will be able to connect safely over sensitive pieces of information. As an example, at Frontiers 2024 we collaborated with Cursive on a safe recruiting ‘dating’ app: meet people, and opt in to share that you’re looking for a job only if they match your interests and they’re also looking for candidates of a certain profile. If there’s no match, nothing sensitive gets shared, not even with us, so you are guaranteed to be safe: nobody learns anything about your sensitive interests.

Secure Hardware is necessary to achieve programmable cryptography at scale.

We can achieve some of the above with a mix of techniques such as zero-knowledge proofs (ZKP), fully homomorphic encryption (FHE), multi-party computation (MPC), and indistinguishability obfuscation (IO). These techniques, while pure and aesthetic, pose non-negligible barriers to deployment at scale. FHE does not address who holds the decryption key, MPC relies on non-collusion assumptions, ZK cannot handle shared state, and IO does not have a feasible construction yet. On top of that, each of these techniques carries high overheads. It might not be possible to deploy such programmable cryptography at scale where performance and robustness matter. Can we do better?

Secure Hardware allows for privacy-preserving computation with attested computational integrity. What this means in practice is that you provide a program and some encrypted input, and secure hardware evaluates the program over that encrypted input and gives you an attestation (usually from the hardware manufacturer) that it ran the program you requested correctly.
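
To make that concrete, here is a minimal, self-contained toy of the flow in Python. It is our own illustration, not a real TEE API: an HMAC under a "manufacturer" key stands in for the asymmetric attestation signature that real hardware produces, and encrypting the input to a key that only the enclave holds is left out for brevity.

```python
# Toy model of the attest-and-compute flow described above (not a real TEE API).
# An HMAC under a "manufacturer" key stands in for the asymmetric attestation
# signature real hardware would produce; in practice the input would also be
# encrypted to a key that only the enclave holds.
import hashlib
import hmac
import json
import secrets

MANUFACTURER_KEY = secrets.token_bytes(32)  # held inside the hardware in reality

def measure(program_src: str) -> str:
    """Measurement of the code the enclave runs (akin to SGX's MRENCLAVE)."""
    return hashlib.sha256(program_src.encode()).hexdigest()

def enclave_run(program_src: str, user_input: bytes) -> dict:
    """'Inside' the secure hardware: run the program, then attest to what ran."""
    env: dict = {}
    exec(program_src, env)           # the program defines main(data: bytes) -> bytes
    output = env["main"](user_input)
    report = {
        "measurement": measure(program_src),
        "output_hash": hashlib.sha256(output).hexdigest(),
    }
    report["signature"] = hmac.new(
        MANUFACTURER_KEY, json.dumps(report, sort_keys=True).encode(), "sha256"
    ).hexdigest()
    return {"output": output, "report": report}

def verify(report: dict, expected_measurement: str, output: bytes) -> bool:
    """'Outside': check the attestation before trusting the result."""
    body = {k: report[k] for k in ("measurement", "output_hash")}
    expected_sig = hmac.new(
        MANUFACTURER_KEY, json.dumps(body, sort_keys=True).encode(), "sha256"
    ).hexdigest()
    return (
        hmac.compare_digest(report["signature"], expected_sig)
        and report["measurement"] == expected_measurement
        and report["output_hash"] == hashlib.sha256(output).hexdigest()
    )

program = "def main(data: bytes) -> bytes:\n    return data.upper()\n"
result = enclave_run(program, b"hello enclave")
assert verify(result["report"], measure(program), result["output"])
```

The point of the sketch is the trust chain: the verifier never reruns the program, it only checks that the attested measurement matches the program it expected and that the signature chains back to the hardware's root of trust.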

This is such a powerful tool that it is already widely deployed in every modern Apple device, and it will only continue to become more popular; just look at Apple’s Private Cloud Compute announcement from earlier this year. Similar features are already available on the largest public clouds via Intel SGX/TDX, Amazon Nitro, AMD SEV, and more. As many readers may know, there is no shortage of vulnerabilities in secure hardware, which has been repeatedly broken by researchers. Despite that, we believe that secure hardware is the key to achieving practical programmable cryptography.

How to think about the levels of secure hardware?

We think three axes matter:

  1. Performance: How fast is the execution compared to “native” execution that does not have any guarantees about privacy or integrity? Here is some recent work on this.
  2. Developer Experience: How easy is it to make a program run in secure hardware? Does it run as-is, or do I need to rewrite it for that target? Do I need to be aware of program idiosyncrasies, such as data access patterns, that may impact confidentiality guarantees? (See the sketch after this list.)
  3. Security Model: What assumptions are we making about the system? What threats are we protected against given these assumptions? What are the practical ways we can realize such security models?
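
As a small, hypothetical illustration of the access-pattern point above, the sketch below (not tied to any particular TEE) contrasts a lookup whose memory accesses depend on a secret index with an oblivious scan that touches every element regardless of the secret:

```python
# Hypothetical illustration of why data access patterns matter: memory contents
# may be encrypted, but *which* addresses get touched can still be observable,
# e.g. through cache side channels.

def leaky_lookup(table: list[int], secret_index: int) -> int:
    # Touches exactly one element, so the access pattern reveals secret_index.
    return table[secret_index]

def oblivious_lookup(table: list[int], secret_index: int) -> int:
    # Touches every element in the same order regardless of the secret and
    # selects the wanted one arithmetically instead of by branching.
    # (In real constant-time code the comparison itself must also be branch-free.)
    result = 0
    for i, value in enumerate(table):
        result += int(i == secret_index) * value
    return result

table = [10, 20, 30, 40]
assert leaky_lookup(table, 2) == oblivious_lookup(table, 2) == 30
```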

Based on that, we map each of the five levels above along these axes, with some use cases we feel excited about unlocking at each level. The examples we present are representative; for a complete Systematization of Knowledge we refer readers to this excellent research. Supporting remote attestation and having hardware-level guarantees on memory isolation are table stakes for inclusion in our evaluation.

We have already seen developers building fun, safe, and intelligent experiences using Gramine, Intel TDX, and the latest H200s, as discussed above. We are still missing an excellent toolkit for developing Secure Hardware-based apps, though some exciting initial work is already being done. Today, we are seemingly at Level 3, which is an amazing place for the developer community to be in: we can start to innovate.

As we keep going up the levels, the tradeoffs start to get harder:

  • Improving the developer experience increases the trusted computing base (TCB).
  • Open source instruction sets may reduce performance, but improve verifiability.
  • Improving (i.e., shrinking) the TCB, e.g. via the Yocto Project, may make deployment harder.
  • Creating redundancy across heterogeneous Secure Hardware will reduce performance (see the sketch below).
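
To make the redundancy point concrete, here is a minimal, hypothetical sketch of one way it could look; the vendor names and the 2-of-n threshold are our own illustration, not an existing API. A result is accepted only when matching, attested outputs come from enough distinct vendors:

```python
# Hypothetical sketch of Level 5-style redundancy: accept a result only when
# attested, identical outputs arrive from enough *distinct* hardware vendors.
# Verifying each attestation against its vendor's root of trust is assumed to
# have happened already and is omitted here.
from collections import defaultdict

REQUIRED_VENDORS = 2  # e.g. any 2 of {"intel-tdx", "amd-sev", "aws-nitro"}

def accept(attested_results: list[tuple[str, bytes]]) -> bytes | None:
    """attested_results: (vendor, output) pairs from independently verified enclaves."""
    vendors_per_output: dict[bytes, set[str]] = defaultdict(set)
    for vendor, output in attested_results:
        vendors_per_output[output].add(vendor)
    for output, vendors in vendors_per_output.items():
        if len(vendors) >= REQUIRED_VENDORS:
            return output  # a quorum of heterogeneous hardware agrees
    return None  # no quorum: one compromised vendor cannot force acceptance

assert accept([("intel-tdx", b"42"), ("amd-sev", b"42"), ("aws-nitro", b"41")]) == b"42"
assert accept([("intel-tdx", b"41"), ("intel-tdx", b"41")]) is None  # same vendor twice
```

The cost of this design is the performance hit the bullet above describes: every computation runs on several different platforms, and the slowest attested copy gates the result.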

Let’s get to Level 5.

We think that building on secure hardware is going to go through a renaissance in the next few years. Applications that we thought of as ‘weird’ or ‘impractical’ will start becoming normal and practical. Every major infrastructure provider will have wide availability of such hardware and will provide cloud attestations to assure their customers of its reliability.

To go beyond coordinating fun social experiences into creating a fair global financial system, training large models on sensitive data, and building private identity, all at global scale, we will need to get to Level 5 – the holy grail of cryptographic compute. We think that is a really exciting future to look forward to.

If you are working on the above and are interested in accelerating us to that future, reach out to georgios@paradigm.xyz.

Acknowledgements

Thanks to Phil Daian, Andrew Miller, and Quintus Kilbourn for review and feedback.

Written by

Georgios Konstantopoulos is the Chief Technology Officer and a General Partner focused on Paradigm’s portfolio companies and research into open-source protocols. Previously, Georgios was an independent consultant and researcher focused on cryptography, information security and mechanism design. He earned his M.Eng. in Electrical & Computer Engineering from Aristotle University of Thessaloniki.

Disclaimer: This post is for general information purposes only. It does not constitute investment advice or a recommendation or solicitation to buy or sell any investment and should not be used in the evaluation of the merits of making any investment decision. It should not be relied upon for accounting, legal or tax advice or investment recommendations. This post reflects the current opinions of the authors and is not made on behalf of Paradigm or its affiliates and does not necessarily reflect the opinions of Paradigm, its affiliates or individuals associated with Paradigm. The opinions reflected herein are subject to change without being updated.

Copyright © 2024 Paradigm Operations LP All rights reserved. “Paradigm” is a trademark, and the triangular mobius symbol is a registered trademark of Paradigm Operations LP