11/06/2024 | News release | Distributed by Public on 11/06/2024 10:46
It's time for more security coffee talk with Bill! I was first exposed to the term "Confidential Compute" back in 2016, when I started reading about a radical new technology from Intel. Intel introduced the idea in 2015 with its 6th-generation CPUs, called Skylake. The technology was named SGX (Software Guard Extensions): a set of CPU instructions that lets user-level or OS code define a trusted execution environment (TEE) built into Intel CPUs. The CPU encrypts a portion of memory, called the enclave, and any data or code placed in the enclave is decrypted on the fly, inside the CPU, at runtime. This protects the enclave's contents against read access by any other code running on the same system.
The idea was to protect data in the elusive "data in use" phase. Data exists in three states: at rest, in motion and in use. Data at rest can be encrypted with self-encrypting hard drives (pretty much standard these days), and most databases already support encryption. Data in motion is protected by encryption and authentication protocols such as TLS/SSL/HTTPS/IPsec.
Data in use, however, is a fundamentally different challenge: for any application to make "use" of data, the data must be decrypted. Intel solved this by creating a TEE within the CPU, so that the application and its unencrypted data run shielded from any other user or code on the same computer. AMD provides a similar technology it calls AMD SEV (Secure Encrypted Virtualization). Arm, meanwhile, calls its solution Arm CCA (Confidential Compute Architecture). Metaphorically, a TEE is like the scene in courtroom dramas where the judge and the attorneys debate the admissibility of evidence in a private room: decisions are made in private, the outside world gets to see the result but not the reasoning, and if there's a mistake, the underlying materials can still be examined later.
This brings us to HSMs (Hardware Security Modules). HSMs have been around for decades, and most must adhere to the government-tested NIST FIPS 140-3 Level 3 protections. These protections provide physical security, and they also mandate minimum security policies around access, authentication and attestation. Standalone CPUs are not required to provide any of these additional protections.
So, how do HSMs, used for storing keys and encrypting data, relate to Confidential Compute? Well, under that epoxy, active mesh and/or tamper-evident seals sits a CPU, some memory and a few other components. An HSM can be considered a super-secure, physically protected compute environment. Some HSM vendors, including Marvell, allow custom code to run within the secure NIST FIPS boundary of the HSM. The Marvell® LiquidSecurity® 2 (LS2) solution has enough compute power not only to run custom code but also to manage keys and encrypt data.
A real-world use case: imagine a healthcare company that would like to use advances in AI/ML to better detect breast cancer. AI is only as good as the datasets used to train its models. Once sufficiently trained, AI models are fairly accurate and only improve over time. In the U.S. healthcare industry, companies are government-mandated to meet privacy regulations like HIPAA and HITECH. Both regulations address the electronic transmission and storage of medical records, or ePHI (electronic Protected Health Information). These regulations forced the healthcare industry to adopt minimum cybersecurity best practices, like encrypting data, enforcing proper access controls and requiring data-breach notifications. However, it's this same ePHI data that AI algorithms need to crunch to build better models, and the models themselves, which companies have spent millions of dollars to train, need IP (intellectual property) protections.
A next-generation, HSM-based Confidential Compute environment will allow AI models to run inside a physically secure, government-certified HSM. The same HSM can manage the keys used to encrypt or decrypt the ePHI data, with proper access controls to ensure confidentiality. Running these AI models against unencrypted ePHI data within a physically secure Confidential Compute HSM environment also provides the highest levels of privacy and IP protection.
In the coming months, there will be more information on how CSPs (Cloud Service Providers) plan to make use of the Marvell Confidential Compute Environment to offer new cloud services to their customers.
More information on Confidential Compute technologies can be found at the Confidential Computing Consortium, a Linux Foundation project.
# # #
This blog contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this article. Forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the "Risk Factors" section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.