Creating a trusted platform for embedded security-critical applications
October 08, 2020
By Richard Jaenicke and Steve Edwards
Security-critical applications, such as cross-domain solutions (CDS), require a secure, trusted platform on which to execute, spanning software, firmware, and hardware. The lowest layer that the application interacts with directly is a trusted operating system (OS). Trust in the OS depends on two factors: its robustness from a security perspective, and assurance that the OS was loaded and configured correctly and has never been tampered with. OS trust also depends partly on trusted pre-OS functionality, such as secure boot firmware that executes before the OS.
The security robustness of computer hardware and software platforms is often specified by evaluation to the “Common Criteria for Information Technology Security Evaluation” (ISO/IEC 15408) [1]. Typically, Common Criteria targets of evaluation (TOE) are evaluated against a government-defined protection profile that includes both functional and assurance requirements. Evaluations can be done to different levels of depth and rigor, called Evaluation Assurance Levels (EAL), with EAL1 being the least rigorous and EAL7 being the most rigorous. Alternatively, a certain level of trust can be achieved through safety certifications. Although safety certifications provide a level of assurance for integrity and availability, they generally do not directly address confidentiality or other trust mechanisms.
Trusting that the correct OS code has been loaded requires establishing a chain of trust reaching all the way back to a root of trust (RoT) that is established at power-on. Each link in the chain of trust must have sufficient security assurance functionality and must authenticate the next piece of software in the chain before that software is executed. The robustness of the RoT is the critical starting point for any trusted platform. That robustness can range broadly, from a software RoT to one based on a physically unclonable function (PUF) embedded in the hardware.
Security robustness
In U.S. government-defined protection profiles, robustness is a metric that measures the TOE’s ability to protect itself, its data, and its resources. Robustness levels are characterized as basic, medium, or high [2]. The level of robustness required for a TOE is determined by the value of the data it protects and the threats identified for the environment in which it is deployed.
Basic robustness environments are defined as those that encounter threats introduced by inadvertent errors or casually mischievous users. In general, “best commercial practice” is sufficient to protect information in a basic robustness environment [3]. Medium robustness is sufficient for environments where the motivation of an attacker is considered “medium” and the attacker has at least a moderate level of resources or expertise [2]. In general, medium robustness is “appropriate for an assumed nonhostile and well-managed user community requiring protection against threats of inadvertent or casual attempts to breach the system security.” [4]
A high robustness TOE is required for environments where sophisticated threat agents and high-value resources are both present, resulting in a high likelihood of an attempted security compromise [5]. High robustness requires EAL6 or higher and is appropriate for protecting systems against extremely sophisticated and well-funded threats, such as attacks from nation-states and national laboratories. For medium and high robustness environments, it is generally necessary for certain hardware components of a product to be evaluated as part of the TOE. This requirement provides higher confidence that the product is less likely to be compromised and that the security policy is always invoked [2].
Chain of trust
Providing a secure computing environment for CDS applications depends in part on establishing a chain of trust, starting with a hardware RoT and reaching up through every layer of software (see Figure 1 for an example chain of trust). Trust in each link in the chain has two components: the robustness of the security solution and the authenticity of the component. Authenticity is ensured by a cryptographic signature, generally a digital signature computed over a secure hash of the component. Authentication provides assurance that the software loaded is the software intended and nothing else, but it does not address security functionality, including freedom from vulnerabilities.
[Figure 1 | Example chain of trust, starting with the hardware RoT.]
The beginning of the chain of trust, the RoT, can be software-based, but a hardware RoT is more secure. The main choices for a hardware RoT include a separate Trusted Platform Module (TPM) chip, on-chip boot ROM code, and on-chip security based on a physically unclonable function (PUF).
On Intel processor-based systems, Intel Trusted Execution Technology (TXT) uses a TPM to store known-good values for the hashes of the BIOS and the hypervisor or OS kernel. At power-on, the hash of the BIOS is computed and compared with the value stored in the TPM. If it matches, the BIOS is loaded, and the hash of the hypervisor or OS kernel is calculated and compared in turn. Note that some portion of the CPU is already running to compute the hashes, and the boot microcode – which is the first software executed by the CPU – has already been cryptographically authenticated by the processor itself.
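At its core, that measured-boot step is a comparison of a freshly computed digest against a stored known-good value. The short C sketch below illustrates the pattern only; it is not Intel’s implementation. The digest routine is a stand-in for a real hash engine, and the comparison is written to run in constant time so the check itself leaks nothing about where a mismatch occurs.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DIGEST_LEN 32   /* illustrative: a SHA-256-sized digest */

/* Stub standing in for the platform's hash engine; a real measured-boot
 * implementation would hash the BIOS image with hardware support. */
static void measure_image(const uint8_t *image, size_t len,
                          uint8_t digest[DIGEST_LEN])
{
    memset(digest, 0, DIGEST_LEN);
    for (size_t i = 0; i < len; i++)
        digest[i % DIGEST_LEN] ^= image[i];   /* NOT a real hash */
}

/* Constant-time comparison: examines every byte regardless of mismatches,
 * so timing does not reveal where the digests diverge. */
static int digests_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

int main(void)
{
    const uint8_t image[] = "example BIOS image";
    uint8_t stored[DIGEST_LEN];   /* known-good value, e.g., held by the TPM */
    uint8_t fresh[DIGEST_LEN];

    measure_image(image, sizeof image, stored);  /* provisioning step */
    measure_image(image, sizeof image, fresh);   /* power-on measurement */

    if (digests_equal(stored, fresh, DIGEST_LEN))
        puts("measurement matches: continue boot");
    else
        puts("measurement mismatch: halt");
    return 0;
}
```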
FPGA-based RoT
A higher level of security can be achieved with a device that features a RoT built into its own silicon. Typical examples of such devices include the Microsemi PolarFire and SmartFusion2 SoC FPGAs [field-programmable gate arrays], the Intel Stratix 10 FPGA, and the Xilinx Zynq UltraScale+ MPSoC [multiprocessor system-on-chip]. In the case of the Zynq UltraScale+ MPSoC, the metal-masked boot ROM code, together with an RSA key hash stored in hardware eFUSEs, provides the hardware RoT. The device’s Configuration Security Unit (CSU) boots from on-chip, metal-masked ROM and enforces the RoT. It validates the integrity of the user public key read from external memory by calculating its cryptographic hash with the SHA-3/384 engine and comparing that hash to the value stored in eFUSEs. If the two match, the CSU loads and authenticates the first-stage boot loader (FSBL) [7].
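The CSU’s decision logic amounts to two checks in sequence: anchor the externally stored public key by hashing it and comparing against the eFUSE value, then use that now-trusted key to verify the FSBL’s signature. The C sketch below captures only that control flow; hash_sha3_384 and rsa_verify are illustrative stubs, not Xilinx APIs.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define KEYHASH_LEN 48   /* SHA-3/384 digest size in bytes */

/* Stubs standing in for the device's hardware crypto engines. */
static bool hash_sha3_384(const uint8_t *data, size_t len,
                          uint8_t out[KEYHASH_LEN])
{
    (void)data; (void)len;
    memset(out, 0, KEYHASH_LEN);   /* placeholder digest */
    return true;
}

static bool rsa_verify(const uint8_t *pubkey, size_t keylen,
                       const uint8_t *image, size_t imglen,
                       const uint8_t *sig, size_t siglen)
{
    (void)pubkey; (void)keylen; (void)image;
    (void)imglen; (void)sig; (void)siglen;
    return true;                   /* placeholder verification */
}

/* Two-step flow mirroring the CSU sequence described above:
 * 1. anchor the user public key against the eFUSE-stored hash;
 * 2. verify the FSBL's signature with the now-trusted key. */
bool authenticate_fsbl(const uint8_t *pubkey, size_t keylen,
                       const uint8_t efuse_keyhash[KEYHASH_LEN],
                       const uint8_t *fsbl, size_t fsbl_len,
                       const uint8_t *sig, size_t siglen)
{
    uint8_t computed[KEYHASH_LEN];

    if (!hash_sha3_384(pubkey, keylen, computed))
        return false;
    if (memcmp(computed, efuse_keyhash, KEYHASH_LEN) != 0)
        return false;              /* key not anchored in fuses: halt boot */
    return rsa_verify(pubkey, keylen, fsbl, fsbl_len, sig, siglen);
}
```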
Some portions of the boot process, such as loading the FPGA bitstream, might require confidentiality in addition to authenticity. An on-chip PUF can be used to create a key encryption key (KEK), which encrypts the user symmetric key used for decrypting the bitstream. The KEK is never stored but is instead recreated at each power-up from the PUF, which is physically unique and cannot be copied. On some devices, the same technique can be employed to generate the initial key used to authenticate the boot ROM instead of storing that key in eFUSEs, thereby further enhancing security.
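Conceptually, the power-up sequence rederives the KEK and uses it only transiently, as in the toy C sketch below. The PUF response and the key-unwrap step are stand-ins (a fixed pattern and an XOR) for a device’s PUF macro and AES key-unwrap engine; the point is the flow, in which the long-lived secret is never at rest.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define KEY_LEN 32

/* Stub: a real PUF yields a device-unique, repeatable response derived
 * from manufacturing variation; it cannot be read out or cloned. */
static void puf_response(uint8_t out[KEY_LEN])
{
    for (int i = 0; i < KEY_LEN; i++)
        out[i] = (uint8_t)(0xA5 ^ i);    /* fixed pattern, illustration only */
}

/* Stub "unwrap": real hardware would use AES key-unwrap with the KEK. */
static void unwrap_key(const uint8_t kek[KEY_LEN],
                       const uint8_t wrapped[KEY_LEN],
                       uint8_t out[KEY_LEN])
{
    for (int i = 0; i < KEY_LEN; i++)
        out[i] = wrapped[i] ^ kek[i];    /* XOR stands in for AES */
}

int main(void)
{
    uint8_t kek[KEY_LEN];       /* rederived each power-up, never stored */
    uint8_t wrapped[KEY_LEN];   /* encrypted user key kept in external flash */
    uint8_t user_key[KEY_LEN];  /* decrypts the FPGA bitstream */

    puf_response(kek);          /* 1. regenerate the KEK from the PUF */
    memset(wrapped, 0x3C, sizeof wrapped);   /* stand-in for a flash read */
    unwrap_key(kek, wrapped, user_key);      /* 2. recover the working key */

    /* 3. user_key now feeds the bitstream decryption engine. */
    printf("user key byte 0: 0x%02X\n", user_key[0]);

    /* Scrub key material from RAM when done (a hardened implementation
     * would use a memset the compiler cannot elide). */
    memset(kek, 0, sizeof kek);
    memset(user_key, 0, sizeof user_key);
    return 0;
}
```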
Once an SoC or FPGA has securely booted, it can act as the RoT for the main processor. It can be used to provide hardware authentication of the boot process and ensure the processor is executing only trusted code [8].
Vulnerabilities in the chain of trust
As stated above, although authenticating each link in the chain of trust ensures that nothing but the intended software is loaded, authentication does not address security functionality, including freedom from vulnerabilities. For that reason, each portion of the code should be designed, tested, and verified to be free from vulnerabilities. This rule particularly applies to metal-masked boot ROM code, which cannot be easily updated if a vulnerability is discovered later.
For example, a recently revealed flaw in the boot ROM for Intel’s Converged Security and Management Engine (CSME), undiscovered for the last five years, gives an attacker control over reading of the chipset key and generation of all other encryption keys [9]. Once an attacker has obtained this chipset key, they can decrypt any data encrypted using Intel Platform Trust Technology (PTT) and forge the device’s Enhanced Privacy ID (EPID), which is used for the remote attestation of trusted systems. What’s more, because the vulnerability resides in metal-masked boot ROM, it cannot be patched with a firmware update [10].
MILS architecture
Once a trusted hardware platform is established, the next step is the design of the software architecture. The most accepted path to building a trusted software environment for CDS applications is a Multiple Independent Levels of Security (MILS) operating environment implemented to high robustness. MILS divides the software architecture into three layers: the separation kernel, middleware, and applications. Each layer enforces a separate portion of the security policy set. The separation kernel is the only layer that executes in privileged mode. Applications can enforce their own security policies, enabling application-specific policies instead of relying on broad security policies in a monolithic kernel. Each layer and application can be evaluated separately without affecting the evaluation of the other layers/applications, making the CDS system easier to implement, certify, maintain, and reconfigure [11].
A separation kernel [12] divides memory into partitions using a hardware-based memory management unit (MMU) and allows only carefully controlled communications between nonkernel partitions. Furthermore, OS services, such as networking stacks, file systems, and most device drivers, execute in a partition instead of in the kernel in privileged mode (Figure 2). Because the separation kernel relies on hardware features such as the MMU to enforce some of the separation requirements, it is imperative to have trust in the hardware platform.
[Figure 2 | Using a separation kernel, applications run in isolated partitions and access external data through a multiple single-level security (MSLS) file server or network stack.]
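Concretely, a separation kernel’s security policy is expressed as static, build-time configuration data: each partition’s private memory region and time budget, plus an explicit list of the only communication channels allowed. The C tables below sketch such a configuration; the structure and field names are hypothetical, invented for illustration, since each separation kernel has its own configuration language and tooling.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical build-time configuration tables for a separation kernel. */
typedef struct {
    const char *name;
    uintptr_t   mem_base;     /* start of the partition's private region */
    size_t      mem_size;     /* boundary the MMU is programmed to enforce */
    unsigned    budget_ms;    /* CPU time budget per major frame */
} partition_cfg;

typedef struct {
    const char *from;         /* one-way flows only; nothing else is allowed */
    const char *to;
    size_t      max_msg_bytes;
} channel_cfg;

static const partition_cfg partitions[] = {
    { "secret_app",       0x80000000u, 0x00400000u, 10 },
    { "unclass_app",      0x80400000u, 0x00400000u, 10 },
    { "msls_file_server", 0x80800000u, 0x00200000u,  5 },
};

static const channel_cfg channels[] = {
    /* Applications talk only to the MSLS file server, never directly
     * to a partition at another classification level. */
    { "secret_app",  "msls_file_server", 4096 },
    { "unclass_app", "msls_file_server", 4096 },
};
```

Because such tables are small and static, they can be reviewed and evaluated independently of the applications they describe, which is part of what makes the MILS approach practical to certify.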
NEAT security properties
The separation kernel enforces data isolation and controls communication between partitions. This enables untrusted applications and data objects at various levels of classification to reside on a single processor. The separation kernel also enables trusted applications to execute on the same processor as less-trusted applications, while ensuring that trusted applications will not be compromised or interfered with in any way by less-trusted applications. Security-policy enforcement by the separation kernel is nonbypassable, always invoked, and tamperproof because it is the only software that runs in privileged mode on the processor [13]. Additionally, the small size of the separation kernel makes it “evaluatable.” These four properties – nonbypassable, evaluatable, always invoked, and tamperproof – are referred to by the acronym “NEAT.”
The guarantee of NEAT properties in a MILS operating environment enables the design of a multi-level security (MLS) system as a set of independent system-high partitions with cross-domain solutions that enable secure communications, both among those partitions and with external systems. Leveraging the NEAT security-policy enforcement provided in a separation kernel evaluated to high robustness results in small and tightly focused cross-domain servers, downgraders, and guards. This step, in turn, makes high-assurance evaluations of those cross-domain solutions practical, achievable, and affordable [14].
Covert channels
One of the most challenging requirements for achieving high robustness is the mitigation of covert channels: unintended or unauthorized communications paths that can be used to transfer information in a manner that violates a security policy [15]. Covert channels can be categorized as storage-based, timing-based, or a hybrid of the two. Covert storage channels transfer information through the setting of bits by one application in a location that is readable by another application. Covert timing channels convey information by modulating some aspect of system behavior over time in a way that can be observed by another application.
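To make the timing variant concrete, the toy C program below leaks one byte between a “sender” and a “receiver” that share no memory: the sender encodes each bit as a long or short delay, and the receiver recovers the bit by timing the call. In a real system the shared, timeable resource might be a cache, a bus, or the scheduler; this sketch is purely illustrative, not an attack on any particular kernel.

```c
#include <stdio.h>
#include <time.h>

static const unsigned char SECRET = 0x5A;   /* what the sender leaks */

/* Busy-wait for roughly ms milliseconds. */
static void spin_ms(long ms)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000L +
             (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
}

/* Sender: encodes one secret bit per call as a long or short delay. */
static void sender(int bit_index)
{
    int bit = (SECRET >> bit_index) & 1;
    spin_ms(bit ? 20 : 5);        /* 1 => slow, 0 => fast */
}

int main(void)
{
    unsigned char recovered = 0;
    for (int i = 0; i < 8; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        sender(i);                 /* receiver observes only elapsed time */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        long ms = (t1.tv_sec - t0.tv_sec) * 1000L +
                  (t1.tv_nsec - t0.tv_nsec) / 1000000L;
        if (ms > 12)               /* threshold between the two delays */
            recovered |= (unsigned char)(1 << i);
    }
    printf("recovered byte: 0x%02X\n", recovered);  /* prints 0x5A */
    return 0;
}
```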
Many covert channels are extremely challenging to identify and mitigate. A high-robustness separation kernel must demonstrate that a systematic approach was taken to identify and mitigate covert channels across the range of possible communication mechanisms. Mitigation techniques include shutting down or preventing the covert channel, limiting the bandwidth of potential covert channels to the point where the residual risk is acceptable, and ensuring that only highly trusted applications have access to the covert channels.
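The bandwidth-limiting mitigation can be illustrated by extending the previous sketch: if every observable operation is padded out to a fixed deadline, the modulated delay carries no information. The helper below is a minimal sketch of that idea; production separation kernels instead rely on mechanisms such as strictly fixed time-partition schedules.

```c
#include <time.h>

/* Mitigation sketch: pad a variable-duration operation out to a fixed
 * deadline so its completion time no longer depends on the data it
 * handles, driving the timing channel's bandwidth toward zero. */
void run_fixed_time(void (*op)(void), long deadline_ms)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    op();                                   /* variable-duration work */
    do {                                    /* burn the remaining budget */
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000L +
             (now.tv_nsec - start.tv_nsec) / 1000000L < deadline_ms);
}
```

Per-call padding like this trades throughput for secrecy and can still leak through other shared resources, such as power draw or bus contention, which is why covert channel analysis must be systematic rather than piecemeal.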
Extended security functionality
Additional functionality that may be required by a secure system architecture, depending on the level of security robustness required, includes audit logging, integrity tests, and abstract machine tests (AMT). Audit logging records specific events during execution of the separation kernel to detect potentially malicious code behavior. Integrity tests ensure the integrity of the separation kernel’s executable images stored in both volatile and nonvolatile memory; they include continuous tests of the kernel’s active executable image in RAM as well as a set of power-up tests. AMTs are continuous tests that ensure hardware protection mechanisms are still being enforced – for example, tests that attempt memory violations and privileged-instruction execution to confirm that the hardware enforcing separation between virtual address spaces remains operational. Audit logging, integrity tests, and AMT are all required to meet high robustness [15].
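The flavor of an abstract machine test can be reproduced on a desktop POSIX system: deliberately attempt a forbidden access and confirm that the hardware faults. The sketch below maps a read-only page, tries to write to it, and treats the resulting SIGSEGV as a pass. A real AMT inside a separation kernel would exercise its own MMU configuration and privileged instructions directly rather than POSIX APIs.

```c
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>

/* AMT-flavored check: verify the MMU still blocks a forbidden write. */
static sigjmp_buf fault_env;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(fault_env, 1);      /* hardware blocked the access: pass */
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    /* Map one read-only page; any write to it must fault. */
    char *page = mmap(NULL, 4096, PROT_READ,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    if (sigsetjmp(fault_env, 1) == 0) {
        page[0] = 1;               /* attempt the violation */
        puts("AMT FAIL: write to read-only page succeeded");
        return 1;
    }
    puts("AMT pass: hardware protection enforced");
    return 0;
}
```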
Examples of trusted hardware and software solutions
Curtiss-Wright’s CHAMP-XD1S 3U VPX digital signal processing (DSP) module (Figure 3) features an Intel Xeon D processor, a Xilinx Zynq UltraScale+ MPSoC, and a flash-based Microsemi SmartFusion2 FPGA to provide a secure processor board designed for high-performance embedded computing (HPEC). The module combines FPGA and software security features with TrustedCOTS Enhanced Trusted Boot capabilities, including an FPGA-based root of security, to protect against malicious cyberattacks, probing, and reverse-engineering. The CHAMP-XD1S uses a TPM 2.0 security chip to support Intel TXT secure boot technology. The board also uses a PUF in the Zynq UltraScale+ MPSoC to generate the encryption key used to authenticate the boot code. That authentication can serve as the RoT to extend trust to other portions of the system. The SmartFusion2 FPGA provides health and management functions and can integrate additional security functions.
[Figure 3 | Curtiss-Wright’s CHAMP-XD1S 3U VPX digital signal processing module provides enhanced trusted computing features, including an FPGA and software security with TrustedCOTS Enhanced Trusted Boot capabilities.]
In the software realm, the INTEGRITY-178 tuMP real-time operating system (RTOS) from Green Hills Software provides a MILS operating environment based on a separation microkernel that is capable of hosting MLS applications, including cross-domain solutions. The RTOS provides the high level of data isolation, control of information flow, resource sanitization, and fault isolation required for a high-robustness separation kernel. Those foundational security policies are nonbypassable, evaluatable, always invoked, and tamperproof (again, the NEAT acronym), providing the high assurance level needed to enable the design of an MLS system as a set of independent, secure partitions with cross-domain solutions enabling secure communications among those partitions.
In 2008, the INTEGRITY-178 RTOS became the first and only operating system to be certified against the “U.S. Government Protection Profile for Separation Kernels in Environments Requiring High Robustness” (SKPP), which was issued by the Information Assurance Directorate of the U.S. National Security Agency (NSA). The certification against the SKPP was to both high robustness and EAL6+.
As part of certification to the SKPP, the RTOS underwent independent vulnerability analysis and penetration testing by NSA to demonstrate both that it is resistant to an attacker possessing a high attack potential and that it does not allow attackers with high attack potential to violate the security policies. Additionally, it underwent covert channel analysis by NSA to demonstrate that it satisfies all covert channel mitigation metrics.
Beyond its approval as a MILS separation kernel, INTEGRITY-178 provides a complete set of APIs that were also evaluated by the NSA for use by MLS applications within a secure partition – such as an MLS guard, which is a fundamental requirement in a cross-domain system. Those secure APIs include support for multithreading, concurrent execution on multiple cores, and flexible core assignments at the configuration-file level, all within the secure MILS environment.
References
1. Common Criteria for Information Technology Security Evaluation, Version 3.1 Revision 5, CCIMB-2017-04-[001, 002, 003], Apr 2017.
2. Consistency Instruction Manual For Development of US Government Protection Profiles For Use in Medium Robustness Environments, Release 3.0, National Security Agency, 1 Feb 2005.
3. Consistency Instruction Manual For Development of US Government Protection Profiles For Use in Basic Robustness Environments, Release 2.0, National Security Agency, 1 Mar 2005.
4. “Controlled Access Protection Profile, version 1.d,” NSA, 8 Oct 1999.
5. U.S. Government Protection Profile for Separation Kernels in Environments Requiring High Robustness, Version 1.03, National Security Agency, 29 July 2007.
6. Information Assurance Technical Framework, Chapter 4, Release 3.1, National Security Agency, Sep 2002.
7. “Zynq Ultrascale+ MPSoC Security Features,” Xilinx, 2020. https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841708/Zynq+Ultrascale+MPSoC+Security+Features
8. “Overview of Secure Boot With Microsemi SmartFusion2 FPGAs,” Microsemi, 2013. http://www.microsemi.com/document-portal/doc_download/132874-overview-of-secure-boot-with-microsemi-smartfusion2-fpgas
9. M. Ermolov, “Intel x86 Root of Trust: loss of trust,” Positive Technologies, 5 Mar 2020. http://blog.ptsecurity.com/2020/03/intelx86-root-of-trust-loss-of-trust.html
10. Edward Kovacs, “Vulnerability in Intel Chipsets Allows Hackers to Obtain Protected Data,” Security Week, 20 Mar 2020. https://www.securityweek.com/vulnerability-intel-chipsets-allows-hackers-obtain-protected-data
11. J. Alves-Foss et al., “Enabling the GIG,” PowerPoint presentation, Integrated Defense Architectures Conf., 11 May 2005.
12. J. Rushby, “Design and Verification of Secure Systems,” ACM Operating Systems Review, vol. 15, no. 5, Dec 1981.
13. M. Dransfield and W. M. Vanfleet, “High Robustness Reference Monitors,” version 3, IAEC 3285, NSA Infosec Design Course.
14. W. Mark Vanfleet, et al, “MILS: Architecture for High Assurance Embedded Computing,” CrossTalk, Aug 2005.
15. U.S. Government Protection Profile for Separation Kernels in Environments Requiring High Robustness, Version 1.03, National Security Agency, 29 July 2007.
Richard Jaenicke is director of marketing for safety and security-critical products at Green Hills Software. Prior to Green Hills, he served as director of strategic marketing and alliances at Mercury Systems, and held marketing and technology positions at XCube, EMC, and AMD. Rich earned an MS in computer systems engineering from Rensselaer Polytechnic Institute and a BA in computer science from Dartmouth College. Readers may email him at [email protected].
Green Hills Software https://www.ghs.com/
Steve Edwards is Director of Secure Embedded Solutions for Curtiss-Wright. He has been with Curtiss-Wright for 22 years in a number of roles. Steve codesigned Curtiss-Wright’s first rugged multiprocessor and FPGA products and was involved in the evangelization of the industry’s first VPX products. He was Chair of OpenVPX (VITA 65) from 2013 to 2015, and is current co-lead of the Sensor Open Systems Architecture (SOSA) Security Subcommittee. He has been involved in Curtiss-Wright’s anti-tamper (AT) and cybersecurity efforts for 11 years. Readers may reach him at [email protected].
Curtiss-Wright https://www.curtisswrightds.com/