class: center, middle

### Secure Computer Architecture and Systems

***

# Systems Security: Core Concepts

---

# Computer Security

- Historically focused on the **physical machine**
  - To prevent theft of or damage to the hardware

--

- Today, the value of data is greater than the value of the hardware
- Thus, computer security today focuses on **information security**
  - Preventing theft of or damage to the information stored/processed/transmitted by the computer
  - Preventing disruption of service

---

# Systems Security

- The practice of safeguarding computer systems against **unauthorised access**, **modification**, or **disruption**

--

.leftlargecol[

Two key components:

1. **Software (SW) security**: protecting applications and systems software against vulnerabilities and their exploitation
2. **Hardware (HW) security**: protecting the CPU/memory/devices against attacks

**SW/HW security are heavily intertwined:** SW makes assumptions about HW, controls it, and cross SW/HW attacks exist

]
.rightsmallcol[
]

---

# Why is Systems Security Important?

.leftcol[

- 4th June 1996: first flight of the ESA Ariane 5 rocket
- Goes off track and disintegrates 40 seconds after lift-off
- Root problem: a 16-bit signed integer overflow (see the sketch below)
- The code was originally written for Ariane 4, making assumptions that no longer held for Ariane 5
- It wasn't even doing anything useful after lift-off
- Cost: ~$370M

]
.rightcol[
.small[Source: https://www.france24.com/en/live-news/20230705-final-ariane-5-blasts-off-amid-europe-rocket-crisis-1]

]

---

# Why is Systems Security Important?

- Our world is massively computerised, and the **impact of cyberattacks is huge**: service disruption, financial losses, etc.
- Computer systems are **increasing in complexity**, and so are the chances of vulnerabilities
- Threats and attack vectors are constantly evolving
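To make the Ariane root cause from the previous slide concrete, here is a minimal C sketch of a narrowing conversion overflowing a 16-bit signed integer (the real flight code was Ada, and the variable name and value here are made up for illustration):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical flight value: fits easily in a double, but far outside
       the range of a 16-bit signed integer (-32768..32767). */
    double horizontal_bias = 50000.0;

    /* Narrow in two steps so the C behaviour stays defined:
       double -> int32_t is exact here, int32_t -> int16_t cannot represent
       the value and (on typical platforms) wraps around. */
    int32_t wide   = (int32_t)horizontal_bias;
    int16_t narrow = (int16_t)wide;   /* 50000 becomes e.g. -15536 */

    /* On Ariane 5 the equivalent out-of-range conversion raised an unhandled
       exception that shut down the inertial reference computers. */
    printf("original: %.1f  after 16-bit conversion: %d\n",
           horizontal_bias, narrow);
    return 0;
}
```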
---
name: attack-surface

# Attack Surface

- Computer systems are not perfect, and designers may introduce **bugs**
- Some of these bugs are **security vulnerabilities** that can be exploited to mount attacks with various effects
- Vulnerabilities can be present at every level of the SW/HW stack

---
template: attack-surface

.leftcol[
]

---
template: attack-surface

.leftcol[
]
.rightcol[

E.g. Apache Struts [CVE-2017-5638](https://nvd.nist.gov/vuln/detail/cve-2017-5638) (Equifax breach, 2017):

- Vulnerability in the web application framework's request parser
- A malicious request allows an attacker to execute code remotely and entirely take over the server

]

---
template: attack-surface

.leftcol[
]
.rightcol[

Node.js's `event-stream` 2018 attack:

- An attacker takes over the [`event-stream`](https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident) library repository and releases a malicious version
- The library is used extensively; the malicious version was designed to steal from cryptocurrency wallet software

]

---
template: attack-surface

.leftcol[
]
.rightcol[

Linux's [CVE-2016-5195](https://nvd.nist.gov/vuln/detail/cve-2016-5195) (Dirty CoW):

- A normal user exploits a race condition to obtain write access to read-only memory mappings and escalate privileges to become administrator
- Used e.g. to root Android phones

]

---
template: attack-surface

.leftcol[
]
.rightcol[

Xen's [CVE-2014-7188](https://xenbits.xen.org/xsa/advisory-108.html):

- Out-of-bounds read in the interrupt controller emulation
- A VM can leak data from the hypervisor or other guests
- Patching required an emergency forced reboot of ~10% of AWS EC2

]

---
template: attack-surface

.leftcol[
]
.rightcol[

Spectre/Meltdown side channels:

- The speculative execution feature of modern processors can be tricked into leaking data from processes and the kernel
- Microcode/software countermeasures come with slowdowns

]

---

# Vulnerabilities

- Software and hardware used in production is increasingly **complex**
  - The Linux kernel v6.12 has 26M LoC
  - An Apple Silicon M3 Max SoC has 92B transistors

--

- There is no way to prove this software/hardware entirely correct

--

- In fact, it is likely not
- Software/hardware designers and engineers are human: they make mistakes and introduce **bugs**
- These bugs have various consequences:
  - Software/hardware instability, crashes
  - **Security vulnerabilities**
- Bugs are mostly silent under normal operation: hard to detect
- When triggered in a certain way, they allow an attacker to do something bad

---

# Attacker's Objectives

An attacker's objectives can be many things:

- **Read what they are not supposed to read**
  - Sensitive data such as passwords or crypto keys, information about the target system (e.g. open ports) to enable further attacks, etc.

--

- **Write what they are not supposed to write**
  - Corrupt sensitive data structures to escalate privileges, inject malicious code and data, forge access tokens, escape detection, etc.

--

- **Control what they are not supposed to control**
  - Disturb operation (denial of service), execute code to enable further attacks, etc.

---

# The CIA Triad

High-level security properties we want computer systems to maintain:

- **Confidentiality**: prevent unauthorised disclosure of sensitive information
  - E.g. using encryption, access control, secure deletion, etc.

--

- **Integrity**: prevent unauthorised tampering with sensitive information
  - E.g. through checksum verification (sketched below), digital signatures (keys), etc.

--

- **Availability**: prevent disturbances to the operation of a system
  - E.g. with denial-of-service protection, redundancy/replication, backups, etc.
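To ground the integrity bullet above, a minimal C sketch of checksum verification, assuming OpenSSL's `SHA256()` from libcrypto (link with `-lcrypto`). A bare hash only detects tampering if the reference digest itself is protected, e.g. stored separately or covered by a digital signature or keyed MAC:

```c
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

/* Recompute the digest of the data and compare it with the digest recorded
   when the data was known to be good: any modification changes the digest. */
static int integrity_ok(const unsigned char *data, size_t len,
                        const unsigned char expected[SHA256_DIGEST_LENGTH]) {
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(data, len, digest);
    return memcmp(digest, expected, SHA256_DIGEST_LENGTH) == 0;
}

int main(void) {
    const unsigned char msg[] = "transfer 10 pounds to account 42";
    unsigned char reference[SHA256_DIGEST_LENGTH];

    /* Reference digest computed when the data was created. */
    SHA256(msg, sizeof(msg) - 1, reference);

    /* Later, before trusting the data, verify it has not been tampered with. */
    printf("integrity check: %s\n",
           integrity_ok(msg, sizeof(msg) - 1, reference) ? "OK" : "TAMPERED");
    return 0;
}
```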
---

# The CIA Triad, Identity

High-level security properties we want computer systems to maintain:

- **Confidentiality**: prevent **unauthorised** disclosure of sensitive information
  - E.g. using encryption, access control, secure deletion, etc.
- **Integrity**: prevent **unauthorised** tampering with sensitive information
  - E.g. through checksum verification, digital signatures (keys), etc.
- **Availability**: prevent disturbances to the operation of a system
  - E.g. with denial-of-service protection, redundancy/replication, backups, etc.

Another important concept is **identity**: how can we make sure an actor is who they claim to be (e.g. with passwords or certificates)?

---
name: trust-model

# Trust Models & TCB

- **Trust model:** reasoning about which components of a computer system are trusted vs. which components are not trusted

Infrastructure as a Service (renting VMs in the cloud) scenario example:

---
template: trust-model
---
template: trust-model

---
template: trust-model

---
template: trust-model
.center[Trust models vary depending on which actor and scenario are considered]

---

# Trust Models & TCB

- **Trust model:** reasoning about which components of a computer system are trusted vs. which components are not trusted
- **Trusted Computing Base (TCB):** the set of software and hardware components that are critical to the security of the system
  - They are assumed to be working correctly to maintain the target security guarantees
- The TCB should be:
  - As **minimal** as possible, to make it easy to secure
  - **Isolated** from the non-critical components of the system, as these are not trusted
- With our IaaS example, from the cloud provider's POV the TCB includes the hardware and the host systems software (hypervisor, host kernel/firmware/boot process)

---
name: threat-model

# Threat Models

- **Threat model**: a series of assumptions about what the attacker can and cannot do

Examples with our IaaS scenario:

---
template: threat-model
---
template: threat-model

---
template: threat-model
---
name: boxes

# Isolation Approaches

Isolating components is required to enforce the target trust model (a code sketch follows below):

---
template: boxes
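As a concrete taste of such isolation, here is a minimal, Linux-only sketch (an illustrative choice, not a mechanism prescribed by these slides) using seccomp strict mode, which confines the calling process to a handful of system calls:

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void) {
    /* Enter seccomp "strict" mode: from now on the process may only use
       read(), write(), _exit() and sigreturn(); any other system call
       terminates it with SIGKILL. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }

    const char msg[] = "running inside a very small sandbox\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* e.g. open("/etc/passwd", O_RDONLY) here would kill the process. */
    _exit(0);
}
```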
---
template: boxes

---
template: boxes
---

## Sandboxing/Safeboxing/Mutual Distrust

- **Examples of sandboxing**:
  - Processes/virtual machines isolated from the rest of the system by an OS/hypervisor
  - Web pages running in different tabs and coming from different sites isolated in a browser

--

- **Examples of safeboxing**:
  - A crypto library isolated in a web browser
  - Isolating the code manipulating passwords in a password manager

--

- **Examples of mutual distrust**:
  - Trusted execution environments: enclaves/confidential virtual machines distrust the OS/hypervisor, which in turn distrusts them

---

# Principle of Least Privilege (PoLP)

- **Actors (processes, users, etc.) in a system should only be granted the minimum permissions required to perform their duty correctly**
  - Limits the damage that can be done to the system should this actor be subverted by an attacker
- Introduced in the seminal 1975 paper *The Protection of Information in Computer Systems* by Saltzer and Schroeder
- It is applied extensively and examples are plentiful:
  - Privilege levels of execution on the CPU
  - Access control: user-based file access permissions, app permissions on mobile systems
  - `sudo` used only for the operations requiring root privileges (see the sketch at the end of the deck)
  - etc.
- Hard to fully apply in practice: for complexity or performance reasons, components often end up **overprivileged**

---

# Wrapping Up

- Computer hardware and systems software are **critical** to the safety of computer systems
- **Trust and threat models**, trusted computing base
- **Trust relations**: sandboxing, safeboxing, mutual distrust
- Vulnerabilities, bugs and attacks
- Principle of least privilege
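Finally, picking up the `sudo` example from the least-privilege slide, a minimal C sketch of privilege dropping in a hypothetical daemon (the unprivileged UID/GID values are made up for illustration): the privileged work is done first, and root is then relinquished permanently.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* ... privileged start-up work here, e.g. binding a port below 1024 ... */

    /* Target unprivileged identity (hypothetical values for illustration). */
    uid_t unprivileged_uid = 1000;
    gid_t unprivileged_gid = 1000;

    /* Drop the group first: once the UID is no longer 0, changing the GID
       would no longer be permitted. */
    if (setgid(unprivileged_gid) != 0 || setuid(unprivileged_uid) != 0) {
        perror("failed to drop privileges");
        exit(EXIT_FAILURE);
    }

    /* Sanity check: regaining root must now fail. */
    if (setuid(0) == 0) {
        fprintf(stderr, "privilege drop did not stick\n");
        exit(EXIT_FAILURE);
    }

    /* ... the rest of the program runs with minimal privileges ... */
    return 0;
}
```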