The broad landscape of information security research studies how to securely store data (in, for example, database systems and file systems), how to securely communicate data (by using communication protocols with appropriate security measures in place), and how to securely execute sensitive application code or systems (for example, a smart grid, an industrial control system, or an execution on a multicore processor).
Security guarantees are fundamentally bootstrapped from something secret (such as a secret key not known to an adversary). Cryptographic primitives and protocols, secure hardware design, communication firewalls, and similar mechanisms are all used to bootstrap a secure application execution and engender a broader sense of trust. Security also relies on the assumption that software and hardware are correctly implemented.
Computer security research studies how data can be stored, computed on, and communicated in such a way that no sensitive information about the data leaks (we require confidentiality) and that the integrity of information extracted from the data in the form of a computation can be trusted (we require authenticity and freshness).
In addition, the identity of those who outsource computation or initiate communication may need to remain private (anonymity). Security guarantees need to survive hostile adversarial environments in which attackers may not only observe digital communication and (analogue) side channels but also tamper with, damage, or impersonate hardware, software, or digital data in order to misdirect or disrupt the computation or communication.
Attackers can be classified by adversarial models that define collections of available adversarial capabilities. These capabilities may be restricted: an adversary may only have remote access to a computing system rather than physical access, may face certain storage or computation limits, may be restricted in which intermediate computations or which information communicated between system modules can be observed, and so on.
A more powerful physical attacker (going beyond the remote adversary) also has direct access to, e.g., the address bus or the power side channel. In general, we may already assume an adversarial model in which the adversary has a footprint in the Operating System (OS) of a computational environment, because the large code base of an OS cannot be assumed to be free of exploitable bugs.
We can consider the attack surface of a computing system/environment as a collection of attack points to which an adversary has access. To harden the security of a system:
- One can obfuscate and authenticate interactions using cryptographic primitives such as encryption, oblivious RAM, MACs, and integrity checking. This requires trust in cryptographic hardness assumptions. As a result, traffic observed through an attack point becomes unintelligible and impossible, or at least hard, to impersonate.
- One can use hardware isolation, which reduces observable interactions and eliminates attack points. Hardware isolation is implemented by and bootstrapped from a trusted hardware/software design, called a Trusted Computing Base (TCB). This requires trust in the assumption that the TCB is not vulnerable to attack (the "smaller" the TCB, the more trust we generally feel).
- One may be able to implement a moving target defense strategy that makes it hard for the adversary to find attack points in the first place. Here, the main idea is to move one's computation around like a needle in a haystack. This assumes some restrictions on how an adversary can search the "haystack" (for example, a limited monetary budget for buying access to the data center in which the "needle" is moving around).
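The first hardening option above, obfuscating and authenticating traffic through an attack point, can be illustrated with the classic encrypt-then-MAC pattern. The sketch below uses only Python standard-library primitives; the keystream derived by hashing a counter is a toy construction for illustration, not a vetted cipher, and real systems should use an established authenticated-encryption scheme.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash key || nonce || counter (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    """Encrypt the plaintext, then authenticate nonce and ciphertext with a MAC."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def verify_then_decrypt(enc_key: bytes, mac_key: bytes, nonce: bytes, ct: bytes, tag: bytes) -> bytes:
    """Reject impersonated or tampered traffic before decrypting."""
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC verification failed")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

An adversary observing the attack point sees only the unintelligible ciphertext, and any modification of the traffic is caught by the MAC check before decryption.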
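The moving-target idea can also be made quantitative in a deliberately simplified model (our own toy assumption, not a result from the text): the "needle" re-randomizes its location uniformly over n nodes each round, and a budget-limited adversary can probe k distinct nodes per round. The detection probability after r rounds is then 1 - (1 - k/n)^r, which the Monte Carlo simulation below confirms.

```python
import random

def detection_probability(n_nodes: int, probes_per_round: int, rounds: int) -> float:
    """Closed-form chance the adversary finds the 'needle' within the given rounds,
    assuming the defender re-randomizes its location uniformly each round."""
    miss_per_round = 1 - probes_per_round / n_nodes
    return 1 - miss_per_round ** rounds

def simulate(n_nodes: int, probes_per_round: int, rounds: int,
             trials: int = 20000, seed: int = 1) -> float:
    """Monte Carlo estimate of the same detection probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        for _ in range(rounds):
            target = rng.randrange(n_nodes)                    # defender moves
            probes = rng.sample(range(n_nodes), probes_per_round)  # adversary's budget
            if target in probes:
                hits += 1
                break
    return hits / trials
```

Even this toy model shows the trade-off: a larger "haystack" (more nodes) or a smaller probing budget sharply lowers the adversary's chance of ever locating the computation.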
Secure Computing Environment
One general theme in our computer security research is how to design a secure computing environment bootstrapped from a minimal/small TCB. This means that if a user outsources a computation to such an environment, then the environment should provide security in that it does not create additional attack points that can be exploited by an adversary (within the set of capabilities as defined by the considered adversarial model).
In particular, a remote user of the secure computing environment is assured that the environment does not weaken the security posture of the executed code. This does not imply that the environment improves that posture: the code itself, with its I/O interactions with the world outside the secure computing environment, remains the responsibility of the code developer and may still be vulnerable to attacks (such as a buffer overflow exploit).
Our aim is to bring rigorous cryptographic thinking to security engineering. This includes mathematical modeling of adversarial capabilities, leading to definitional frameworks that allow mathematical proofs of security guarantees. In the typical defender-versus-adversary setting, new security solutions are motivated by strong adversarial models and/or limited resources available for implementing defense strategies (due to practical requirements).
This may lead to new security primitives that can be used in wider contexts. Our research spans computational environments ranging from secure cloud computing (distributed computing) to embedded system security (in cyber-physical systems).
At CWI's 75th birthday event on 11 February 2021, Marten van Dijk gave a lecture on this topic: 'Secure Smart Low Lands' (video).