More on Computer Security

More information on CWI's Computer Security research group.

Computer Security

The broad landscape of information security research studies how to securely store data (for example, in database systems and file systems), how to securely communicate data (using communication protocols with appropriate security measures in place), and how to securely execute sensitive application code or systems (be it a smart grid, an industrial control system, or an execution on a multicore processor).

Security guarantees are fundamentally bootstrapped from something secret (such as a secret key not known to an adversary). Cryptographic primitives and protocols, secure hardware design, communication firewalls, etc. are all used to bootstrap a secure application execution and engender a broader sense of trust. Security also relies on the assumption that software and hardware are correctly implemented.

Computer security research is about how data can be stored, computed on, and communicated in such a way that no sensitive information about the data leaks (we require confidentiality) and that the integrity of information extracted from the data by a computation can be trusted (we require authenticity and freshness).

In addition, the identity of those who outsource computation or initiate communication may need to remain private (anonymity). Security guarantees need to survive hostile adversarial environments where attackers may not only observe digital communication and (analogue) side channels but also tamper with, damage, or impersonate hardware, software, or digital data with the purpose of misdirecting or disrupting the computation or communication.
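The combination of authenticity and freshness described above can be illustrated with a small sketch, assuming a hypothetical pre-shared key between sender and receiver: a MAC over a monotonically increasing counter and the payload lets the receiver reject both forged and replayed messages.

```python
import hmac
import hashlib

SECRET_KEY = b"shared-secret-key"  # hypothetical pre-shared key, for illustration only

def tag(counter: int, payload: bytes) -> bytes:
    """MAC over a monotonically increasing counter and the payload.
    The counter is what provides freshness."""
    msg = counter.to_bytes(8, "big") + payload
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()

def verify(last_seen: int, counter: int, payload: bytes, mac: bytes) -> bool:
    """Accept only if the MAC checks out (authenticity) and the counter is
    strictly newer than the last accepted one (a replay is rejected)."""
    if counter <= last_seen:
        return False
    return hmac.compare_digest(tag(counter, payload), mac)
```

A receiver that stores the last accepted counter will accept a message once and reject the identical message when it is replayed later, even though its MAC is still valid.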

Attack Surface

Attackers can be classified by adversarial models that define the collection of capabilities available to them. These capabilities may be restricted: an adversary may only have remote access to a computing system rather than physical access, may have certain storage or computation limitations, may be restricted in which intermediate computations or which communication between system modules can be observed, etc.

The more powerful physical attacker (which goes beyond the remote adversary) also has direct access to, e.g., the address bus or power side channel. In general, we may already assume an adversarial model where the adversary has a footprint in the Operating System (OS) of a computational environment, because the large code base of an OS cannot be assumed safe, i.e., free of exploitable bugs.

We can consider the attack surface of a computing system/environment as a collection of attack points to which an adversary has access. To harden the security of a system:

  1. One can obfuscate and authenticate interactions using cryptographic primitives such as encryption, oblivious RAM, MACs, integrity checking, etc. This requires trust in cryptographic hardness assumptions. As a result, observed traffic through an attack point becomes unintelligible and impossible or hard to impersonate.
  2. One can use hardware isolation, which reduces observable interactions and eliminates attack points. Hardware isolation is implemented by and bootstrapped from a trusted hardware/software design, called a Trusted Computing Base (TCB). This requires trust in the assumption that the TCB is not vulnerable to attack (the "smaller" the TCB, the more trust we can generally place in it).
  3. One may be able to implement a moving target defense strategy in order to make it hard for the adversary to locate attack points in the first place. Here, the main idea is to move one's computation around like a needle in a haystack. This assumes some restrictions on how an adversary can search the "haystack" (for example, a limited monetary budget for buying access to a data center where the "needle" is moving around).
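The moving target idea in point 3 can be made concrete with a toy simulation, under the (illustrative) assumption that the defender re-randomizes the target's location among n hosts after every probe and the attacker's budget limits the number of probes. The attacker's success probability is then roughly 1 - (1 - 1/n)^budget.

```python
import random

def probe_success_prob(n_hosts: int, budget: int, trials: int = 10000, seed: int = 0) -> float:
    """Estimate the probability that an attacker with a limited probe budget
    finds a target that is re-randomized among n_hosts after every probe."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        for _ in range(budget):
            target = rng.randrange(n_hosts)       # defender moves the "needle"
            if rng.randrange(n_hosts) == target:  # attacker probes one host
                hits += 1
                break
    return hits / trials
```

For 10 hosts and a budget of 10 probes, the estimate comes out near 1 - 0.9^10 ≈ 0.65, illustrating how a larger "haystack" (more hosts) or a smaller budget pushes the attacker's success probability down.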

Secure Computing Environment

One general theme in our computer security research is how to design a secure computing environment bootstrapped from a minimal/small TCB. This means that if a user outsources a computation to such an environment, then the environment should provide security in that it does not create additional attack points that can be exploited by an adversary (within the set of capabilities as defined by the considered adversarial model).

In particular, a remote user of the secure computing environment is assured that the environment does not weaken the security posture of executed code. This does not imply that the environment improves the security posture of the executed code: the code itself, with its I/O interactions with the world outside the secure computing environment, is the responsibility of the code developer and may still be vulnerable to attacks (such as a buffer overflow exploit).

Our aim is to bring rigorous cryptographic thinking to security engineering. This includes mathematical modeling of adversarial capabilities, leading to definitional frameworks that allow mathematical proofs of security guarantees. In the typical defender-adversary setting, new security solutions are motivated by strong adversarial models and/or limited resources available for implementing defense strategies (due to practical requirements).

This may lead to new security primitives that can be used in wider contexts. Our research spans computational environments ranging from secure cloud computing (distributed computing) to embedded system security (in cyber-physical systems).

At CWI's 75th birthday event on 11 February 2021, Marten van Dijk gave a lecture on this topic: 'Secure Smart Low Lands' (video).


Portrait in I/O

In the April 2021 issue of I/O magazine, the following article was published: 'Creating secure computing environments - Portrait of the recently established Computer Security group at CWI'.

Research Pillars

  1. Formal methods and universally composable (UC) modeling [Frank de Boer]: We have broad expertise in formal modeling for highly pragmatic real-world problems. Our work yields technological foundations that underpin software engineering and service-oriented computing, with the aim of adding stability and reliability to those foundations and so to the third-party applications built on them. We have applied the UC security framework to OpenStack, a widely used large-scale Infrastructure-as-a-Service system. We envision using UC modeling to analyze the security of hardware systems.
  2. Cyber-physical system security [Chenglu Jin]: We study security issues in cyber-physical systems, such as industrial control systems and digital manufacturing. On the system level, we have proposed various cryptographic solutions, including message authentication, key exchange, and proofs of aliveness. We also lead research efforts in silicon Physical Unclonable Function (PUF) design and analysis. We have also published on hardware Trojans, secure supply chain management, side-channel attacks and countermeasures, etc. In the future, we envision working towards a secure "Smart Low Lands."
  3. Secure processor technology and key management [Marten van Dijk & Chenglu Jin]: We have past experience in secure processor designs such as Aegis and Ascend (for the latter, we introduced Path-ORAM), and we have worked with Intel SGX. With the trend of integrating FPGAs into cloud platforms, we have also studied the security of cloud FPGAs as a new paradigm for a secure computation environment. These are all examples of secure general-purpose computation environments.
  4. Machine learning (ML) and security [Marten van Dijk]: We started research in adversarial ML and differential privacy for federated learning. We have used ML for analyzing our Interpose PUF design. We have set up a framework for analyzing the effectiveness of intrusion detection and prevention systems. We envision continued research in secure (special purpose) computation environments for robust intelligence.


Are you interested? Please get in touch with us by email (see the contact details of Marten van Dijk), phone, or Zoom call. We look forward to working with you on secure industry solutions, academic collaborative projects, designing open-source teaching modules, etc.

The Computer Security (CSY) research group was established on 1 June 2020.