Leader of the Machine Learning group: Peter Grünwald.

Our research group focuses on how computer programs can learn from and understand data, and then make useful predictions based on it. These algorithms integrate insights from various fields, including statistics, artificial intelligence and neuroscience.  

Machine-learning applications are increasingly part of every aspect of life, from speech recognition on cell phones to illness prediction in healthcare. One common problem is heavily contaminated or noisy data, for which no single model provides an adequate explanation. At CWI we address this issue with statistical machine learning that combines predictions from different models and experts in order to reach reliable conclusions.
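Combining predictions from several experts can be made concrete with the classic exponential-weights (Hedge) scheme, in which each expert's influence decays with its past losses. The sketch below is a generic illustration of this idea, not CWI's specific method; the function names, the learning rate `eta`, and the loss values are all illustrative assumptions.

```python
import math

def hedge_weights(cum_losses, eta=0.5):
    """Exponential weights computed from each expert's cumulative loss."""
    w = [math.exp(-eta * loss) for loss in cum_losses]
    total = sum(w)
    return [x / total for x in w]

def combine(predictions, weights):
    """Weighted average of the experts' predictions."""
    return sum(p * w for p, w in zip(predictions, weights))

# Three experts predict tomorrow's value; expert 2 has been most
# accurate so far, so its prediction dominates the combination.
cum_losses = [3.0, 1.0, 4.0]
weights = hedge_weights(cum_losses)
forecast = combine([0.2, 0.5, 0.9], weights)
```

The appeal of this scheme is that the combined forecast provably tracks the best expert in hindsight, even when no single expert is reliable throughout.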

We also study how networks of neurons in the brain process information, and how modern deep-learning methods can benefit from neuroscience. We develop novel neural networks, like Deep Adaptive Spiking Neural Networks, as well as theoretical models of neural learning and information processing in biology. Applications of our work range from energy-efficient neural machine learning to neuroprosthetics and increased insight into how the brain works.
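To give a flavour of what a spiking neuron is, the sketch below simulates the standard textbook leaky integrate-and-fire model: the membrane potential leaks toward rest, integrates input current, and emits a discrete spike when it crosses a threshold. This is a minimal illustration, not the group's Deep Adaptive Spiking Neural Network; all parameter values are illustrative assumptions.

```python
def simulate_lif(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron (forward-Euler discretization).

    inputs: input current at each time step.
    Returns the indices of the time steps at which the neuron spiked.
    """
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)  # leak toward 0, integrate input
        if v >= v_thresh:
            spikes.append(t)         # threshold crossing -> spike
            v = v_reset              # reset potential after spiking
    return spikes

# A constant input current drives the neuron to fire at a regular rate.
spike_times = simulate_lif([0.08] * 100)
```

Because information is carried by the timing of discrete spikes rather than continuous activations, networks of such neurons can in principle compute with far less energy, which is one motivation behind the work described above.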



CWI builds new supercomputer

CWI began construction of a new supercomputer cluster in early October 2003. The cluster, consisting of 48 dual- and quad-processor AMD Opteron systems, is the first quad-Opteron cluster in the Benelux. The new supercomputer, funded by the Netherlands Organization for Scientific Research (NWO), is expected to be operational within two months.


Sander Bohte receives NWO grant

The Netherlands Organization for Scientific Research (NWO) has awarded a VENI grant to CWI researcher Sander Bohte. Bohte will use the grant, approved in March 2003, to further his research on spiking neural networks. These networks incorporate the latest insights into the functioning of biological neurons and are, in theory, much more powerful than traditional artificial neural networks. Bohte's work aims at using spiking neurons in large-scale networks that can learn to deal with symbolic structures, such as grammar in language or compact descriptions of objects in vision.


Current events

ML Seminar: Thomas Moerland (Delft University)

Tuesday 4 September 2018, 11:00–12:00 (CEST)

CWI, Room L016

Everyone is welcome to attend the ML seminar 'Monte Carlo Tree Search for Asymmetric Trees', given by Thomas Moerland.


We present an extension of Monte Carlo Tree Search (MCTS) that strongly increases its efficiency for trees with asymmetry and/or loops. Asymmetric termination of search trees introduces a type of uncertainty for which the standard upper confidence bound (UCB) formula does not account. Our first algorithm (MCTS-T), which assumes a non-stochastic environment, backs up tree-structure uncertainty and leverages it for exploration in a modified UCB formula. Results show vastly improved efficiency in a well-known asymmetric domain in which MCTS performs arbitrarily badly. Next, we connect the ideas about asymmetric termination to the presence of loops in the tree, where the same state appears multiple times in a single trace. An extension of our algorithm (MCTS-T+), which in addition to non-stochasticity assumes full state observability, further increases search efficiency for domains with loops as well. Benchmark testing on a set of OpenAI Gym and Atari 2600 games indicates that our algorithms always perform better than or at least equivalently to standard MCTS, and could be first-choice tree search algorithms for non-stochastic, fully observable environments.
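For readers unfamiliar with the UCB formula the abstract refers to, the sketch below shows standard UCB1 child selection as used in MCTS. The tree-structure uncertainty term that MCTS-T adds is deliberately omitted here, since its exact form is given in the paper; the function names and the exploration constant are illustrative assumptions.

```python
import math

def ucb1(child_value, child_visits, parent_visits, c=1.414):
    """Standard UCB1 score for an MCTS child node.

    Balances exploitation (mean value so far) against exploration
    (a bonus that shrinks as the child is visited more often).
    MCTS-T additionally accounts for tree-structure uncertainty,
    which is not modeled in this sketch.
    """
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    exploit = child_value / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """children: list of (total_value, visit_count) pairs.
    Returns the index of the child with the highest UCB1 score."""
    scores = [ucb1(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))

# A rarely visited child can outrank a well-explored one thanks to
# its larger exploration bonus.
best = select_child([(9.0, 10), (1.5, 2)], parent_visits=12)
```

The talk's key observation is that this formula implicitly assumes all subtrees are equally deep; when some branches terminate early, the exploration bonus misallocates search effort, which is what MCTS-T corrects.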


ML Seminar: Jaron Sanders (Delft University)

Thursday 20 September 2018, 10:00–11:00 (CEST)

CWI, Lecture room L016

We are pleased to announce the CWI Machine Learning seminar by Jaron Sanders, titled 'Optimal Clustering Algorithms in Block Markov Chains'.

This paper considers cluster detection in Block Markov Chains (BMCs). These Markov chains are characterized by a block structure in their transition matrix. More precisely, the n possible states are divided into a finite number of K groups or clusters, such that states in the same cluster exhibit the same transition rates to other states. One observes a trajectory of the Markov chain, and the objective is to recover, from this observation only, the (initially unknown) clusters. In this paper we devise a clustering procedure that accurately, efficiently, and provably detects the clusters. We first derive a fundamental information-theoretical lower bound on the detection error rate satisfied under any clustering algorithm. This bound identifies the parameters of the BMC, and trajectory lengths, for which it is possible to accurately detect the clusters. We next develop two clustering algorithms that can together accurately recover the cluster structure from the shortest possible trajectories, whenever the parameters allow detection. These algorithms thus reach the fundamental detectability limit, and are optimal in that sense.
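To make the Block Markov Chain setup concrete, the toy sketch below builds a BMC in which all states in the same cluster share the same transition behaviour, and samples a single trajectory of the kind a clustering algorithm would observe. This is an illustrative construction only, not the paper's clustering algorithm; all names and parameters are assumptions.

```python
import random

def make_bmc(n, K, p_in=0.5, seed=0):
    """Toy Block Markov Chain: states in the same cluster share one
    transition rule. From any state, the chain moves to a uniformly
    random state of its own cluster with probability p_in, and to a
    uniformly random state of another cluster otherwise."""
    rng = random.Random(seed)
    cluster = [i % K for i in range(n)]  # assign n states to K clusters
    def step(state):
        c = cluster[state]
        same = [s for s in range(n) if cluster[s] == c]
        other = [s for s in range(n) if cluster[s] != c]
        pool = same if rng.random() < p_in else other
        return rng.choice(pool)
    return cluster, step

cluster, step = make_bmc(n=12, K=3)

# Sample one trajectory; the clustering problem is to recover
# `cluster` from this sequence of states alone.
state, trajectory = 0, []
for _ in range(1000):
    state = step(state)
    trajectory.append(state)
```

The block structure means empirical transition counts between states in the same cluster are statistically indistinguishable, which is exactly the redundancy the paper's algorithms exploit to recover the clusters from short trajectories.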




Associated Members