Description
Leader of the Machine Learning group: Peter Grünwald.
Our research group focuses on how computer programs can learn from and understand data, and then make useful predictions based on it. These algorithms integrate insights from various fields, including statistics, artificial intelligence and neuroscience.
Machine-learning applications are increasingly part of every aspect of life, from speech recognition on cell phones to illness prediction in healthcare. A common problem is heavily polluted data, which no single model can explain adequately. At CWI we address this issue with statistical machine learning that combines predictions from different models and experts in order to reach reliable conclusions.
We also study how networks of neurons in the brain process information, and how modern deep-learning methods can benefit from neuroscience. We develop novel neural networks, such as Deep Adaptive Spiking Neural Networks, as well as theoretical models of neural learning and information processing in biology. Applications of our work range from low-energy neural machine learning to neuroprosthetics, and to increased insight into how the brain works.
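The basic building block of a spiking neural network can be illustrated with the textbook leaky integrate-and-fire (LIF) neuron: the membrane potential integrates input current, decays toward rest, and emits a spike when it crosses a threshold. The sketch below is a generic minimal example, not the group's Adaptive Spiking Neural Network model; the function name and all constants are illustrative.

```python
import numpy as np

def lif_spike_train(input_current, dt=1e-3, tau=20e-3,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; returns spike times (s)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of the membrane equation: tau * dv/dt = -(v - v_rest) + I
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:          # threshold crossing: emit a spike ...
            spikes.append(step * dt)
            v = v_reset            # ... and reset the membrane potential
    return spikes

# A constant supra-threshold input produces a regular spike train.
spikes = lif_spike_train(np.full(200, 1.5))
```

Because a spike is a binary event, downstream neurons only receive input at spike times, which is the source of the low-energy, sparse-computation properties mentioned above.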
News
Researchers develop neural model for working memory
Neuroscientists of Centrum Wiskunde & Informatica (CWI) and the Netherlands Institute for Neuroscience (NIN) have developed a biologically plausible neural network model that can learn to remember past events in order to use them in the future. The researchers developed their model by combining theoretical principles from machine learning with insights from neuroscience.
CWI starts research on spiking neural networks
Sander Bohte, researcher in the life sciences group of Centrum Wiskunde & Informatica (CWI) in Amsterdam, is starting a new project on spiking neural networks in collaboration with researchers from the University of Amsterdam (UvA).
Brain mechanisms better understood with new model
Building a neural network with the same properties and capacity as the human brain is the holy grail of neuroinformatics. Such a network would not only explain the inner workings of the brain, but would also pave the road for brain-controlled machines, such as computers operated by thought and robotic limbs for people with disabilities.
CWI simulates brain activity on video cards
Neuroinformaticists of Centrum Wiskunde & Informatica (CWI) in Amsterdam managed to simulate complex brain activity on simple video cards. The simulated brain contains 50,000 neurons communicating with 35 million signals per second. This is comparable to the brain capacity of insects such as ants or flies.
Current events
ML Seminar: Rémi Bardenet (CNRS & CRIStAL, Univ. Lille)
Date: 31 October 2019, 11:00–12:00
Everyone is welcome to attend the ML seminar of Rémi Bardenet with the title 'DPPs everywhere: repulsive point processes for Monte Carlo integration, signal processing and machine learning'.
Abstract: Determinantal point processes (DPPs) are specific repulsive point processes, which were introduced in the 1970s by Macchi to model fermion beams in quantum optics. More recently, they have been studied as models and sampling tools by statisticians and machine learners. Important statistical quantities associated to DPPs have geometric and algebraic interpretations, which makes them a fun object to study and a powerful algorithmic building block.
After a quick introduction to determinantal point processes, I will discuss some of our recent statistical applications of DPPs. First, we used DPPs to sample nodes in numerical integration, resulting in Monte Carlo integration that converges quickly with respect to the number of integrand evaluations. Second, we turned DPPs into low-error variable-selection procedures for linear regression. If time allows, I'll describe a third application, in which we used DPP machinery to characterize the distribution of the zeros of time-frequency transforms of white noise, a recent challenge in signal processing.
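For readers unfamiliar with DPPs: a finite DPP specified by a positive semi-definite L-ensemble kernel can be sampled exactly with the spectral algorithm of Hough et al. The sketch below is a generic illustration of that algorithm, not the specific machinery from the talk; the kernel choice and function names are our own.

```python
import numpy as np

def sample_dpp(L, rng):
    """Draw one exact sample from the finite DPP with L-ensemble kernel L,
    using the spectral (Hough et al.) algorithm."""
    eigvals, eigvecs = np.linalg.eigh(L)
    # Phase 1: keep each eigenvector independently with prob. lambda/(1+lambda).
    keep = rng.random(len(eigvals)) < eigvals / (1.0 + eigvals)
    V = eigvecs[:, keep]
    sample = []
    while V.shape[1] > 0:
        # Phase 2: pick an item with probability proportional to its squared row norm.
        p = np.sum(V ** 2, axis=1)
        p /= p.sum()
        i = int(rng.choice(len(p), p=p))
        sample.append(i)
        # Condition on item i: zero out row i, drop one column, re-orthonormalize.
        j = int(np.argmax(np.abs(V[i, :])))
        Vj = V[:, j].copy()
        V = np.delete(V, j, axis=1)
        V = V - np.outer(Vj, V[i, :]) / Vj[i]
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)
    return sorted(sample)

# Gaussian similarity kernel on 10 points in [0, 1]: nearby points repel,
# so the sampled subset tends to be spread out.
x = np.linspace(0.0, 1.0, 10)
L = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.05)
sample = sample_dpp(L, np.random.default_rng(0))
```

The repulsion is what makes DPP samples attractive as Monte Carlo nodes: points in one sample avoid each other, so they cover the domain more evenly than i.i.d. draws.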
Members
Associated Members
Publications

Borovykh, A.I, Oosterlee, C.W, & Bohte, S.M. (2019). Generalization in fully-connected neural networks for time series forecasting. Journal of Computational Science, 36, 1–15. doi:10.1016/j.jocs.2019.07.007

Yin, B, Balvert, M, van der Spek, R.A.A, Dutilh, B.E, Bohte, S.M, Veldink, J, & Schönhuth, A. (2019). Using the structure of genome data in the design of deep neural networks for predicting amyotrophic lateral sclerosis from genotype. In Bioinformatics (Vol. 35, pp. i538–i547). doi:10.1093/bioinformatics/btz369

van Doorn, J, Ly, A, Marsman, M, & Wagenmakers, E.J. (2019). Bayesian estimation of Kendall's τ using a latent normal approach. Statistics & Probability Letters, 145, 268–272. doi:10.1016/j.spl.2018.10.004

Zambrano, D, Nusselder, R.B.P, Scholte, H.S, & Bohte, S.M. (2019). Sparse computation in adaptive spiking neural networks. Frontiers in Neuroscience, 12. doi:10.3389/fnins.2018.00987

Kaufmann, E, Koolen-Wijkstra, W.M, & Garivier, A. (2018). Sequential test for the lowest mean: From Thompson to Murphy sampling. In Advances in Neural Information Processing Systems (pp. 6332–6342).

Karamanis, M, Zambrano, D, & Bohte, S.M. (2018). Continuous-time spike-based reinforcement learning for working memory tasks. In Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (pp. 250–262). doi:10.1007/978-3-030-01421-6_25

Dora, S, Pennartz, C, & Bohte, S.M. (2018). A deep predictive coding network for inferring hierarchical causes underlying sensory inputs. In Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (pp. 457–467). doi:10.1007/978-3-030-01424-7_45

Pozzi, I, Nusselder, R.B.P, Zambrano, D, Bohte, S.M, & Iliadis, L. (2018). Gating sensory noise in a spiking subtractive LSTM. In V Kůrková, Y Manolopoulos, B Hammer, & I Maglogiannis (Eds.), Proceedings of Artificial Neural Networks and Machine Learning – ICANN 2018 (pp. 284–293). Springer, Cham. doi:10.1007/978-3-030-01418-6_28

Grünwald, P.D, & de Heide, R. (2018). Invited discussion of the paper 'Using Stacking to Average Bayesian Predictive Distributions' by Yao, Vehtari, Simpson and Gelman. Bayesian Analysis, 13(3), 917–1003.

Yin, B, Balvert, M, Zambrano, D, Schönhuth, A, & Bohte, S.M. (2018). An image representation based convolutional network for DNA classification. In 6th International Conference on Learning Representations.
Software
Squint: Experimenting in Prediction with Expert Advice problems
Squint provides a codebase for numerical proof-of-concept experiments in Prediction with Expert Advice, a core problem in learning theory.
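Squint itself implements a refined second-order method, but the Prediction with Expert Advice setting it targets can be illustrated with the classical Hedge (exponential weights) forecaster: the learner plays a weighted mixture of experts and downweights each expert exponentially in its loss. The sketch below is a baseline illustration under that textbook algorithm, not Squint's actual method; the function name and learning rate are illustrative.

```python
import numpy as np

def hedge(expert_losses, eta=0.5):
    """Run the Hedge (exponential weights) forecaster on a loss matrix of
    shape (rounds, experts); returns cumulative learner loss and final weights."""
    n_rounds, n_experts = expert_losses.shape
    w = np.ones(n_experts) / n_experts      # start from the uniform mixture
    total = 0.0
    for t in range(n_rounds):
        total += w @ expert_losses[t]       # learner suffers the mixture loss
        w *= np.exp(-eta * expert_losses[t])  # downweight experts that lost
        w /= w.sum()
    return total, w

# Two experts: one always loses 0, one always loses 1. The learner's
# cumulative loss stays bounded while the bad expert accumulates loss 100.
losses = np.tile([0.0, 1.0], (100, 1))
total, w = hedge(losses)
```

The point of such experiments is the regret: the gap between the learner's cumulative loss and that of the best expert in hindsight, which here stays bounded as the number of rounds grows.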
Current projects with external funding

Efficient Deep Learning Platforms (eDLP)

Enabling Personalized Interventions (EPI)

Safe Bayesian Inference: A Theory of Misspecification based on Statistical Learning (SAFEBAYES)

Spiking Neural Networks research program
Related partners

Philips

KPMG

SURFsara B.V.

Technische Universiteit Eindhoven

Universiteit Twente

Universiteit van Amsterdam

Vrije Universiteit Amsterdam