2021: 21st Geilo Winter School

The 21st edition of the Geilo Winter School took place online, with lectures held from Monday January 25 to Friday January 29, 2021.

Photo: Shutterstock


The past decade has seen impressive developments in powerful machine learning algorithms. These methods have been most successful when applied to problems where other tools are not readily available, and today they guide many decision-making processes. Much recent research focuses on applying machine learning and statistical methods in areas where they are not yet part of the standard toolbox, in order to tackle parts of a problem that are hard to solve by traditional approaches. In such cases, a proper understanding of what the algorithms are actually telling us is essential. One example is numerical simulation, where machine learning can be used for parameter estimation, for accelerating linear and nonlinear solvers, or even as proxy models in which the governing equations merely serve as physical constraints. Conversely, numerical tools can also be used to enhance our understanding of applied machine learning algorithms.

In this year's winter school, we take a deep dive into explainable algorithms, and aim to understand how an algorithm can be efficient, robust and comprehensible at the same time. During the school, we will look at the explainability of machine learning tools like neural networks and decision trees, how to propagate uncertainty in machine learning, and the possibilities and consequences of introducing such tools in practical applications.


Vegard Antun and Matthew Colbrook: On the barriers of AI and the trade-off between stability and accuracy in deep learning

Deep learning (DL) has had unprecedented success and is now entering scientific computing with full force. This presents exciting new opportunities, and there is currently profound optimism about the impact of DL and AI. However, there is also growing evidence across numerous applications that modern DL has an Achilles' heel: instability. The current situation in AI is comparable to the situation in mathematics in the early 20th century and Hilbert's optimism (e.g. his 10th problem) regarding provability and algorithms. Hilbert's optimism was turned upside down by Gödel and Turing, who established limitations on what mathematics can prove and which problems computers can solve (without, however, limiting the impact of mathematics and computer science), leading to modern logic and computer science. Similarly, the present situation calls for a program on the foundations of AI and DL to discover the boundaries of what can be achieved (Smale's 18th problem). This series of talks will address the stability-accuracy trade-off, and the resulting barriers of AI, in deep learning applications to imaging. For example, we present basic standard problems in scientific computing where one can prove the existence of stable neural networks with great approximation qualities; however, such networks are not computed by current training approaches. In fact, no algorithm (even a randomised one) can compute such a network to even one digit of accuracy with probability greater than 1/2. We discuss suitable stability tests in imaging (e.g. adversarial examples and their construction), current methods aimed at fixing instabilities and improving robustness, impossibility results in computational optimisation and inverse problems, the popular method of algorithm unrolling, and a unified theory for compressed sensing and DL which leads to sufficient conditions for the existence of algorithms that compute stable and accurate (e.g. exponentially convergent in the number of layers) neural networks. The results and methods discussed point towards a potentially vast classification theory, which goes beyond imaging and describes conditions under which an algorithm can compute (stable) neural networks with a given accuracy.
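The instability phenomenon is easiest to see in the simplest possible setting. The sketch below is a purely illustrative toy (hand-picked weights, not a model from the lectures): for a linear classifier with score s(x) = w·x + b, perturbing the input by eps·sign(w) changes the score as fast as possible per unit of max-norm change, and a small perturbation flips the predicted class.

```python
# Toy adversarial perturbation for a linear classifier (illustrative only;
# the weights and input below are made up, not taken from the lectures).

def score(w, b, x):
    # Linear classification score s(x) = w.x + b; sign decides the class.
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def adversarial(w, x, eps):
    # Push each coordinate of x in the direction that increases the score.
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8]          # assumed classifier weights
b = -0.1
x = [0.2, 0.4, -0.3]          # input originally classified as negative

s0 = score(w, b, x)           # original score: -0.36 (class "negative")
x_adv = adversarial(w, x, 0.3)
s1 = score(w, b, x_adv)       # score after the perturbation: 0.12 (flipped)
print(s0, s1)
```

Each coordinate of the input moved by at most 0.3, yet the classification changed: a cartoon of the instabilities that the lectures study rigorously for deep networks.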

Anuj Karpatne: Science-guided Machine Learning

  • Part 1: Overview of Research Themes and Guiding Principles
  • Part 2: Case Studies, Recent Progress, and Future Prospects

This series of talks will introduce science-guided machine learning, an emerging research paradigm that aims to integrate knowledge of scientific processes into machine learning frameworks in a principled way, producing generalizable and physically consistent solutions even with limited training data. The talks will describe several ways in which scientific knowledge can be combined with machine learning methods, using case studies of ongoing research in various disciplines including hydrology, fluid dynamics, quantum science, and biology. These case studies will illustrate multiple research themes in science-guided machine learning, ranging from physics-guided design and learning of neural networks to the construction of hybrid-physics-data models. The talks will also discuss future prospects for this emerging field, which has the potential to impact several disciplines in science and engineering where a rich wealth of scientific knowledge is accompanied by at least some data.
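The core idea of a hybrid-physics-data loss can be sketched in a few lines. The toy below is an assumed setup (not code from the talks): we fit u(t) = c0 + c1·t + c2·t² using a single data point u(0) = 1 plus the physics constraint u'(t) + u(t) = 0 enforced at collocation points, minimizing the combined loss by plain gradient descent.

```python
# Deliberately simple science-guided loss (assumed toy setup, not from the
# lectures): data misfit at t = 0 plus the mean squared residual of the
# ODE u' + u = 0 at collocation points on [0, 1].

def loss(c):
    c0, c1, c2 = c
    data = (c0 - 1.0) ** 2                       # data misfit at t = 0
    ts = [i / 10 for i in range(11)]             # collocation points
    phys = sum(((c1 + 2 * c2 * t) + (c0 + c1 * t + c2 * t * t)) ** 2
               for t in ts) / len(ts)            # mean squared ODE residual
    return data + phys

def grad(c, h=1e-6):
    # Central finite-difference gradient (accurate here: loss is quadratic).
    g = []
    for i in range(3):
        cp, cm = list(c), list(c)
        cp[i] += h
        cm[i] -= h
        g.append((loss(cp) - loss(cm)) / (2 * h))
    return g

c = [0.0, 0.0, 0.0]
for _ in range(5000):                            # plain gradient descent
    c = [ci - 0.05 * gi for ci, gi in zip(c, grad(c))]

print(c)  # approaches a quadratic approximation of exp(-t) on [0, 1]
```

The single data point alone would leave c1 and c2 undetermined; the physics term fills in the missing information, which is exactly the limited-data setting the paradigm targets.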

Helge Langseth

  • Part 1: Introduction to variational inference and the ELBO. 
  • Part 2: Disentanglement in the variational autoencoder. 

Probabilistic AI aims to combine the power of deep learning with probabilistic modelling in order to retain the best of both worlds: expressive models that are uncertainty-aware and therefore robust to, for example, outliers and adversarial examples. Classical techniques for statistical inference are challenged in this setting, and we will therefore consider variational inference (VI) as our main tool for learning models from data. 
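The ELBO can be made concrete in a small conjugate model (an assumed example, not material from the lectures): z ~ N(0,1), x | z ~ N(z,1), with a Gaussian variational family q(z) = N(m, s²). Both the expected log-likelihood and the KL term have closed forms, and at the exact posterior q(z) = N(x/2, 1/2) the ELBO equals the log evidence, i.e. the KL gap vanishes.

```python
import math
import random

# ELBO for the toy conjugate model  z ~ N(0,1),  x | z ~ N(z,1),
# with variational family q(z) = N(m, s^2). Assumed illustration only.

def elbo(x, m, s):
    # E_q[log p(x|z)] in closed form (p(x|z) is Gaussian):
    exp_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - m) ** 2 + s ** 2)
    # KL(N(m, s^2) || N(0, 1)), also in closed form:
    kl = 0.5 * (m ** 2 + s ** 2 - 1.0 - math.log(s ** 2))
    return exp_loglik - kl

def elbo_mc(x, m, s, n=100_000):
    # Monte Carlo estimate of the same quantity using samples z ~ q:
    total = 0.0
    for _ in range(n):
        z = random.gauss(m, s)
        log_joint = (-0.5 * math.log(2 * math.pi) - 0.5 * z ** 2          # log p(z)
                     - 0.5 * math.log(2 * math.pi) - 0.5 * (x - z) ** 2)  # log p(x|z)
        log_q = -0.5 * math.log(2 * math.pi * s ** 2) - (z - m) ** 2 / (2 * s ** 2)
        total += log_joint - log_q
    return total / n

x = 1.5
# The exact posterior is N(x/2, 1/2); there the ELBO reaches log p(x):
best = elbo(x, x / 2, math.sqrt(0.5))
log_evidence = -0.5 * math.log(2 * math.pi * 2.0) - x ** 2 / 4.0
print(best, log_evidence, elbo_mc(x, x / 2, math.sqrt(0.5), 10_000))
```

For any other (m, s) the ELBO is strictly smaller, which is what makes it a usable training objective: maximizing it pushes q towards the true posterior while lower-bounding the evidence.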

With VI in our toolbox, we will move on to a specific example of a deep generative model, namely the variational autoencoder (VAE). VAEs are widely used in unsupervised learning, e.g., for representation learning. Disentanglement is a current hot topic for increasing the interpretability of the representations VAEs learn; we will discuss what it means and how it can be achieved in the context of VAEs. 

Maziar Raissi: Hidden Physics Models

A grand challenge with great opportunities is to develop a coherent framework for blending conservation laws, physical principles, and/or phenomenological behaviors expressed by differential equations with the vast data sets available in many fields of engineering, science, and technology. Sitting at the intersection of probabilistic machine learning, deep learning, and scientific computing, this work pursues the overall vision of harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines that can operate in complex domains without requiring large quantities of data. To materialize this vision, the work explores two complementary directions: (1) designing data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time-dependent and nonlinear differential equations, to extract patterns from high-dimensional data generated from experiments, and (2) designing novel numerical algorithms that can seamlessly blend equations and noisy multi-fidelity data, infer latent quantities of interest (e.g., the solution to a differential equation), and naturally quantify uncertainty in computations.

Inga Strümke: Shapley values - the long story

A basic introduction to the game-theoretic concept of Shapley values, with hands-on calculations (both pencil and code). We will cover the basic axioms, the characteristic function, and how to calculate Shapley values both for arbitrary data and for machine learning models. We will also discuss global and local Shapley values, including the popular SHAP library, and the practical as well as conceptual limitations of Shapley values in a machine learning setting. 
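For a small game, Shapley values can be computed exactly as each player's marginal contribution averaged over all orders in which the players join the coalition. The sketch below uses a made-up three-player characteristic function, chosen purely for illustration:

```python
from itertools import permutations

# Exact Shapley values for a small cooperative game: average each player's
# marginal contribution v(S ∪ {p}) − v(S) over all join orders.

def shapley(players, v):
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            values[p] += v(coalition | {p}) - v(coalition)
            coalition.add(p)
    return {p: total / len(orders) for p, total in values.items()}

def v(coalition):
    # A toy characteristic function: the worth of every coalition.
    table = {frozenset(): 0, frozenset({'a'}): 10, frozenset({'b'}): 20,
             frozenset({'c'}): 30, frozenset({'a', 'b'}): 40,
             frozenset({'a', 'c'}): 50, frozenset({'b', 'c'}): 60,
             frozenset({'a', 'b', 'c'}): 90}
    return table[frozenset(coalition)]

phi = shapley(['a', 'b', 'c'], v)
print(phi)  # by the efficiency axiom, the values sum to v({a,b,c}) = 90
```

Iterating over all n! permutations is only feasible for tiny games; for machine learning models with many features, libraries such as SHAP rely on sampling and model-specific approximations instead.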


Vegard Antun is a Postdoctoral Fellow in Applied Mathematics at the University of Oslo. His research is centred on deep learning based techniques for scientific computing, with a particular focus on inverse problems and imaging. A focal point of his research is the design and investigation of fundamental barriers for stable and accurate neural networks in the sciences. He holds a PhD in Mathematics from the University of Oslo. During his PhD, Vegard had two six-month research stays at the University of Cambridge, visiting the Cambridge Centre for Analysis at DAMTP.

Matthew Colbrook is a Junior Research Fellow at Trinity College, Cambridge. He holds a PhD from the Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge. His research is centred on the foundations of computation and numerical analysis in infinite-dimensional spectral problems, PDEs, and deep learning/neural networks for scientific computation. A focus of his research is developing algorithms for stable and accurate neural networks in inverse problems and image reconstruction, as well as a framework for determining the boundaries of what is and what is not computationally possible.

Anders Hansen leads the Applied Functional and Harmonic Analysis group within the Cambridge Centre for Analysis at DAMTP. He is a Reader (Associate Professor) in mathematics at DAMTP, Professor of Mathematics at the University of Oslo, a Royal Society University Research Fellow, and a Fellow of Peterhouse.

Anuj Karpatne is an Assistant Professor in the Department of Computer Science at Virginia Tech, where he develops data mining and machine learning methods to solve scientific and socially relevant problems. A key focus of Dr. Karpatne’s research is to advance the field of science-guided machine learning for applications in several domains ranging from climate science, hydrology, and ecology to cell cycle biology, mechano-biology, quantum science, and fluid dynamics.

Helge Langseth is a Professor in Machine Learning at the Department of Computer Science, Norwegian University of Science and Technology. He holds a PhD in statistics from the Department of Mathematical Sciences, Norwegian Institute of Technology, and his main research interest is probabilistic AI. He is also associated with the Norwegian Open AI Lab and NorwAI, a newly established research center for AI innovation. 

Maziar Raissi is currently an Assistant Professor of Applied Mathematics at the University of Colorado Boulder. He received his Ph.D. in Applied Mathematics & Statistics, and Scientific Computations from the University of Maryland, College Park. He then moved to Brown University to carry out his postdoctoral research in the Division of Applied Mathematics, and subsequently worked at NVIDIA in Silicon Valley for a little more than one year as a Senior Software Engineer before moving to Boulder. His expertise lies at the intersection of probabilistic machine learning, deep learning, and data-driven scientific computing. In particular, he has been actively involved in the design of learning machines that leverage the underlying physical laws and/or governing equations to extract patterns from high-dimensional data generated from experiments.

Inga Strümke researched the use of machine learning in particle physics during her PhD, and has been active in the field ever since. Inga has previously worked at PwC, focusing on algorithm auditing, and is currently a postdoctoral researcher on explainable AI at Simula Research Laboratory. Her XAI research has primarily concerned Shapley value confidence interval estimates and Shapley values for uncovering non-linear dependence. Outside of XAI, Inga has researched different statistical methods, including deep learning, for beyond Standard Model physics searches and parameter estimation.


Slides and materials

Lecture materials

Important information

See the About page for general information about the winter school.

Applying for the winter school

The school has ended.

Cost of participating

There is no registration fee for the winter school.

Available spots

Even though the school is online, we will not be able to accommodate an unlimited number of people. Registrations will be honored on a first-come, first-served basis. In previous years we have exceeded our capacity, so please register as early as possible!


Poster session

We will make space in the program for a virtual poster session in which participants can present their work to colleagues and others. The aim of the session is to make new contacts and share your research. You need to indicate in your registration if you want to present a poster during the poster session. 

Organizing Committee

The organizing committee for the Geilo Winter School consists of

  • Torkel Andreas Haufmann, Research Scientist (Department of Mathematics and Cybernetics, SINTEF). 
  • Øystein Klemetsdal, Research Scientist (Department of Mathematics and Cybernetics, SINTEF).
  • Signe Riemer-Sørensen, Research Scientist (Department of Mathematics and Cybernetics, SINTEF).

To get in touch with the committee, send an email.