- Alessandro Chiuso
- Riccardo Sven Risuleo
The workshop will run on 11 July 2020 from 10:00 until 17:00 Berlin time (10am until 5pm CEST/UTC+2h). A recording of the workshop will also be available for streaming from 12 July until 31 August 2020 for registered participants.
The slides of the workshop are available here.
- Alessandro Chiuso, University of Padova, Italy
- Tianshi Chen, Chinese University of Hong Kong, China
- Håkan Hjalmarsson, KTH – Royal Institute of Technology, Stockholm, Sweden
- John Lataire, Vrije Universiteit Brussels, Belgium
- Gianluigi Pillonetto, University of Padova, Italy
- Riccardo Sven Risuleo, Uppsala University and KTH – Royal Institute of Technology, Stockholm, Sweden
We propose a one-day workshop tutorial on the latest development in kernel-based and nonparametric modeling of dynamical systems. The goal of the workshop is to provide a wide overview of recent trends and an outlook on future research opportunities at the intersection between system identification, statistical modeling, and machine learning. In particular, we focus on the use of nonparametric models and Gaussian processes to identify the behavior of dynamical systems from data.
10:00 Bayesian & kernel-based methods: an outlook, Alessandro Chiuso
In this talk we shall introduce Bayesian and kernel-based methods for system identification. The structure of priors/regularization kernels for dynamical systems will be discussed, also illustrating the historical developments. We shall also analyse the role that the hyperparameters of the most common kernel structures play in determining the hypothesis space, exploiting both time-domain and frequency-domain techniques.
10:45 On kernel structure and kernel design for regularized system identification, Tianshi Chen
Kernel design is a key issue for regularized system identification. It concerns the design of a kernel structure by parameterizing the kernel with hyperparameters, and its goal is to embed prior knowledge of the system to be identified into the kernel. In this talk, we aim to show that kernel design is not an isolated issue but is closely related to many other issues, e.g., hyperparameter estimation. Moreover, we suggest that when designing a kernel, besides embedding the prior knowledge, one should use the available design freedom so that the kernel structure eases the computation of both the hyperparameter estimates and the regularized impulse response estimate.
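As a minimal illustration of the kind of kernel structure discussed here (not the speaker's material), the sketch below builds the widely used TC ("tuned/correlated") kernel for an impulse response, where the hyperparameters `c` and `lam` encode the prior scale and an exponential decay rate reflecting stability:

```python
import numpy as np

def tc_kernel(n, c=1.0, lam=0.8):
    """TC kernel matrix for an n-tap impulse response.

    K[i, j] = c * lam**max(i, j); lam in (0, 1) encodes exponential
    decay of the impulse response (stability), c > 0 sets the scale.
    """
    idx = np.arange(n)
    return c * lam ** np.maximum.outer(idx, idx)

K = tc_kernel(5)
# The prior variance of the k-th impulse-response coefficient is
# K[k, k] = c * lam**k, so later taps are shrunk more strongly.
```

In practice the hyperparameters `c` and `lam` are estimated from data, e.g. by marginal-likelihood maximization, which is exactly the coupling between kernel design and hyperparameter estimation the abstract refers to.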
11:30 Estimation accuracy of kernel-based estimators, Håkan Hjalmarsson
We review the fundamental limitations of parameter estimation in terms of accuracy. Kernel-based estimators belong to the class of biased estimators, and in this presentation we discuss their statistical properties from both a Bayesian and a frequentist perspective. In particular, we discuss the frequentist properties both in terms of optimality and from a large-sample perspective, covering in particular the influence of hyperparameter estimation on the estimation accuracy. We also touch upon the relation between the posterior covariance obtained in the Bayesian approach and the mean-square error used by frequentists.
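The Bayesian/frequentist relation mentioned above can be sketched numerically (an illustrative example, not the speaker's material, with an identity prior for simplicity): for a fixed true parameter, the Monte Carlo mean-square error of the regularized estimate differs from the posterior covariance because of the bias, and the two only agree on average over the prior.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 3, 50, 0.5
Phi = rng.normal(size=(m, n))        # regression matrix
K = np.eye(n)                        # prior covariance (identity, for simplicity)
theta_true = rng.normal(size=n)      # one fixed "true" parameter

# Posterior covariance of the Bayesian (regularized) estimator
P = np.linalg.inv(Phi.T @ Phi / sigma**2 + np.linalg.inv(K))

# Monte Carlo frequentist MSE matrix of the estimator at theta_true
trials = 2000
err = np.zeros((n, n))
for _ in range(trials):
    y = Phi @ theta_true + sigma * rng.normal(size=m)
    theta_hat = P @ Phi.T @ y / sigma**2   # posterior mean / regularized LS
    d = theta_hat - theta_true
    err += np.outer(d, d)
mse = err / trials
# mse != P in general (bias term depends on theta_true); averaging the MSE
# over theta_true ~ N(0, K) would recover P.
```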
12:15 Lunch Break
13:00 Frequency domain, kernel based identification, John Lataire
Expressing dynamic system equations in the frequency domain has some nice advantages. First, it leads to identification tools which are seamlessly valid for both continuous- and discrete-time systems. Second, it allows the use of a nonparametric noise model for weighting purposes, and third, the frequency band of interest can be conveniently selected by the user. This contribution expresses kernel-based identification techniques in the frequency domain, featuring the above advantages. We will discuss the translation of kernels for impulse response estimation (time domain) into kernels for transfer function estimation (frequency domain), which will be interpreted intuitively.
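One way to see the time-to-frequency translation (an illustrative sketch under simplifying assumptions, not the speaker's construction): a Gaussian prior with kernel K on the impulse-response taps induces, by linearity of the discrete-time Fourier transform, a Gaussian prior on the transfer function over any frequency grid, with covariance F K Fᴴ:

```python
import numpy as np

# Time-domain kernel K on n impulse-response taps g_0, ..., g_{n-1}.
n = 50
idx = np.arange(n)
K = 0.9 ** np.maximum.outer(idx, idx)     # TC kernel with decay 0.9 (assumed)

# Transfer function on a grid: G(e^{jw_l}) = sum_k g_k e^{-j w_l k},
# i.e. G = F g with F[l, k] = exp(-1j * w_l * k).  A linear map of a
# Gaussian is Gaussian, so the frequency-domain prior covariance is:
w = np.linspace(0, np.pi, 64)
F = np.exp(-1j * np.outer(w, idx))
Kfreq = F @ K @ F.conj().T
# diag(Kfreq).real is the prior variance of G at each grid frequency.
```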
13:45 Nonparametric methods in non-structured nonlinear systems, Gianluigi Pillonetto
Classical system identification relies on parametric estimation paradigms coming from mathematical statistics. Within this paradigm, a key point is the selection of the most adequate model structure, which is typically performed via complexity measures such as Akaike's criterion. Starting from the linear scenario, then moving to the nonlinear one, this talk will describe how the model selection problem can be successfully addressed by a different approach. In particular, I will discuss the use of Bayesian kernel-based methods where the unknown system is seen as a Gaussian random field whose covariance (kernel) includes information on system stability and/or fading memory. Here, tuning of the model complexity acquires a whole new dimension: the choice of (continuous) regularization parameters is far richer than the choice of (discrete) model orders.
15:00 Nonparametric methods in structured system identification, Riccardo Sven Risuleo
We discuss the use of nonparametric models in structured system identification. We show how complex systems can be constructed from simple modules, for which we use nonparametric models. When the modules are interconnected, the inference becomes intractable, and we show ways to overcome this intractability using expectation-maximization and sampling-based methods.
15:45 Hands-on session: Gaussian-process identification using Python, Riccardo Sven Risuleo
In the session, we introduce the tools used to perform kernel-based identification of linear systems in Python from scratch. Using numpy, we show how to pre-process the available data, how to define the Gaussian-process model, and how to solve an estimation problem similar to what is presented in Fig. 1. Attendees are encouraged to bring their own laptops (with Python 3, numpy, and sklearn installed) to follow along.
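A minimal sketch of the kind of pipeline the session covers, assuming a simulated first-order system and a fixed TC kernel (the session's actual data, kernel, and hyperparameter-tuning steps may differ): build the regression matrix from input data, place a Gaussian-process prior on the impulse response, and compute the posterior mean as the regularized estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Simulate data from a stable linear system (illustrative choice) ---
N, n = 200, 40                        # data points, impulse-response taps
g_true = 0.8 ** np.arange(n)          # true impulse response
u = rng.normal(size=N)                # white-noise input
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(N)])   # Toeplitz regression matrix
sigma = 0.1
y = Phi @ g_true + sigma * rng.normal(size=N)

# --- Gaussian-process model: zero mean, TC kernel prior on g ---
c, lam = 2.0, 0.7                     # hyperparameters (fixed here; in practice
idx = np.arange(n)                    # tuned by marginal-likelihood maximization)
K = c * lam ** np.maximum.outer(idx, idx)

# --- Posterior mean = kernel-based regularized estimate ---
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma**2 * np.eye(N), y)
```

The posterior mean formula used here, g_hat = K Φᵀ(Φ K Φᵀ + σ²I)⁻¹y, is the standard Gaussian-process regression estimate; sklearn offers higher-level tooling, but the session abstract emphasizes building the estimator from scratch with numpy.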
(live discussions will take place between the talks)