Machine Learning meets Model-based Control

Organisers

  • Ali Mesbah
  • Boris Houska

The workshop will run on 11 July 2020 from 10:00 until 17:00 Berlin time (10am until 5pm CEST/UTC+2). The presentations will also be available for streaming from 10 July until 31 August 2020 for registered participants.

Speakers

  • Alberto Bemporad, IMT School of Advanced Studies Lucca, Italy
  • Francesco Borrelli, University of California Berkeley, USA
  • Jay H. Lee, Korea Advanced Institute of Science and Technology, South Korea
  • Eric Kerrigan, Imperial College London, UK
  • Matthias Müller, Leibniz University Hannover, Germany
  • Sergio Lucia, TU Berlin, Germany
  • Michal Kvasnica, Slovak University of Technology in Bratislava, Slovakia
  • Ugo Rosolia, Caltech, USA
  • Melanie Zeilinger, ETH Zürich, Switzerland

Summary

Model-based control methods such as model predictive control have found increasing utility in emerging complex engineering applications, including unmanned vehicles, robotics (e.g., quadrotors and humanoid robots), energy systems and biomedical systems. This is due to the versatility of model-based control methods and their ability to provide robustness, safety guarantees and economics-oriented control. Yet, many model-based control applications face challenges related to the difficulty of modeling complex systems, or to the need for control strategies with provably safe and robust performance and low online computational and memory requirements.
Recent years have witnessed enormous interest in the use of machine learning techniques in a variety of fields, including control systems. This interest is partly driven by the demonstrated success of machine learning in computer science, but also by the increasing availability of data as well as new computation, sensing and communication capabilities. The integration of machine learning with model-based control, for example in the form of learning a system's model, the cost function or even the control law directly, raises fundamental challenges related to controller properties such as stability, convergence, constraint satisfaction and performance under uncertainty.

The objective of this pre-conference workshop is to serve as a forum for discussing the latest research advances at the interface between machine learning and model-based control, in order to establish important synergies and contribute to solving the challenges that arise. Specifically, the workshop will focus on the development of theoretical guarantees, methods and software, as well as on challenging applications in the field of machine learning and model-based control. The workshop will cover the following topics:
  • Approximation of complex model-based control laws using machine learning
  • Machine learning for adaptive and learning-based control
  • Model-based control using machine learning models
  • Reinforcement learning

This one-day workshop will consist of three plenary talks of 45 min and five keynote talks of 30 min. The organizers will conclude the workshop by highlighting the main takeaways of the different talks. All workshop notes and slides will be made available on the workshop’s website. A live session for discussions with the speakers will be held at the end of the workshop, around 15:00 / 3pm Berlin time (CEST/UTC+2).

Programme

Machine Learning: A New ICE (Identification, Control, Estimation) Age?, Alberto Bemporad

Control theory has always evolved by taking full advantage of developments in other disciplines: frequency-domain methods have leveraged complex analysis, state-space approaches linear algebra, optimization-based analysis and synthesis linear matrix inequalities, and model predictive control (MPC) numerical optimization (quadratic/nonlinear/mixed-integer programming). In recent years, a variety of approaches and advanced software tools developed by the machine learning community have proved amazingly successful in many application domains. It is therefore likely that they will also have a strong impact on the field of systems and control, enabling the development of a whole new set of tools for identification, control, and estimation (ICE) of dynamical systems. In my talk I will provide evidence of how machine learning tools can be used to develop new control design methods by reviewing some results recently obtained by my research group, including the use of artificial neural networks for MPC based on quadratic or mixed-integer programming, stochastic gradient descent methods for optimal policy search, reinforcement learning for MPC, Bayesian optimization for MPC auto-tuning, and preference-learning methods for semi-automatic calibration of MPC systems.
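
As a small, generic illustration of one of these ingredients, the sketch below uses Bayesian optimization to auto-tune a feedback law by minimizing a simulated closed-loop cost. The double-integrator model, the cost and the use of scikit-optimize's gp_minimize are assumptions of this example only; an MPC auto-tuning setup would expose MPC weights and horizons as the decision variables, and the talk's own methods and tools may differ.

    # Hedged sketch: Bayesian optimization for controller auto-tuning on a toy
    # double integrator. In an MPC setting the decision variables would be MPC
    # weights/horizons rather than the entries of a static feedback gain.
    import numpy as np
    from skopt import gp_minimize   # assumed available; any BO library would do

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])

    def closed_loop_cost(params):
        K = np.array([params])              # candidate feedback gain
        x = np.array([[1.0], [0.0]])
        cost = 0.0
        for _ in range(100):                # simulate the closed loop
            u = -K @ x
            cost += (x.T @ x + 0.1 * u.T @ u).item()
            x = A @ x + B @ u
        return cost

    result = gp_minimize(closed_loop_cost,
                         dimensions=[(0.0, 5.0), (0.0, 5.0)],  # gain bounds
                         n_calls=30, random_state=0)
    print("tuned gains:", result.x, "closed-loop cost:", result.fun)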


Learning Predictive Control and Dynamic Programming, Francesco Borrelli and Ugo Rosolia

Forecasts play a major role in autonomous and semi-autonomous systems. Applications include transportation, energy, manufacturing and healthcare systems. Predictions of system dynamics, human behavior and environmental conditions can improve the safety and performance of the resulting closed-loop system. However, constraint satisfaction, performance guarantees and real-time computation are challenged by the growing complexity of the engineered system, the human/machine interaction and the uncertainty of the environment in which the system operates. Our research over the past years has focused on predictive control design for systems performing iterative tasks with safety guarantees. In this talk I will first provide an overview of the theory we have developed for the systematic design of learning predictive controllers. Then, I will focus on comparing the proposed approach with classical approximate dynamic programming approaches. Throughout the talk I will use autonomous cars to motivate our research and show the benefits of the proposed techniques.
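
As a rough flavour of the "learning from previous iterations" idea (a simplified, generic sketch with made-up cost and tolerance, not the specific formulation developed by the speakers), the snippet below stores the states visited in past iterations together with their realized cost-to-go; a learning predictive controller can then reuse the stored states as a data-based terminal set and the recorded cost-to-go as a terminal cost.

    # Hedged sketch: data stored across iterative tasks for a learning
    # predictive controller. States from completed iterations form a sampled
    # safe set; their realized cost-to-go gives a data-based terminal cost.
    import numpy as np

    def stage_cost(x, u):
        return float(np.dot(x, x) + 0.1 * np.dot(u, u))

    safe_set = []  # list of (state, cost-to-go) pairs from completed iterations

    def store_iteration(states, inputs):
        """Append one completed trajectory with its realized cost-to-go."""
        costs = [stage_cost(x, u) for x, u in zip(states, inputs)]
        cost_to_go = np.flip(np.cumsum(np.flip(costs)))      # tail sums
        for x, q in zip(states, cost_to_go):
            safe_set.append((np.asarray(x, dtype=float), float(q)))

    def terminal_cost(x_terminal, tol=0.1):
        """Cheapest recorded cost-to-go among stored states near x_terminal."""
        nearby = [q for xs, q in safe_set
                  if np.linalg.norm(xs - x_terminal) <= tol]
        return min(nearby) if nearby else np.inf

After each completed lap or batch, store_iteration would be called with the recorded trajectory; the next iteration's MPC problem would then constrain its terminal state to lie near the stored states and add terminal_cost to its objective.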


Reinforcement Learning – Model-Based or Model-Free?, Jay H. Lee

This talk discusses and compares model-based vs. model-free reinforcement learning (RL), especially for industrial decision-making problems. It is argued that, given the limited opportunity for data gathering and active exploration in industrial production processes, it is often critical to have a model of some form, which can provide a basis for efficient and parsimonious learning. Although purists will argue that RL is meant to be model-free, the data requirements of model-free RL are too demanding for most industrial process control applications. A model is needed not only for online optimization, but also for offline learning of a good initial form of the value function and the policy, which can then be further refined online. We introduce some model-based algorithms for batch processes, which can be used for both optimization and control.
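
As a minimal, purely illustrative sketch of this model-based flavour of learning (toy dynamics, cost and regressor chosen only for this example, not the algorithms of the talk), a simulation model can be used offline to generate transitions and fit an approximate value function, which would then serve as the initial value function to be refined online.

    # Hedged sketch: offline, model-based fitted value iteration on a toy scalar
    # system. A simulation model generates transitions; a regressor approximates
    # the value function, giving an initial estimate for later online refinement.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    actions = np.array([-1.0, 0.0, 1.0])
    gamma = 0.95

    def model(x, u):                              # assumed simulation model
        return 0.9 * x + 0.2 * u + 0.01 * rng.standard_normal(np.shape(x))

    def cost(x, u):
        return x**2 + 0.1 * u**2

    states = rng.uniform(-3.0, 3.0, size=2000)
    V = RandomForestRegressor(n_estimators=50, random_state=0)
    V.fit(states.reshape(-1, 1), np.zeros_like(states))       # V_0 = 0

    for _ in range(20):                           # fitted value iteration sweeps
        q = np.column_stack([
            cost(states, u) + gamma * V.predict(model(states, u).reshape(-1, 1))
            for u in actions
        ])
        V.fit(states.reshape(-1, 1), q.min(axis=1))           # Bellman backup

    print("offline value estimate V(1.0) ≈", V.predict([[1.0]])[0])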


What the machine should learn about models for control, Eric Kerrigan

We revisit, with 20/20 hindsight, three fundamental concepts from the control theory literature, viewed through a machine learning lens from 2020. Within the control community, these concepts are known to various degrees, depending on which group you talk to. Outside of control, these concepts are either absent or only partially formed. The first result to call to mind is the internal model principle of control, which states that a controller can only reject a disturbance if the feedback path contains an internal model of the dynamic structure of the disturbance. The second is the concept of the gap metric, which measures how close two systems are from a feedback, rather than open-loop, perspective. The third, and least widely known, is an extension of algorithmic information theory that allows one to measure the informativeness of a model, particularly if the system is "badly-defined" [Maciejowski, Automatica, 1979]. These three ideas should be reintroduced, to the control community and beyond, if we are aiming to understand fundamental trade-offs between complexity, robustness and performance when designing learning-based controllers.
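
For readers outside control, one common formulation of the second concept is Vinnicombe's ν-gap variant, quoted here purely for orientation (the talk need not use this exact definition):

    \[
      \delta_\nu(P_1, P_2) \;=\;
      \bigl\| (I + P_2 P_2^{*})^{-1/2}\,(P_2 - P_1)\,(I + P_1^{*} P_1)^{-1/2} \bigr\|_\infty ,
    \]
provided a winding-number condition on det(I + P_2^* P_1) holds (with δ_ν set to 1 otherwise). Plants that are close in this metric can be stabilized, and behave similarly, under the same feedback controller, even when their open-loop responses differ greatly.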


Data- and learning-based model predictive control, Matthias Müller

Model predictive control (MPC) is one of the most successful modern control technologies. Existing MPC schemes typically require the availability of a (sufficiently good) model in order to achieve the desired closed-loop stability and performance guarantees. On the other hand, in situations where the (a priori) identification of a suitable model is difficult or impossible, control schemes that (i) use machine learning techniques for model and/or controller learning or (ii) are purely data-based are of great relevance and interest. In this talk, we present some recent results on data- and learning-based MPC. In particular, we first discuss MPC schemes that use different techniques for model learning, such as Gaussian processes or kinky inference, and for which desired closed-loop properties can be derived. Second, we will present a purely data-based MPC framework in which the control action is computed solely from previously measured input/output data.
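
As background on the last point, a standard result frequently invoked in purely data-based control is Willems' fundamental lemma (stated here for orientation; the talk's framework may use or extend it differently): for a controllable LTI system and recorded data (u^d, y^d) whose input is persistently exciting of sufficiently high order, every length-L input/output trajectory (u, y) of the system satisfies

    \[
      \begin{bmatrix} H_L(u^{d}) \\ H_L(y^{d}) \end{bmatrix} g
      \;=\;
      \begin{bmatrix} u \\ y \end{bmatrix}
      \qquad \text{for some vector } g,
    \]
where H_L(·) denotes the depth-L Hankel matrix built from the recorded data. A data-based MPC scheme can therefore optimize directly over g, without first identifying a parametric model.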


Efficient design and probabilistic validation of approximate robust MPC controllers based on deep learning, Sergio Lucia

Solving model predictive control problems in real time remains an important challenge, despite recent advances in computing hardware, optimization algorithms and tailored implementations. The challenge is even greater when uncertainty is present due to disturbances, unknown parameters, or measurement and estimation errors. To enable the application of advanced control schemes to fast nonlinear systems and on low-cost embedded hardware, we propose to approximate a robust nonlinear model predictive controller using deep learning and to verify its quality using a posteriori probabilistic validation techniques. We show how a deep neural network of a given size can exactly represent a linear MPC feedback law. To obtain guarantees in the more general case, where only an approximation of the exact feedback law is achieved, we propose a probabilistic validation technique together with the use of constraint tightening. The potential of the proposed approach is illustrated with simulation results for several uncertain nonlinear systems.
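
A stripped-down illustration of the two ingredients, approximation and sampling-based validation, is sketched below. A saturated linear law stands in for the exact controller, and the network size, tolerance and validation bound are assumptions of this example; the networks, solvers and guarantees of the talk may differ.

    # Hedged sketch: approximate a controller with a small ReLU network, then
    # validate it probabilistically by sampling. A saturated linear law stands
    # in for the exact feedback; real robust MPC samples would come from an
    # offline solver.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def exact_law(x):                    # stand-in for the exact feedback law
        return np.clip(-1.5 * x[:, 0] - 0.8 * x[:, 1], -1.0, 1.0)

    # 1) Learn the approximation from sampled (state, input) pairs.
    X_train = rng.uniform(-2, 2, size=(5000, 2))
    net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="relu",
                       max_iter=2000, random_state=0)
    net.fit(X_train, exact_law(X_train))

    # 2) Probabilistic validation: draw fresh samples and count violations of a
    #    tolerance on the approximation error (a proxy for constraint violation).
    eps, delta, tol = 0.01, 1e-4, 0.05       # risk level, confidence, tolerance
    N = int(np.ceil(np.log(1 / delta) / np.log(1 / (1 - eps))))  # ≈ 917 samples
    X_val = rng.uniform(-2, 2, size=(N, 2))
    violations = np.sum(np.abs(net.predict(X_val) - exact_law(X_val)) > tol)
    print(f"{violations} violations in {N} samples")
    # If no violations occur, the violation probability is below eps with
    # confidence at least 1 - delta (standard binomial/scenario-type argument).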


Learning more from less data: when quality trumps quantity, Michal Kvasnica

In this talk we show that, when learning model-based feedback laws from data, it is the quality of the training data that matters, not its quantity. Specifically, we cover two versions of machine learning applied to learning feedback controllers. The first is based on applying classification trees and hidden Markov models to learn the feedback law directly from data obtained by simulating a given model-based controller in a particular fashion. The second approach is based on constructing a machine learning procedure that produces good guesses of the initial active set for the subsequent active-set-based numerical optimization. In both cases we illustrate that a good notion of "fruitfulness" of the training data is given not in terms of quantity, but in terms of how many distinct active sets the data covers. Numerous motivating examples will be presented to confirm this claim.
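
To make the second idea concrete, the toy sketch below (with made-up labels derived from a saturated linear law; the constructions in the talk are more elaborate) trains a classifier that maps a state to the active set the optimizer is expected to end up with, so that an active-set QP solver can be warm-started from that guess.

    # Hedged toy illustration: learn a map from states to the optimizer's active
    # set. Labels here come from a saturated linear law (0: unconstrained,
    # 1: lower input bound active, 2: upper input bound active); in practice
    # they would be recorded from the MPC solver on training scenarios.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(4000, 2))
    u_unc = -1.2 * X[:, 0] - 0.6 * X[:, 1]            # unconstrained optimum
    labels = np.where(u_unc < -1.0, 1, np.where(u_unc > 1.0, 2, 0))

    clf = DecisionTreeClassifier(max_depth=6, random_state=0)
    clf.fit(X, labels)

    x_new = np.array([[2.0, 1.0]])
    print("predicted active set:", clf.predict(x_new)[0])
    # The predicted active set would be passed to an active-set QP solver as
    # its initial working set, typically cutting the number of online iterations.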
 

Towards Safe Learning in Complex Control Systems, Melanie Zeilinger

Most autonomous systems have traditionally been designed to act in isolated and well-defined environments. New applications are moving these systems, in particular robots, into our daily environments, where they have to cope with complex tasks, uncertain environments and human interaction. These challenges, together with the increasing availability of sensing and computation, are driving a growing interest in learning and data-driven control techniques. Their success in practical applications and their wide-scale adoption are, however, limited by safety concerns when integrating learning into a closed-loop, automated decision-making process. In this talk, I will present some of our recent results on integrating safety constraints with learning-based control. By combining concepts from model-based constrained control with data-driven techniques, we provide methods that can automatically adapt to achieve maximum performance and learn to perform complex tasks, while acting cautiously with respect to safety constraints. The ideas will be highlighted with examples from the control of robotic systems.
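
One generic way such a combination is often realized, shown here only as a minimal sketch with a toy linear model and the cvxpy modelling package (and not necessarily the methods of the talk), is a safety-filter-style problem that applies the input closest to a learned suggestion while keeping a short model-based prediction inside state and input constraints.

    # Hedged sketch: a safety-filter-style QP around a learned policy. The
    # filter applies the input closest to the learned suggestion that keeps a
    # short model-based prediction inside state and input constraints.
    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 1.0]])    # toy linear prediction model
    B = np.array([[0.005], [0.1]])
    N = 10                                    # prediction horizon

    def safe_input(x0, u_learned):
        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        cons = [x[:, 0] == x0]
        for k in range(N):
            cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                     cp.abs(u[:, k]) <= 1.0,       # input constraints
                     cp.abs(x[:, k + 1]) <= 5.0]   # state constraints
        objective = cp.Minimize(cp.sum_squares(u[:, 0] - u_learned))
        cp.Problem(objective, cons).solve()
        return u[:, 0].value if u.value is not None else None

    x0 = np.array([4.0, 1.0])
    print("filtered input:", safe_input(x0, u_learned=1.0))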

Live session, discussion with the speakers, around 15:00 / 3pm Berlin time (CEST/UTC+2)