Monday July 13, 14.30-16.00
Abstract: I will reflect on the past decade's progress in combining machine learning and control theory to build safe, agile, and autonomous systems. Highlighting work done inside and outside of my research group, in both academia and industry, I will discuss successful theory and practice that has helped us understand the potential impact and fundamental limits of learning-enabled control systems. I'll describe not only new results but also older lessons, established decades ago yet forgotten in the current excitement about artificial intelligence. I will also detail the formation of several new cross-disciplinary communities and the opportunities to be gained by investing in and expanding upon these new collaborations. I'll close by presenting some of the exciting challenges that engineers must solve before we can reliably build robust, safe learning systems that interact with an uncertain physical environment.
Bio: Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Ben's research group studies how to make machine learning systems more robust to interactions with a dynamic and uncertain world. Ben is the recipient of a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization, the 2014 Jamon Prize, the 2015 William O. Baker Award for Initiatives in Research, and the 2017 NIPS Test of Time Award.
Wednesday July 15, 13.00-14.30
Abstract: Integrated systems, where heterogeneous physical entities are combined to form a highly functional union, are ubiquitous and becoming more control intensive. Hybrid electric vehicles, all-electric ships, and autonomous underwater systems are a few examples. By leveraging the tight physical couplings among the multiple components involved, control systems can exploit the complementary characteristics of integrated components, as well as their operating environments, and successfully operate them on or close to their admissible boundary to achieve high performance, giving rise to interesting and unique system dynamic behaviors. Those emerging dynamics have motivated the development of new design tools and frameworks. In this plenary, I will share my collaborative and rewarding journey in pursuing the design and optimization of integrated systems and the joy of understanding and exploiting these intriguing dynamics. Using multi-timescale dynamics for the integrated power and thermal management of connected and autonomous vehicles, and the over-actuation of the hybrid energy storage system for all-electric ships as illustrative examples, we will explore the rich context provided by the underlying physical systems and the benefits of model-based prediction, estimation, and real-time optimization. In the era of rapid advances in AI and machine learning, understanding and capturing the interactive and dynamic characteristics of integrated systems become even more essential as we seek to find robust solutions and develop enabling tools for complex control problems.
Bio: Jing Sun is the Michael G. Parsons Collegiate Professor and chair of the Naval Architecture and Marine Engineering Department at the University of Michigan. She received her Ph.D. degree from the University of Southern California in 1989, and her B.S. and M.S. degrees from the University of Science and Technology of China. From 1989 to 1993, she was an assistant professor in the Electrical and Computer Engineering Department at Wayne State University. She joined Ford Research Laboratory in 1993, where she worked in the Powertrain Control Systems Department. After spending almost ten years in industry, she returned to academia and joined the faculty of the College of Engineering at the University of Michigan in 2003. Her research interests include modeling, control, and optimization of dynamic systems, with applications to marine and automotive systems. Her current research focuses on real-time optimization and decision making for connected and automated transportation systems. She holds 42 US patents and has published over 300 peer-reviewed journal and conference papers. She has co-authored a textbook on Robust Adaptive Control. She is a Fellow of the National Academy of Inventors, IEEE, IFAC, and the Society of Naval Architects and Marine Engineers. She is a recipient of the 2003 IEEE Control Systems Technology Award.
Thursday July 16, 13.00-14.30
Abstract: Solving a general optimal control problem requires solving what is known as the Hamilton-Jacobi-Bellman (HJB) equation, a nonlinear partial differential equation in the value function. This is a result of the theory of dynamic programming pioneered by Bellman in the 1950s. A major stumbling block in using the HJB equation has been that it can be solved analytically for only a small number of simple control problems, and numerical solution is prohibitively expensive. One way to get around this obstacle in the context of discrete-time control is to solve, on-line at each sample time, an open-loop optimal control problem cast over a future time horizon starting from the currently encountered state. This strategy is known as receding horizon control (RHC) or model predictive control (MPC) and has been widely adopted across application domains, including the process industries, whose control problems tend to be specified by a combination of setpoints, bounds, and economic indices.
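For reference, the dynamic-programming equations mentioned above can be written in one standard form (the notation, with stage cost ℓ and dynamics f, is assumed here and not taken from the talk):

```latex
% Discrete-time Bellman equation: the optimal value function V satisfies
\[
  V(x) \;=\; \min_{u}\;\bigl[\,\ell(x,u) \;+\; V\!\bigl(f(x,u)\bigr)\,\bigr],
\]
% and its continuous-time counterpart, the stationary HJB equation, reads
\[
  0 \;=\; \min_{u}\;\bigl[\,\ell(x,u) \;+\; \nabla V(x)^{\top} f(x,u)\,\bigr],
\]
% where the minimization over u at each state defines the optimal policy.
```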
One shortcoming of the MPC approach has been that it does not extend well to stochastic systems or, more generally, to systems with uncertainty. This is because it is fundamentally founded on the notion of repeatedly solving an open-loop optimal control problem based on state feedback. Hence, it “reacts” to uncertainty rather than proactively negotiating it. On the other hand, Bellman’s dynamic programming is general and extends straightforwardly to stochastic systems.
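The receding-horizon idea can be sketched in a few lines. The following toy example (a one-dimensional double integrator with invented weights, not the speaker's implementation) solves an open-loop problem at every sample time and applies only the first input:

```python
# Minimal receding-horizon (MPC) loop for a 1-D double integrator.
# Illustrative sketch only: dynamics, weights, and horizon are made up.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time dynamics x+ = A x + B u
B = np.array([[0.005], [0.1]])           # exact discretization, dt = 0.1 s
Q, R, N = np.diag([1.0, 0.1]), 0.01, 10  # stage cost weights, horizon length

def open_loop_cost(u_seq, x0):
    """Cost of applying the open-loop input sequence u_seq from state x0."""
    x, cost = x0.copy(), 0.0
    for u in u_seq:
        cost += x @ Q @ x + R * u * u
        x = A @ x + B.flatten() * u
    return cost + 10.0 * (x @ Q @ x)     # crude terminal penalty

def mpc_step(x0, u_guess):
    """Solve the finite-horizon problem and return the first input."""
    res = minimize(open_loop_cost, u_guess, args=(x0,), method="L-BFGS-B",
                   bounds=[(-1.0, 1.0)] * N)   # input constraints
    return res.x[0], res.x

x = np.array([1.0, 0.0])                 # start displaced from the origin
u_guess = np.zeros(N)
for _ in range(50):                      # closed loop: re-solve at each sample
    u0, u_guess = mpc_step(x, u_guess)
    x = A @ x + B.flatten() * u0         # apply only the first input
```

Note how feedback enters only through the re-measured state at each sample: within each solve, the input sequence is open-loop, which is exactly why the scheme "reacts" to uncertainty rather than anticipating it.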
As an alternative to MPC, one may try to “learn” the solution of the HJB equation (the value function and/or the optimal policy function) directly from data. This is the basic idea behind reinforcement learning (RL), which has been popularized by the recent rise of AI and by well-publicized successes such as AlphaGo’s triumphs over world champions of the game of Go. Its connection to optimal control and dynamic programming is well known, as its alternative names, approximate dynamic programming and neuro-dynamic programming, reveal. While various algorithms have been suggested, the success of all of them hinges critically on one’s ability to collect sufficient data, the amount of which can be prohibitively large and the collection of which can be too intrusive for safety-sensitive processes.
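As a concrete illustration of learning a value/policy function directly from data, here is a minimal tabular Q-learning sketch on a toy five-state chain; the MDP and all parameters are invented for illustration and are not from the talk:

```python
# Tabular Q-learning on a toy 5-state chain MDP: learn action values
# from sampled transitions, then read off the greedy policy.
import numpy as np

n_states, n_actions, goal = 5, 2, 4       # actions: 0 = left, 1 = right
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2         # step size, discount, exploration

def step(s, a):
    """Deterministic chain dynamics with reward 1 for reaching the goal."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == goal)

for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy exploration: this data-collection step is exactly
        # what becomes costly or intrusive on a real safety-sensitive plant
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Q-learning update: bootstrap on the greedy value of the next state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)                 # greedy policy learned from data
```

Even on this five-state toy, hundreds of episodes of exploratory interaction are needed, which hints at why data requirements dominate when such methods meet real processes.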
This presentation will start by comparing the MPC approach with the RL approach. Some prototypical RL methods will be discussed and their pros and cons examined. The focus will then turn to the issues that arise as RL methods are applied to process control problems: data shortage, discontinuous dynamics, and large uncertainty will be discussed, and some promising directions suggested. The presentation will also discuss how MPC and RL may be combined in a complementary manner. Finally, I will discuss how the RL approach may be applied to multiple-timescale decision problems that include logistics and planning decisions.
Bio: Jay H. Lee obtained his B.S. degree in Chemical Engineering from the University of Washington, Seattle, in 1986, and his Ph.D. in Chemical Engineering from the California Institute of Technology, Pasadena, in 1991.
From 1991 to 1998, he was a faculty member in the Department of Chemical Engineering at Auburn University, AL. From 1998 to 2000, he was with the School of Chemical Engineering at Purdue University, West Lafayette, and then with the School of Chemical Engineering at the Georgia Institute of Technology, Atlanta, from 2000 to 2010. Since 2010, he has been with the Chemical and Biomolecular Engineering Department at the Korea Advanced Institute of Science and Technology (KAIST).
He was a recipient of the National Science Foundation’s Young Investigator Award in 1993, and was elected an IEEE Fellow and an IFAC (International Federation of Automatic Control) Fellow in 2011 and an AIChE Fellow in 2013. He is an elected member of both NAEK (National Academy of Engineering Korea) and KAST (Korean Academy of Science and Technology). He was also the recipient of the 2013 Computing in Chemical Engineering Award given by the AIChE’s CAST Division and was the 2016 Roger Sargent Lecturer at Imperial College, UK. He is currently an Editor of Computers and Chemical Engineering and IJCAS, and the chair of the IFAC Coordinating Committee on Process and Power Systems (CC6). He has published over 210 manuscripts in SCI journals, with more than 15,200 Google Scholar citations. His research interests are in the areas of system identification, state estimation, model predictive control, and reinforcement learning, with applications to energy systems, bio-refineries, and CO2 capture/conversion systems.
Friday July 17, 13.00-14.30
Abstract: Robots are not only machines that are supposed to relieve humans from dangerous or routine work – they are a scientific endeavour pursued to better understand human motion and intelligence in a synthesizing way, using the system-analytic tools of engineering and computer science.
As such, robots, in particular humanoid robots, have become complex mechatronic systems with highly coupled multibody kinematics and high dimensionality, sometimes exceeding one hundred degrees of freedom actuated by electro-mechanical drives. They can include highly nonlinear concentrated or distributed visco-elastic elements, be under-actuated, and be exposed to large closed-loop time delays during teleoperation. Furthermore, robots are supposed to interact closely with their human users or to operate in remote, unknown worlds – in both cases, robustness is a central issue, as precise mathematical models of the interaction cannot be expected. In view of these challenges, robotics has become a prime application and a major driver for modern nonlinear control.
In this talk I will present how energy-based nonlinear control concepts evolved to match the robot design evolution from fixed manipulator arms to machines having humanoid kinematics and physical properties matching ever closer the biological performance.
Controlling motion at low energetic cost, from both a mechanical and a computational point of view, certainly constitutes one of the major locomotion challenges in biology and robotics.
In our research, we demonstrate that robots can be designed and controlled to walk efficiently by exploiting body resonance effects, increasing performance compared to rigid-body designs. To do so, however, legged robots need to achieve limit-cycle motions of the highly coupled, nonlinear body dynamics. This led us to investigate the still poorly understood theory of intrinsic nonlinear modal oscillation control. I will present recent results in this direction from my ERC Advanced Grant project M-Runners.
Finally, putting the human at the centre of robot development also means going beyond the pure field of engineering and interacting with the biosciences. I will particularly highlight in this regard the interplay of biomechanics and neuroscience with advanced robot design and control. Humans can also directly benefit from this research through the development of better human-machine interfaces, robotized medical procedures, and prosthetic and rehabilitation devices, which will further reduce the barriers between humans and robots in the future.
Bio: Alin Albu-Schäffer received the M.S. in electrical engineering from the Technical University of Timisoara, Romania, in 1993 and the Ph.D. in automatic control from the Technical University of Munich in 2002. Since 2012 he has been the head of the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR), which he joined in 1995. He is also a professor at the Technical University of Munich, holding the Chair for "Sensor Based Robotic Systems and Intelligent Assistance Systems". His research interests range from robot design and control to robot intelligence and human neuroscience. He contributed centrally to the development of the DLR light-weight robot and its technology transfer to the KUKA company, leading to a paradigm shift in industrial robot applications towards light-weight, sensitive, and interactive robotics. He was also strongly involved in the development of the MIRO surgical robot system and its commercialization through technology transfer to Covidien/Medtronic, the world's largest medical device manufacturer. He is the author of more than 270 peer-reviewed journal and conference papers and has received several awards, including the IEEE King-Sun Fu Best Paper Award of the IEEE Transactions on Robotics in 2012 and 2014, the EU-Robotics Technology Transfer Award in 2011, several best paper awards at the major IEEE robotics conferences, and the DLR Science Award. In 2019 he was awarded an ERC Advanced Grant for the project M-Runners, on energy-efficient locomotion based on nonlinear mechanical resonance principles. He has been an IEEE Fellow since 2018.