In an era where Artificial Intelligence (AI) is often seen as a universal solution for any complex problem, this presentation offers a critical examination of its role in the field of automatic control. To be concrete, I will focus on Optimal Control techniques, tracing their history and their evolution from traditional model-based roots to the emerging data-driven methodologies empowered by AI.
The presentation will delve into how the theoretical underpinnings of Optimal Control have been historically aligned with computational capabilities, and how this alignment has shifted over the years. This juxtaposition of theory and computation motivates a deeper investigation into the diminishing relevance of certain traditional control methods amidst the AI revolution. We will critically examine scenarios where AI-driven approaches could outperform classical methods, as well as cases where the hype surrounding AI overshadows its actual utility.
The talk will conclude with a nuanced view of state-of-the-art optimal control methods in practical applications including self-driving cars, advanced robotics, and energy-efficient systems. From this perspective, we will identify and explore future potential directions for the field, including the design of learning control architectures which seamlessly integrate predictive capabilities at every level, focusing on systems that can autonomously refine their performance over time through continuous learning and interaction with their environment.
The convergence of physical and digital systems in modern engineering applications has inevitably led to closed-loop systems that exhibit both continuous-time and discrete-time dynamics. These closed-loop architectures are modeled as hybrid dynamical systems, prevalent across various technological domains, including robotics, power grids, transportation networks, and manufacturing systems. Unlike traditional “smooth” ordinary differential equations or discrete-time recursions, solutions to hybrid dynamical systems are generally discontinuous, lack uniqueness, and have convergence and stability properties that are defined with respect to complex sets. Therefore, effectively designing and controlling such systems, especially under disturbances and uncertainty, is crucial for the development of autonomous and efficient data-driven engineering systems capable of achieving adaptive and self-optimizing behaviors. In this talk, I will delve into recent advancements in the analysis and design of feedback controllers that can achieve such properties in complex scenarios via the synergistic use of adaptive “seeking” dynamics, robust hybrid control, and decision-making algorithms. These controllers can be systematically designed and analyzed using modern tools from hybrid dynamical systems theory, which facilitate the incorporation of “exploration” and “exploitation” behaviors within complex closed-loop systems via multi-time scale tools and perturbation theory. The proposed methodology leads to a family of provably stable and robust algorithms suitable for solving model-free feedback stabilization and decision-making problems in single-agent and multi-agent systems for which smooth feedback solutions fall short.
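The flow/jump structure of a hybrid dynamical system can be made concrete with the classic bouncing-ball example: a flow set and flow map govern the continuous motion, while a jump set and jump map govern the impacts. The following is a minimal sketch (forward Euler integration, illustrative parameter values, not taken from the talk):

```python
# Minimal hybrid-system sketch: the bouncing ball.
# Flow set C: ball in the air; flow map: free fall under gravity.
# Jump set D: ground contact with downward velocity; jump map: velocity
# reversal with a restitution coefficient. Parameters are illustrative.

def simulate_bouncing_ball(h0=1.0, v0=0.0, g=9.81, restitution=0.8,
                           dt=1e-4, t_end=2.0):
    """Integrate flows with forward Euler; apply jumps at impacts."""
    h, v, t = h0, v0, 0.0
    jumps = 0
    while t < t_end:
        if h <= 0.0 and v < 0.0:      # jump set D: impact detected
            v = -restitution * v      # jump map: reverse and damp velocity
            h = 0.0
            jumps += 1                # jumps advance without advancing time
        else:                         # flow set C: continuous motion
            h += v * dt               # flow map: kinematics
            v += -g * dt
            t += dt
    return h, v, jumps

h, v, jumps = simulate_bouncing_ball()
```

Note how the solution is discontinuous in the velocity and evolves on both a continuous time variable and a discrete jump counter, which is exactly why stability for such systems must be phrased with respect to sets rather than isolated smooth trajectories.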
Today, it is possible to reprogram the type of a cell for on-demand, patient-specific cell therapy, wherein damaged cells in the body are replaced with healthy cells of the correct type generated from easy-to-extract patient cells. One approach to produce cells of the desired type is to first reprogram somatic cells, such as skin cells, into pluripotent stem cells, and to then differentiate these pluripotent cells into the cell type in need. Both processes require accurate control of the temporal concentration of fate-specific proteins, called transcription factors, in the cell in order to efficiently generate high-quality output cells. However, so far, accurate control of cellular concentrations has been out of reach. Practitioners inject DNA that produces the appropriate transcription factors in the starting cells at constant rates, without any control of cellular concentrations. In the past decade, advances in engineering biology have reached the stage where we can implement nonlinear controllers to regulate the cellular level of key molecular players. In this talk, I will illustrate key obstacles to accurate control of protein levels in mammalian cells by conceptualizing the problem through input/output nonlinear stochastic models of gene regulation in the context of cell fate determination. I will then use these models to design biomolecular high-gain and integral feedback controllers in mammalian cells that achieve set-point regulation robustly with respect to noise and cellular perturbations. Finally, I will return to the problem of reprogramming somatic cells to pluripotency and show our controllers in action, both as a way to uncover optimal reprogramming trajectories and as a way to enforce optimal transcription factor levels more accurately during reprogramming. This is the first instance in which biomolecular controllers have been used for pluripotent stem cell reprogramming.
With these tools and experimental demonstrations, we have set the foundations for future research on the use of sophisticated biomolecular networks as controllers of complicated biological processes.
Control theory is hardly alone among scientific communities experiencing some obsolescence anxiety in the face of machine learning, where decades, or even centuries, of building first-principles models and designs are supplanted by data. While real-time ML feedback is unlikely to attain adaptive control's closed-loop guarantees for unstable plants that lack persistency of excitation, our community, adept at harnessing new ideas, has generated in a few years many other adroit ways to incorporate ML, from lightening methodological complexities to circumventing difficult constructions.
Rather than walking away from certificate-bearing control tools built by generations of control researchers, in this lecture I seek game-changing supporting roles for ML in control implementation. I present the emerging subject of employing the latest breakthrough in deep learning, approximations not of functions but of function-to-function mappings (nonlinear operators), in the complex field of PDE control. With neural operators, entire PDE control methodologies are encoded into what amounts to a function evaluation, leading to a thousandfold speedup and enabling PDE control implementations. Deep neural operators, such as DeepONet, are mathematically guaranteed to provide arbitrarily close accuracy in rapidly computing control inputs, and they preserve the stabilization guarantees of the existing PDE backstepping controllers. Applications range from traffic and epidemiology to manufacturing, energy generation, and supply chains.
The term dual control was introduced in the 1960s to describe the tradeoff between short-term control objectives and actions that promote learning. A closely related term is the exploration-exploitation tradeoff. This lecture will review some settings where dual controllers can be optimized efficiently, both for practical purposes and for a more fundamental understanding of the interplay between learning and control.
The starting point will be the standard setting of linear systems optimized with respect to quadratic cost. However, much of modern learning theory is developed in a discrete setting. By investigating similarities and differences between the two frameworks, we will shed light on the dual control problem and discover promising new results and directions for research.
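The tension between controlling well now and exciting the system enough to learn it can be shown on the simplest possible example: a scalar linear system with one unknown parameter, a certainty-equivalence control law, and an optional probing signal that promotes identification. Everything below (gains, noise levels, the recursive least-squares estimator) is an illustrative sketch, not the lecture's formulation:

```python
import random

def run(probe_std, seed=0, T=500):
    """Scalar plant x_{t+1} = a x_t + b u_t + w_t with a unknown, b known.
    Control: certainty-equivalence deadbeat u = -(a_hat/b) x, plus an
    optional zero-mean probing signal (the 'dual' exploration action).
    a_hat is updated by recursive least squares (RLS)."""
    rng = random.Random(seed)
    a, b = 0.9, 1.0            # true parameters (a is unknown to the controller)
    a_hat, P = 0.0, 100.0      # RLS estimate and covariance
    x, cost = 0.0, 0.0
    for _ in range(T):
        u = -(a_hat / b) * x + probe_std * rng.gauss(0, 1)  # probe = exploration
        w = 0.01 * rng.gauss(0, 1)
        x_next = a * x + b * u + w
        # RLS on the regression  x_next - b u = a x + w
        y = x_next - b * u
        k = P * x / (1.0 + P * x * x)
        a_hat += k * (y - a_hat * x)
        P -= k * x * P
        cost += x * x + u * u
        x = x_next
    return a_hat, cost / T

a_hat_probe, _ = run(probe_std=0.05)   # with exploration
a_hat_plain, _ = run(probe_std=0.0)    # pure exploitation
```

Without the probe, the closed loop regulates x toward zero and thereby starves the estimator of excitation; the small probing signal trades a little short-term cost for persistently exciting data, which is the dual effect in miniature.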
A multi-agent system should be capable of fast and flexible decision-making if it is to successfully manage the uncertainty, variability, and dynamic change encountered when operating in the real world. Decision-making is fast if it breaks indecision as quickly as indecision becomes costly. This requires fast divergence away from indecision in addition to fast convergence to a decision. Decision-making is flexible if it adapts to signals important to successful operations, even if they are weak or rare. This requires tunable sensitivity to input, modulating between regimes in which the system is ultra-sensitive and regimes in which it is robust. Nonlinearity and feedback in the multi-agent decision-making dynamics are necessary to meet these requirements.
I will present theoretical principles, analytical results, and applications of a general model of decentralized, multi-agent, and multi-option, nonlinear opinion dynamics that enables fast and flexible decision-making. I will explain how the critical features of fast and flexible multi-agent decision-making depend on nonlinearity, feedback, and the structure of the inter-agent communication network and a belief system network. And I will show how the theory and results provide a principled and systematic means for designing and analyzing multi-agent decision-making in systems ranging from multi-robot teams to social networks.
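The role of nonlinearity and of a tunable attention-like gain can be illustrated with a stripped-down, all-to-all instance of nonlinear opinion dynamics in the spirit of the model above. All names and parameter values here are illustrative, not the talk's exact formulation:

```python
import math

def simulate_opinions(u, b=0.0, n=4, d=1.0, alpha=1.2, gamma=0.8,
                      dt=0.01, steps=5000, z0=0.01):
    """Euler-integrate a minimal all-to-all opinion model
        dz_i/dt = -d z_i + u * tanh(alpha z_i + gamma * mean_j z_j + b)
    for n agents with opinion z_i and attention parameter u.
    Small u: the neutral point z = 0 is stable (indecision persists).
    Large u: z = 0 destabilizes (a pitchfork) and agents commit fast."""
    z = [z0] * n
    for _ in range(steps):
        m = sum(z) / n
        z = [zi + dt * (-d * zi + u * math.tanh(alpha * zi + gamma * m + b))
             for zi in z]
    return z

z_low = simulate_opinions(u=0.3)    # weak attention: stays near indecision
z_high = simulate_opinions(u=1.5)   # strong attention: fast, decisive divergence
```

The same tiny initial bias z0 is ignored below the bifurcation and amplified above it, which is the mechanism behind both fast indecision-breaking and tunable sensitivity to weak inputs.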
A typical multi-agent system is composed of a follower system consisting of multiple subsystems called followers and a leader system whose output is to be tracked by the followers. What makes the control of a multi-agent system challenging is that the control law needs to be distributed in the sense that it must satisfy time-varying communication constraints. A special case of distributed control is where all the followers can access the information of the leader. For this special case, one can design, for each follower, a conventional control law based on the information of the leader. The collection of these conventional control laws constitutes the so-called purely decentralized control law for the multi-agent system. Nevertheless, the purely decentralized control law is in general not feasible due to the communication constraints. In this talk, we will introduce a framework for designing a distributed control law by cascading a purely decentralized control law and a so-called distributed observer for the leader system, which is a dynamic compensator that estimates and transmits the leader’s information to each follower over a communication network. Such a control law is called the distributed observer-based control law and has found applications in such problems as consensus, synchronization, flocking, formation, and distributed Nash equilibrium seeking. The core of this design framework is the distributed observer for a linear leader system, which was initiated in 2010 for dealing with the cooperative output regulation problem, and has experienced three phases of development. In the first phase, the distributed observer is only capable of estimating and transmitting the leader’s state to every follower, assuming every follower knows the dynamics of the leader.
In the second phase, which started in 2015, the distributed observer was endowed with the capability of estimating and transmitting not only the leader’s state but also the leader’s dynamics to every follower, provided that the leader’s children know the leader’s information. Such a dynamic compensator is called an adaptive distributed observer for a known leader system. The distributed observer was further developed in 2017 for linear leader systems containing unknown parameters, thus entering the third phase of its development. Such a dynamic compensator is called an adaptive distributed observer for an unknown leader, as it estimates not only the state but also the unknown parameters of the leader. We will start with an overview of the development of the distributed observer and then highlight recent results on establishing an output-based adaptive distributed observer for an unknown leader system over jointly connected communication networks. Extensions, variants, and applications of the distributed observer will also be touched upon.
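A first-phase distributed observer (known leader dynamics, state estimation only) can be sketched in a few lines. Below, a chain communication graph is assumed in which only the first follower measures the leader directly; the leader, gains, and topology are illustrative, not the talk's exact construction:

```python
def distributed_observer_demo(mu=5.0, dt=0.001, steps=20000):
    """Sketch of a (first-phase) distributed observer for a known leader.
    Leader: v' = S v with S = [[0, 1], [-1, 0]] (a harmonic oscillator).
    Chain graph: leader -> follower 1 -> follower 2 -> follower 3.
    Each follower i runs a copy of the leader dynamics corrected by an
    innovation term from its single in-neighbor:
        eta_i' = S eta_i + mu * (eta_{i-1} - eta_i),  eta_0's neighbor = v.
    Forward Euler discretization; returns the worst final estimation error."""
    def Sv(x):  # apply the known leader matrix S
        return [x[1], -x[0]]
    v = [1.0, 0.0]
    eta = [[0.0, 0.0] for _ in range(3)]
    for _ in range(steps):
        new_eta = []
        for i in range(3):
            neighbor = v if i == 0 else eta[i - 1]   # chain topology
            innov = [neighbor[k] - eta[i][k] for k in range(2)]
            drift = Sv(eta[i])
            new_eta.append([eta[i][k] + dt * (drift[k] + mu * innov[k])
                            for k in range(2)])
        eta = new_eta
        v = [v[k] + dt * Sv(v)[k] for k in range(2)]
    return max(abs(eta[i][k] - v[k]) for i in range(3) for k in range(2))

err = distributed_observer_demo()
```

Each follower's estimation error obeys a cascaded stable linear system, so every follower recovers the leader's state even though two of the three never communicate with the leader at all.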
Diffusion processes refer to a class of stochastic processes driven by Brownian motion. They have been widely used in various applications, ranging from engineering to science to finance. In this talk, I will discuss my experiences with diffusion and how this powerful tool has shaped our research programs. I will go over several research projects in the area of control, inference, and machine learning, where we have extensively utilized tools from diffusion processes. In particular, I will present our research on four topics: i) covariance control, in which we aim to regulate the uncertainties of a dynamic system; ii) distribution control, where we seek to herd population dynamics; iii) Markov chain Monte Carlo sampling for general inference tasks; and iv) diffusion models for generative modeling in machine learning.
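A minimal concrete instance of a Brownian-motion-driven diffusion is the Ornstein-Uhlenbeck process, simulated here with the Euler-Maruyama scheme (illustrative parameter values). Its stationary variance sigma^2 / (2 theta) is the kind of uncertainty quantity that covariance control seeks to regulate:

```python
import math
import random

def simulate_ou(theta=1.0, sigma=0.5, x0=3.0, dt=0.01, steps=1000,
                n_paths=1000, seed=1):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck diffusion
        dX_t = -theta X_t dt + sigma dW_t
    over an ensemble of sample paths. The stationary distribution is
    Gaussian with mean 0 and variance sigma^2 / (2 theta)."""
    rng = random.Random(seed)
    xs = [x0] * n_paths
    sq = math.sqrt(dt)                      # Brownian increment scale
    for _ in range(steps):
        xs = [x - theta * x * dt + sigma * sq * rng.gauss(0, 1) for x in xs]
    mean = sum(xs) / n_paths
    var = sum((x - mean) ** 2 for x in xs) / n_paths
    return mean, var

mean, var = simulate_ou()   # ensemble statistics after the transient
```

The ensemble mean relaxes to zero while the ensemble variance settles at sigma^2/(2 theta) = 0.125; replacing the fixed drift with a state-feedback gain is the starting point for steering such statistics, in the spirit of covariance and distribution control.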
Wind farms comprise a network of dynamical systems that operate within a continuous space, i.e., the turbulent atmospheric boundary layer (ABL). Viewing the turbines as actuators that adjust the flow field to collectively produce a desired overall power output, wind farms are an excellent prototype for flow control in which the actuators are well-defined and located in the region of interest. In this talk we introduce models and control strategies that adopt this viewpoint. We first demonstrate that taking into account both the challenges and opportunities arising through interactions with the ABL can enable wind farms to participate in markets that support the grid with improved efficiency. We then focus on the dynamic interconnections within the farm, which we formulate in terms of a graph with time-varying edge connectivity that accounts for changes in the incoming wind direction and turbine yaw angles. An example implementation of this simplified graph model within a combined pitch and yaw controller demonstrates the potential and limitations of yaw for augmenting pitch control in power tracking applications. In the final part of the talk, we discuss new approaches for developing similar types of control-oriented models that focus on the critical flow features in other types of wall-bounded shear flows.
In everyday driving, many traffic maneuvers, such as merges, lane changes, and passing through an intersection, require negotiation between independent actors/agents. The same is true for mobile robots autonomously operating in a space open to other agents (humans, robots, etc.). Negotiation is an inherently difficult concept to code into a software algorithm. It has been observed in computer simulations that some “decentralized” algorithms produce gridlocks while others never do. It has turned out that gridlocking algorithms create locally stable equilibria in the joint inter-agent space, while, for those that don’t gridlock, the equilibria are unstable; hence the title of the talk.
We use Control Barrier Function (CBF) based methods to provide collision avoidance guarantees. The main advantage of CBFs is that they yield easier-to-solve convex programs even for nonlinear systems and inherently non-convex obstacle avoidance problems. Six different CBF-based control policies were compared for collision avoidance and liveness (fluidity of motion, absence of gridlocks) on a 5-agent, holonomic-robot system. The outcome was then correlated with stability analysis on a simpler, yet representative problem. The results are illustrated by extensive simulations including an intersection example where the (in)stability insights are used to explain otherwise difficult to understand vehicle behaviors.
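The basic CBF machinery can be sketched for a single holonomic (single-integrator) robot and one circular obstacle; with one linear constraint, the CBF quadratic program even has a closed-form projection solution. This is a minimal illustration, not one of the six policies compared in the talk, and all parameters are illustrative:

```python
def cbf_safety_filter(x, u_nom, obstacle, r_safe, alpha=1.0):
    """CBF-QP safety filter for a single integrator x' = u.
    Barrier: h(x) = ||x - obstacle||^2 - r_safe^2 >= 0.
    Constraint: dh/dt = grad_h . u >= -alpha * h(x).
    min ||u - u_nom||^2 s.t. the constraint has the closed-form
    projection solution below (one active linear constraint)."""
    dx = [x[0] - obstacle[0], x[1] - obstacle[1]]
    h = dx[0] ** 2 + dx[1] ** 2 - r_safe ** 2
    a = [2 * dx[0], 2 * dx[1]]                        # gradient of h
    slack = a[0] * u_nom[0] + a[1] * u_nom[1] + alpha * h
    if slack >= 0:                                    # nominal input is safe
        return list(u_nom)
    lam = -slack / (a[0] ** 2 + a[1] ** 2)            # KKT multiplier > 0
    return [u_nom[0] + lam * a[0], u_nom[1] + lam * a[1]]

def run_to_goal(x0, goal, obstacle, r_safe, dt=0.01, steps=4000):
    """Go-to-goal nominal law filtered through the CBF; track min h."""
    x = list(x0)
    min_h = float("inf")
    for _ in range(steps):
        u_nom = [goal[0] - x[0], goal[1] - x[1]]      # nominal proportional law
        u = cbf_safety_filter(x, u_nom, obstacle, r_safe)
        x = [x[0] + dt * u[0], x[1] + dt * u[1]]
        h = (x[0] - obstacle[0]) ** 2 + (x[1] - obstacle[1]) ** 2 - r_safe ** 2
        min_h = min(min_h, h)
    return x, min_h

x_final, min_h = run_to_goal(x0=(-2.0, 0.001), goal=(2.0, 0.0),
                             obstacle=(0.0, 0.0), r_safe=0.5)
```

Note the liveness issue in miniature: started exactly on the symmetry axis, this robot would stall at an equilibrium behind the obstacle; the tiny lateral offset makes that equilibrium repelling, so the robot slides around the obstacle and reaches the goal, which is the (in)stability mechanism the talk analyzes.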
Bob Behnken’s journey from science and engineering student to Ph.D. candidate, to test pilot school student, and NASA astronaut culminated with the opportunity to be a part of the team that recreated a capability to transport humans to and from low Earth orbit. He’ll share his experience, insight, and perspective on being a part of the NASA / SpaceX team’s endeavor to accomplish that mission in 2020 and take questions on his experience flying into space and living and working aboard the International Space Station.
Control theory and control technology have received renewed interest from applications involving service robots during the last two decades. In many scenarios, service robots are employed as networked mobile sensing platforms to collect data, sometimes in extreme environments and in unprecedented ways. These applications pose higher goals for autonomy than have ever been achieved before, triggering new developments towards the convergence of sensing, control, and communication.
Identifying mathematical models of spatial-temporal processes from data collected along trajectories of mobile sensors is a baseline goal for active perception in complex environments. The controlled motion of mobile sensors induces information dynamics in the measurements taken of the underlying spatial-temporal processes, which are typically represented by models that have two major components: the trend model and the variation model. The trend model is often described by deterministic partial differential equations, and the variation model is often described by stochastic processes. Hence, information dynamics are constrained by these representations. Based on the information dynamics and the constraints, learning algorithms can be developed to identify parameters for spatial-temporal models.
Certain designs of active sensing algorithms are inspired by animal and human behaviors. Our research designed the speed-up and slow-down (SUSD) strategy, inspired by the extraordinary phototaxis capabilities of swarming fish. SUSD is a distributed active sensing strategy that reduces the need for information sharing among agents. Furthermore, SUSD leads to a generic derivative-free optimization algorithm that has been applied to solve optimization problems where gradients are not well-defined, including mixed-integer programming problems.
A perceivable trend in the control community is the rapid transition of fundamental discoveries to swarm robot applications. This is enabled by a collection of software, platforms, and testbeds shared across research groups. Such transitions will generate significant impact in addressing the growing needs for robot swarms in applications including scientific data collection, search and rescue, aquaculture, intelligent traffic management, and human-robot teaming.
This work describes how machine learning may be used to develop accurate and efficient nonlinear dynamical systems models for complex natural and engineered systems. We explore the sparse identification of nonlinear dynamics (SINDy) algorithm, which identifies a minimal dynamical system model that balances model complexity with accuracy, avoiding overfitting. This approach tends to promote models that are interpretable and generalizable, capturing the essential “physics” of the system. We also discuss the importance of learning effective coordinate systems in which the dynamics may be expected to be sparse. This sparse modeling approach will be demonstrated on a range of challenging modeling problems, for example in fluid dynamics, and we will discuss how to incorporate these models into existing model-based control efforts.
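The sparse regression at the heart of SINDy, sequentially thresholded least squares, fits nicely in a short sketch. Here it is run on idealized noise-free data with exact derivatives; the candidate library and the true system dx/dt = 0.5 x - 0.8 x^3 are illustrative choices, not an example from the talk:

```python
def _lstsq(theta, y, active):
    """Least squares over the active library columns via normal equations
    solved by Gauss-Jordan elimination (fine for this tiny library)."""
    cols = [c for c in range(len(theta[0])) if active[c]]
    n = len(cols)
    A = [[sum(row[ci] * row[cj] for row in theta) for cj in cols] for ci in cols]
    b = [sum(row[ci] * yi for row, yi in zip(theta, y)) for ci in cols]
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(n + 1)]
    xi = [0.0] * len(active)
    for i, c in enumerate(cols):
        xi[c] = M[i][n] / M[i][i]
    return xi

def stlsq(theta, y, threshold=0.1, iters=5):
    """Sequentially thresholded least squares: fit, zero out small
    coefficients, and refit on the surviving library terms."""
    active = [True] * len(theta[0])
    xi = _lstsq(theta, y, active)
    for _ in range(iters):
        active = [abs(c) >= threshold for c in xi]
        xi = _lstsq(theta, y, active)
    return xi

# Idealized data: exact derivatives of dx/dt = 0.5 x - 0.8 x^3,
# with candidate library [x, x^2, x^3].
xs = [-2.0 + 0.05 * i for i in range(81)]
theta = [[x, x * x, x ** 3] for x in xs]
y = [0.5 * x - 0.8 * x ** 3 for x in xs]
xi = stlsq(theta, y)
```

The thresholding step prunes the spurious x^2 term and the refit recovers the two true coefficients, yielding the parsimonious, interpretable model structure that the full SINDy framework (with numerical differentiation and richer libraries) targets.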
In this seminar, Dr. L’Afflitto will present two recent advances in the state of the art of model reference control system design. The first of these results concerns the design of an adaptive control system that allows the user to impose both the rate of convergence of the closed-loop system during its transient stage and constraints on both the trajectory tracking error and the control input at all times, despite parametric and modeling uncertainties. Subsequently, our speaker will present the first extension of the model reference adaptive control architecture to switched dynamical systems within the Carathéodory and the Filippov frameworks. The applicability of these theoretical formulations will be shown by the results of numerical simulations and flight tests involving multi-rotor unmanned aerial systems such as tilt-rotor quadcopters and tailsitter UAVs.
The human hand is the pinnacle of dexterity – it has the ability to powerfully grasp a wide range of object sizes and shapes as well as delicately manipulate objects held within the fingertips. Current robotic and prosthetic systems, however, have only a fraction of that manual dexterity. My group attempts to address this gap in three main ways: examining the mechanics and design of effective hands, studying biological hand function as inspiration and performance benchmarking, and developing novel control approaches that accommodate task uncertainty. In terms of hand design, we strongly prioritize passive mechanics, including incorporating adaptive underactuated transmissions and carefully tuned compliance, and seek to maximize open-loop performance while minimizing complexity. In this talk, I will discuss how constraints imparted by external contacts in robotic manipulation and legged locomotion affect the mobility and control of the mechanism, introduce ways that these can be addressed through novel design approaches, and demonstrate how our group has been able to apply these concepts to produce simple and robust grasping and dexterous manipulation for tasks that are difficult or impossible to perform using traditional approaches.
Integrated systems are ubiquitous as more heterogeneous physical entities are combined to form functional platforms. New and “invisible” feedback loops and couplings are introduced with increased connectivity, leading to emerging dynamics and making the integrated systems more control-intensive. The multi-physics, multi-time scale, and distributed-actuation natures of integrated systems present new challenges for modeling and control. Understanding their operating environments, achieving sustained high performance, and incorporating rich but incomplete data also motivate the development of novel design tools and frameworks.
In this talk, I will use the integrated thermal and power management of connected and automated vehicles (CAVs) as an example to illustrate the challenges in the prediction, estimation, and control of integrated systems in the era of rapid advances in AI and data-driven control. While first-principle-based modeling is still essential in understanding and exploiting the underlying physics of the integrated systems, model-based control and optimization have to be used in a much richer context to deal with the emerging dynamics and inevitable uncertainties. For CAVs, we will show how model-based design, complemented by data-driven approaches, can lead to control and optimization solutions with a significant impact on energy efficiency and operational reliability, in addition to safety and accessibility.
I have thoroughly enjoyed teaching and research in the field of mechanical systems control over the past fifty years. This field has been full of new theory, new mechanical hardware and new tools for real-time control, and is nothing but the world of mechatronics. In this talk, I would like to give a brief review of how this field has developed during the past fifty years, what my personal involvements have been in this field, and what my current involvements are. Overall, the talk is a chronicle of my journey of exploration with my students in the forest of mechanical systems control.