This talk presents recent results in nonlinear observer design and their applications in motion estimation problems ranging from wearable sensors to bicycles. First, a new observer design technique is presented that integrates the classical high-gain observer with a novel LPV/LMI observer, providing significant advantages over both methods. Second, the challenges in designing observers for nonlinear systems that are non-monotonic are discussed. Non-monotonic systems are commonly encountered, but popular observer design methods fail to yield feasible solutions for such systems. Hybrid observers with switched gains enable existing observer design methods to be utilized for these systems. Following the analytical observer results, some of their applications in motion estimation are presented, including a wearable device for Parkinson's disease patients, a smart bicycle that automatically tracks the trajectories of nearby vehicles on the road to protect itself, and smart agricultural/construction vehicles that utilize inexpensive sensors for end-effector position estimation. Each application is accompanied by a video of a prototype experimental demonstration. One of these applications has been successfully commercialized through a start-up company, which expects to sell over 5,000 sensor boards this year.
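For readers unfamiliar with the classical high-gain observer mentioned above, the following is a minimal sketch for a second-order system with measured position; the nonlinearity, gains, and epsilon are illustrative assumptions, not the new integrated technique described in the talk.

```python
import numpy as np

# Classical high-gain observer sketch for x1' = x2, x2' = phi(x), y = x1.
# phi, the gains alpha1/alpha2, and epsilon are illustrative assumptions.
def phi(x):            # true (unknown-to-observer) nonlinearity
    return -np.sin(x[0]) - 0.5 * x[1]

def phi_hat(xh):       # observer's rough model of phi
    return -np.sin(xh[0])

alpha1, alpha2, eps = 2.0, 1.0, 0.01   # small eps => high gain
dt, T = 1e-4, 5.0
x, xh = np.array([1.0, 0.0]), np.zeros(2)

for _ in range(int(T / dt)):
    e = x[0] - xh[0]                               # output estimation error
    x = x + dt * np.array([x[1], phi(x)])          # plant (Euler step)
    xh = xh + dt * np.array([xh[1] + (alpha1 / eps) * e,
                             phi_hat(xh) + (alpha2 / eps**2) * e])

print("estimation error:", x - xh)
```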
Mechanical motion generation and vibration suppression are fundamental to modern machines and emerging innovations. The ability to learn and compensate for complex mechanical system and disturbance dynamics is key to synthesizing adequate control actions that achieve precision motion. Using application case studies to motivate the challenges and demonstrate implementation results, I will present control methods for addressing narrowband (repetitive control, iterative learning control) and broadband (adaptive control) motions and disturbances. I will attempt to convey a common theme: controller syntheses stemming from ideas of system dynamic inversion and utilizing solutions of optimal model-matching problems.
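As a concrete illustration of the iterative learning control idea mentioned above, here is a minimal first-order ILC update for a repeated tracking task; the discrete plant, learning gain, and trial length are illustrative assumptions.

```python
import numpy as np

# First-order ILC sketch: u_{k+1}(t) = u_k(t) + L * e_k(t+1) over repeated trials.
# The discrete plant (a, b), learning gain L, and reference are illustrative assumptions.
a, b, L = 0.9, 0.5, 1.2
N, trials = 50, 15
r = np.sin(np.linspace(0, 2 * np.pi, N + 1))    # reference over one trial

u = np.zeros(N)
for k in range(trials):
    x = np.zeros(N + 1)
    for t in range(N):                           # run one trial
        x[t + 1] = a * x[t] + b * u[t]
    e = r - x                                    # trial tracking error
    print(f"trial {k}: max |e| = {np.abs(e[1:]).max():.4f}")
    u = u + L * e[1:]                            # learning update (shifted one step)
```

The trial-to-trial error contracts because |1 - L*b| < 1 for the assumed gain, the standard first-order ILC convergence condition.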
Genetic circuits control every aspect of life, and thus the ability to engineer them de novo opens exciting possibilities, from revolutionary drugs and green energy to bugs that recognize and kill cancer cells. Just as in mechanical, electrical, and hydraulic systems, the problem of loading, or back-action, is encountered when engineering genetic circuits. These molecular loads can be severe to the point of completely destroying the intended function of a circuit. In this talk, I will review a systems-theoretic modeling formalism, grounded on the concept of retroactivity, that captures molecular loads in a way that makes the loading problem amenable to a solution. I will, in particular, focus on two types of loading: inter-module loads and loads on the cellular resources that feed the modules. I will show experimentally validated models of loading effects on the emergent dynamics of a system and nonlinear control techniques that we have developed and implemented to mitigate these effects.
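To make the loading idea concrete, below is a minimal ODE sketch comparing an isolated transcription-factor module with the same module connected to downstream binding sites; the model structure is a standard textbook retroactivity example and all parameter values are illustrative assumptions, not the experimentally validated models from the talk.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Retroactivity sketch: transcription factor Z driven by k(t), degraded at rate delta.
# Connecting Z to p_T downstream binding sites (forming complex C) loads the module
# and attenuates/slows its response. All parameter values are illustrative assumptions.
delta, k_on, k_off, p_T = 0.01, 10.0, 10.0, 100.0
k = lambda t: 0.01 * (1 + np.sin(2 * np.pi * t / 600))   # time-varying production

def isolated(t, y):
    Z, = y
    return [k(t) - delta * Z]

def connected(t, y):
    Z, C = y
    bind = k_on * Z * (p_T - C) - k_off * C
    return [k(t) - delta * Z - bind, bind]

t_eval = np.linspace(0, 2400, 1000)
iso = solve_ivp(isolated, (0, 2400), [0.0], t_eval=t_eval)
con = solve_ivp(connected, (0, 2400), [0.0, 0.0], t_eval=t_eval, method="LSODA")
print("peak free Z, isolated :", iso.y[0].max())
print("peak free Z, connected:", con.y[0].max())
```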
Reachability analysis, which concerns computing or approximating the set of future states attainable by a dynamical system over a time horizon, is receiving increased attention motivated by new challenges in, e.g., learning-enabled systems, assured and safe autonomy, and formal methods in control systems. These challenges require new approaches that scale well with system size, accommodate uncertainties, and are efficient enough for in-the-loop or frequent computation. In this talk, we present and demonstrate a suite of tools for efficiently over-approximating reachable sets of nonlinear systems based on the theory of mixed monotone dynamical systems. A system is mixed monotone if its vector field or update map is decomposable into an increasing component and a decreasing component. This decomposition allows for constructing an embedding system with twice the number of states such that a single trajectory of the embedding system provides hyperrectangular over-approximations of reachable sets for the original dynamics. This efficiency can be harnessed, for example, to compute finite abstractions for tractable formal control verification and synthesis or to embed reachable set computations in the control loop for runtime safety assurance. We demonstrate these ideas on several examples, including an application to safe quadrotor flight that combines runtime reachable set computations with control barrier functions implemented on embedded hardware.
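A minimal sketch of the embedding idea, shown for the special case of a discrete-time linear update map: splitting the matrix into its elementwise positive and negative parts gives a simple decomposition, and propagating the lower and upper bounds jointly (a system with twice the states) yields hyperrectangular over-approximations of the reachable set. The matrix, initial box, and horizon are illustrative assumptions.

```python
import numpy as np

# Embedding-system sketch for x+ = A x with a hyperrectangular initial set.
# A = Ap - An (elementwise positive/negative parts); propagating (lo, hi) jointly
# over-approximates the reachable set at every step.
A = np.array([[0.9, 0.2],
              [-0.3, 0.8]])
Ap, An = np.clip(A, 0, None), np.clip(-A, 0, None)      # A = Ap - An

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # initial box
for k in range(10):
    lo, hi = Ap @ lo - An @ hi, Ap @ hi - An @ lo       # embedding update
    print(f"k={k+1}: box = [{lo.round(3)}, {hi.round(3)}]")
```

For a linear map this box is in fact the tightest axis-aligned enclosure; for nonlinear dynamics the same pattern applies with a decomposition function in place of Ap and An.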
The fields of adaptive control and machine learning have evolved in parallel over the past few decades, with a significant overlap in goals, problem statements, and tools. Machine learning as a field has focused on computer-based systems that improve and learn through experience. Oftentimes the process of learning is encapsulated in the form of a parameterized model, such as a neural network, whose weights are trained in order to approximate a function. The field of adaptive control, on the other hand, has focused on the process of controlling engineering systems in order to accomplish regulation and tracking of critical variables of interest. Learning is embedded in this process via online estimation of the underlying parameters. In comparison to machine learning, adaptive control often focuses on limited-data problems where fast, online performance is critical. Whether in machine learning or adaptive control, this learning occurs through the use of input-output data. In both cases, the approach used for updating the parameters is often based on gradient-descent-like and other iterative algorithms. The related tools of analysis, convergence, and robustness in both fields have a tremendous amount of similarity. As the scope of problems in both topics increases, the associated complexity and challenges increase as well. In order to address learning and decision-making in real time, it is essential to understand these similarities and connections to develop new methods, tools, and algorithms.
This talk will examine the similarities and interconnections between adaptive control and the optimization methods commonly employed in machine learning. Concepts in stability, performance, and learning that are common to both fields will be discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis will be explored. High-order tuners and time-varying learning rates have been employed in adaptive control, leading to very interesting results for dynamic systems with delays. We will explore how these methods can be leveraged to yield provably correct methods for learning in real time with guaranteed fast convergence. Examples will be drawn from a range of engineering applications.
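To illustrate the shared update-law structure discussed above, here is a minimal normalized-gradient parameter estimator, the basic primitive common to adaptive control and gradient-descent learning; it is not the high-order tuner from the talk, and the true parameters, regressor, and gain are illustrative assumptions.

```python
import numpy as np

# Normalized-gradient update: estimate theta in y = theta^T phi from streaming data.
# True parameters, regressor statistics, and adaptation gain are assumptions.
rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7, 2.0])
theta_hat = np.zeros(3)
gamma = 0.5                                    # adaptation gain (learning rate)

for k in range(200):
    phi = rng.normal(size=3)                   # regressor at step k
    y = theta_true @ phi                       # measured output
    e = y - theta_hat @ phi                    # prediction error
    theta_hat += gamma * e * phi / (1.0 + phi @ phi)   # normalized gradient step

print("parameter error:", np.linalg.norm(theta_true - theta_hat))
```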
Control policies that involve the real-time solution of one or more convex optimization problems include model predictive (or receding horizon) control, approximate dynamic programming, and optimization-based actuator allocation systems. They have been widely used in applications with slower dynamics, such as chemical process control, supply chain systems, and quantitative trading, and are now starting to appear in systems with faster dynamics. In this talk I will describe a number of advances over the last decade or so that make such policies easier to design, tune, and deploy. I will describe solution algorithms that are extremely robust, in some cases even division-free, and code generation systems that transform a problem description expressed in a high-level domain-specific language into source code for a real-time solver suitable for control. The recent development of systems for automatically differentiating through a convex optimization problem can be used to efficiently tune or design control policies that include embedded convex optimization.
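The sketch below shows the kind of problem such tools operate on: a small receding-horizon control problem written in CVXPY with the initial state as a parameter, which is the form amenable to code generation for a real-time solver. The double-integrator model, horizon, weights, and input limit are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

# Minimal receding-horizon (MPC) problem expressed in a high-level modeling language.
# The model, horizon, weights, and input limit are illustrative assumptions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
T, u_max = 20, 1.0

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))
x0 = cp.Parameter(2)                       # set at run time

cost = cp.sum_squares(x) + 0.1 * cp.sum_squares(u)
constraints = [x[:, 0] == x0, cp.abs(u) <= u_max]
constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t] for t in range(T)]
prob = cp.Problem(cp.Minimize(cost), constraints)

x0.value = np.array([1.0, 0.0])
prob.solve()
print("first control move:", u.value[:, 0])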
The recent radical evolution of distributed sensing, computation, communication, and actuation has fostered the emergence of cyber-physical network systems. Examples cut across a broad spectrum of engineering and societal fields. Regardless of the specific application, one central goal is to shape the network's collective behavior through the design of admissible local decision-making algorithms. This is nontrivial due to challenges such as limited local connectivity, imperfect communication, model and environment uncertainty, and complex intertwined physical and human interactions. In this talk, I will present our recent progress in formally advancing the systematic design of distributed coordination in network systems. We investigate the fundamental performance limits imposed by these challenges, design fast, efficient, and scalable algorithms that achieve (or approximate) the performance limits, and test and implement the algorithms in real-world applications.
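As a small example of a local decision-making algorithm of the kind described, here is a sketch of distributed gradient descent on a ring of agents, each combining neighbor averaging with a step on its own local cost; the mixing weights, local costs, and step size are illustrative assumptions.

```python
import numpy as np

# Distributed gradient descent sketch: n agents on a ring minimize sum_i (x - a_i)^2
# using only neighbor averaging plus a local gradient step. With a constant step
# size the agents converge to a small neighborhood of the true optimum (the mean).
n, steps, alpha = 6, 500, 0.02
a = np.arange(1.0, n + 1.0)                 # local targets; optimum is their mean

W = np.zeros((n, n))                        # doubly stochastic ring mixing weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                             # each agent's local estimate
for _ in range(steps):
    x = W @ x - alpha * 2 * (x - a)         # consensus step + local gradient step

print("agent estimates:", x.round(3), " true optimum:", a.mean())
```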
Electrification of mobility and transport is a global megatrend that has been underway for decades. The mobility sector encompasses cars, trucks, buses, and aircraft. These systems exhibit complex interactions among multiple modes of power flow, which can be thermal, fluid, electrical, or mechanical. A key challenge in working across these modes of power flow is the widely varying time scales of the subsystems, which make centralized control difficult. This talk will present a particular distributed controller architecture for managing the flow of power based on online optimization. A hierarchical approach allows systems operating on different time scales to be coordinated in a controllable manner. It also allows different dynamic decision-making tools to be used at different levels of the hierarchy based on the needs of the physical systems under control. Additional advantages include the modularity and scalability inherent in the hierarchy: modules can be added or removed without changing the basic approach.
In addition to the hierarchical control, a particularly useful graph-based approach will be introduced for modeling the system interactions and performing early-stage design optimization. The graph approach, like the hierarchy, has the benefits of modularity and scalability, and it provides an efficient framework for representing systems with different time scales. The graph also allows design optimization tools to be applied to optimize the physical system design for the purpose of control. Recent results will be presented for both generic interconnected complex systems and specific examples from the aerospace and automotive application domains.
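To give a flavor of graph-based modeling of power flow, here is a minimal sketch in which nodes store energy (with different capacitances, hence different time scales) and edges carry flows defined through an incidence matrix; the three-node network and all parameters are illustrative assumptions, not the specific framework from the talk.

```python
import numpy as np

# Graph-based modeling sketch: node states x with capacitances C, edge flows
# P = k * (difference across each oriented edge), incidence matrix M.
# Dynamics: C * dx/dt = -M P + P_in. Power enters node 1 and leaves node 3.
M = np.array([[ 1,  0],        # edge 1: node 1 -> node 2
              [-1,  1],        # edge 2: node 2 -> node 3
              [ 0, -1]])
C = np.array([5.0, 2.0, 8.0])          # node capacitances (different time scales)
k_edge = np.array([0.8, 0.3])          # edge conductances
P_in = np.array([1.0, 0.0, -1.0])      # external power in at node 1, out at node 3

dt, x = 0.01, np.zeros(3)
for _ in range(20000):
    P = k_edge * (M.T @ x)             # flow along each edge
    x = x + dt * (-M @ P + P_in) / C   # node energy balance (Euler step)
print("steady node states:", x.round(3))
```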
To ensure safety, reliability, and productivity of industrial processes, artificial intelligence (AI) and machine learning techniques have been widely used in process industries for decades. The benefits of process monitoring and control are well documented, and these techniques are employed routinely in manufacturing. This talk will give a historical perspective and review recent AI and machine learning successes in the areas of real-time analytics, deep learning, reinforcement learning, visualization, and feature engineering. The complex interaction between human decisions and automated control will be discussed. Humans grow expertise by quickly adapting to abnormal conditions and using domain knowledge to generate creative solutions. However, reproducing human decisions across the enterprise is a challenge. A common misconception is that AI is meant to replace human decision-making. The talk will emphasize how AI and control systems must be complementary to make human decisions as efficient and consistent as possible. Human decision-making will remain a centerpiece of how industrial processes are operated in a safe, reliable, and productive manner.
Networked and robotic systems in emerging applications are required to operate safely and adaptively, and to degrade gracefully, while coordinating a large number of nodes. Distributed algorithms have become established as a means for robust coordination, overcoming the challenges imposed by the limited capabilities of each agent. However, many problems remain in breaking down the barriers to fast computation, making effective use of measured data, and understanding large-scale limit effects. In this talk, I will present ongoing work in the control of infrastructure networks and large-swarm coordination, along with a discussion of modeling approaches, analysis tools, and architectural trade-offs in going from small to large robotic networks.
In September 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) initiated the era of gravitational wave astronomy (a new window on the universe) with the first direct detection of gravitational waves (ripples in the fabric of space-time) resulting from the merger of a pair of black holes into a single larger black hole. In August 2017, the LIGO and Virgo collaborations announced the first direct detection of gravitational waves associated with a gamma-ray burst and the electromagnetic emission (visible, infrared, radio) of the afterglow of a kilonova, the spectacular collision of two neutron stars. This marks the beginning of multi-messenger astronomy. The kilonova discovery was made using the U.S.-based LIGO, the Europe-based Virgo detector, and 70 ground- and space-based observatories.
The Advanced LIGO gravitational wave detectors are second-generation instruments designed and built for the two LIGO observatories in Hanford, WA and Livingston, LA. These two identically designed instruments employ coupled optical cavities in a specialized version of a Michelson interferometer with 4-kilometer-long arms. Resonant optical cavities are used in the arms to increase the interaction time with a gravitational wave, power recycling is used to increase the effective laser power, and signal recycling is used to improve the frequency response. In the most sensitive frequency region, around 100 Hz, the displacement sensitivity is 10^-19 meters rms, or about 10 thousand times smaller than a proton. In order to achieve this unsurpassed measurement sensitivity, Advanced LIGO employs a wide range of cutting-edge, high-performance technologies, including an ultra-high-vacuum system; an extremely stable laser source; multiple stages of active vibration isolation; super-polished and ion-milled optics; high-performance multi-layer dielectric coatings; wavefront sensing; active thermal compensation; very low noise analog and digital electronics; complex, nonlinear multi-input, multi-output control systems; a custom, scalable, and easily re-configurable data acquisition and state control system; and squeezed light. The principles of operation, the numerous control challenges, and future directions in control will be discussed. More information is available at https://www.ligo.caltech.edu/.
Optimal controllers for linear or nonlinear dynamic systems with known dynamics can be designed by using the Riccati and Hamilton-Jacobi-Bellman (HJB) equations, respectively. However, optimal control of uncertain linear or nonlinear dynamic systems remains a major challenge. Moreover, controllers designed in discrete time have the important advantage that they can be directly implemented in digital form on modern embedded hardware. Unfortunately, discrete-time design using Lyapunov stability analysis is far more complex than its continuous-time counterpart, since the first difference of the Lyapunov function is quadratic in the states, not linear as in the continuous-time case. By incorporating learning features into the feedback controller design, optimal adaptive control of such uncertain dynamical systems in discrete time can be achieved.
In this talk, an overview of first- and second-generation feedback controllers with a learning component in discrete time will be given. Subsequently, discrete-time learning-based optimal adaptive control of uncertain nonlinear dynamic systems will be presented in a systematic manner using a forward-in-time approach based on reinforcement learning (RL)/approximate dynamic programming (ADP). Challenges in developing and implementing the three generations of learning controllers will be addressed using practical examples such as automotive engine emission control, robotics, and others. We will argue that discrete-time controller development is preferred for transitioning the developed theory to practice. Today, applications of learning controllers can be found in areas as diverse as process control, energy and smart grids, civil infrastructure, healthcare, manufacturing, automotive systems, transportation, entertainment, and consumer appliances. The talk will conclude with a short discussion of open research problems in the area of learning control.
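For context, when the model is known the discrete-time optimal controller follows from the Riccati equation; the value-iteration recursion below is the model-based counterpart of the forward-in-time ADP schemes discussed, which learn the same quantities from data. The system matrices and weights are illustrative assumptions.

```python
import numpy as np

# Discrete-time LQR via value iteration:
# P_{k+1} = Q + A^T P_k A - A^T P_k B (R + B^T P_k B)^{-1} B^T P_k A.
# The system (A, B) and weights (Q, R) are illustrative assumptions.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

P = np.zeros((2, 2))
for _ in range(500):
    S = R + B.T @ P @ B
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(S, B.T @ P @ A)

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback u = -K x
print("gain K:", K.round(4))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K).round(4))
```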
Security and privacy are of growing concern in many control applications. Cyber attacks are frequently reported for a variety of industrial and infrastructure systems. For more than a decade, the control community has developed techniques for designing control systems that are resilient to cyber-physical attacks. In this talk, we will review some of these results. In particular, because the cyber and physical components of networked control systems are tightly interconnected, it is argued that traditional IT security, which focuses only on the cyber part, does not provide appropriate solutions. Modeling the objectives and resources of the adversary together with the plant and control dynamics is shown to be essential. The consequences of common attack scenarios, such as denial-of-service, replay, and bias-injection attacks, can be analyzed using the framework presented. It is also shown how to strengthen the control loops by deriving security- and privacy-aware estimation and control schemes. Applications in building automation, power networks, and automotive systems will be used to motivate and illustrate the results. The presentation is based on joint work with several students and colleagues at KTH and elsewhere.
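A minimal sketch of one such scenario: a sensor bias-injection attack against a steady-state Kalman filter, monitored through a normalized innovation statistic of the kind used in residual-based detection. The plant model, noise levels, and attack magnitude are illustrative assumptions, not the framework from the talk.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Bias-injection sketch: a steady-state Kalman filter tracks x+ = A x + w, y = C x + v;
# from step 100 on, the sensor reading is offset by a constant bias, and a
# normalized innovation statistic is monitored.
rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Qw, Rv = 0.001 * np.eye(2), np.array([[0.01]])

P = solve_discrete_are(A.T, C.T, Qw, Rv)          # steady-state prediction covariance
S = C @ P @ C.T + Rv                              # innovation covariance
K = P @ C.T @ np.linalg.inv(S)                    # Kalman gain

x, xh = np.zeros(2), np.zeros(2)
for k in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Qw)
    bias = 1.0 if k >= 100 else 0.0               # attack starts at k = 100
    y = C @ x + rng.normal(0.0, np.sqrt(Rv[0, 0])) + bias
    r = y - C @ (A @ xh)                          # innovation (residual)
    xh = A @ xh + K @ r                           # filter update
    g = float(r @ np.linalg.solve(S, r))          # normalized innovation statistic
    if k % 25 == 0:
        print(f"k={k:3d}  g = {g:6.2f}")
```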
Advances in computing and networking technologies have connected manufacturing systems from the lowest levels of sensors and actuators, across the factory, through the supply chain, and beyond. Large amounts of data have always been available to these systems, with currents and velocities sampled at regular intervals and used to make control decisions, and throughputs tracked hourly or daily. The ability to collect and save this detailed low-level data, send it to a central repository, and store it for days, months, and years, enables better insight into the behavior – and misbehavior – of complex manufacturing systems. The output from high-fidelity models and/or reams of historical data can be compared with streams of data coming off the plant floor to identify anomalies. Early identification of anomalies, before they lead to poor quality products or machine failure, can result in significant productivity improvements. We will discuss multiple approaches for harnessing this data, leveraging both physics-based and data-driven models, and how automation can enable timely responses. Both simulation and experimental results will be presented.
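As a small illustration of comparing model output with streaming plant data, here is a sketch that tracks an exponentially weighted moving average (EWMA) of the model-measurement residual and flags a slow drift fault early; the process model, fault, and threshold are illustrative assumptions.

```python
import numpy as np

# Anomaly-monitoring sketch: compare a simple physics-based prediction with a
# streaming measurement and track an EWMA of the residual as an early warning.
rng = np.random.default_rng(2)
lam, threshold = 0.1, 0.15
ewma = 0.0

for k in range(300):
    u = np.sin(0.05 * k)                       # known input to the process
    y_model = 2.0 * u                          # physics-based prediction (assumed gain)
    drift = 0.005 * max(0, k - 150)            # slow fault starting at sample 150
    y_meas = 2.0 * u + drift + rng.normal(0, 0.05)
    ewma = (1 - lam) * ewma + lam * (y_meas - y_model)
    if abs(ewma) > threshold:
        print(f"anomaly flagged at sample {k}, EWMA residual = {ewma:.3f}")
        break
```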
Cyber-physical systems are the basis for the new industrial revolution. Growing energy demand and environmental concerns require large numbers of renewable energy resources, efficient energy consumption, energy storage devices, and demand response. Cyber-physical energy systems (CPES), in a broader sense, provide a desirable infrastructure for efficient energy production and consumption with uncertain energy resources. This talk will focus on the structure of CPES and the problem of security-constrained planning and scheduling of CPES, including new renewable energy sources with high levels of uncertainty. Newly developed analytical conditions are discussed for quickly identifying the security bottlenecks in a complex power grid when new renewable energy sources coordinate with storable energy sources such as hydro and pumped storage. A new method is introduced to solve the well-known N-k contingency security assessment problem with a computational complexity reduced by two to three orders of magnitude. Production, storage, transportation, and utilization of hydrogen as the main energy source are also introduced. It is shown that a hydrogen-based CPES will provide an ideal infrastructure for energy supply and consumption with almost no pollution, and it will likely lead to the energy revolution anticipated in the new century.
In the six decades of conventional TUNING-BASED adaptive control, the unattained fundamental goals, in the absence of detrimental artificial excitation, have been (1) exponential regulation, as with robust controllers, and (2) perfect learning of the plant model. More than a quarter-century after I started my career by extending conventional adaptive controllers from linear to nonlinear systems, I reach those decades-old goals with a new non-tuning paradigm: regulation-triggered batch identification. The parameter estimate in the controller is held constant and, only once the regulation error grows "too large," a parameter estimate update, based on the data since the last update, is "triggered." Such a simple parameter estimator provably, and remarkably, terminates updating after a number of state-growth-triggered updates no greater than the number of unknown parameters. This yields exponential regulation and perfect identification except for a zero-measure set of initial conditions. I present a design for a more general class of nonlinear systems than ever before, an extension to adaptive PDE control, a flight control example (the "wing rock" instability), and, time permitting, a simple robotics example. This is joint work with Iasson Karafyllis from the Mathematics Department of the National Technical University of Athens.
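The following is only a schematic scalar illustration of the event-triggered pattern described above (hold the estimate, trigger a batch least-squares re-estimate when the state grows too large), not the design from the talk; the plant, trigger factor, and initial estimate are illustrative assumptions.

```python
import numpy as np

# Schematic illustration: plant x+ = a x + u with unknown a, certainty-equivalence
# control u = -a_hat x, and a batch least-squares update of a_hat triggered only
# when |x| grows past a factor of its value at the last update.
a_true, a_hat = 1.8, 0.0            # unknown unstable pole, poor initial estimate
x, trigger_factor = 1.0, 2.0
x_at_update, batch = x, []

for k in range(40):
    u = -a_hat * x                  # certainty-equivalence control
    x_next = a_true * x + u         # plant update
    batch.append((x, x_next - u))   # store (x_k, a_true * x_k) for identification
    if abs(x_next) > trigger_factor * max(abs(x_at_update), 1e-6):
        X = np.array([p[0] for p in batch])
        Y = np.array([p[1] for p in batch])
        a_hat = float(X @ Y / (X @ X))          # batch least squares since last update
        x_at_update, batch = x_next, []
        print(f"k={k}: update triggered, a_hat = {a_hat:.4f}")
    x = x_next

print("final |x| =", abs(x), " a_hat =", a_hat)
```

In this noise-free scalar case a single triggered update (no more than the number of unknown parameters) identifies the pole exactly, after which the state is regulated and no further updates occur.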
Probability theory has had a significant impact on systems and control. In this talk, we will visit three developments in control theory that have close connections to, and are impacted by, results in probability theory. We discuss the Perron-Frobenius theorem, its relation to results in distributed computation and optimization, and its generalization to time-varying chains. We will discuss a result on the controllability of random networks and show how recent developments in random matrix theory and inverse Littlewood-Offord theory shed light on such problems. Lastly, we discuss controllability of safety-critical stochastic systems and how martingale theory leads to the design and analysis of control policies for stochastic systems.
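A small numerical illustration of the Perron-Frobenius connection mentioned above: for a primitive row-stochastic matrix W, the powers W^k converge to the rank-one matrix 1*pi^T, where pi is the positive left Perron eigenvector, which is the mechanism behind convergence of distributed averaging. The particular matrix is an illustrative assumption.

```python
import numpy as np

# Perron-Frobenius illustration: W^k -> 1 * pi^T for a primitive row-stochastic W,
# where pi is the left eigenvector of W for eigenvalue 1, normalized to sum to one.
W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])

vals, vecs = np.linalg.eig(W.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                                  # left Perron eigenvector of W

Wk = np.linalg.matrix_power(W, 50)
print("pi          :", pi.round(4))
print("row of W^50 :", Wk[0].round(4))              # every row approaches pi^T
print("x(50) from x(0)=[1,0,0]:", (Wk @ np.array([1.0, 0.0, 0.0])).round(4))
```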