At the quantum level, feedback loops have to take into account measurement back-action. The goal of this talk is to explain, in a tutorial way and through the first experimental realization of quantum-state feedback, how such a purely quantum effect can be exploited in models and stabilization control schemes. This closed-loop experiment was conducted in 2011 by the group of Serge Haroche (Physics Nobel Prize 2012). The control goal was to stabilize a small number of microwave photons trapped between two superconducting mirrors and subject to quantum non-demolition measurement via off-resonant probe Rydberg atoms. The implemented control scheme was decomposed into two parts. The first part estimates in real time the quantum state of the trapped photons via a discrete-time Belavkin quantum filter. The second part is a nonlinear quantum-state feedback based on control Lyapunov functions; it stabilizes, via suitable coherent displacements, the photon number at its set-point, an integer less than 5 in the experiment. This control scheme relies on a controlled hidden Markov model whose structure combines three quantum rules: unitary deterministic Schrödinger evolution; stochastic collapse of the wave packet induced by the measurement; and the tensor product for composite systems. These basic quantum rules characterize the structure of all Markovian models describing open quantum systems. They also explain the existence of two kinds of feedback schemes currently developed for quantum error correction: measurement-based feedback, where an open quantum system is stabilized by a classical controller; and coherent or autonomous feedback (reservoir engineering), where an open quantum system is passively stabilized through its coupling with another highly dissipative quantum system, namely the quantum controller.
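To make the two-part structure concrete, the sketch below implements a discrete-time quantum filter for quantum non-demolition photon counting followed by a Lyapunov-based choice of coherent displacement. It is a minimal illustration only: the truncated Fock space, the idealized measurement operators, the displacement grid, and the assumption that the filter state coincides with the true state (perfect detection, no decoherence) are simplifications, not the parameters of the 2011 experiment.

```python
# Minimal sketch of the two-part scheme: a discrete-time quantum filter for QND
# photon-number measurement, followed by a Lyapunov-based choice of coherent
# displacement. Truncated Fock space and idealized measurement operators are
# illustrative assumptions, not the experimental parameters.
import numpy as np
from scipy.linalg import expm

N = 10                                    # Fock-space truncation
nbar = 3                                  # photon-number set-point
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
num = a.conj().T @ a                      # photon-number operator

# Idealized QND Kraus operators: the probe atom is detected in g or e with a
# photon-number-dependent probability (cos^2 / sin^2 of a dispersive phase).
phi = 0.7 * np.diag(num) + 0.35
Mg = np.diag(np.cos(phi / 2))
Me = np.diag(np.sin(phi / 2))

def filter_update(rho, M):
    """Quantum-filter update (collapse rule) for detection outcome M."""
    sigma = M @ rho @ M.conj().T
    return sigma / np.trace(sigma).real

def displacement(alpha):
    """Coherent displacement D(alpha) = exp(alpha a^dag - alpha^* a)."""
    return expm(alpha * a.conj().T - np.conjugate(alpha) * a)

def lyapunov_feedback(rho, grid=np.linspace(-0.3, 0.3, 13)):
    """Pick, over a small grid, the displacement maximizing fidelity to |nbar>
    (equivalently, minimizing the simple control-Lyapunov function 1 - fidelity)."""
    best = max(grid, key=lambda al:
               (displacement(al) @ rho @ displacement(al).conj().T)[nbar, nbar].real)
    return displacement(best)

rng = np.random.default_rng(0)
rho = np.eye(N) / N                       # completely mixed initial filter state
for k in range(200):
    # Sample the detector click from the filter state itself (idealization:
    # filter state = true state, perfect detection).
    pg = np.trace(Mg @ rho @ Mg.conj().T).real
    M = Mg if rng.random() < pg else Me
    rho = filter_update(rho, M)           # part 1: real-time state estimation
    D = lyapunov_feedback(rho)            # part 2: Lyapunov-based displacement
    rho = D @ rho @ D.conj().T

print("final photon-number populations:", np.round(np.diag(rho).real, 3))
```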
Thirty years ago, computer-aided control system design involved an exclusive community of engineers, typically in top research labs or large companies, running esoteric codes on timeshared minicomputers to design and analyze control algorithms, often for expensive systems produced in low volumes. Today, computer-aided control system design has grown into Model-Based Design, encompassing not only system analysis and algorithm design, but also implementation through code generation, plus verification and validation on both models and embedded code. It is used in every industry that creates today’s smart systems – aerospace, automotive, industrial automation, medical devices, robotics, energy, and many more – not only for the controls but also for integrating computer vision, communication, and machine learning. In this talk, Jack Little reviews the evolution of control design tools, and the corresponding changes in controls education and research. Jack then looks forward to the future of Model-Based Design and how it is addressing the next generation of control engineers: researchers and developers working on challenges such as cyber-physical systems and distributed systems, but also students and makers taking advantage of easy-to-use software with low-cost hardware – everyone building the smarter controlled systems of the future.
Many current products and systems employ sophisticated mathematical algorithms to automatically make complex decisions, or take action, in real-time. Examples include recommendation engines, search engines, spam filters, on-line advertising systems, fraud detection systems, automated trading engines, revenue management systems, supply chain systems, electricity generator scheduling, flight management systems, and advanced engine controls.
I'll cover the basic ideas behind these and other applications, emphasizing the central role of mathematical optimization and the associated areas of machine learning and automatic control. The talk will be nontechnical, but the focus will be on understanding the central issues that come up across many applications, such as the development or learning of mathematical models, the role of uncertainty, the idea of feedback or recourse, and computational complexity.
In this talk, I will describe some of our work on nanomechanics of biological systems and design of medical devices for hospitals in resource-poor countries. These may sound like very disparate areas. However, you may be surprised to see how well the skills students learn in one translate to the other. Atomic Force Microscopy and high-precision instrumentation are common tools for the basic sciences. We can use these systems to measure small-scale intermolecular forces and characterize the nano-structures of individual cellular components. These types of measurements help to build more accurate models of tissues and organs to predict behavior during disease and injury. Beyond the basic sciences, the same types of concepts and skills needed for nanoscience work can be applied to solve real-world engineering problems in resource-poor hospitals today. Working with engineers and clinicians in Tanzania, our students have designed several novel solutions to problems they have seen in clinics. These range from infant warmers to ink-jet printed diabetes test supplies to basket-woven neck braces. In addition, while in the hospitals, our students put their debugging skills to the test by helping to repair and maintain clinical devices and equipment. Experiences in the lab and in the field give students a rounded perspective on engineering and a clearer outlook on their future career paths.
Humans have the ability to walk with deceptive ease, navigating everything from daily environments to uneven and uncertain terrain with efficiency and robustness. With the goal of achieving human-like abilities on robotic systems, this talk presents the process of formally achieving bipedal robotic walking through controller synthesis inspired by human locomotion, and it demonstrates these methods through experimental realization on numerous bipedal robots and robotic assistive devices. Motivated by the hierarchical control present in humans, human-inspired virtual constraints are utilized to synthesize a novel type of control Lyapunov function (CLF); when coupled with hybrid system models of locomotion, this class of CLFs yields provably stable robotic walking. Going beyond explicit feedback control strategies, these CLFs can be used to formulate an optimization-based control methodology that dynamically accounts for torque and contact constraints while being implementable in real-time. This sets the stage for the unification of control objectives with safety-critical constraints through the use of a new class of control barrier functions provably enforcing these constraints. The end result is the generation of bipedal robotic walking that is remarkably human-like and is experimentally realizable, together with a novel control framework for highly dynamic behaviors on bipedal robots. Furthermore, these methods form the basis for achieving a variety of advanced walking behaviors—including multi-domain locomotion, e.g., human-like heel-toe behaviors—and therefore have application to the control of robotic assistive devices, as evidenced by the demonstration of the resulting controllers on multiple robotic walking platforms, humanoid robots and prostheses.
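As a minimal illustration of the optimization-based control methodology described above, the sketch below solves a CLF-CBF quadratic program for a double integrator, with the CLF relaxed by a slack variable so that the safety (barrier) constraint always takes priority. The dynamics, gains, barrier, and the use of cvxpy are illustrative assumptions; they are not the hybrid bipedal models or controllers of the talk.

```python
# Minimal CLF-CBF quadratic program on a double integrator: the CLF drives the
# state toward the origin, the CBF keeps the velocity inside |x2| <= vmax, and
# a slack variable relaxes the CLF so the safety constraint always has priority.
# All parameters are illustrative assumptions.
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: x1' = x2, x2' = u
B = np.array([[0.0], [1.0]])
K = np.array([[2.0, 3.0]])               # nominal stabilizing gain
Acl = A - B @ K
P = solve_continuous_lyapunov(Acl.T, -np.eye(2))   # V(x) = x'Px is a CLF
gamma, alpha, vmax = 1.0, 5.0, 1.0

def clf_cbf_qp(x):
    V = float(x @ P @ x)
    LfV = float(2 * x @ P @ (A @ x))
    LgV = float(2 * x @ P @ B[:, 0])
    h = vmax**2 - x[1]**2                # CBF: keep |x2| <= vmax
    Lgh = -2.0 * x[1]                    # dh/dt = Lgh * u
    u, d = cp.Variable(), cp.Variable(nonneg=True)
    prob = cp.Problem(
        cp.Minimize(cp.square(u) + 100 * cp.square(d)),
        [LfV + LgV * u <= -gamma * V + d,     # CLF decrease (relaxed by slack d)
         Lgh * u >= -alpha * h])              # CBF forward-invariance (hard)
    prob.solve()
    return float(u.value)

# Euler simulation from an initial condition that would otherwise demand a
# large velocity; the barrier should cap the speed along the run.
x, dt, max_speed = np.array([3.0, 0.0]), 0.01, 0.0
for _ in range(600):
    u = clf_cbf_qp(x)
    x = x + dt * (A @ x + B[:, 0] * u)
    max_speed = max(max_speed, abs(x[1]))
print("final state:", np.round(x, 3), "| max |x2| along the run:", round(max_speed, 3))
```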
Robotic technology can: (i) deliver therapy to aid recovery after neurological disease; (ii) replace limb function following amputation; and (iii) provide assistance to restore function. This exciting new frontier of robotic applications requires sensitive but forceful physical interaction with a human, yet physical contact can severely destabilize robots. Despite these challenges, clinical evidence shows that robot therapy is both effective and cost-effective. Motorized amputation prostheses present even greater challenges: they must manage physical interaction with objects in the world as well as with the amputee. This presentation will review how machine mimicry of natural control provides the gentleness required for robotic therapy and enables seamless coordination of natural and prosthetic limbs. A prerequisite for success in these applications is a quantitative knowledge of the human motor control system.
Electricity production in the US has changed dramatically since 2000, with the share of electricity produced from gas growing from 16% to 30% while coal's share dropped from 52% to 37%. These changes are primarily driven by two technologies used in shale rock formations: directional drilling to create horizontal wells, and hydraulic fracturing to release the gas within the relatively impermeable rock. This presentation will first give a brief operational overview of hydraulic fracturing. Next, challenges that relate to the control of this technology are described. Lastly, two examples are presented: a theoretical study investigating the potential of model-based feedback control of the hydraulic fracturing process, and an implementation that highlights the importance of measurements and data uncertainty when designing effective and robust controllers.
In the past, robotic manipulators, machine tools, measurement devices, and other systems were designed with rigid structures and operated at relatively low speeds. With a growing demand for fuel efficiency, smaller actuators, and speed, lighter-weight materials are increasingly used in many systems, making them more flexible. Achieving high-performance control of flexible structures is a difficult task, but one that is now critical to the success of many important applications, such as atomic force microscopes, disk drives, tape drives, robotic manipulators, gantry cranes, wind turbines, satellites, and the space station remote manipulator system. The unwanted vibration that results from maneuvering or controlling a flexible structure is often the limiting factor in the performance of the system. Over the last few decades, many feedback, feedforward, and combined feedforward/feedback control methods have been developed for flexible structures. We will discuss and compare several of these control methods in conjunction with overviewing some of the issues in the modeling of flexible structures, and we will highlight a few recurring themes across the diverse application areas mentioned above.
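As one concrete example of the feedforward methods referred to above, the sketch below implements a classical zero-vibration (ZV) input shaper for a single lightly damped flexible mode. The mode frequency, damping, and simulation setup are illustrative assumptions, not tied to any of the applications listed.

```python
# Minimal zero-vibration (ZV) input-shaping sketch: a step command is convolved
# with two impulses timed/scaled from an assumed flexible-mode frequency and
# damping, so the residual vibration of that mode is largely cancelled.
import numpy as np
from scipy.signal import lti, lsim

wn, zeta = 2 * np.pi * 1.0, 0.02          # assumed flexible mode: 1 Hz, 2% damping
wd = wn * np.sqrt(1 - zeta**2)
K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
A1, A2, t2 = 1 / (1 + K), K / (1 + K), np.pi / wd   # ZV shaper impulses

dt = 0.001
t = np.arange(0.0, 8.0, dt)
step = np.ones_like(t)                    # unshaped reference command

# Shaped command = convolution of the step with the two-impulse sequence.
shaped = A1 * step.copy()
k2 = int(round(t2 / dt))
shaped[k2:] += A2 * step[:-k2]

# Lightly damped second-order mode driven by the command.
plant = lti([wn**2], [1, 2 * zeta * wn, wn**2])
_, y_unshaped, _ = lsim(plant, step, t)
_, y_shaped, _ = lsim(plant, shaped, t)

# Residual vibration after the command settles (here, the last 2 seconds).
tail = t > 6.0
print("residual vibration, unshaped:", round(np.ptp(y_unshaped[tail]), 4))
print("residual vibration, shaped:  ", round(np.ptp(y_shaped[tail]), 4))
```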
The interim report on the fifth Science and Technology Basic Plan of Japan highlights the realization of a Super Smart Society in our future. The Smart Cities initiative has also been promoted worldwide as a societal-scale CPS (Cyber-Physical Systems) infrastructure. Along with efficient traffic, water, and security management, distributed EMS (Energy Management Systems) should play a key role as we head toward the low-carbon, environmentally friendly society that is essential for sustainable development. To this end, JST (Japan Science and Technology Agency) has launched a CREST research area for building distributed EMS. The aim of this project is to create fundamental theory and advanced technology for optimal control of the energy balance between dynamic demand and supply. The topics covered include forecasting and integration of renewable energy, management of electric vehicles and storage, demand response and human behavior, development of demand models, and platform building. A particular emphasis is on the promotion of international research collaboration with US and European funding agencies, such as NSF (USA), DFG (Germany), RCN (Norway), CNR (Italy), and others. This enables all the project researchers involved to catalyze networking and knowledge sharing across a broad array of disciplines. In this talk, the ongoing exciting progress of the CREST EMS project will be presented.
Network systems are mathematical models for the study of cooperation, propagation, synchronization, and other dynamical phenomena that arise among interconnected agents. Network systems are widespread in science as fundamental modeling tools, e.g., in sociology, ecology, and epidemiology. They also play a key and growing role in technology, e.g., in the design of power grids, cooperative robotic behaviors, and distributed computing algorithms. Their study pervades applied mathematics. This talk will review established and emerging frameworks for modeling, analysis, and design of network systems. I will survey the available comprehensive theory for linear network systems and then highlight selected nonlinear concepts. Next, I will focus on recent developments by my group on (i) modeling of the evolution of opinions and social power in social networks, (ii) analysis of security and transmission capacity in power grids, and (iii) design of optimal strategies for robotic routing and coordination.
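As a minimal illustration of a linear network-system model of the kind surveyed, the sketch below simulates French-DeGroot opinion dynamics and reads off each agent's social power from the dominant left eigenvector of the influence matrix. The 4-agent influence matrix is an illustrative assumption.

```python
# Minimal French-DeGroot opinion-dynamics sketch: opinions are repeatedly
# averaged through a row-stochastic influence matrix W, and each agent's
# "social power" (its weight in the consensus value) is the corresponding
# entry of the dominant left eigenvector of W.
import numpy as np

W = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.1, 0.6, 0.2, 0.1],
              [0.2, 0.2, 0.5, 0.1],
              [0.0, 0.3, 0.3, 0.4]])      # row-stochastic, strongly connected

x0 = np.array([1.0, 0.0, 0.5, -1.0])      # initial opinions
x = x0.copy()
for _ in range(200):                      # x(k+1) = W x(k)
    x = W @ x

# Social power: normalized left eigenvector of W for eigenvalue 1.
vals, vecs = np.linalg.eig(W.T)
w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
w = w / w.sum()

print("consensus opinions:        ", np.round(x, 4))
print("social power:              ", np.round(w, 4))
print("predicted consensus value: ", round(float(w @ x0), 4))
```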
The emergence of large networked systems has brought about new challenges to researchers and practitioners alike. While such systems perform well under normal operations, they can exhibit fragility in response to certain disruptions that may lead to catastrophic cascades of failures. This phenomenon, referred to as systemic risk, emphasizes the role of the system interconnection in causing such, possibly rare, events. The flash crash of 2010, the financial crisis of 2008, the New England power outage of 2003, or simply extensive delays in air travel, are just a few of many examples of fragility and systemic risk present in complex interconnected systems. Robust interconnections have been the subject of study by the control community for several decades. Substantial progress has been made in the context of both stability and performance robustness for various types of interconnections. Typical problems addressed in the literature involve interconnections with simple topologies, but with more complex components (dynamic, sometimes with high dimensions). More recently, the attention of the research community has shifted towards networked systems where the topology of the network is fairly large and complicated, while the local dynamics are fairly simple. The term fragility is used in this context to highlight the system's closeness to failure. Notions of failure include large amplification of local disturbances (or shocks), instability, or a substantial increase in the probability of extreme events. Cascaded failures, or systemic risk, fit under this umbrella and focus on local failures synchronizing to cause a breakdown in the network. Many abstracted models from transportation, finance, or the power grid fit this framework well. The focus of research is to relate fragility to the size and characteristics of a network for certain types of local interactions. In this talk, I will address this emerging area. I will provide some constructive examples and highlight important research directions.
Modern air transportation systems are complex cyber-physical networks that are critical to global travel and commerce. As the demand for air transport has grown, so have congestion, flight delays, and the resultant environmental impacts. With further growth in demand expected, new control techniques are needed, perhaps even with redesign of some parts of the system, in order to prevent cascading delays and excessive pollution.
This talk presents examples of control and optimization algorithms for air transportation systems that are grounded in real-world data, including their implementation and testing in both simulations and in field trials. These algorithms help us address several challenges, including resource allocation with multiple stakeholders, robustness in the presence of operational uncertainties, and developing decision-support tools that account for human operators and their behavior.
Advances in modeling and control will be required to meet future technical challenges in semiconductor manufacturing. For batch processes such as those that occur in semiconductor fabrication, modeling and control must be incorporated into a multi-level framework including sequential control, within-the-batch control, run-to-run control, fault detection, and factory control. Implementation challenges include a lack of suitable in situ measurements, variations in process equipment characteristics and wafer properties, limited process understanding, and non-automated operational practices. This presentation reviews how basic research findings in modeling and control have influenced commercial applications in key unit operations such as lithography and plasma etching, as well as in overall factory control. The use of simultaneous identification and control algorithms as part of on-line testing is also illustrated.
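As a minimal illustration of run-to-run control, the sketch below implements the widely used EWMA run-to-run scheme on an assumed linear run-level model with slow drift; the gain, drift rate, and noise level are illustrative assumptions, not data from any fab.

```python
# Minimal EWMA run-to-run control sketch: each batch (run) is modeled as
# y_k = a_k + b * u_k + noise with a slowly drifting offset a_k; after each run
# the offset estimate is updated with an exponentially weighted moving average
# and the next recipe is chosen to hit the target.
import numpy as np

rng = np.random.default_rng(1)
target, b = 100.0, 2.0        # desired output and assumed process gain
lam = 0.3                     # EWMA weight
a_true, drift, sigma = 10.0, 0.05, 0.5

a_hat = 0.0                   # initial offset estimate
outputs = []
for k in range(100):
    u = (target - a_hat) / b                              # recipe for this run
    y = a_true + b * u + sigma * rng.standard_normal()    # run the process
    a_hat = lam * (y - b * u) + (1 - lam) * a_hat         # EWMA offset update
    a_true += drift                                       # slow tool/process drift
    outputs.append(y)

print("mean |error| over last 50 runs:",
      round(float(np.mean(np.abs(np.array(outputs[-50:]) - target))), 3))
```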
Hybrid systems are a modeling tool allowing for the composition of continuous and discrete state dynamics. They can be represented as continuous systems with modes of operation modeled by discrete dynamics, with the two kinds of dynamics influencing each other. Hybrid systems have been essential in modeling a variety of important problems, such as aircraft flight management, air and ground transportation systems, robotic vehicles and human-automation systems. These systems use discrete logic in control because discrete abstractions make it easier to manage complexity and discrete representations more naturally accommodate linguistic and qualitative information in controller design.
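To make the hybrid modeling idea concrete, the sketch below simulates a textbook hybrid system, a thermostat with two discrete modes and continuous temperature dynamics, with guard conditions triggering the mode switches. All parameters are illustrative assumptions; the applications above use far richer hybrid models.

```python
# Minimal hybrid-system sketch: a thermostat with two discrete modes ("on",
# "off") whose continuous temperature dynamics differ, and guard conditions
# that switch the mode.
import numpy as np

def flow(T, mode):
    """Continuous dynamics in each mode (used by the Euler integration step)."""
    return 5.0 - 0.1 * T if mode == "on" else -0.1 * T

def guard(T, mode):
    """Discrete transitions: switch the heater at temperature thresholds."""
    if mode == "off" and T <= 19.0:
        return "on"
    if mode == "on" and T >= 21.0:
        return "off"
    return mode

T, mode, dt = 15.0, "on", 0.01
trace = []
for k in range(5000):
    T += dt * flow(T, mode)        # continuous evolution
    mode = guard(T, mode)          # discrete jump (mode change), if enabled
    trace.append((k * dt, T, mode))

print("temperature stays in band after the initial transient:",
      all(18.5 <= T <= 21.5 for _, T, _ in trace[2000:]))
```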
A great deal of research in recent years has focused on the synthesis of controllers for hybrid systems. For safety specifications on the hybrid system, namely to design a controller that steers the system away from unsafe states, we will present a synthesis and computational technique based on optimal control and game theory. In the first part of the talk, we will review these methods and their application to collision avoidance and avionics design in air traffic management systems, and to networks of manned and unmanned aerial vehicles. It is frequently of interest to synthesize controllers with more detailed performance specifications on the closed-loop trajectories. For such requirements, we will present a toolbox of methods combining reachability with data-driven techniques inspired by machine learning, to enable performance improvement while maintaining safety. We will illustrate these "safe learning" methods on a quadrotor UAV experimental platform which we have at Berkeley.
In addition to classical physical applications such as fluid flows in engines, thermal dynamics in buildings, flexible wings of aircraft, electrochemistry in batteries, or plasmas in lasers and tokamaks, PDEs are effective in modeling large multi-agent systems as continua of networked agents, with applications ranging from vehicle formations to opinion dynamics. In its early period, PDE control focused on replicating linear control methods (pole placement, LQG, H-infinity, etc.) in infinite dimension. Over the last 15 years, a continuum version of the "backstepping" method has given rise to control design tools for nonlinear PDEs and PDEs with unknown functional coefficients. Backstepping designs now exist for each of the major PDE classes (parabolic, hyperbolic, real- and complex-valued, and of various orders in time and space). As a special case, continuum backstepping compensates delays of arbitrary length and time dependence in general nonlinear ODE control systems. I will present a few design ideas and several applications, including deep oil drilling (where a large parametric uncertainty occurs), extruders in 3D printing (where a large delay is a nonlinear function of the value of the state), and deployment of 2D meshes of agents in 3D space (where deployment into complex surfaces necessarily gives rise to coupled unstable PDEs).
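As one concrete illustration of the form a backstepping design takes, the standard textbook case of a boundary-controlled unstable reaction-diffusion equation is summarized below; it is included only to show the structure of the transformation, kernel, and controller, and is not specific to the applications mentioned in the talk.

```latex
% Backstepping for the benchmark reaction-diffusion PDE (standard textbook case;
% I_1 denotes the modified Bessel function of the first kind).
\begin{align*}
  &\text{Plant: } && u_t(x,t) = u_{xx}(x,t) + \lambda\, u(x,t), \quad x \in (0,1),
      \quad u(0,t) = 0, \quad u(1,t) = U(t), \\
  &\text{Transformation: } && w(x,t) = u(x,t) - \int_0^x k(x,y)\, u(y,t)\, dy, \\
  &\text{Kernel: } && k(x,y) = -\lambda\, y\,
      \frac{I_1\!\left(\sqrt{\lambda\,(x^2 - y^2)}\right)}{\sqrt{\lambda\,(x^2 - y^2)}}, \\
  &\text{Control: } && U(t) = \int_0^1 k(1,y)\, u(y,t)\, dy, \\
  &\text{Target: } && w_t = w_{xx}, \quad w(0,t) = 0, \quad w(1,t) = 0
      \quad \text{(exponentially stable heat equation).}
\end{align*}
```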
Networked control systems and distributed parameter systems can be viewed as instances of dynamical systems distributed in discrete and continuum space, respectively. This unified perspective provides insightful connections, and motivates new questions in both areas. Owing to the large number of degrees of freedom, these systems often display complex dynamical responses and phenomena.
Understanding these responses is an important challenge for analysis; mitigating these responses and quantifying fundamental performance limitations in the presence of architectural constraints on distributed controllers is the challenge for synthesis.
I will summarize some new directions in distributed systems research by outlining fascinating connections between distributed systems theory on the one hand, and canonical problems in turbulence and statistical mechanics on the other. In one class of problems, spatio-temporal dynamical analysis clarifies old and vexing questions in the theory of shear flow turbulence. In another class of problems, structured, distributed control design exhibits dimensionality-dependence and phase transition phenomena similar to those in statistical mechanics. It appears that such structured design problems, while difficult and non-convex for finite size systems, have sharp answers in the large system limit. I will argue that such results can be used to build a theory of fundamental performance limitations that are induced by network/spatial topological constraints.
These new directions provide exciting research opportunities and suggest that contact with other disciplines enriches both applications and theory of networked and distributed parameter systems. The study of systems with special structure provides informative answers to difficult analysis and synthesis problems. The systems theory and applications for such classes of problems are arguably still in their infancy, and many challenges with significant intellectual and societal impact remain wide open.
General Electric (GE) has embarked on a journey of merging controls and big data to create brilliant machines and systems that will unlock unprecedented performance through self-improvement and self-diagnosis. In this talk we will review the evolution of industrial controls, its complexity and increasing scale, ultimately leading to the systems of systems that GE refers to as the Industrial Internet. We will also consider challenges and opportunities related to interoperability, security, stability, and system resilience, and discuss specific development cases around infrastructure optimization including grid power, rail networks, and flight efficiency.
The number of devices connected to the Internet exceeded the number of people on the Internet in 2008 and is projected to reach 50 billion in 2020; worldwide smart power meter deployment is expected to grow from 130 million in 2011 to 1.5 billion in 2020; and 90% of new vehicles sold in 2020 will have on-board connectivity platforms, compared with 10% in 2012. The Industrial Internet will deliver new efficiency gains, accelerating productivity growth much the same way that the Industrial Revolution and the Internet Revolution did. Controls are at the heart of this new revolution.
Powerful model-based control tools have enabled the realization of modern clean and efficient automobiles. Our effort to minimize automotive pollution and fuel consumption at low cost is pushing us to control powertrain systems at their high efficiency limit where poorly understood phenomena occur. This semi-plenary story will highlight a few of these frontiers.
In the realm of internal combustion engines, a highly efficient gasoline engine with seemingly chaotic behavior will be controlled at the lean combustion stability limit. In the electrification realm, stressed-out batteries and dead-ended fuel cells will highlight the challenges and successes in understanding, modeling, and controlling highly efficient power conversion on board a vehicle. With these highlights, it will become clear that, as we race to improve mileage by 50% over the next decade, powertrain control engineers will take the driver's seat!
In the evolution of systems and control, there has been an interesting interplay between the demands of new applications initiating the development of new methodologies and theories and, conversely, theoretical or hardware developments enabling new applications. This talk will attempt to describe progress and interactions in this relationship. The role of design methodologies and theoretical results in the design of practical control systems will also be discussed. Examples will be taken from flight control and automotive engine management, together with some emerging areas.