Advances in modeling and control will be required to meet future technical challenges in semiconductor manufacturing. For batch processes such as those in semiconductor fabrication, modeling and control must be incorporated into a multi-level framework including sequential control, within-the-batch control, run-to-run control, fault detection, and factory control. Implementation challenges include a lack of suitable in situ measurements, variations in process equipment characteristics and wafer properties, limited process understanding, and non-automated operational practices. This presentation reviews how basic research findings in modeling and control have influenced commercial applications in key unit operations such as lithography and plasma etching, as well as in overall factory control. The use of simultaneous identification and control algorithms as part of on-line testing is also illustrated.
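As background for readers unfamiliar with run-to-run control, the sketch below shows the exponentially weighted moving average (EWMA) scheme commonly used in semiconductor manufacturing: after each batch, the measured output updates a disturbance estimate, which adjusts the next run's recipe. The process gain, target, and noise values are illustrative assumptions, not figures from the talk.

```python
# Hypothetical sketch of run-to-run control via EWMA; all numbers illustrative.
import random

target = 100.0   # desired post-process output (e.g., film thickness)
gain = 2.0       # assumed process gain: output ~ gain * recipe + disturbance
lam = 0.3        # EWMA smoothing weight
a_hat = 0.0      # running estimate of the process disturbance/offset

recipe = target / gain
for run in range(20):
    disturbance = 5.0 + random.gauss(0, 0.5)              # unknown tool drift
    y = gain * recipe + disturbance                       # measured batch output
    a_hat = lam * (y - gain * recipe) + (1 - lam) * a_hat # EWMA update
    recipe = (target - a_hat) / gain                      # adjust next run's recipe
    print(f"run {run:2d}: y = {y:6.2f}")
```

The output converges toward the target as the disturbance estimate settles, which is the essential run-to-run idea: feedback across batches rather than within them.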
Distributed robotics refers to the control of, and design methods for, a system of mobile robots that 1) are autonomous, that is, have only sensory inputs---no outside direct commands, 2) have no leader, and 3) are under decentralized control. The subject of distributed robotics burst onto the scene in the late twentieth century and became very popular very quickly. The first problems studied were flocking and rendezvous. The most highly cited IEEE TAC paper in the subject is by Jadbabaie, Lin, and Morse (2003). This lecture gives a classroom-style presentation of the rendezvous problem, the most basic coordination task for a network of mobile robots. In the literature, the robots in the rendezvous problem are most frequently kinematic points, modeled as simple integrators, dx/dt = u. Of course, a real wheeled robot has dynamics and is nonholonomic, and the first part of the lecture looks at this discrepancy. The second part reviews the solution to the rendezvous problem. The final part of the lecture concerns infinitely many robots. The lecture is aimed at non-experts.
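To make the kinematic-point model concrete, here is a minimal simulation (my illustration, not from the lecture) of single-integrator robots running the standard consensus law over a fixed communication graph. Note that the rendezvous literature typically uses proximity (visibility) graphs that change with the robots' positions; a fixed ring graph is assumed here only to keep the sketch short.

```python
import numpy as np

# N single-integrator robots, dx_i/dt = u_i, with the consensus control law
# u_i = sum over neighbors j of (x_j - x_i); fixed ring graph assumed.
N, dt = 5, 0.01
x = np.random.rand(N, 2) * 10.0            # random planar positions
A = np.zeros((N, N))                       # ring adjacency matrix
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0

for _ in range(5000):
    u = np.zeros_like(x)
    for i in range(N):
        for j in range(N):
            u[i] += A[i, j] * (x[j] - x[i])  # attraction toward neighbors
    x += dt * u                              # forward-Euler integration

print(np.ptp(x, axis=0))  # spread shrinks toward zero: the robots rendezvous
```

For a connected graph this law drives all robots to the centroid of the initial positions, which is the textbook rendezvous result for simple integrators.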
Hybrid systems are a modeling tool allowing for the composition of continuous and discrete state dynamics. They can be represented as continuous systems with modes of operation modeled by discrete dynamics, with the two kinds of dynamics influencing each other. Hybrid systems have been essential in modeling a variety of important problems, such as aircraft flight management, air and ground transportation systems, robotic vehicles and human-automation systems. These systems use discrete logic in control because discrete abstractions make it easier to manage complexity and discrete representations more naturally accommodate linguistic and qualitative information in controller design.
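To make the definition concrete, here is a minimal sketch (my illustration, not an example from the talk) of the classic thermostat hybrid automaton: continuous temperature dynamics coupled with a discrete on/off mode, each influencing the other.

```python
# Thermostat as a toy hybrid system: continuous state T, discrete mode on/off.
T, mode, dt = 18.0, "on", 0.01
for step in range(10000):
    # continuous dynamics depend on the discrete mode
    dT = (30.0 - T) if mode == "on" else (10.0 - T)
    T += dt * dT
    # discrete transitions (guards) depend on the continuous state
    if mode == "on" and T >= 22.0:
        mode = "off"
    elif mode == "off" and T <= 20.0:
        mode = "on"
print(T, mode)   # temperature cycles in the band [20, 22]
```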
A great deal of research in recent years has focused on the synthesis of controllers for hybrid systems. For safety specifications on the hybrid system, namely to design a controller that steers the system away from unsafe states, we will present a synthesis and computational technique based on optimal control and game theory. In the first part of the talk, we will review these methods and their application to collision avoidance and avionics design in air traffic management systems, and networks of manned and unmanned aerial vehicles. It is frequently of interest to synthesize controllers with more detailed performance specifications on the closed loop trajectories. For such requirements, we will present a toolbox of methods combining reachability with data-driven techniques inspired by machine learning, to enable performance improvement while maintaining safety. We will illustrate these "safe learning" methods on a quadrotor UAV experimental platform which we have at Berkeley.
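The flavor of these "safe learning" methods can be suggested with a one-dimensional sketch (my illustration, not the speakers' toolbox): a performance controller acts freely inside a controlled-invariant safe set, and a safety override takes over on its boundary. The safe set here is computed by hand from the braking distance; in the talk's setting it would come from a reachability computation.

```python
# Least-restrictive safety supervisor for a double integrator nearing a wall.
# Safe set: {(p, v) : p + v^2 / (2*u_max) <= wall}; override = maximal braking.
u_max, wall, dt = 1.0, 10.0, 0.01
p, v = 0.0, 0.0

def desired_control(p, v):
    return u_max          # aggressive "learning" controller: full throttle

for _ in range(3000):
    u = desired_control(p, v)
    stop_dist = v * v / (2 * u_max) if v > 0 else 0.0
    if p + stop_dist >= wall:      # about to leave the safe set
        u = -u_max                 # safety override: brake as hard as possible
    p += dt * v
    v += dt * u

print(p, v)   # creeps up to the wall and hovers there with v near 0
```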
In addition to classical physical applications such as fluid flows in engines, thermal dynamics in buildings, flexible wings of aircraft, electrochemistry in batteries, or plasmas in lasers and tokamaks, PDEs are effective in modeling large multi-agent systems as continua of networked agents, with applications ranging from vehicle formations to opinion dynamics. In its early period, PDE control focused on replicating linear control methods (pole placement, LQG, H-infinity, etc.) in infinite dimension. Over the last 15 years, a continuum version of the "backstepping" method has given rise to control design tools for nonlinear PDEs and PDEs with unknown functional coefficients. Backstepping designs now exist for each of the major PDE classes (parabolic, hyperbolic, real- and complex-valued, and of various orders in time and space). As a special case, continuum backstepping compensates delays of arbitrary length and time dependence in general nonlinear ODE control systems. I will present a few design ideas and several applications, including deep oil drilling (where a large parametric uncertainty occurs), extruders in 3D printing (where a large delay is a nonlinear function of the value of the state), and deployment of 2D meshes of agents in 3D space (where deployment into complex surfaces necessarily gives rise to coupled unstable PDEs).
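As a concrete illustration of the method (standard background on PDE backstepping, not material quoted from the talk), the canonical design for an unstable reaction-diffusion equation uses a Volterra transformation with an explicit Bessel-function kernel:

```latex
% Unstable heat equation with Dirichlet boundary control:
\[
u_t = u_{xx} + \lambda u, \qquad u(0,t) = 0, \qquad u(1,t) = U(t).
\]
% The backstepping transformation maps it to the stable heat equation:
\[
w(x,t) = u(x,t) - \int_0^x k(x,y)\, u(y,t)\, dy
\quad\Longrightarrow\quad
w_t = w_{xx}, \qquad w(0,t) = w(1,t) = 0,
\]
% with the explicit kernel and resulting boundary feedback law
\[
k(x,y) = -\lambda y\,
\frac{I_1\!\left(\sqrt{\lambda\,(x^2 - y^2)}\right)}{\sqrt{\lambda\,(x^2 - y^2)}},
\qquad
U(t) = \int_0^1 k(1,y)\, u(y,t)\, dy .
\]
```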
Networked control systems and distributed parameter systems can be viewed as instances of dynamical systems distributed in discrete and continuum space, respectively. This unified perspective provides insightful connections, and motivates new questions in both areas. Owing to the large number of degrees of freedom, these systems often display complex dynamical responses and phenomena.
Understanding these responses is an important challenge for analysis; mitigating these responses and quantifying fundamental performance limitations in the presence of architectural constraints on distributed controllers are the challenges for synthesis.
I will summarize some new directions in distributed systems research by outlining fascinating connections between distributed systems theory on the one hand, and canonical problems in turbulence and statistical mechanics on the other. In one class of problems, spatio-temporal dynamical analysis clarifies old and vexing questions in the theory of shear flow turbulence. In another class of problems, structured, distributed control design exhibits dimensionality-dependence and phase transition phenomena similar to those in statistical mechanics. It appears that such structured design problems, while difficult and non-convex for finite size systems, have sharp answers in the large system limit. I will argue that such results can be used to build a theory of fundamental performance limitations that are induced by network/spatial topological constraints.
These new directions provide exciting research opportunities and suggest that contact with other disciplines enriches both applications and theory of networked and distributed parameter systems. The study of systems with special structure provides informative answers to difficult analysis and synthesis problems. The systems theory and applications for such classes of problems are arguably still in their infancy, and many challenges with significant intellectual and societal impact remain wide open.
General Electric (GE) has embarked on a journey of merging controls and big data to create brilliant machines and systems that will unlock unprecedented performance through self-improvement and self-diagnosis. In this talk we will review the evolution of industrial controls, its complexity and increasing scale, ultimately leading to the systems of systems that GE refers to as the Industrial Internet. We will also consider challenges and opportunities related to interoperability, security, stability, and system resilience, and discuss specific development cases around infrastructure optimization including grid power, rail networks, and flight efficiency.
The number of devices connected to the Internet exceeded the number of people on the Internet in 2008 and is projected to reach 50 billion in 2020; worldwide smart power meter deployment is expected to grow from 130 million in 2011 to 1.5 billion in 2020; and 90% of new vehicles sold in 2020 will have on-board connectivity platforms, compared with 10% in 2012. The Industrial Internet will deliver new efficiency gains, accelerating productivity growth much the same way that the Industrial Revolution and the Internet Revolution did. Controls are at the heart of this new revolution.
Powerful model-based control tools have enabled the realization of modern clean and efficient automobiles. Our effort to minimize automotive pollution and fuel consumption at low cost is pushing us to control powertrain systems at their high efficiency limit where poorly understood phenomena occur. This semi-plenary story will highlight a few of these frontiers.
In the realm of internal combustion engines, a highly efficient gasoline engine with seemingly chaotic behavior will be controlled at the lean combustion stability limit. In the electrification realm, stressed-out batteries and dead-ended fuel cells will highlight the challenges and successes in understanding, modeling, and controlling highly efficient power conversion on board a vehicle. With these highlights it will become clear that, as we race to improve mileage by 50% over the next decade, powertrain control engineers will take the driver's seat!
In the evolution of systems and control there has been an interesting interplay between the demands of new applications initiating the development of new methodologies and theories and, conversely, theoretical or hardware developments enabling new applications. This talk will attempt to describe progress and interactions in this relationship. The role of design methodologies and theoretical results in the design of practical control systems will also be discussed. Examples will be taken from flight control and automotive engine management, together with some emerging areas.
My answer is "yes." In this lecture, I will make the case that there are some important open problems in finance which are ideally suited for researchers who are well versed in control theory. To this end, I will begin the presentation by quickly explaining what is meant by the notion of "technical analysis" in the stock market. Then I will address, from a control-theoretic point of view, a longstanding conundrum in finance: Why is it that so many asset managers, hedge funds and individual investors trade stock using technical analysis techniques despite the existence of a significant body of literature claiming that such methods are of questionable worth with little or no theoretical rationale? In fact, detractors describe such stock trading methods as "voodoo" and an "anathema to the academic world." To date, in the finance literature, the case for "efficacy" of such stock-trading strategies is based on statistics and empirical back-testing using historical data. With these issues providing the backdrop, my main objective in this lecture is to describe a new theoretical framework for stock trading - based on technical analysis and involving some simple ideas from robust and adaptive control. In contrast to the finance literature, where conclusions are drawn based on statistical evidence from the past, our control-theoretic point of view leads to robust certification theorems describing various aspects of performance. To illustrate how such a formal theory can be developed, I will describe results obtained to date on trend following, one of the most well-known technical analysis strategies in use. Finally, it should be noted that the main point of this talk is not to demonstrate that control-theoretic considerations lead to new "market beating" algorithms. It is to argue that strategies which have heretofore been analyzed via statistical processing of empirical data can actually be studied in a formal theoretical framework.
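For readers unfamiliar with technical analysis, the sketch below implements a generic moving-average trend-following rule on a synthetic price path. It is purely illustrative of the kind of strategy under discussion; it is not the control-theoretic formulation or the certification framework of the talk, and the price model and parameters are my assumptions.

```python
# Generic moving-average trend-following rule on a synthetic price path.
import random

random.seed(1)
price, prices = 100.0, [100.0]
for _ in range(1000):
    price *= 1 + random.gauss(0.0005, 0.01)   # toy geometric random walk
    prices.append(price)

window, cash, shares = 50, 1000.0, 0.0
for t in range(window, len(prices)):
    ma = sum(prices[t - window:t]) / window   # trailing moving average
    if prices[t] > ma and shares == 0:        # uptrend detected: go long
        shares, cash = cash / prices[t], 0.0
    elif prices[t] < ma and shares > 0:       # trend broken: exit to cash
        cash, shares = shares * prices[t], 0.0

print(cash + shares * prices[-1])             # final account value
```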
Recent years have witnessed significant interest in the area of distributed architecture control systems, with applications ranging from autonomous vehicle teams to communication networks to smart grid. The general setup is a collection of multiple decision-making components interacting locally to achieve a common collective objective. While such architectures readily suggest game theory as a relevant formalism, game theory is better known for its traditional role as a "descriptive" modeling framework in social sciences rather than a "prescriptive" design tool for engineered systems. This talk begins with an overview of how game theory can be used as an effective design approach for distributed architecture control systems, with illustrative examples of distributed coordination. Inspired by newfound connections, the talk continues with a discussion of how methods from systems and control can shed new light on more traditional questions in game theory, specifically regarding evolutionary games and agent-based modeling.
Wind energy is recognized worldwide as cost-effective and environmentally friendly and is among the world's fastest-growing sources of electrical energy. Despite the amazing growth in global wind power installations in recent years, science and engineering challenges still exist. It is commonly reported that the variability of wind is a major obstacle to integrating large amounts of wind energy on the utility grid. Wind's variability creates challenges because power supply and demand must match in order to maintain a constant grid frequency. As wind energy penetration increases to higher levels in many countries, however, systems and control techniques can be used to actively control the power generated by wind turbines and wind farms to help regulate the grid frequency. In this talk, we will first provide an overview of wind energy systems by introducing the primary structural components and operating regions of wind turbines. The operation of the utility grid will be outlined by discussing the electrical system, explaining the importance of preserving grid reliability through controlling the grid frequency (which is a measure of the balance between electrical generation and load), and describing the methods of providing ancillary services for frequency support using conventional generation utilities. We will then present a vision for how wind turbines and wind farms can be controlled to help stabilize and balance the frequency of the utility grid, and we will highlight control methods being developed in industry, national laboratories, and academia for providing active power ancillary services with wind energy. Results of simulation studies as well as experimental field tests will be presented to show the promise of the techniques being developed. We shall close by discussing future research avenues to enable widespread adoption of active power control services provided by wind farms, and how advanced distributed capabilities can reduce the integration cost of wind energy and enable much higher wind energy penetrations while simultaneously maintaining and possibly increasing the reliability of the utility grid.
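To make the frequency-support idea concrete, here is a minimal sketch of proportional (droop) frequency response from a derated wind farm: the farm holds some headroom below its available power and raises or lowers output in proportion to the frequency deviation. The headroom, droop constant, and power levels are illustrative assumptions, not values from the cited studies or field tests.

```python
# Droop-based active power control from a derated wind farm (illustrative).
f_nom = 60.0     # Hz, nominal grid frequency
P_sched = 80.0   # MW, scheduled (derated) output, leaving headroom
P_max = 100.0    # MW, available power at the current wind speed
R = 0.05         # per-unit droop: 5% frequency change <-> 100% power change

def wind_farm_power(f):
    # Increase output when frequency sags, decrease when it rises.
    dP = -(1.0 / R) * (f - f_nom) / f_nom * P_max
    return min(max(P_sched + dP, 0.0), P_max)

for f in (59.90, 59.95, 60.00, 60.05):
    print(f"{f:.2f} Hz -> {wind_farm_power(f):6.2f} MW")
```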
In many problems in control, optimal and robust control, one has to solve global optimization problems of the form P: f* = min_x { f(x) : x ∈ K }, or, equivalently, f* = max { λ : f - λ ≥ 0 on K }, where f is a polynomial (or even a semi-algebraic function) and K is a basic semi-algebraic set. One may even need to solve the "robust" version min { f(x) : x ∈ K; h(x, u) ≥ 0, ∀u ∈ U }, where U is a set of parameters. For instance, some static output feedback problems can be cast as polynomial optimization problems whose feasible set K is defined by a polynomial matrix inequality (PMI). And robust stability regions of linear systems can be modeled as parametrized polynomial matrix inequalities (PMIs), where the parameters u account for uncertainties and the (decision) variables x are the controller coefficients.
Therefore, to solve such problems one needs tractable characterizations of polynomials (and even semi-algebraic functions) which are nonnegative on a set, a topic of independent interest and of primary importance because it also has implications in many other areas.
We will review two kinds of tractable characterizations of polynomials which are nonnegative on a basic closed semi-algebraic set K ⊂ R^n. The first type of characterization is when knowledge of K is through its defining polynomials, i.e., K = {x : gj(x) ≥ 0, j = 1, . . . , m}, in which case some powerful certificates of positivity can be stated in terms of sums-of-squares (SOS)-weighted representations. For instance, this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds converging to f* (and in fact, finite convergence is generic). There is also another way of looking at nonnegativity, where now knowledge of K is through the moments of a measure whose support is K. In this case, checking whether a polynomial is nonnegative on K reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied to P, this results in a monotone sequence of upper bounds converging to the global minimum, which complements the previous sequence of lower bounds. These two (dual) characterizations provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone of polynomials nonnegative on K.
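As a minimal worked instance of the SOS characterization (my example, written with the cvxpy modeling package), consider bounding f(x) = x^4 - 3x^2 + 1 from below on K = R by maximizing λ such that f - λ admits an SOS Gram-matrix representation z^T Q z with z = [1, x, x^2] and Q positive semidefinite:

```python
import cvxpy as cp

# Gram-matrix variable for the monomial basis z = [1, x, x^2]
Q = cp.Variable((3, 3), symmetric=True)
lam = cp.Variable()

constraints = [
    Q >> 0,                       # Q PSD  <=>  f - lam is a sum of squares
    Q[0, 0] == 1 - lam,           # constant term of f - lam
    2 * Q[0, 1] == 0,             # coefficient of x
    Q[1, 1] + 2 * Q[0, 2] == -3,  # coefficient of x^2
    2 * Q[1, 2] == 0,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]
cp.Problem(cp.Maximize(lam), constraints).solve()
print(lam.value)  # approx -1.25
```

Since nonnegative univariate polynomials are exactly the SOS polynomials, the bound is tight here: the solver returns λ ≈ -1.25, the global minimum of f.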
There has been remarkable progress in sampled-data control theory in the last two decades. The main achievement is the existence of a digital (discrete-time) control law that takes the intersample behavior into account and makes the overall analog (continuous-time) performance optimal in the sense of the H-infinity norm. This naturally suggests its application to digital signal processing, where the same hybrid nature of analog and digital is always prevalent. A crucial observation here is that the perfect band-limiting hypothesis, widely accepted in signal processing, is often inadequate for many practical situations. In practice, the original analog signals (sounds, images, etc.) are neither fully band-limited nor even close to being band-limited under current processing standards.
The present talk describes how sampled-data control theory can be applied to reconstruct the lost high-frequency components beyond the so-called Nyquist frequency, and how this new method can surpass the existing signal processing paradigm. We will review some concrete examples for sound processing, recovery of high-frequency components of MP3/AAC-compressed audio signals, and super-resolution for (still and moving) image processing. We will also touch on some crucial steps in leading this technology to the commercial success of 40 million sound processing chips.
Advanced motion systems, like the pick-and-place machines used in the semiconductor industry, challenge the frontiers of systems and control theory and practice. In the design phase, control-oriented design of the electro-mechanics is necessary in order to achieve the tight performance specifications. Once realized, and since experimentation is fast, a machine-in-the-loop procedure can be explored to close the design loop from experiment, using experimental model building, model-based control design, implementation, and performance evaluation. Extension of linear modelling techniques towards some classes of nonlinear systems is relevant for improved control of specific motion systems, such as those with friction. In the application field of medical robotics the experience from high-tech motion systems can be used successfully, and an eye-surgery robot with haptics will be shown as an example.
The central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective. Game theory is beginning to emerge as a valuable set of tools for achieving this goal, as many popular multiagent systems can be modeled as games, e.g., sensor coverage, consensus, and task allocation, among others. Game theory is a well-established discipline in the social sciences that is primarily used for modeling social behavior. Traditionally, the individual agents' preferences are modeled as utility functions, and the resulting behavior is assumed to be an equilibrium concept associated with these modeled utility functions, e.g., a Nash equilibrium. This is in stark contrast to the role of game theory in engineering systems, where the goal is to design both the agents' utility functions and an adaptation rule such that the resulting global behavior is desirable. The transition of game theory from a modeling tool for social systems to a design tool for engineering promotes several new research directions that we will discuss in this talk. In particular, we will focus on the question of how to design admissible agent utility functions such that the resulting game possesses desirable properties, e.g., the existence and efficiency of pure Nash equilibria. Our motivation for considering pure Nash equilibria stems from the fact that adaptation rules can frequently be derived which guarantee that the collective behavior will converge to such pure Nash equilibria. Our first result focuses on ensuring the existence of pure Nash equilibria for a class of separable resource allocation problems that can model a wide array of applications including facility location, routing, network formation, and coverage problems. Within this class, we prove that weighted Shapley values completely characterize the space of local utility functions that guarantee the existence of a pure Nash equilibrium. That is, if a utility design cannot be represented as a weighted Shapley value, then there exists a game for which a pure Nash equilibrium does not exist. Another concern is distributed versus centralized efficiency. Once distributed agents have settled on an equilibrium, the resulting performance need not be the same as from a centralized design (cf. the so-called "price of anarchy"). We compare different utility design methods and their resulting effect on efficiency. Finally, we briefly discuss online adaptation rules leading to equilibrium.
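The following sketch (my illustration, not material from the talk) shows why pure Nash equilibria matter computationally: in a congestion-type resource allocation game, which admits a potential function, simple best-response dynamics are guaranteed to converge to a pure Nash equilibrium.

```python
# Best-response dynamics in a simple congestion game; numbers illustrative.
n_agents, n_resources = 6, 3
cost = lambda load: load            # linear congestion cost per resource
choice = [0] * n_agents             # everyone starts on resource 0

def loads():
    l = [0] * n_resources
    for c in choice:
        l[c] += 1
    return l

changed = True
while changed:                       # repeat until no agent can improve
    changed = False
    for i in range(n_agents):
        l = loads()
        l[choice[i]] -= 1            # remove agent i before evaluating moves
        best = min(range(n_resources), key=lambda r: cost(l[r] + 1))
        if cost(l[best] + 1) < cost(l[choice[i]] + 1):
            choice[i] = best         # strict improvement: switch resource
            changed = True

print(loads())                       # balanced loads: a pure Nash equilibrium
```

Each strict improvement decreases the game's potential function, so the loop terminates; here it ends with the load spread evenly across the resources.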
Healthcare in the U.S. is at a transition point. Costs for care have risen to unsustainable levels and are among the highest in the world without commensurate benefits. At the same time, new care models and the digitization of healthcare offer tremendous opportunities to improve health and health care while reducing costs. In this context, intelligent technologies are playing a growing role in providing better understanding and decision support in healthcare systems. Solutions range from population health management tools for organizations to predictive clinical decision support applications for individuals. Advanced technologies are also applied in administrative tasks such as insurance eligibility determination and fraud detection. Looking ahead, the advent of personalized medicine will bring the promise and need for intelligent technologies into even sharper focus. This talk will review current trends, discuss representative approaches, and show examples that demonstrate the value of intelligent monitoring and decision support in healthcare systems.
In many practical systems, such as engineering, social, and financial systems, control decisions are made only when certain events happen. This is either because of the discrete nature of sensor detection and digital computing equipment, or because of the limitation of computing power, which makes state-based control infeasible due to the huge state spaces involved. The performance optimization of such systems generally differs from traditional optimization approaches, such as Markov decision processes or dynamic programming. In this talk, we introduce, in an intuitive manner, a new optimization framework called event-based optimization. This framework has a wide applicability to the aforementioned systems. With performance potentials as building blocks, we develop optimization algorithms for event-based optimization problems. The optimization algorithms are first proposed based on intuition, and theoretical justifications are then given with a performance-sensitivity-based approach. Finally, we provide a few practical examples to demonstrate the effectiveness of the event-based optimization framework. We hope this framework may provide a new perspective to the optimization of the performance of event-triggered dynamic systems.
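A toy illustration of the event-based viewpoint (my sketch, not an example from the talk): in the admission-control problem below, decisions are taken only at arrival events, using a simple threshold policy, and the policy is optimized by comparing simulated long-run average rewards rather than by solving a full state-based dynamic program. All parameters are illustrative assumptions.

```python
# Event-based admission control at a queue: decide admit/reject only at
# arrival events; optimize the threshold by simulated average reward.
import random

def avg_reward(threshold, T=200_000, p_arrive=0.5, p_serve=0.4,
               reward=1.0, hold_cost=0.12, seed=0):
    rng = random.Random(seed)
    q, total = 0, 0.0
    for _ in range(T):
        if rng.random() < p_arrive:            # arrival: the decision event
            if q < threshold:                  # event-based threshold policy
                q += 1
                total += reward                # payoff for admitting
        if q > 0 and rng.random() < p_serve:   # service completion
            q -= 1
        total -= hold_cost * q                 # holding cost accrues per step
    return total / T

best = max(range(1, 15), key=avg_reward)
print(best, avg_reward(best))                  # best threshold and its reward
```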