Recent advances in experimental techniques have made it possible to generate an enormous amount of "raw" biological data, with cancer biology being no exception. The main challenge faced by cancer biologists now is the generation of plausible hypotheses that can be evaluated against available data and/or validated through further experimentation. For persons trained in control theory, there is now a significant opportunity to work with biologists to create a virtuous cycle of hypothesis generation and experimentation. In this talk, we discuss four specific problems in cancer biology that are amenable to study using probabilistic methods. These are: reverse engineering gene regulatory networks, constructing context-specific gene regulatory networks, analyzing the significance of expression levels for collections of genes, and discriminating between drivers (mutations that cause cancer) and passengers (mutations that are caused by cancer or have no impact). Some research problems that merit the attention of the controls community are also suggested.
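As a purely illustrative sketch of the first of these problems (not the speaker's method; the genes, data, and threshold below are hypothetical), one of the simplest ways to propose edges of a gene regulatory network from expression data is to threshold pairwise correlations:

    # Minimal sketch: propose gene-regulatory-network edges from expression data
    # by thresholding pairwise correlations. Purely illustrative; the gene names,
    # data, and threshold are hypothetical and not taken from the talk.
    import numpy as np

    rng = np.random.default_rng(0)
    genes = ["gene_A", "gene_B", "gene_C", "gene_D"]           # hypothetical genes
    expr = rng.normal(size=(50, len(genes)))                    # 50 samples x 4 genes
    expr[:, 1] = 0.8 * expr[:, 0] + 0.2 * rng.normal(size=50)   # make A and B co-expressed

    corr = np.corrcoef(expr, rowvar=False)                      # pairwise correlation matrix
    threshold = 0.5                                             # hypothetical cutoff
    edges = [(genes[i], genes[j], round(corr[i, j], 2))
             for i in range(len(genes)) for j in range(i + 1, len(genes))
             if abs(corr[i, j]) > threshold]
    print(edges)                                                # e.g. [('gene_A', 'gene_B', 0.97)]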
Recent policies, combined with the potential for technological innovation and business opportunities, have attracted a high level of interest in smart grids. The potential for a highly distributed system with a high penetration of intermittent sources poses both opportunities and challenges. Any complex dynamic infrastructure network typically has many layers and decision-making units and is vulnerable to various types of disturbances. Effective, intelligent, distributed control is required that would enable parts of the network to remain operational and even automatically reconfigure in the event of local failures or threats of failure. A major challenge is posed by the lack of a unified mathematical framework with robust tools for modeling, simulation, control, and optimization of time-critical operations in complex multicomponent and multiscale networks. Mathematical models of such complex systems are typically vague (or may not even exist); moreover, existing and classical methods of solution are either not available or not sufficiently powerful. From a strategic R&D viewpoint, how do we retrofit and engineer a stable, secure, resilient grid with large numbers of such unpredictable power sources? What roles will asset optimization, increased efficiency, energy storage, advanced power electronics, power quality, electrification of transportation, novel control algorithms, communications, and cyber and infrastructure security play in the grid of the future? What are the emerging technologies that will enable new products, services, and markets? In this presentation, we will give an overview of smart grids and recent advances in distributed sensing, modeling, and control, at both the high-voltage power grid and the consumer level. Such advances may contribute toward the development of effective, intelligent, distributed control of power system networks, with a focus on the distributed sensing, computation, estimation, control, and dynamical systems challenges and opportunities ahead.
We live in a "distributed world" made up of countless "nodes", be they cities, computers, or people, connected by a dense web of transportation, communication, or social ties. The term "network", describing such a collection of nodes and links, has become commonplace thanks to our extensive reliance on interconnected, interdependent systems in everyday life and in building complex technical systems and infrastructures. In an increasingly "smarter" planet, systems are expected to be safe, reliable, available 24/7, and preferably inexpensive to maintain. In this connection, monitoring and fault diagnosis are of paramount importance to ensure high levels of safety, performance, reliability, dependability, and availability. Indeed, faults and malfunctions can result, in industrial plants alone, in off-specification production, increased operating costs, line shutdowns, danger to humans, detrimental environmental impact, and so on. Faults and malfunctions should be detected promptly, and their source and severity should be diagnosed so that corrective actions can be taken. This lecture deals with an on-line, approximation-based, distributed fault diagnosis approach for large-scale nonlinear systems that exploits a "divide et impera" strategy: the overall diagnosis problem is decomposed into smaller subproblems, simple enough to be solved within the existing computation and communication architectures. The distributed detection, isolation, and identification task is broken down and assigned to a network of "Local Diagnostic Units", each having a different, local view of the system; the units are allowed to communicate with each other and to cooperate on the diagnosis of system components that may be shared, thus yielding a global diagnosis decision. The lecture will also address issues and perspectives in a paradigmatic industrial context of safety-critical process control systems.
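As a purely illustrative sketch of the idea behind a Local Diagnostic Unit (not the lecture's actual approximation-based scheme; the local model, fault, and threshold are hypothetical), each unit can compare its subsystem's measurements against a one-step-ahead prediction from a local model and raise an alarm when the residual exceeds a threshold:

    # Illustrative sketch only (not the lecture's actual scheme): a "Local Diagnostic
    # Unit" monitors one subsystem by comparing measurements with a one-step-ahead
    # prediction from a local approximate model, and raises an alarm when the
    # residual exceeds a threshold. Model, fault, and threshold are hypothetical.

    class LocalDiagnosticUnit:
        def __init__(self, a, x0, threshold):
            self.a = a                    # local model: x(k+1) ~ a * x(k)
            self.x_prev = x0              # last measurement seen
            self.threshold = threshold    # detection threshold (adaptive in a real scheme)

        def step(self, x_measured):
            x_pred = self.a * self.x_prev           # local one-step-ahead prediction
            residual = abs(x_measured - x_pred)
            self.x_prev = x_measured
            return residual > self.threshold        # True => local fault alarm

    ldu = LocalDiagnosticUnit(a=0.9, x0=1.0, threshold=0.5)
    x = 1.0
    for k in range(40):
        x = 0.9 * x + (0.8 if k >= 21 else 0.0)     # an additive fault appears at k = 21
        if ldu.step(x):
            print("fault detected at step", k)      # prints: fault detected at step 21
            break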
Concept abstraction is an important component of intelligence. Scientists today still do not know how the brain accomplishes it. In this talk we compare some recent mathematical results about random walks on manifolds and graphs with the features of concept abstraction processes to seek understanding of the algorithms involved.
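As a toy illustration of one of the mathematical objects involved (a hypothetical four-node graph; nothing here is taken from the talk), a random walk on an undirected graph visits nodes with long-run frequencies proportional to their degrees:

    # Toy illustration: a random walk on a small undirected graph. Empirical visit
    # frequencies approach the stationary distribution (proportional to node degree).
    import numpy as np

    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # a small hypothetical graph
    rng = np.random.default_rng(1)

    node, visits = 0, np.zeros(4)
    for _ in range(100_000):
        node = rng.choice(adj[node])                      # jump to a uniformly chosen neighbor
        visits[node] += 1

    degrees = np.array([len(adj[v]) for v in range(4)], dtype=float)
    print("empirical :", np.round(visits / visits.sum(), 3))
    print("stationary:", np.round(degrees / degrees.sum(), 3))   # degree / (2 * number of edges)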
Switched systems with positivity constraints arise in various areas. They have been fruitfully employed to model consensus problems, biological systems dynamics, and recently, viral mutation dynamics under drug treatment. The theory of "positive switched systems" is rather challenging and offers quite a number of interesting open problems. In the talk, we will illustrate the main results available as far as stability, stabilizability, and controllability issues are concerned. Some open problems will be proposed, and some applications of this general theory in the area of biological systems will be illustrated.
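As a schematic illustration of the setting (a sketch only, written here in continuous time; the talk's precise assumptions may differ), a positive switched system and one standard sufficient condition for stability under arbitrary switching can be written as

    \dot{x}(t) = A_{\sigma(t)} x(t), \qquad \sigma(t) \in \{1, \dots, M\},

where each A_i is a Metzler matrix (nonnegative off-diagonal entries), so that x(0) \ge 0 implies x(t) \ge 0 for all t \ge 0. Stability under arbitrary switching is guaranteed if there exists a common linear copositive Lyapunov function V(x) = v^\top x, with v > 0 entrywise, such that

    v^\top A_i < 0 \quad \text{(entrywise)} \qquad \text{for all } i = 1, \dots, M.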
Designs in systems and control are traditionally carried out through deterministic algorithms consisting of a sequence of steps set by deterministic rules. This approach, however, can be generalized by the introduction of randomization: a randomized algorithm is an algorithm where one or more steps are based on a random rule, that is – among many deterministic rules – one rule is selected according to a random scheme. Randomization has turned out to be a powerful tool for solving a number of problems deemed unsolvable with deterministic methods.
A crucial fact is that randomization permits one to introduce the notion of a "probabilistically successful algorithm". In many cases, when deterministic success cannot be achieved, probabilistic success offers a valid alternative.
In the talk, the use of randomized algorithms will be discussed in relation to several problems.
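As one standard example of a probabilistically successful randomized algorithm (the plant, uncertainty set, and accuracy levels below are hypothetical), a Monte Carlo analysis estimates the probability that an uncertain system violates a specification, with the additive Chernoff/Hoeffding bound fixing the number of samples needed for a given accuracy and confidence:

    # Sketch of a randomized (Monte Carlo) robustness analysis: estimate the
    # probability that an uncertain system violates a performance spec by sampling
    # the uncertainty. The plant, spec, and accuracy levels below are hypothetical.
    import math
    import numpy as np

    eps, delta = 0.01, 1e-3                                  # accuracy and confidence parameters
    N = math.ceil(math.log(2 / delta) / (2 * eps**2))        # additive Chernoff/Hoeffding bound

    rng = np.random.default_rng(0)

    def spec_violated(q):
        # hypothetical spec: the uncertain closed-loop pole 0.7 + q must stay
        # inside the unit circle; q is a sampled uncertainty value
        return abs(0.7 + q) >= 1.0

    samples = rng.uniform(-0.5, 0.5, size=N)                 # hypothetical uncertainty set
    p_hat = np.mean([spec_violated(q) for q in samples])
    print(f"N = {N} samples, estimated violation probability = {p_hat:.4f}")
    # With this N, |p_hat - p_true| <= eps holds with probability at least 1 - delta.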
Twenty years ago I delivered a plenary lecture with the same title at the ACC in Boston. I will go back and reflect on the successes and failures, on what we have learned and which problems remain open. The focus will be on robust and constrained control and the real time implementation of control algorithms. I will comment on the progress we have made on the control of hybrid systems and how our vastly more powerful computational resources have affected the design tools we have at our disposal. Throughout the lecture, industrial examples from the automotive and power electronics domains and the industrial energy sector will illustrate the arguments.
Hybrid systems combine continuous-time dynamics with discrete modes of operation. The state of such a system usually has two distinct components: one that evolves continuously, typically according to a differential equation, and another that changes only through instantaneous jumps.
We present a model for Stochastic Hybrid Systems (SHSs) where transitions between discrete modes are triggered by stochastic events, much like transitions between states of a continuous-time Markov chain. However, in SHSs the rate at which transitions occur depends on both the continuous and the discrete states of the hybrid system. The combination of continuous dynamics, discrete events, and stochasticity results in a modeling framework with tremendous expressive power, making SHSs appropriate to describe the dynamics of a wide variety of systems. This observation has been the driving force behind several recent research efforts aimed at developing tools to analyze these systems.
In this talk, we use several examples to illustrate the use of SHSs as a versatile modeling tool to describe dynamical systems that arise in distributed control and estimation, networked control systems, molecular biology, and ecology. In parallel, we will also discuss several mathematical tools that can be used to analyze such systems, including the use of the extended generator, Lyapunov-based arguments, infinite-dimensional moment dynamics, and finite-dimensional truncations.
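The following toy simulation (purely illustrative; the two-mode model, rates, and numerical scheme are hypothetical) shows the basic ingredients of an SHS: continuous flows in each discrete mode, and mode transitions occurring at a rate that depends on the continuous state:

    # Toy stochastic hybrid system (illustrative only): two discrete modes with
    # linear continuous dynamics; jumps between modes occur at a rate that depends
    # on the continuous state, simulated with a simple Euler/Bernoulli scheme.
    import numpy as np

    rng = np.random.default_rng(2)
    a = {0: -1.0, 1: -1.0}              # continuous dynamics: dx/dt = a[q]*x + b[q]
    b = {0: 0.0, 1: 2.0}

    def rate(x, q):                     # state-dependent transition rate lambda(x, q)
        return 1.0 + x**2 if q == 0 else 0.5

    dt, T = 1e-3, 10.0
    x, q, t = 1.0, 0, 0.0
    traj = []
    while t < T:
        if rng.random() < rate(x, q) * dt:    # probability of a jump in [t, t+dt)
            q = 1 - q                         # toggle the discrete mode
        x += (a[q] * x + b[q]) * dt           # Euler step of the continuous flow
        t += dt
        traj.append((t, q, x))
    print("final (t, mode, x):", traj[-1])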
Central banks and funds managers work with mathematical models. In recent years, a new class of models has come into prominence: generalized dynamic factor models. These are characterized by having a modest number of inputs, corresponding to key economic variables and industry-sector-wide variables for central banks and funds managers respectively, and a large number of outputs, for example economic time series or individual stock price movements. It is common to postulate that the input variables are linked to the output variables by a finite-dimensional, linear, time-invariant, discrete-time dynamic model, whose outputs are corrupted by noise to yield the measured data. The key problems faced by central banks or funds managers are fitting the model given the output data (but not the input data), and then using the model for prediction. These are essentially the tasks usually considered by those practicing identification and time-series modelling. Nevertheless, there is considerable underlying linear system theory, flowing from the fact that the underlying transfer function matrix is tall. This presentation will describe a number of consequences of this seemingly trivial fact, and then go on to indicate how to cope with time series with different periodicities, e.g. monthly and quarterly, where multirate signal processing and control concepts are of relevance.
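As a minimal sketch of the flavor of the fitting problem (a static principal-components approximation on simulated data; the dimensions, dynamics, and noise level are hypothetical, and the talk's methods are considerably more refined), a low-dimensional factor estimate can be recovered from a tall panel of outputs alone:

    # Minimal sketch (static approximation of a dynamic factor model): generate many
    # observed series driven by a few latent factors plus noise, then recover a
    # factor estimate by principal components. Dimensions and dynamics are hypothetical.
    import numpy as np

    rng = np.random.default_rng(3)
    T, n_series, n_factors = 400, 60, 2
    factors = np.cumsum(rng.normal(size=(T, n_factors)), axis=0) * 0.1   # slow latent inputs
    loadings = rng.normal(size=(n_factors, n_series))
    data = factors @ loadings + 0.5 * rng.normal(size=(T, n_series))     # tall panel of outputs

    # principal-components estimate of the factor space
    data_c = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(data_c, full_matrices=False)
    factor_hat = data_c @ vt[:n_factors].T

    # rough check: estimated factors should roughly span the space of the true ones
    corr = np.corrcoef(np.hstack([factors, factor_hat]).T)[:n_factors, n_factors:]
    print("abs correlations (true vs estimated factors):")
    print(np.round(np.abs(corr), 2))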
Moore's Law describes an important trend in the history of computer hardware: the number of transistors that can be inexpensively placed on an integrated circuit increases exponentially, doubling approximately every two years. The self-fulfilling prophecy of Moore's Law is now under threat. The new bottleneck comes not from hardware and technology capabilities, but from control, computation, and algorithmic constraints in various steps of the design flow, such as verification and optical proximity correction.
In the first part of the talk, we describe our efforts in developing a new class of wireless sensors for use in semiconductor manufacturing. These sensors are fully self-contained, with on-board power, communications, and signal-processing electronics. They offer unprecedented spatial and temporal resolution, making them suitable for equipment diagnostics and design, and for process optimization and control. We will illustrate the applications of these sensors in IC processing and describe our efforts at commercializing this technology.
In the second part of the talk, we will outline three very large-scale computational problems that are critical to next-generation ICs: inverse lithography, design verification, and design for manufacturability. We formulate these mathematically using the common language of clip calculus and show possible solutions. We will conclude by arguing for the vital importance of computation and modeling in this economically important field.
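As a purely illustrative sketch of the optimization flavor behind inverse lithography (it does not use the clip-calculus formulation of the talk; the optical kernel, threshold, and target layout are hypothetical), a forward model maps a mask to a printed pattern and a cost measures mismatch with the target; inverse lithography then searches for a mask minimizing that cost:

    # Illustrative sketch of the optimization behind inverse lithography (not the
    # talk's "clip calculus"): a forward model maps a mask to a printed pattern
    # (blur by an optical kernel, then threshold), and the cost measures mismatch
    # with the target layout. Kernel, threshold, and target are hypothetical.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    target = np.zeros((64, 64))
    target[24:40, 24:40] = 1.0                     # hypothetical target feature

    def printed_pattern(mask, sigma=2.0, thr=0.5):
        aerial = gaussian_filter(mask, sigma)      # crude stand-in for the optical system
        return (aerial > thr).astype(float)        # resist threshold model

    def cost(mask):
        return np.sum((printed_pattern(mask) - target) ** 2)

    print("cost when the target itself is used as the mask:", cost(target))
    # Inverse lithography searches for a (often non-obvious) mask that minimizes
    # this cost, typically with gradient-based methods at very large scale.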
In economic networks, behaviors that seem intelligently conceived (for the most part) emerge without a coordinating entity, as the result of multiple agents pursuing self-interested strategies. Such behaviors have traditionally been interpreted using the paradigms of non-cooperative game theory. However, standard game theory usually assumes that the players know the models of their own payoff functions and the actions of the other players. Relying on the method of extremum seeking, we design algorithms that need no modeling information and in which the players employ only measurements of their own payoff values. The extremum seeking algorithms are proved to converge to the Nash equilibria of the underlying non-cooperative games; in other words, extremum seeking allows each player to learn its Nash strategy. Extremum seeking algorithms are not restricted to games with a limited number of players but are, in fact, applicable to games with uncountably many players. While in finite games each player employing extremum seeking must use a distinct probing frequency, a remarkable situation arises in games with uncountably many players: we show that, as long as each frequency is employed by only countably many players, convergence to the Nash equilibrium is guaranteed. Such large (for all practical purposes uncountable) games arise in future energy trading markets involving households that own plug-in hybrid electric vehicles, whose battery capacity is used to store energy during periods of excess production from wind and solar sources and to sell energy back to the grid, at a price, or in a quantity, determined by a controller pursuing profit maximization for the household with the help of an extremum seeking algorithm.
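As a toy single-player version of the mechanism (the talk treats multi-player games; the quadratic payoff, gains, and probing frequency below are hypothetical), an extremum seeking loop uses only measured payoff values, a sinusoidal probe, and demodulation to climb toward the payoff maximizer:

    # Toy single-player extremum seeking loop (illustrative; the talk treats
    # multi-player games): the player measures only its own payoff, adds a small
    # sinusoidal probe to its action, demodulates, and integrates the resulting
    # gradient estimate. The quadratic payoff (maximized at u = 2) is hypothetical.
    import math

    def payoff(u):
        return -(u - 2.0) ** 2         # hypothetical payoff, unknown to the player

    dt, omega, a, k = 1e-3, 50.0, 0.5, 0.2
    theta = 0.0                        # player's action estimate
    for i in range(int(200 / dt)):     # simulate 200 time units
        t = i * dt
        u = theta + a * math.sin(omega * t)                     # probing
        grad_est = (2.0 / a) * math.sin(omega * t) * payoff(u)  # demodulation
        theta += dt * k * grad_est                              # gradient ascent
    print("learned action:", round(theta, 2))                   # close to 2.0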
Extremely rare events occur in several areas of science and engineering. Some examples are huge movements in the prices of assets and commodities, and weather phenomena such as high winds and hurricanes. Attempts to model these kinds of events using the law of large numbers often fail because "rare" events seem to occur *far more frequently* than a Gaussian distribution would suggest. In some areas, such as clinical trials of drug candidates, a "law of large numbers" will not apply because the available data set is far too small to permit drawing confident conclusions. For instance, one may have to estimate a few dozen parameters on the basis of a few dozen samples (whereas prudence would suggest at least a few thousand samples, which are simply not available).
In this talk we give a *very elementary introduction* to some theoretical methods for modeling and coping with extremely rare or adverse events. To handle rare events, we suggest the use of "heavy-tailed" random variables, which have a finite first moment but infinite variance. The form of the "law of large numbers" for such variables, and how it affects the design process, will be discussed. To model adverse events with limited data, we suggest the use of "worst-case probability distributions", whose computation can be formulated as a linear programming problem and often leads to surprisingly good insights.
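A quick numerical illustration of the heavy-tailed point (parameters chosen only for illustration): samples from a Pareto law with tail index 1.5 have a finite mean but infinite variance, so extreme values appear far more often than for a Gaussian with the same mean:

    # Numerical illustration: a heavy-tailed Pareto law with tail index 1.5 (finite
    # mean, infinite variance) produces "rare" extreme events far more often than a
    # Gaussian with the same mean. Parameters are chosen only for illustration.
    import numpy as np

    rng = np.random.default_rng(4)
    alpha, n = 1.5, 100_000
    pareto = 1.0 + rng.pareto(alpha, size=n)        # support [1, inf); mean = alpha/(alpha-1) = 3
    gauss = rng.normal(loc=3.0, scale=1.0, size=n)  # light-tailed comparison with the same mean

    for name, x in [("pareto", pareto), ("gauss ", gauss)]:
        print(name,
              "mean =", round(x.mean(), 2),
              " max =", round(x.max(), 1),
              " P[x > 10] =", round(np.mean(x > 10), 5))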
There has been remarkable progress in sampled-data control theory in the last two decades. The main achievement is the existence of a digital (discrete-time) control law that takes the intersample behavior into account and makes the overall analog (continuous-time) performance optimal in the sense of the H-infinity norm. This naturally suggests applications to digital signal processing, where the same hybrid nature of analog and digital is always present. A crucial observation is that the perfect band-limiting hypothesis, widely accepted in signal processing, is often inadequate in practice: the original analog signals (sounds, images, etc.) are neither fully band-limited nor even close to band-limited under current processing standards. The problem is to interpolate the high-frequency components beyond the so-called Nyquist frequency, and these are nothing but the intersample signals discarded through sampling. Assuming a natural signal generator model, sampled-data control theory provides an optimal platform for such problems. This new method has been implemented in custom LSI chips by the SANYO Corporation, and over 12 million such chips have been produced. This talk presents a new problem formulation, a design procedure, and various applications in sound processing/compression and image processing.
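Schematically, and only as a sketch of the kind of formulation involved (the operators, the allowable delay L, and the upsampling factor M below are generic placeholders, not the talk's exact setup): given an analog signal class generated by a low-pass but not band-limited model F(s), a sampler S_h with period h, and a fast hold device H_{h/M}, one seeks a digital filter K solving

    \min_{K} \; \big\| \, e^{-Ls} F(s) \;-\; \mathcal{H}_{h/M}\, K\, \mathcal{S}_h\, F(s) \, \big\|,

where the norm is the L^2-induced (sampled-data H-infinity) norm, so that the worst-case analog reconstruction error, including the intersample and beyond-Nyquist components, is what gets minimized.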
This talk presents the Mean Field (or Nash Certainty Equivalence (NCE)) methodology, initiated with Minyi Huang and Roland Malhamé, for the analysis and control of large population stochastic dynamic systems. Optimal control problems for multi-agent stochastic systems, in particular those with non-classical information patterns and those with intrinsic competitive behavior, are in general intractable. Inspired by mean field approximations in statistical mechanics, we analyse the common situation where the dynamics and rewards of any given agent are influenced by certain averages of the mass multi-agent behavior. The basic result is that classes of such systems possess game theoretic (Nash) equilibria wherein each agent employs feedback control laws depending upon both its local state and the collectively generated mass effect. In the infinite population limit the agents become statistically independent, a phenomenon related to the propagation of chaos in mathematical physics. Explicit solutions in the linear quadratic Gaussian (LQG)-NCE case generalize classical LQG control to the massive multi-agent situation, while extensions of the Mean Field notion enable one to analyze a range of problems in systems and control. Specifically, generalizations to nonlinear problems may be expressed in terms of controlled McKean-Vlasov Markov processes, while localized (or weighted) mean field extensions, the effect of possible major players, and adaptive control generalizations permit applications to microeconomics, biology and communications; furthermore, the standard equations of consensus theory, which are of relevance to flocking behavior in artificial and biological systems, have been shown to be derivable from the basic LQG-NCE equations. In the distinct point process setting, the Mean Field formulation yields call admission control laws which realize competitive equilibria for complex communication networks. In this talk we shall motivate the Mean Field approach to stochastic control, survey the current results in the area by various research groups, and make connections to physics, biology and economics. The talk presents joint work with Minyi Huang, Roland Malhamé, Arman Kizilkale, Arthur Lazarte, Zhongjing Ma and Mojtaba Nourian.
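As a rough schematic of the LQG-NCE setup (notation simplified and partly hypothetical; see the talk and the underlying papers for the precise formulation), agent i has dynamics and discounted cost

    dx_i = (A x_i + B u_i)\,dt + D\,dw_i, \qquad
    J_i(u_i) = \mathbb{E}\int_0^\infty e^{-\rho t}\big( \|x_i - \Phi(\bar{x}(t))\|_Q^2 + \|u_i\|_R^2 \big)\,dt,

where \bar{x}(t) is the mass (population-average) trajectory. Each agent computes its best response to a posited deterministic mass trajectory by solving a Riccati equation together with an offset ODE, and the Nash Certainty Equivalence (consistency) condition requires that the mass trajectory regenerated by these best responses coincide with the posited one; the resulting decentralized feedback laws form an epsilon-Nash equilibrium for large but finite populations.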
Decoherence, caused by the interaction of a quantum system with its environment, plagues all quantum systems and leads to the loss of the quantum properties that are vital for quantum computation and quantum information processing. Superficially, this problem resembles the disturbance decoupling problem of classical control theory. In this talk we first briefly review recent advances in quantum control. We then propose a novel strategy, using techniques from geometric systems theory, to completely eliminate decoherence, and we provide conditions under which this can be done. A novel construction employing an auxiliary system, the bait, which is instrumental in decoupling the system from the environment, is presented. This corresponds, in effect, to an Internal Model Principle for quantum mechanical systems, which differs considerably from the classical theory owing to the quantum nature of the system. Almost all earlier work on decoherence control employs density matrices and stochastic master equations to analyze the problem. Our approach to decoherence control involves the bilinear input-affine model of a quantum control system, which lends itself to various techniques from classical control theory, with non-trivial modifications for the quantum regime. This approach yields interesting results on open-loop decouplability and Decoherence Free Subspaces (DFS). The results are also shown to be superior to the ones obtained via master equations. Finally, a methodology to synthesize the feedback parameters is given that, technology permitting, could be implemented for practical 2-qubit systems performing decoherence-free quantum computing. Open problems and future directions in quantum control will also be discussed.
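For reference, the bilinear input-affine structure mentioned above has, schematically (details and the treatment of the environment are simplified here), the form

    \dot{\psi}(t) = -\tfrac{i}{\hbar}\Big( H_0 + \sum_k u_k(t)\, H_k + H_{SE} \Big)\,\psi(t),

where H_0 is the drift Hamiltonian, the H_k are control Hamiltonians with control inputs u_k(t), and H_{SE} denotes the system-environment coupling (acting on the joint system-environment state) responsible for decoherence. Loosely, the control objective is to render the quantities of interest unaffected by H_{SE}, which is why the problem superficially resembles classical disturbance decoupling.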
We address several issues that are important for developing a comprehensive understanding of the problems of control over networks. Proceeding from bottom to top, we describe theoretical frameworks to study the following issues, and present some answers: (i) Network information theory: Are there limits to information transfer over wireless networks? How should nodes in a network cooperate to achieve information transfer? (ii) In-network information processing: How should data from distributed sensors be fused over a wireless network? Can one classify functions of sensor data vis-a-vis how difficult they are to compute over a wireless network? (iii) Real-time scheduling over wireless networks: How should packets with hard deadlines be scheduled for transmission over unreliable nodes? What QoS guarantees can be provided with respect to latencies and throughputs? (iv) Clock synchronization over wireless networks: What are the ultimate limits to synchronization error? How should clocks be synchronized? (v) System-level guarantees in networked control: How can one provide overall guarantees for networked control systems that take into account hybrid behavior, real-time interactions, and distributed aspects? (vi) Abstractions and architecture: What are appropriate abstractions, and what is an appropriate architecture, to simplify networked control system design and deployment?
It is widely recognized that many of the most important challenges faced by control engineers involve the development of methods to design and analyze systems having components most naturally described by differential equations interacting with components best modeled using sequential logic. This situation can arise both in the development of high-volume, cost-sensitive consumer products and in the design and certification of one-of-a-kind, complex, and expensive systems. The response of the control community to this challenge includes work on limited-communication control, learning control, control languages, and various efforts on hybrid systems. This work has led to important new ideas, but progress has been modest and the more interesting results seem to lack the kind of unity that would lead to a broadly inclusive theory. In this talk we describe an approach to problems of this type based on sample path descriptions of finite state Markov processes and suitable adaptations of known results about linear systems. The result is an insightful design technique yielding finite state controllers for systems governed by differential equations. We illustrate with concrete examples.
Networked embedded sensing and control systems are increasingly becoming ubiquitous in applications from manufacturing, chemical processes, and autonomous robotic space, air, and ground vehicles to medicine and biology. They offer significant advantages, but present serious challenges to information processing, communication, and decision-making. This area, called cyber-physical systems, has been brought to the forefront primarily by advances in technology that make it possible to place computational intelligence out of the control room and in the field. It is the latest challenge in systems and control, where our quest for higher degrees of autonomy has brought us, over the centuries, from the ancient water clock to autonomous spacecraft. This quest for autonomy leads to consideration of increasingly complex systems with ever more demanding performance specifications; to mathematical representations beyond time-driven continuous linear and nonlinear systems, toward event-driven and hybrid systems; and to interdisciplinary research at the intersection of control, computer science, and networking, driven by application needs in physics, chemistry, biology, and finance. After an introduction to some of the main research and education issues we need to address, and a brief description of lessons learned in hybrid systems research, we shall discuss recent methodologies we are currently working on to meet stability and performance specifications in networked control systems, which use passivity, model-based control, and intermittent feedback control.
A gray-box model is one that has a known structure (generally constrained to a strict subset of the class of models it is drawn from) but has unknown parameters. Such models typically embody or reflect the underlying physical or mechanistic understanding we have about the system, as well as structural features such as the delineation of subsystems and their interconnections. The unknown parameters in the gray-box model then become the focus of our system identification efforts.
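In the simplest case, gray-box identification reduces to fitting the unknown parameters of a known structure by least squares; the sketch below (a hypothetical first-order step-response model with simulated data, not tied to any particular application discussed here) illustrates this:

    # Minimal gray-box identification sketch: the model structure (a first-order
    # step response) is known, only its parameters (gain K, time constant tau) are
    # not; they are estimated from noisy data by least squares. Values are hypothetical.
    import numpy as np
    from scipy.optimize import curve_fit

    def step_response(t, K, tau):            # known gray-box structure
        return K * (1.0 - np.exp(-t / tau))

    rng = np.random.default_rng(5)
    t = np.linspace(0, 10, 200)
    y_meas = step_response(t, 2.0, 1.5) + 0.05 * rng.normal(size=t.size)   # simulated "data"

    (K_hat, tau_hat), _ = curve_fit(step_response, t, y_meas, p0=[1.0, 1.0])
    print(f"K ~ {K_hat:.2f}, tau ~ {tau_hat:.2f}")    # close to the true 2.0 and 1.5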
In a variety of application domains, ranging from biology and medicine to power systems, the gray-box models that practitioners accept, as plausible representations of the reality they deal with every day, have been built up over decades of study, and are large, detailed and complex. In addition to being difficult to simulate or compute or design with, a significant feature of these models is the uncertainty associated with many or most of the parameters in the model. The data that one collects from the associated system is rarely rich enough to allow reliable identification of all these parameters, yet there are good reasons to not be satisfied with direct black-box identification of a reduced-order model. The challenge then is to develop meaningful reduced-order gray-box models that reflect the detailed, hard-won knowledge one has about the system, while being better suited to identification and simulation and control design than the original large model.
Practitioners generally seem to have an intuitive understanding of what aspects of the original model structure, and which variables and parameters, should be retained in a physically or mechanistically meaningful reduced-order model for whatever aspect of the system behavior they are dealing with at a particular time. Can we capture and perhaps improve on what they are doing when they develop their (often informal) reduced models?
This talk will illustrate and elaborate on the above themes. Examples will be presented of approaches and tools that might be used to explore and expose structure in a detailed gray-box model, to guide gray-box reduction.
Most individuals form their opinions about the quality of products, social trends and political issues via their interactions in social and economic networks. While the role of social networks as a conduit for information is as old as humanity, recent social and technological developments, such as Facebook, blogs and Twitter, have added further to the complexity of network interactions. Despite the ubiquity of social networks and their importance in communication, we know relatively little about how opinions form and information is transmitted in such networks. For example, does a large social network of individuals holding dispersed information aggregate it efficiently? Can falsehoods, misinformation and rumors spread over networks? Do social networks, empowered by our modern communication means, support the wisdom of crowds or their ignorance? Systematic analysis of these questions necessitates a combination of tools and insights from game theory, the study of multiagent systems, and control theory. Game theory is central for studying both the selfish decisions and actions of individuals and the information that they reveal or communicate. Control theory is essential for a holistic study of networks and for developing the tools for optimization over networks. In this talk, I report recent work on combining game theoretic and control theoretic approaches to the analysis of social learning over networks.
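As one classical and deliberately simple model of opinion formation over a network (DeGroot-style averaging; not necessarily the model analyzed in the talk, and the weights below are hypothetical), each agent repeatedly replaces its opinion with a weighted average of its neighbors' opinions, and the network converges to a common value:

    # A classical DeGroot-style averaging model (one of the simplest models of
    # opinion formation over a network; not necessarily the talk's model): each
    # agent repeatedly updates its opinion to a weighted average of its neighbors'.
    import numpy as np

    W = np.array([[0.6, 0.4, 0.0, 0.0],     # row-stochastic trust/influence weights
                  [0.3, 0.4, 0.3, 0.0],
                  [0.0, 0.3, 0.4, 0.3],
                  [0.0, 0.0, 0.5, 0.5]])
    x = np.array([1.0, 0.0, 0.0, 0.0])      # initial opinions; agent 0 holds the "information"
    for _ in range(200):
        x = W @ x                            # x(t+1) = W x(t)
    print(np.round(x, 3))                    # opinions converge to a common value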