Thu, April 22, 2021
Recent results in deep learning have left no doubt that it is among the most powerful modeling tools we possess. The real question is how we can utilize deep learning for control without losing stability and performance guarantees. Even though recent successes in deep reinforcement learning (DRL) have shown that deep learning can be a powerful value-function approximator, several key questions must be answered before deep learning enables a new frontier in robotics. DRL methods have proven difficult to apply to real-world robotic systems, where stability matters and safety is critical. In this talk, I will present our recent work on bringing deep learning-based methods to provably stable adaptive control, and I will expand on the possibility of using concepts from adaptive control to create safe and stable reinforcement learning algorithms. I will put our theoretical work in context by discussing several applications in flight control and agricultural robotics. I will also highlight our recent work on understanding how the octopus brain works and how it can inspire future tools for learning and distributed control.
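To make the phrase "provably stable adaptive control" concrete for readers outside the area, below is a minimal sketch of classical model-reference adaptive control (MRAC) for a scalar plant, where a Lyapunov function certifies that the tracking error decays. It illustrates the style of guarantee the abstract refers to, not the speaker's deep-learning method; all quantities (a_true, a_m, gamma, the reference signal) are hypothetical and chosen only for illustration.

```python
import numpy as np

# Minimal MRAC sketch for a scalar plant x_dot = a*x + u with unknown a.
# Illustrative only; not the speaker's method. All constants are made up.

a_true = 2.0      # unknown, unstable plant parameter (used only to simulate)
a_m = -3.0        # stable reference-model pole: x_m_dot = a_m*x_m + r
gamma = 5.0       # adaptation gain
dt, T = 1e-3, 5.0

x, x_m, theta_hat = 1.0, 1.0, 0.0   # plant state, reference state, estimate of a
for _ in range(int(T / dt)):
    r = 1.0                          # constant reference command
    # Control law: cancel the estimated plant term and impose the reference model,
    # so the error dynamics become e_dot = a_m*e + (a_true - theta_hat)*x.
    u = -theta_hat * x + a_m * x + r
    e = x - x_m                      # tracking error drives the adaptation
    # Lyapunov-derived update: with V = e**2/2 + (a_true - theta_hat)**2/(2*gamma),
    # choosing theta_hat_dot = gamma*e*x yields V_dot = a_m*e**2 <= 0, so e -> 0.
    theta_hat += gamma * e * x * dt
    # Euler integration of plant and reference model.
    x += (a_true * x + u) * dt
    x_m += (a_m * x_m + r) * dt

print(f"final tracking error {x - x_m:.2e}, theta_hat {theta_hat:.3f} (a = {a_true})")
```

Deep learning-based adaptive controllers typically replace the single scalar estimate with a neural-network approximator while retaining a Lyapunov-style argument of this form; preserving that guarantee with a deep network is the hard part.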