inControl

Alberto Padoan
Jan 16, 2023 • 1h 4min

ep8 - Anuradha Annaswamy: Adaptive Control - From the "Brave Era" to Reinforcement Learning and Back

In this episode, our guest is Anuradha Annaswamy. Anu is the Director of the Active-Adaptive Control Laboratory and a Senior Research Scientist in the Department of Mechanical Engineering at the Massachusetts Institute of Technology. We delve into adaptive control and its exciting history, ranging from the Brave Era to the audacious X-15 tests and on to modern intersections with Reinforcement Learning.

Outline
02:15 - Anu's background
05:20 - What is adaptation?
08:30 - The Brave Era
15:17 - The X-15 accident
23:16 - Exploration vs exploitation
28:35 - Beyond linearity and time invariance
45:05 - Adaptive control vs Reinforcement Learning
52:12 - The future of adaptive control
54:34 - Outro

Episode links
Anu's lab: http://aaclab.mit.edu/
NCCR Symposium: https://tinyurl.com/bdz84p4c
Book - Stable adaptive systems: https://tinyurl.com/mw4saame
X-15 Flight 3-65-97: https://tinyurl.com/2kbe7nsy
Paper - Adaptive Control and the NASA X-15-3 Flight Revisited: https://tinyurl.com/2p83k7ez
Paper - A historical perspective of adaptive control and learning: https://tinyurl.com/yck89rcd
Paper - Adaptive Control and Intersections with Reinforcement Learning: https://tinyurl.com/yc27rsyd
KYP Lemma: https://tinyurl.com/mkf35jjt
Persistence of excitation: https://tinyurl.com/bpfwp9n9
Dual control: https://tinyurl.com/ywduzm5x
Paper - Robust adaptive control in the presence of bounded disturbances: https://tinyurl.com/4pztx23z
Paper - Reinforcement learning is direct adaptive optimal control: https://tinyurl.com/appnjzyn
MRAC: https://tinyurl.com/bdzzphju
Self Tuning Control: https://tinyurl.com/3mjs3skm

Support the show

Podcast info
Podcast website: https://www.incontrolpodcast.com/
Apple Podcasts: https://tinyurl.com/5n84j85j
Spotify: https://tinyurl.com/4rwztj3c
RSS: https://tinyurl.com/yc2fcv4y
Youtube: https://tinyurl.com/bdbvhsj6
Facebook: https://tinyurl.com/3z24yr43
Twitter: https://twitter.com/IncontrolP
Instagram: https://tinyurl.com/35cu4kr4

Acknowledgments and sponsors
This episode was supported by the National Centre of Competence in Research on «Dependable, ubiquitous automation» and the IFAC Activity fund. The podcast benefits from the help of an incredibly talented and passionate team. Special thanks to L. Seward, E. Cahard, F. Banis, F. Dörfler, J. Lygeros, ETH studio and mirrorlake. Music was composed by A New Element.
Nov 29, 2022 • 1h 11min

ep7 - Jean-Jacques Slotine: Sliding, nonlinear and adaptive control, contraction theory, complex networks, optimization, and machine learning

In this episode, our guest is Jean-Jacques Slotine, Professor of Mechanical Engineering and Information Sciences as well as Brain and Cognitive Sciences, Director of the Nonlinear Systems Laboratory at the Massachusetts Institute of Technology, and Distinguished Faculty at Google AI. We explore and connect a wide range of ideas from nonlinear and adaptive control to robotics, neuroscience, complex networks, optimization and machine learning.

Outline
00:00 - Intro
00:50 - Jean-Jacques' early life
06:17 - Why control?
09:45 - Sliding control and adaptive nonlinear control
18:47 - Neural networks
23:15 - First ventures in neuroscience
28:27 - Contraction theory and applications
48:26 - Synchronization
51:10 - Complex networks
57:59 - Optimization and machine learning
1:08:17 - Advice to future students and outro

Episode links
NCCR Symposium: https://tinyurl.com/bdz84p4c
Sliding mode control: https://tinyurl.com/2s45ra4m
Applied nonlinear control: https://tinyurl.com/4wmbt4bw
On the Adaptive Control of Robot Manipulators: https://tinyurl.com/b7jcpkzw
Gaussian Networks for Direct Adaptive Control: https://tinyurl.com/22zb7pkx
The intermediate cerebellum may function as a wave-variable processor: https://tinyurl.com/2c34ytep
On contraction analysis for nonlinear systems: https://tinyurl.com/5cw4z9j8
Kalman conjecture: https://tinyurl.com/2pfjsbke
I. Prigogine: https://tinyurl.com/5ct8yssb
RNNs of RNNs: https://tinyurl.com/3mpt7fec
How Synchronization Protects from Noise: https://tinyurl.com/2p82erwp
Controllability of complex networks: https://tinyurl.com/24w7hdae
B. Anderson: https://tinyurl.com/e9pkyxdx
Online lectures on nonlinear control: https://tinyurl.com/525cnru4
Oct 17, 2022 • 1h 3min

ep6 - Norbert Wiener and Cybernetics

Discover the fascinating life of Norbert Wiener, the founding father of cybernetics. Explore his prodigious academic journey and groundbreaking contributions in communication and control theory. Dive deep into his dynamic collaboration with Arturo Rosenblueth, highlighting their innovative discussions on feedback mechanisms. Learn about the philosophical implications of Wiener's work, which bridged biology and technology, and find out how his ethical principles shaped his legacy in the world of science.
Aug 18, 2022 • 53min

ep5 - Sean Meyn: Markov chains, networks, reinforcement learning, beekeeping and jazz

In this episode, our guest is Sean Meyn, Professor and Robert C. Pittman Eminent Scholar Chair in the Department of Electrical and Computer Engineering at the University of Florida. The episode features Sean's adventures in the areas of Markov chains, networks and Reinforcement Learning (RL), as well as anecdotes and trivia about beekeeping and jazz.

Outline
00:00 - Intro
00:22 - Sean's early steps
03:53 - Markov chains
08:45 - Networks
18:26 - Stochastic approximation
25:00 - Reinforcement Learning
38:57 - The intersection of Reinforcement Learning and Control
42:37 - Favourite theorem
44:05 - Beekeeping and jazz
48:47 - Outro

Episode links
Sean's website: https://meyn.ece.ufl.edu/
Sean's books: shorturl.at/CFGRY (and T. Sargent's review: shorturl.at/hlGNR)
G. Zames: shorturl.at/JPRWX (see also: shorturl.at/chiw5)
State space model: shorturl.at/hST07
The life and work of A.A. Markov: shorturl.at/qsv35
Fluid model: shorturl.at/HKN56
M/M/1 queue: shorturl.at/dQW36
Borkar-Meyn theorem: shorturl.at/eSTV4
NCCR Automation Symposia: shorturl.at/csv03 (see also shorturl.at/ekpZ3)
V. Konda's PhD Thesis: shorturl.at/bdrv7
Jul 12, 2022 • 39min

ep4 - Alessandro Chiuso: From system identification to computer vision and back

Alessandro Chiuso, a Professor at the University of Padova, dives into his fascinating journey from telecommunications to control engineering. He discusses the complexities of system identification and the transformative role of machine learning in this field. Alessandro highlights the balance between research and personal passion, sharing his experiences as a semi-professional skier. He also emphasizes the importance of curiosity and perseverance for academic success, encouraging future students to embrace challenges in their paths.
May 16, 2022 • 19min

ep1 - A brief prehistory of control theory

This episode breaks the ice with a bit of the prehistory of control theory. We discuss three iconic ancestors of the science of feedback: the water clocks developed by Ktesibios, the earliest known thermostat, and governors, a class of mechanical devices which, without exaggeration, enabled the first Industrial Revolution in Britain.

Outline
00:00 - Intro
01:32 - Ktesibios
06:15 - Cornelis Drebbel
11:55 - Governors

Episode links
O. Mayr - The origins of feedback control
K. Kelly - Out of Control
Ktesibios: https://en.wikipedia.org/wiki/Ctesibius
Drebbel: https://en.wikipedia.org/wiki/Cornelis_Drebbel
https://nautil.us/issue/12/feedback/the-vulgar-mechanic-and-his-magical-oven
https://sites.google.com/site/ukdrebbel/
Governors: J.C. Maxwell, "On Governors," Proc. of the Royal Society of London, vol. 16, pp. 270-283, 1868.
S. Bennett - A History of Control Engineering 1800-1930
Special issue on control education - The United Kingdom, by M.C. Smith, IEEE Control Systems Magazine, pp. 51-56, April 1996.
May 16, 2022 • 24min

ep2 - Florian Dörfler: Power is nothing without control

This episode features an interview with Florian Dörfler, who is an Associate Professor at the Automatic Control Laboratory at ETH Zürich, Switzerland. We discuss several topics, including his personal research trajectory, the influence of machine learning on control, and future challenges in control theory, among others.

Outline
00:00 - Intro
01:03 - Personal research trajectory
05:57 - Influence of machine learning on control
07:52 - Why do research in control?
09:51 - What would you change in control?
11:36 - Where is the field heading?
14:20 - Favourite theorem in control theory
16:20 - Vision: what would you like to achieve?
17:03 - Influential figures
19:17 - Sociology and control
21:23 - What would you do if you were a student today?

Episode links
Florian's website: http://people.ee.ethz.ch/~floriand/
Gershgorin theorem: https://en.wikipedia.org/wiki/Gershgorin_circle_theorem
Synchronization paper: https://www.pnas.org/doi/abs/10.1073/pnas.1212134110
Hamming - "A stroke of genius": https://www.mccurley.org/advice/hamming_advice.html
May 16, 2022 • 1h 22min

ep3 - Ben Recht: A tour of optimization, machine learning, and control

In this episode, our guest is Ben Recht. Ben is a Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. We discuss several topics, including his research trajectory, Ben's tour of reinforcement learning, and his passion for music, among others.

Outline
00:00 - Intro
01:01 - Ben predicts the birth of "inControl"
02:40 - Personal research trajectory
06:55 - How and why did you dive into control theory?
08:43 - Influential figures who shaped Ben's research
13:50 - The "argmin" blog & myth busting
27:43 - Ben's tour of reinforcement learning
45:18 - Future challenges for control
52:06 - Biological origin of learning
58:24 - "This or that" game
1:02:54 - Questions from the audience
1:14:51 - What would you do if you were a student today?
1:17:00 - Ben's band: "the fun years"

Episode links
Ben's website: http://people.eecs.berkeley.edu/~brecht/
argmin: http://www.argmin.net/
the fun years: http://thefunyears.com/
A tour of reinforcement learning: https://arxiv.org/abs/1806.09460
Patterns, predictions and actions: http://mlstory.org/
System level synthesis: https://arxiv.org/abs/1904.01634
Aizerman's conjecture: https://en.wikipedia.org/wiki/Aizerman%27s_conjecture
