Episodes

  • Support the show to get full episodes and join the Discord community.

    The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. 

    Read more about our partnership.

    Check out this story: Monkeys build mental maps to navigate new tasks

    Sign up for “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.

    To explore more neuroscience news and perspectives, visit thetransmitter.org.

    Kim Stachenfeld embodies the original core focus of this podcast: the exploration of the intersection between neuroscience and AI, now commonly known as Neuro-AI. That's because she walks both lines. Kim is a Senior Research Scientist at Google DeepMind, the AI company that sprang from neuroscience principles, and she also does research at the Center for Theoretical Neuroscience at Columbia University. She's been using her expertise in modeling, reinforcement learning, and cognitive maps, for example, to help understand brains and to help improve AI. I've been wanting to have her on for a long time to get her broad perspective on AI and neuroscience.

    We discuss the relative roles of industry and academia in pursuing various objectives related to understanding and building cognitive entities.

    She's studied the hippocampus in her research on reinforcement learning and cognitive maps, so we discuss what the heck the hippocampus does, since it seems to be implicated in so many functions, and how she thinks of reinforcement learning these days.

    Most recently, Kim has focused at DeepMind on more practical engineering questions, using deep learning models to predict things like chaotic turbulent flows, and even to help design things like bridges and airplanes. And we don't get into the specifics of that work, but, given that I just spoke with Damian Kelty-Stephen, who thinks of brains partially as turbulent cascades, Kim and I discuss how her work on modeling turbulence has shaped her thoughts about brains.

    Kim's website.
    Twitter: @neuro_kim.
    Related papers:
    Scaling Laws for Neural Language Models.
    Emergent Abilities of Large Language Models.
    Learned simulators:
    Learned coarse models for efficient turbulence simulation.
    Physical design using differentiable learned simulators.

    Check out the transcript, provided by The Transmitter.

    0:00 - Intro
    4:31 - DeepMind's original and current vision
    9:53 - AI as tools and models
    12:53 - Has AI hindered neuroscience?
    17:05 - DeepMind vs academic work balance
    20:47 - Is industry better suited to understand brains?
    24:42 - Trajectory of DeepMind
    27:41 - Kim's trajectory
    33:35 - Is the brain a ML entity?
    36:12 - Hippocampus
    44:12 - Reinforcement learning
    51:32 - What does neuroscience need more and less of?
    1:02:53 - Neuroscience in a weird place?
    1:06:41 - How Kim's questions have changed
    1:16:31 - Intelligence and LLMs
    1:25:34 - Challenges

  • Support the show to get full episodes and join the Discord community.

    Àlex Gómez-Marín heads The Behavior of Organisms Laboratory at the Institute of Neuroscience in Alicante, Spain. He's one of those theoretical physicists turned neuroscientists, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness," which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday experiences: for example, when we are under the influence of hallucinogens, when we have near-death experiences (as Alex has), paranormal experiences, and so on.

    So we discuss what led up to his interests in these edges of consciousness, how he now thinks about consciousness and doing science in general, how important it is to make room for all possible explanations of phenomena, and to leave our metaphysics open all the while.

    Alex's website: The Behavior of Organisms Laboratory.
    Twitter: @behaviOrganisms.
    Previous episodes:
    BI 168 Frauke Sandig and Eric Black w Alex Gomez-Marin: AWARE: Glimpses of Consciousness.
    BI 136 Michel Bitbol and Alex Gomez-Marin: Phenomenology.
    Related:
    The Consciousness of Neuroscience.
    Seeing the consciousness forest for the trees.
    The stairway to transhumanist heaven.

    0:00 - Intro
    4:13 - Evolving viewpoints
    10:05 - Near-death experience
    18:30 - Mechanistic neuroscience vs. the rest
    22:46 - Are you doing science?
    33:46 - Where is my mind?
    44:55 - Productive vs. permissive brain
    59:30 - Panpsychism
    1:07:58 - Materialism
    1:10:38 - How to choose what to do
    1:16:54 - Fruit flies
    1:19:52 - AI and the Singularity


  • Support the show to get full episodes and join the Discord community.

    Damian Kelty-Stephen is an experimental psychologist at the State University of New York at New Paltz. Last episode with Luis Favela, we discussed many of the ideas from ecological psychology, and how Louie is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Louie. What drew me originally to Damian was a paper he put together with a bunch of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. And we discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors. Along the way, we talk about his interests in cascade dynamics and turbulence to also explain our intelligence and behaviors. So, I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.

    Damian's website.
    Related papers:
    In search for an alternative to the computer metaphor of the mind and brain.
    Multifractal emergent processes: Multiplicative interactions override nonlinear component properties.

    0:00 - Intro
    2:34 - Damian's background
    9:02 - Brains
    12:56 - Do neuroscientists have it all wrong?
    16:56 - Fractals everywhere
    28:01 - Fractality, causality, and cascades
    32:01 - Cascade instability as a metaphor for the brain
    40:43 - Damian's worldview
    46:09 - What is AI missing?
    54:26 - Turbulence
    1:01:02 - Intelligence without fractals? Multifractality
    1:10:28 - Ergodicity
    1:19:16 - Fractality, intelligence, life
    1:23:24 - What's exciting, changing viewpoints

  • Support the show to get full episodes and join the Discord community.

    Luis Favela is an Associate Professor at Indiana University Bloomington. He is part philosopher, part cognitive scientist, part many things, and on this episode we discuss his new book, The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment.

    In the book, Louie presents his NeuroEcological Nexus Theory, or NExT, which, as the subtitle says, proposes a way forward to tie together our brains, our bodies, and the environment; namely, it has a lot to do with the complexity sciences and manifolds, which we discuss. But the book doesn't just present his theory. Among other things, it presents a rich historical look into why ecological psychology and neuroscience haven't been exactly friendly over the years, in terms of how to explain our behaviors, the role of brains in those explanations, how to think about what minds are, and so on. And it suggests how the two fields can get over their differences and be friends moving forward. And I'll just say, it's written in a very accessible manner, gently guiding the reader through many of the core concepts and science that have shaped ecological psychology and neuroscience, and for that reason alone I highly recommend it.

    Ok, so we discuss a bunch of topics in the book, how Louie thinks, and Louie gives us some great background and historical lessons along the way.

    Luis' website.
    Book: The Ecological Brain: Unifying the Sciences of Brain, Body, and Environment

    0:00 - Intro
    7:05 - Louie's target with NExT
    20:37 - Ecological psychology and grid cells
    22:06 - Why irreconcilable?
    28:59 - Why hasn't ecological psychology evolved more?
    47:13 - NExT
    49:10 - Hypothesis 1
    55:45 - Hypothesis 2
    1:02:55 - Artificial intelligence and ecological psychology
    1:16:33 - Manifolds
    1:31:20 - Hypothesis 4: Body, low-D, synergies
    1:35:53 - Hypothesis 5: Mind emerges
    1:36:23 - Hypothesis 6:

  • Support the show to get full episodes and join the Discord community.

    Jovo, as you'll learn, is theoretically oriented, and enjoys the formalism of mathematics to approach questions that begin with a sense of wonder. So after I learn more about his overall approach, the first topic we discuss is the world's currently largest map of an entire brain... the connectome of an insect, the fruit fly. We talk about his role in this collaborative effort, what the heck a connectome is, why it's useful and what to do with it, and so on.

    The second main topic we discuss is his theoretical work on what his team has called prospective learning. Prospective learning differs in a fundamental way from the vast majority of AI these days, which they call retrospective learning. So we discuss what prospective learning is, and how it may improve AI moving forward.

    At some point a little audio/video sync issue crops up, so we switched to another recording method and fixed it... so just hang tight if you're viewing the podcast... it'll get better soon.

    0:00 - Intro
    5:25 - Jovo's approach
    13:10 - Connectome of a fruit fly
    26:39 - What to do with a connectome
    37:04 - How important is a connectome?
    51:48 - Prospective learning
    1:15:20 - Efficiency
    1:17:38 - AI doomerism

  • Support the show to get full episodes and join the Discord community.

    Jolande Fooken is a postdoctoral researcher interested in how we move our eyes and our hands together to accomplish naturalistic tasks. Hand-eye coordination is one of those things that sounds simple, and we do it all the time to make meals for our children day in, and day out, and day in, and day out. But it becomes way less seemingly simple as soon as you learn how we make various kinds of eye movements, and how we make various kinds of hand movements, and use various strategies to do repeated tasks. And like everything in the brain sciences, it's something we don't have a perfect story for yet. So, Jolande and I discuss her work, and thoughts, and ideas around those and related topics.

    Jolande's website.
    Twitter: @ookenfooken.
    Related papers:
    I am a parent. I am a scientist.
    Eye movement accuracy determines natural interception strategies.
    Perceptual-cognitive integration for goal-directed action in naturalistic environments.

    0:00 - Intro
    3:27 - Eye movements
    8:53 - Hand-eye coordination
    9:30 - Hand-eye coordination and naturalistic tasks
    26:45 - Levels of expertise
    34:02 - Yarbus and eye movements
    42:13 - Varieties of experimental paradigms, varieties of viewing the brain
    52:46 - Career vision
    1:04:07 - Evolving view about the brain
    1:10:49 - Coordination, robots, and AI

  • Support the show to get full episodes and join the Discord community.

    Recently I was invited to moderate a panel at the annual Computational and Systems Neuroscience, or COSYNE, conference. This year was the 20th anniversary of COSYNE, and we were in Lisbon, Portugal. The panel's goal was to discuss the relationship between neuroscience and AI. The panelists were Tony Zador, Alex Pouget, Blaise Aguera y Arcas, Kim Stachenfeld, Jonathan Pillow, and Eva Dyer. And I'll let them introduce themselves soon. Two of the panelists, Tony and Alex, co-founded COSYNE those 20 years ago, and they continue to have different views about the neuro-AI relationship. Tony has been on the podcast before and will return soon, and I'll also have Kim Stachenfeld on in a couple episodes. I think this was a fun discussion, and I hope you enjoy it. There's plenty of back and forth, a wide range of opinions, and some criticism from one of the audience questioners. This is an edited audio version, to remove long dead space and such. There's about 30 minutes of just panel, then the panel starts fielding questions from the audience.

    COSYNE.

  • Support the show to get full episodes and join the Discord community.

    Mazviita Chirimuuta is a philosopher at the University of Edinburgh. Today we discuss topics from her new book, The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.

    She largely argues that when we try to understand something complex, like the brain, using models, and math, and analogies, for example - we should keep in mind these are all ways of simplifying and abstracting away details to give us something we actually can understand. And, when we do science, every tool we use and perspective we bring, every way we try to attack a problem, these are all both necessary to do the science and limit the interpretation we can claim from our results. She does all this and more by exploring many topics in neuroscience and philosophy throughout the book, many of which we discuss today.

    Mazviita's University of Edinburgh page.
    The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience.
    Previous Brain Inspired episodes:
    BI 072 Mazviita Chirimuuta: Understanding, Prediction, and Reality
    BI 114 Mark Sprevak and Mazviita Chirimuuta: Computation and the Mind

    0:00 - Intro
    5:28 - Neuroscience to philosophy
    13:39 - Big themes of the book
    27:44 - Simplifying by mathematics
    32:19 - Simplifying by reduction
    42:55 - Simplification by analogy
    46:33 - Technology precedes science
    55:04 - Theory, technology, and understanding
    58:04 - Cross-disciplinary progress
    58:45 - Complex vs. simple(r) systems
    1:08:07 - Is science bound to study stability?
    1:13:20 - 4E for philosophy but not neuroscience?
    1:28:50 - ANNs as models
    1:38:38 - Study of mind

  • Support the show to get full episodes and join the Discord community.

    As some of you know, I recently got back into the research world, and in particular I work in Eric Yttri's lab at Carnegie Mellon University.

    Eric's lab studies the relationship between various kinds of behaviors and the neural activity in a few areas known to be involved in enacting and shaping those behaviors, namely the motor cortex and basal ganglia. And to study that, he uses tools like optogenetics, neuronal recordings, and stimulation, while mice perform certain tasks, or, in my case, while they freely behave, wandering around an enclosed space.

    We talk about how Eric got here, how and why the motor cortex and basal ganglia are still mysteries despite lots of theories and experimental work, Eric's work on trying to solve those mysteries using both trained tasks and more naturalistic behavior. We talk about the valid question, "What is a behavior?", and lots more.

    Yttri Lab

    Twitter: @YttriLab
    Related papers:
    Opponent and bidirectional control of movement velocity in the basal ganglia.
    B-SOiD, an open-source unsupervised algorithm for identification and fast prediction of behaviors.

    0:00 - Intro
    2:36 - Eric's background
    14:47 - Different animal models
    17:59 - ANNs as models for animal brains
    24:34 - Main question
    25:43 - How circuits produce appropriate behaviors
    26:10 - Cerebellum
    27:49 - What do motor cortex and basal ganglia do?
    49:12 - Neuroethology
    1:06:09 - What is a behavior?
    1:11:18 - Categorize behavior (B-SOiD)
    1:22:01 - Real behavior vs. ANNs
    1:33:09 - Best era in neuroscience

  • Support the show to get full episodes and join the Discord community.

    Peter Stratton is a research scientist at Queensland University of Technology.

    I was pointed toward Pete by a Patreon supporter, who sent me a sort of perspective piece Pete wrote that is the main focus of our conversation, although we also talk about some of his work in particular - for example, he works with spiking neural networks, like my last guest, Dan Goodman.

    What Pete argues for is what he calls a sideways-in approach. So a bottom-up approach is to build things like we find them in the brain, put them together, and voila, we'll get cognition. A top-down approach, the current approach in AI, is to train a system to perform a task, give it some algorithms to run, and fiddle with the architecture and lower level details until you pass your favorite benchmark test. Pete is focused more on the principles of computation brains employ that current AI doesn't. If you're familiar with David Marr, this is akin to his so-called "algorithmic level", but it's between that and the "implementation level", I'd say. Because Pete is focused on the synthesis of different kinds of brain operations - how they intermingle to perform computations and produce emergent properties. So he thinks more like a systems neuroscientist in that respect. Figuring that out is figuring out how to make better AI, Pete says. So we discuss a handful of those principles, all through the lens of how challenging a task it is to synthesize multiple principles into a coherent functioning whole (as opposed to a collection of parts). But, hey, evolution did it, so I'm sure we can, too, right?

    Peter's website.
    Related papers:
    Convolutionary, Evolutionary, and Revolutionary: What’s Next for Brains, Bodies, and AI?
    Making a Spiking Net Work: Robust brain-like unsupervised machine learning.
    Global segregation of cortical activity and metastable dynamics.
    Unlocking neural complexity with a robotic key

    0:00 - Intro
    3:50 - AI background, neuroscience principles
    8:00 - Overall view of modern AI
    14:14 - Moravec's paradox and robotics
    20:50 - Understanding movement to understand cognition
    30:01 - How close are we to understanding brains/minds?
    32:17 - Pete's goal
    34:43 - Principles from neuroscience to build AI
    42:39 - Levels of abstraction and implementation
    49:57 - Mental disorders and robustness
    55:58 - Function vs. implementation
    1:04:04 - Spiking networks
    1:07:57 - The roadmap
    1:19:10 - AGI
    1:23:48 - The terms AGI and AI
    1:26:12 - Consciousness

  • Support the show to get full episodes and join the Discord community.

    You may know my guest as the co-founder of Neuromatch, the excellent online computational neuroscience academy, or as the creator of the Brian spiking neural network simulator, which is freely available. I know him as a spiking neural network practitioner extraordinaire. Dan Goodman runs the Neural Reckoning Group at Imperial College London, where they use spiking neural networks to figure out how biological and artificial brains reckon, or compute.

    All of the current AI we use to do all the impressive things we do, essentially all of it, is built on artificial neural networks. Notice the word "neural" there. That word is meant to communicate that these artificial networks do stuff the way our brains do stuff. And indeed, if you take a few steps back, spin around 10 times, take a few shots of whiskey, and squint hard enough, there is a passing resemblance. One thing you'll probably still notice, in your drunken stupor, among the thousand ways ANNs differ from brains, is that they don't use action potentials, or spikes. From the perspective of neuroscience, that can seem mighty curious. Because, for decades now, neuroscience has focused on spikes as the things that make our cognition tick.

    We count them and compare them in different conditions, and generally put a lot of stock in their usefulness in brains.

    So what does it mean that modern neural networks disregard spiking altogether?

    Maybe spiking really isn't important to process and transmit information as well as our brains do. Or maybe spiking is one among many ways for intelligent systems to function well. Dan shares some of what he's learned and how he thinks about spiking and SNNs and a host of other topics.

    Neural Reckoning Group.
    Twitter: @neuralreckoning.
    Related papers:
    Neural heterogeneity promotes robust learning.
    Dynamics of specialization in neural modules under resource constraints.
    Multimodal units fuse-then-accumulate evidence across channels.
    Visualizing a joint future of neuroscience and neuromorphic engineering.

    0:00 - Intro
    3:47 - Why spiking neural networks, and a mathematical background
    13:16 - Efficiency
    17:36 - Machine learning for neuroscience
    19:38 - Why not jump ship from SNNs?
    23:35 - Hard and easy tasks
    29:20 - How brains and nets learn
    32:50 - Exploratory vs. theory-driven science
    37:32 - Static vs. dynamic
    39:06 - Heterogeneity
    46:01 - Unifying principles vs. a hodgepodge
    50:37 - Sparsity
    58:05 - Specialization and modularity
    1:00:51 - Naturalistic experiments
    1:03:41 - Projects for SNN research
    1:05:09 - The right level of abstraction
    1:07:58 - Obstacles to progress
    1:12:30 - Levels of explanation
    1:14:51 - What has AI taught neuroscience?
    1:22:06 - How has neuroscience helped AI?

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately. Things like:

    Whether brains actually reorganize after damage
    The role of brain plasticity in general
    The path toward and the path not toward understanding higher cognition
    How to fix motor problems after strokes
    AGI
    Functionalism, consciousness, and much more.

    Relevant links:

    John's Lab.
    Twitter: @blamlab
    Related papers:
    What are we talking about? Clarifying the fuzzy concept of representation in neuroscience and beyond.
    Against cortical reorganisation.
    Other episodes with John:
    BI 025 John Krakauer: Understanding Cognition
    BI 077 David and John Krakauer: Part 1
    BI 078 David and John Krakauer: Part 2
    BI 113 David Barack and John Krakauer: Two Views On Cognition

    Time stamps:
    0:00 - Intro
    2:07 - It's a podcast episode!
    6:47 - Stroke and Sherrington neuroscience
    19:26 - Thinking vs. moving, representations
    34:15 - What's special about humans?
    56:35 - Does cortical reorganization happen?
    1:14:08 - Current era in neuroscience

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    By day, Max Bennett is an entrepreneur. He has cofounded and CEO'd multiple AI and technology companies. During countless other hours, he has studied the brain-related sciences. Those long hours of research have paid off in the form of this book, A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

    Three lines of research formed the basis for how Max synthesized knowledge into the ideas in his current book: findings from comparative psychology (comparing brains and minds of different species), evolutionary neuroscience (how brains have evolved), and artificial intelligence, especially the algorithms developed to carry out functions. We go through I think all five of the breakthroughs in some capacity. A recurring theme is that each breakthrough may explain multiple new abilities. For example, the evolution of the neocortex may have endowed early mammals with the ability to simulate or imagine what isn't immediately present, and this ability might further explain mammals' capacity to engage in vicarious trial and error (imagining possible actions before trying them out), the capacity to engage in counterfactual learning (what would have happened if things went differently than they did), and the capacity for episodic memory and imagination.

    The book is filled with unifying accounts like that, and it makes for a great read. Strap in, because Max gives a sort of masterclass about many of the ideas in his book.

    Twitter: @maxsbennett
    Book: A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains.

    0:00 - Intro
    5:26 - Why evolution is important
    7:22 - MacLean's triune brain
    14:59 - Breakthrough 1: Steering
    29:06 - Fish intelligence
    40:38 - Breakthrough 3: Mentalizing
    52:44 - How could we improve the human brain?
    1:00:44 - What is intelligence?
    1:13:50 - Breakthrough 5: Speaking

  • Support the show to get full episodes and join the Discord community.

    Welcome to another special panel discussion episode.

    I was recently invited to moderate a discussion among six panelists at the annual Aspirational Neuroscience meetup. Aspirational Neuroscience is a nonprofit community run by Kenneth Hayworth. Ken has been on the podcast before, on episode 103. Ken helps me introduce the meetup and panel discussion for a few minutes. The goal in general was to discuss how current and developing neuroscience technologies might be used to decode a nontrivial memory from a static connectome - what the obstacles are, how to surmount those obstacles, and so on.

    There isn't video of the event, just audio, and because we were all sharing microphones and passing them around, you'll hear some microphone noise along the way - but I did my best to optimize the audio quality, and it turned out mostly quite listenable, I believe.

    Aspirational Neuroscience
    Panelists:
    Anton Arkhipov, Allen Institute for Brain Science. @AntonSArkhipov
    Konrad Kording, University of Pennsylvania. @KordingLab
    Tomás Ryan, Trinity College Dublin. @TJRyan_77
    Srinivas Turaga, Janelia Research Campus.
    Dong Song, University of Southern California. @dongsong
    Zhihao Zheng, Princeton University. @zhihaozheng

    0:00 - Intro
    1:45 - Ken Hayworth
    14:09 - Panel Discussion

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    Laura Gradowski is a philosopher of science at the University of Pittsburgh. Pluralism is roughly the idea that there is no unified account of any scientific field, that we should be tolerant of and welcome a variety of theoretical and conceptual frameworks, and methods, and goals, when doing science. Pluralism is kind of a buzz word right now in my little neuroscience world, but it's an old and well-trodden notion... many philosophers have been calling for pluralism for many years. But how pluralistic should we be in our studies and explanations in science? Laura suggests we should be very, very pluralistic, and to make her case, she cites examples in the history of science of theories and theorists that were once considered "fringe" but went on to become mainstream accepted theoretical frameworks. I thought it would be fun to have her on to share her ideas about fringe theories, mainstream theories, pluralism, etc.

    We discuss a wide range of topics, but also some specific to the brain and mind sciences. Laura goes through an example of something and someone going from fringe to mainstream - the Garcia effect, named after John Garcia, whose findings went against the grain of behaviorism, the dominant dogma of the day in psychology. But this overturning only happened after Garcia had to endure a long scientific hell of his results being ignored and shunned. So, there are multiple examples like that, and we discuss a handful. This has led Laura to the conclusion that we should accept almost all theoretical frameworks. We discuss her ideas about how to implement this, where to draw the line, and much more.

    Laura's page at the Center for the Philosophy of Science at the University of Pittsburgh.

    Facing the Fringe.

    Garcia's reflections on his troubles: Tilting at the Paper Mills of Academe

    0:00 - Intro
    3:57 - What is fringe?
    10:14 - What makes a theory fringe?
    14:31 - Fringe to mainstream
    17:23 - Garcia effect
    28:17 - Fringe to mainstream: other examples
    32:38 - Fringe and consciousness
    33:19 - Word meanings change over time
    40:24 - Pseudoscience
    43:25 - How fringe becomes mainstream
    47:19 - More fringe characteristics
    50:06 - Pluralism as a solution
    54:02 - Progress
    1:01:39 - Encyclopedia of theories
    1:09:20 - When to reject a theory
    1:20:07 - How fringe becomes fringe
    1:22:50 - Marginalization
    1:27:53 - Recipe for fringe theorist

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks... how to think about them, why they matter, and how Eric's perspectives have changed through his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus when you're just sort of going about your day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions.

    Eric's website.
    Related papers:
    Predictive learning as a network mechanism for extracting low-dimensional latent space representations.
    A scale-dependent measure of system dimensionality.
    From lazy to rich to exclusive task representations in neural networks and neural codes.
    Feedback through graph motifs relates structure and function in complex networks.

    0:00 - Intro
    4:15 - Reflecting on the rise of dynamical systems in neuroscience
    11:15 - DST view on macro scale
    15:56 - Intuitions
    22:07 - Eric's approach
    31:13 - Are brains more or less impressive to you now?
    38:45 - Why is dimensionality important?
    50:03 - High-D in low-D
    54:14 - Dynamical motifs
    1:14:56 - Theory for its own sake
    1:18:43 - Rich vs. lazy learning
    1:22:58 - Latent variables
    1:26:58 - What assumptions give you most pause?

  • Support the show to get full episodes and join the Discord community.

    I was recently invited to moderate a panel at the annual Bernstein conference - this one was in Berlin, Germany. The panel I moderated was part of a satellite workshop at the conference called How can machine learning be used to generate insights and theories in neuroscience? Below are the panelists. I hope you enjoy the discussion!

    Program: How can machine learning be used to generate insights and theories in neuroscience?
    Panelists:
    Katrin Franke. Lab website. Twitter: @kfrankelab.
    Ralf Haefner. Haefner lab. Twitter: @haefnerlab.
    Martin Hebart. Hebart Lab. Twitter: @martin_hebart.
    Johannes Jaeger. Yogi's website. Twitter: @yoginho.
    Fred Wolf. Fred's university webpage.

    Organizers:

    Alexander Ecker | University of Göttingen, Germany
    Fabian Sinz | University of Göttingen, Germany
    Mohammad Bashiri, Pavithra Elumalai, Michaela Vystrcilová | University of Göttingen, Germany

  • Support the show to get full episodes and join the Discord community.

    David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.

    David has been on the podcast a few times... once by himself, and again with Gyorgy Buzsaki.

    Poeppel lab
    Twitter: @davidpoeppel.
    Related papers:
    We don’t know how the brain stores anything, let alone words.
    Memory in humans and deep language models: Linking hypotheses for model augmentation.
    The neural ingredients for a language of thought are available.

    0:00 - Intro
    11:17 - Across levels
    14:59 - Nature of memory
    24:12 - Using the right tools for the right question
    35:46 - LLMs, what they need, how they've shaped David's thoughts
    44:55 - Across levels
    54:07 - Speed of progress
    1:02:21 - Neuroethology and mental illness - patreon
    1:24:42 - Language of Thought

  • Support the show to get full episodes and join the Discord community.

    Check out my free video series about what's missing in AI and Neuroscience

    Kevin Mitchell is professor of genetics at Trinity College Dublin. He's been on the podcast before, when we talked a little about his previous book, Innate – How the Wiring of Our Brains Shapes Who We Are. He's back today to discuss his new book Free Agents: How Evolution Gave Us Free Will. The book is written very well and guides the reader through the wide range of scientific knowledge and reasoning that undergirds Kevin's main take-home message: our free will comes from the fact that we are biological organisms, biological organisms have agency, and as that agency evolved to become more complex and layered, so did our ability to exert free will. We touch on a handful of topics in the book, like the idea of agency, how it came about at the origin of life, and how the complexity and richness of our agency evolved as organisms became more complex.

    We also discuss Kevin's reliance on the indeterminacy of the universe to tell his story, the underlying randomness at fundamental levels of physics. Although indeterminacy isn't necessary for ongoing free will, it is responsible for the capacity for free will to exist in the first place. We discuss the brain's ability to harness its own randomness when needed, creativity, whether and how it's possible to create something new, artificial free will, and lots more.

    Kevin's website.
    Twitter: @WiringtheBrain
    Book: Free Agents: How Evolution Gave Us Free Will

    4:27 - From Innate to Free Agents
    9:14 - Thinking of the whole organism
    15:11 - Who the book is for
    19:49 - What bothers Kevin
    27:00 - Indeterminacy
    30:08 - How it all began
    33:08 - How indeterminacy helps
    43:58 - Libet's free will experiments
    50:36 - Creativity
    59:16 - Selves, subjective experience, agency, and free will
    1:10:04 - Levels of agency and free will
    1:20:38 - How much free will can we have?
    1:28:03 - Hierarchy of mind constraints
    1:36:39 - Artificial agents and free will
    1:42:57 - Next book?

  • Check out my free video series about what's missing in AI and Neuroscience

    Support the show to get full episodes and join the Discord community.

    Alicia Juarrero is a philosopher and has been interested in complexity since before it was cool.

    In this episode, we discuss many of the topics and ideas in her new book, Context Changes Everything: How Constraints Create Coherence, which makes the thorough case that constraints should be given far more attention when trying to understand complex systems like brains and minds - how they're organized, how they operate, how they're formed and maintained, and so on. Modern science, thanks in large part to the success of physics, focuses on a single kind of causation - the kind involved when one billiard ball strikes another. But that kind of causation neglects what Alicia argues are the most important features of complex systems: the constraints that shape the dynamics and possibility spaces of systems. Much of Alicia's book describes the wide range of types of constraints we should be paying attention to, and how they interact and mutually influence each other.

    I highly recommend the book, and you may want to read it before, during, and after our conversation. That's partly because, if you're like me, the concepts she discusses don't yet sit comfortably alongside the way we're used to thinking about how things interact - thinking across levels of organization turns out to be hard. You might also want her book handy because, hang on to your hats, we jump around a lot among those concepts. Context Changes Everything comes about 25 years after her previous classic, Dynamics in Action, which we also discuss and which I also recommend if you want a primer for her newer, more expansive work. Alicia's work touches on all things complex, from self-organizing systems like whirlpools, to ecologies, businesses, societies, and of course minds and brains.

    Book: Context Changes Everything: How Constraints Create Coherence

    0:00 - Intro
    3:37 - 25 years thinking about constraints
    8:45 - Dynamics in Action and eliminativism
    13:08 - Efficient and other kinds of causation
    19:04 - Complexity via context independent and dependent constraints
    25:53 - Enabling and limiting constraints
    30:55 - Across scales
    36:32 - Temporal constraints
    42:58 - A constraint cookbook?
    52:12 - Constraints in a mechanistic worldview
    53:42 - How to explain using constraints
    56:22 - Concepts and multiple realizability
    59:00 - Kevin Mitchell question
    1:08:07 - Mac Shine question
    1:19:07 - 4E
    1:21:38 - Dimensionality across levels
    1:27:26 - AI and constraints
    1:33:08 - AI and life