Episodes

  • Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

    Timestamps:
    0:20 AI Impacts surveys
    18:11 What AI will look like in 20 years
    22:43 Experts’ extinction risk predictions
    29:35 Opinions on slowing down AI development
    31:25 AI “arms races”
    34:00 AI risk areas with the most agreement
    40:41 Do “high hopes and dire concerns” go hand-in-hand?
    42:00 Intelligence explosions
    45:37 Discontinuous progress
    49:43 Impacts of AI crossing the human-level intelligence threshold
    59:39 What does AI learn from human culture?
    1:02:59 AI scaling
    1:05:04 What should we do?

  • Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info

    Timestamps:
    00:00 Pausing AI
    10:23 Risks during an AI pause
    19:41 Hardware overhang
    29:04 Technological progress
    37:00 Safety research during a pause
    54:42 Social dynamics of AI risk
    1:10:00 What prevents cooperation?
    1:18:21 What about China?
    1:28:24 Protesting AGI corporations

  • Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org

    Timestamps:
    00:00 Encode Justice
    06:11 AI ethics and AI safety
    15:49 Humans in the loop
    23:59 AI in social media
    30:42 Deteriorating social skills?
    36:00 AIs identifying as AIs
    43:36 AI influence in elections
    50:32 AIs interacting with human systems

  • Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/

    Timestamps:
    00:00 Is AI like a Shoggoth?
    09:50 Scaling laws
    16:41 Are humans more general than AIs?
    21:54 Are AI models explainable?
    27:49 Using AI to explain AI
    32:36 Evidence for AI being uncontrollable
    40:29 AI verifiability
    46:08 Will AI be aligned by default?
    54:29 Creating human-like AI
    1:03:41 Robotics and safety
    1:09:01 Obstacles to AI in the economy
    1:18:00 AI innovation with current models
    1:23:55 AI accidents in the past and future

  • On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risks regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.

    Timestamps:
    00:00 Technological progress
    07:59 Regulatory capture and AI
    11:53 AI as a new form of life
    15:44 Can AI development be paused?
    20:12 Biden's executive order on AI
    22:54 How would a GPU kill switch work?
    27:00 Regulating models or applications?
    32:13 AGI in 2-8 years
    42:00 China and US collaboration on AI

  • Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/

    Timestamps:
    00:00 A new nuclear arms race
    08:07 How much do world leaders matter?
    18:04 How much does ideology matter?
    22:14 Do nuclear weapons cause stable peace?
    31:29 North Korea
    34:01 Have we overestimated nuclear risk?
    43:24 Time pressure in nuclear decisions
    52:00 Why so many nuclear warheads?
    1:02:17 Has containment been successful?
    1:11:34 Coordination mechanisms
    1:16:31 Technological innovations
    1:25:57 Public perception of nuclear risk
    1:29:52 Easier access to nuclear weapons
    1:33:31 Reaching a stable, low-risk era

  • Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/

    Timestamps:
    00:00 Autonomy in weapon systems
    12:19 Balance of offense and defense
    20:05 Killer drone systems
    28:53 Is autonomy like nuclear weapons?
    37:20 Low-tech defenses against drones
    48:29 Autonomy and power balance
    1:00:24 Tricking autonomous systems
    1:07:53 Unpredictability of autonomous systems
    1:13:16 Will we trust autonomous systems too much?
    1:27:28 Legal terminology
    1:32:12 Political possibilities

  • Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.

    Timestamps:
    00:00 Uncontrollable superintelligence
    16:41 AI goals and the "virus analogy"
    28:36 Speed of AI cognition
    39:25 Narrow AI and autonomy
    52:23 Reliability of current and future AI
    1:02:33 Planning for multiple AI scenarios
    1:18:57 Will AIs seek self-preservation?
    1:27:57 Is there a unified solution to AI alignment?
    1:30:26 Concrete AI safety proposals

  • Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.

    Timestamps:
    00:00 AI Safety Summit in the UK
    12:18 Are officials up to date on AI?
    23:22 Objections to AI policy
    31:27 The EU AI Act
    43:37 The right level of regulation
    57:11 Risks and regulatory tools
    1:04:44 Open-source AI
    1:14:56 Subsidising AI safety research
    1:26:29 Global institutions for safe AI
    1:34:34 Autonomy in weapon systems

  • Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai

    Timestamps:
    00:00 X.ai - Elon Musk's new AI venture
    02:41 How AI risk thinking has evolved
    12:58 AI bioengineering
    19:16 AI agents
    24:55 Preventing autocracy
    34:11 AI race - corporations and militaries
    48:04 Bulletproofing AI organizations
    1:07:51 Open-source models
    1:15:35 Dan's textbook on AI safety
    1:22:58 Rogue AI
    1:28:09 LLMs and value specification
    1:33:14 AI goal drift
    1:41:10 Power-seeking AI
    1:52:07 AI deception
    1:57:53 Representation engineering

  • Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca

    Timestamps:
    00:00 Is AGI close?
    06:56 Compute versus data
    09:59 Information theory
    20:36 Universality of learning
    24:53 Hard steps in evolution
    30:30 Governments and advanced AI
    40:33 How will AI transform the economy?
    55:26 How will AI change transaction costs?
    1:00:31 Isolated thinking about AI
    1:09:43 AI and Leviathan
    1:13:01 Informational resolution
    1:18:36 Open-source AI
    1:21:24 AI will decrease state power
    1:33:17 Timeline of a techno-feudalist future
    1:40:28 Alignment difficulty and AI scale
    1:45:19 Solving robotics
    1:54:40 A constrained Leviathan
    1:57:41 An Apollo Project for AI safety
    2:04:29 Secure "gain-of-function" AI research
    2:06:43 Is the market expecting AGI soon?

  • Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the eighth and final episode of Imagine A World, we explore the fictional worldbuild titled 'Computing Counsel', one of the third-place winners of FLI’s worldbuilding contest.

    Guillaume Riesen talks to Mark L, one of the three members of the team behind the entry. Mark is a machine learning expert with a chemical engineering degree, as well as an amateur writer. His teammates are Patrick B, a mechanical engineer and graphic designer, and Natalia C, a biological anthropologist and amateur programmer.

    This world paints a vivid, nuanced picture of how emerging technologies shape society. Advertisers compete with ad-filtering technologies in an escalating arms race that eventually puts an end to the internet as we know it. There is AI-generated art so personalized that it becomes addictive to some consumers, while others boycott media technologies altogether. And corporations begin to throw each other under the bus in an effort to redistribute the wealth of their competitors to their own customers.

    While these conflicts are messy, they generally end up empowering and enriching the lives of the people in this world. New kinds of AI systems give them better data, better advice, and eventually the opportunity for genuine relationships with the beings these tools have become. The impact of any technology on society is complex and multifaceted. This world does a great job of capturing that.

    While social networking technologies become ever more powerful, the networks of people they connect don't necessarily just get wider and shallower. Instead, they tend to be smaller and more intimately interconnected. The world's inhabitants also have nuanced attitudes towards AI tools, embracing or avoiding their applications based on their religious or philosophical beliefs.

    Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/computing-counsel

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].

    You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

  • Let’s imagine a future where AGI is developed but kept from having any practical impact on the world, while narrow AI remakes the world completely. Most people don’t know or care about the difference and have no idea how they could distinguish between a human and an artificial stranger. Inequality sticks around, and AI fractures society into separate media bubbles with irreconcilable perspectives. But it's not all bad. AI markedly improves the general quality of life, enhancing medicine and therapy, and those bubbles help to sustain their inhabitants. Can you get excited about a world with these tradeoffs?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the seventh episode of Imagine A World, we explore a fictional worldbuild titled 'Hall of Mirrors', which was a third-place winner of FLI's worldbuilding contest.

    Michael Vassar joins Guillaume Riesen to discuss his imagined future, which he created with the help of Matija Franklin and Bryce Hidysmith. Vassar was formerly the president of the Singularity Institute and co-founded Metamed; more recently he has worked on communication across political divisions. Franklin is a PhD student at UCL working on AI ethics and alignment. Finally, Hidysmith began in fashion design and passed through fortune-telling before winding up in finance and policy research, at places like Numerai, the Median Group, Bismarck Analysis, and Eco.com.

    Hall of Mirrors is a deeply unstable world where nothing is as it seems. The structures of power that we know today have eroded away, survived only by shells of expectation and appearance. People are isolated by perceptual bubbles and struggle to agree on what is real.

    This team put a lot of effort into creating a plausible, empirically grounded world, but their work is also notable for its irreverence and dark humor. In some ways, this world is kind of a caricature of the present. We see deeper isolation and polarization caused by media, and a proliferation of powerful but ultimately limited AI tools that further erode our sense of objective reality. A deep instability threatens. And yet, on a human level, things seem relatively calm. It turns out that the stories we tell ourselves about the world have a lot of inertia, and so do the ways we live our lives.

    Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/hall-of-mirrors

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].

    You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

  • Steve Omohundro joins the podcast to discuss Provably Safe Systems, a paper he co-authored with FLI President Max Tegmark. You can read the paper here: https://arxiv.org/pdf/2309.01933.pdf

    Timestamps:
    00:00 Provably safe AI systems
    12:17 Alignment and evaluations
    21:08 Proofs about language model behavior
    27:11 Can we formalize safety?
    30:29 Provable contracts
    43:13 Digital replicas of actual systems
    46:32 Proof-carrying code
    56:25 Can language models think logically?
    1:00:44 Can AI do proofs for us?
    1:09:23 Hard to prove, easy to verify
    1:14:31 Digital neuroscience
    1:20:01 Risks of totalitarianism
    1:22:29 Can we guarantee safety?
    1:25:04 Real-world provable safety
    1:29:29 Tamper-proof hardware
    1:35:35 Mortal and throttled AI
    1:39:23 Least-privilege guarantee
    1:41:53 Basic AI drives
    1:47:47 AI agency and world models
    1:52:08 Self-improving AI
    1:58:21 Is AI overhyped now?

  • What if AI allowed us to communicate with animals? Could interspecies communication lead to new levels of empathy? How might communicating with animals lead humans to reimagine our place in the natural world?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the sixth episode of Imagine A World, we explore the fictional worldbuild titled 'AI for the People', a third-place winner of the worldbuilding contest.

    Our host Guillaume Riesen welcomes Chi Rainer Bornfree, part of this three-person worldbuilding team alongside her husband Micah White, and their collaborator, J.R. Harris. Chi has a PhD in Rhetoric from UC Berkeley and has taught at Bard, Princeton, and NY State Correctional facilities, in the meantime writing fiction, essays, letters, and more. Micah, best-known as the co-creator of the 'Occupy Wall Street' movement and the author of 'The End of Protest', now focuses primarily on the social potential of cryptocurrencies, while Harris is a freelance illustrator and comic artist.

    The name 'AI for the People' does a great job of capturing this team's activist perspective and their commitment to empowerment. They imagine social and political shifts that bring power back into the hands of individuals, whether that means serving as lawmakers on randomly selected committees, or gaining income by choosing to sell their personal data online. But this world isn't just about human people. Its biggest bombshell is an AI breakthrough that allows humans to communicate with other animals. What follows is an existential reconsideration of humanity's place in the universe. This team has created an intimate, complex portrait of a world shared by multiple parties: AIs, humans, other animals, and the environment itself. As these entities find their way forward together, their goals become enmeshed and their boundaries increasingly blurred.

    Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/ai-for-the-people

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].

    You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and resources referenced in the episode:

    https://en.wikipedia.org/wiki/Life_3.0

    https://en.wikipedia.org/wiki/1_the_Road

    https://ignota.org/products/pharmako-ai

    https://en.wikipedia.org/wiki/The_Ministry_for_the_Future

    https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/

    https://en.wikipedia.org/wiki/Occupy_Wall_Street

    https://en.wikipedia.org/wiki/Sortition

    https://en.wikipedia.org/wiki/Iroquois

    https://en.wikipedia.org/wiki/The_Ship_Who_Sang

    https://en.wikipedia.org/wiki/The_Sparrow_(novel)

    https://en.wikipedia.org/wiki/After_Yang

  • If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the fifth episode of Imagine A World, we explore the fictional worldbuild titled 'To Light’. Our host Guillaume Riesen speaks to Mako Yass, the first place winner of the FLI Worldbuilding Contest we ran last year. Mako lives in Auckland, New Zealand. He describes himself as a 'stray philosopher-designer', and has a background in computer programming and analytic philosophy.

    Mako’s world is particularly imaginative, with richly interwoven narrative threads and high-concept sci fi inventions. By 2045, his world has been deeply transformed. There’s an AI-designed miracle pill that greatly extends lifespan and eradicates most human diseases. Sachets of this life-saving medicine are distributed freely by dove-shaped drones. There’s a kind of mind uploading which lets anyone become whatever they wish, live indefinitely and gain augmented intelligence. The distribution of wealth is almost perfectly even, with every human assigned a share of all resources. Some people move into space, building massive structures around the sun where they practice esoteric arts in pursuit of a more perfect peace.

    While this peaceful, flourishing end state is deeply optimistic, Mako is also very conscious of the challenges facing humanity along the way. He sees a strong need for global collaboration and investment to avoid catastrophe as humanity develops more and more powerful technologies. He’s particularly concerned with the risks presented by artificial intelligence systems as they surpass us. An AI system that is more capable than a human at all tasks - not just playing chess or driving a car - is what we’d call an Artificial General Intelligence - abbreviated ‘AGI’.

    Mako proposes that we could build safe AIs through radical transparency. He imagines tests that could reveal the true intentions and expectations of AI systems before they are released into the world.

    Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.


    Explore this worldbuild: https://worldbuild.ai/to-light

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].

    You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and concepts referenced in the episode:

    https://en.wikipedia.org/wiki/Terra_Ignota

    https://en.wikipedia.org/wiki/The_Transparent_Society

    https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer

    https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain

    https://en.wikipedia.org/wiki/The_Matrix

    https://aboutmako.makopool.com/

  • Johannes Ackva joins the podcast to discuss the main drivers of climate change and our best technological and governmental options for managing it. You can read more about Johannes' work at http://founderspledge.com/climate

    Timestamps:
    00:00 Johannes's journey as an environmentalist
    13:21 The drivers of climate change
    23:00 Oil, coal, and gas
    38:05 Solar, wind, and hydro
    49:34 Nuclear energy
    57:03 Geothermal energy
    1:00:41 Most promising technologies
    1:05:40 Government subsidies
    1:13:28 Carbon taxation
    1:17:10 Planting trees
    1:21:53 Influencing government policy
    1:26:39 Different climate scenarios
    1:34:49 Economic growth and emissions
    1:37:23 Social stability

    References:
    Emissions by sector: https://ourworldindata.org/emissions-by-sector
    Energy density of different energy sources: https://www.nature.com/articles/s41598-022-25341-9
    Emissions forecasts: https://www.lse.ac.uk/granthaminstitute/publication/the-unconditional-probability-distribution-of-future-emissions-and-temperatures/ and https://www.science.org/doi/10.1126/science.adg6248
    Risk management: https://www.youtube.com/watch?v=6JJvIR1W-xI
    Carbon pricing: https://www.cell.com/joule/pdf/S2542-4351(18)30567-1.pdf
    Why not simply plant trees?: https://climate.mit.edu/ask-mit/how-many-new-trees-would-we-need-offset-our-carbon-emissions
    Deforestation: https://www.science.org/doi/10.1126/science.ade3535
    Decoupling of economic growth and emissions: https://www.globalcarbonproject.org/carbonbudget/22/highlights.htm
    Premature deaths from air pollution: https://www.unep.org/interactives/air-pollution-note/

  • How do low-income countries affected by climate change imagine their futures? How can they overcome the twin challenges of poverty and climate change? Will all nations eventually choose or be forced to go digital?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the fourth episode of Imagine A World, we explore the fictional worldbuild titled 'Digital Nations'.

    Conrad Whitaker and Tracey Kamande join Guillaume Riesen on 'Imagine a World' to talk about their worldbuild, 'Digital Nations', which they created with their teammate, Dexter Findley. All three worldbuilders were based in Kenya while crafting their entry, though Dexter has just recently moved to the UK. Conrad is a Nairobi-based startup advisor and entrepreneur, Dexter works in humanitarian aid, and Tracey is the Co-founder of FunKe Science, a platform that promotes interactive learning of science among school children.

    As the name suggests, this world is a deep dive into virtual communities. It explores how people might find belonging and representation on the global stage through digital nations that aren't tied to any physical location. This world also features a fascinating and imaginative kind of artificial intelligence that they call 'digital persons'. These are inspired by biological brains and have a rich internal psychology. Rather than being trained on data, they're considered to be raised in digital nurseries. They have a nuanced but mostly loving relationship with humanity, with some even going on to found their own digital nations for us to join.

    In an incredible turn of events, last year the South Pacific state of Tuvalu was the first to “go virtual” in response to sea levels threatening the island nation's physical territory. This happened in real life just months after it was written into this imagined world in our worldbuilding contest, showing how rapidly ideas that seem ‘out there’ can become reality. Will all nations eventually go digital? And might AGIs be assimilated, 'brought up' rather than merely trained, as 'digital people', citizens to live communally alongside humans in these futuristic states?

    Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.

    Explore this worldbuild: https://worldbuild.ai/digital-nations

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].

    You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and concepts referenced in the episode:

    https://www.tuvalu.tv/

    https://en.wikipedia.org/wiki/Trolley_problem

    https://en.wikipedia.org/wiki/Climate_change_in_Kenya

    https://en.wikipedia.org/wiki/John_von_Neumann

    https://en.wikipedia.org/wiki/Brave_New_World

    https://thenetworkstate.com/the-network-state

    https://en.wikipedia.org/wiki/Culture_series

  • What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this?

    Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

    In the third episode of Imagine A World, we explore the fictional worldbuild titled 'Core Central'.

    How does a team of seven academics agree on one cohesive imagined world? That's a question the team behind 'Core Central', a second-place prizewinner in the FLI Worldbuilding Contest, had to figure out as they went along. In the end, the entry's realistic sense of multipolarity and messiness reflects its organic formulation well. The team settled on one core, centralised AGI system as the governance model for their entire world, which eventually moves their world 'beyond' nation states. Could this really work?

    In this episode, Guillaume Riesen speaks to John Burden and Henry Shevlin, representing the seven-member team that created 'Core Central'. Three of the team (Henry, John, and Beba Cibralic) are researchers at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and five (Jessica Bland, Lara Mani, Clarissa Rios Rojas, and Catherine Richards, alongside John) work with the Centre for the Study of Existential Risk, also at Cambridge.

    Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.

    Explore this imagined world: https://worldbuild.ai/core-central

    The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].

    You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

    Media and Concepts referenced in the episode:

    https://en.wikipedia.org/wiki/Culture_series

    https://en.wikipedia.org/wiki/The_Expanse_(TV_series)

    https://www.vox.com/authors/kelsey-piper

    https://en.wikipedia.org/wiki/Gratitude_journal

    https://en.wikipedia.org/wiki/The_Diamond_Age

    https://www.scientificamerican.com/article/the-mind-of-an-octopus/

    https://en.wikipedia.org/wiki/Global_workspace_theory

    https://en.wikipedia.org/wiki/Alien_hand_syndrome

    https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)

  • Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.

    Timestamps:
    00:00 The current pace of AI
    03:58 Near-term risks from AI
    09:34 Historical analogies to AI
    13:58 AI benchmarks vs. economic impact
    18:30 AI takeoff speed and bottlenecks
    31:09 Tom's model of AI takeoff speed
    36:21 How AI could automate AI research
    41:49 Bottlenecks to AI automating AI hardware
    46:15 How much of AI research is automated now?
    48:26 From 20% to 100% automation
    53:24 AI takeoff in 3 years
    1:09:15 Economic impacts of fast AI takeoff
    1:12:51 Bottlenecks slowing AI takeoff
    1:20:06 Does the market predict a fast AI takeoff?
    1:25:39 "Hard to avoid AGI by 2060"
    1:27:22 Risks from AI over the next 20 years
    1:31:43 AI progress without more compute
    1:44:01 What if AI models fail safety evaluations?
    1:45:33 Cybersecurity at AI companies
    1:47:33 Will AI turn out well for humanity?
    1:50:15 AI and board games