Episodes
In this episode, Dean and Tim discuss Dean’s trip to Paris for the AI Action Summit, including Vice President Vance’s speech on AI. They talk through the European outlook on AI regulation, European resentment toward America, and the stark shift in policymaker attitudes toward AI safety. Then they turn to OpenAI’s new Deep Research agent, chatting about their experience with the product and reflecting on what it means for the future of policy research.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Kashmir Hill is a reporter at the New York Times who focuses on the social impacts of new technology. In this episode, she describes how users are customizing chatbots like ChatGPT to fulfill emotional and even erotic needs, often bypassing built-in safeguards. These fantasy conversations are usually harmless, but there are potential pitfalls—especially where children are involved. Kashmir also discusses how policymakers should deal with the emergence of uncannily accurate facial recognition technology.
"She Is in Love With ChatGPT" by Kashmir Hill in the New York Times, 2025.
Your Face Belongs to Us, a book by Kashmir Hill published in 2023.
"The Secretive Company That Might End Privacy as We Know It" by Kashmir Hill in the New York Times, 2020.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Tim and Dean chat with Sophia Tung, an entrepreneur, engineer, and now YouTuber, about her recent experience in a Chinese self-driving taxi from Apollo Go, a subsidiary of Baidu. Apollo Go is a bit like China’s Waymo, but Sophia found the experience of riding in an Apollo Go taxi to be far worse than riding in a Waymo. We talk about her experience in China as well as the broader implications: is China just a few years behind American AV companies, or is there a deeper problem?
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Dean and Tim discuss DeepSeek’s R1 release and what it means. We talk export controls, whether the model is a true technical breakthrough, and what “reasoning” models like R1 and o1 mean for the pace of AI progress going forward. This is our first episode with just Dean and Tim chatting, but we hope to do more such episodes in the future.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Nathan Labenz is the host of our favorite AI podcast, the Cognitive Revolution. A self-described “AI scout,” he uses the podcast to explore a wide range of AI advancements, from the latest language models to breakthroughs in medicine and robotics. In this episode, Labenz helps us understand the slowdown in AI scaling that some media outlets have reported. He says AI progress has been “a little slower than I had expected” over the last 18 months, especially when it comes to technology adoption, but he continues to expect rapid progress over the next few years.
Here are some of the key points Nathan Labenz made during the conversation:
* The alleged AI slowdown: There has been limited deployment of AI models in everyday life. But there have been significant advancements in model capabilities, such as expanded context windows, tool use, and multimodality. “I think the last 18 months have gone a little slower than I had expected. Probably more so on the adoption side than the fundamental technology.”
* Scaling laws: Despite rumors and development issues, the leaders in AI seem to indicate that the scaling curve is still steep, with further progress expected. “They’re basically all saying that we’re still in the steep part of the S curve, you know, we should not expect things to slow down.”
* Discovering new scientific concepts: AI has identified new protein motifs, suggesting potential for superhuman insights in some domains. “[Researchers] report having discovered a new motif in proteins: a new recurring structure that seems to have been understood by the protein model before it was understood by humans.”
* Inference-time compute: There is significant potential in the use of more compute time for inference, allowing models to solve complex problems by dedicating resources to deeper reasoning. "Anything where there has been a quick objective scoring function available, reinforcement learning has basically been able to drive that to superhuman levels."
* Memory and goal retention: Current transformer-based models lack sophisticated memory and goal retention, but we’re seeing progress through new architectural and operational innovations like runtime fine-tuning. “None of this seems like it really should work. And the fact that it does, I think should kind of keep us fairly humble about how far it could go.”
* AI deception: We’re starting to see AIs prioritizing programmed goals over user instructions, highlighting the risks of scheming and deception in advanced models. “They set up a tension between the goal that the AI has been given and the goal that the user at runtime has. In some cases—not all the time, but a significant enough percentage of the time that it concerns me—when there is this divergence, the AI will outright lie to the user at runtime to pursue the goal that it has.”
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Lennart Heim is an information scientist and researcher in AI governance at the RAND Corporation and a leading scholar on AI export controls. We asked him about the Biden administration’s “diffusion framework,” which aims to regulate the global diffusion of advanced AI chips and models. We get into all the specifics as well as the broader geopolitical implications of the framework—and whether or not the Trump administration will maintain this policy.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Sam Hammond is a senior economist at the Foundation for American Innovation, a right-leaning tech policy think tank based in Washington DC. Hammond is a Trump supporter who expects AI to improve rapidly in the next few years, and he believes that will have profound implications for public policy. In this interview, Hammond explains how he’d like to see the Trump administration tackle the new policy challenges he expects AI to create over the next four years.
Here are some of the key points Hammond made during the conversation:
* Rapid progress in verifiable domains: In areas with clear verifiers, like math, chemistry, or coding, AI will make rapid progress, and those domains will be essentially solved in the short term. "For any kind of subdomain that you can construct a verifier for, there'll be very rapid progress."
* Slower progress on open-ended problems: Progress will be slower in open-ended areas where verification is harder, and reinforcement learning will need to be applied to improve autonomous abilities. "I think we're just scratching the surface of applying reinforcement learning techniques into these models."
* The democratization of AI: As AI capabilities become widely accessible, institutions will face unprecedented challenges. With open-source tools and AI agents in the hands of individuals, the volume and complexity of economic and social activity will grow exponentially. "When capabilities get demonstrated, we should start to brace for impact for those capabilities to be widely distributed."
* The risk of societal overload: If institutions fail to adapt, AI could overwhelm core functions such as tax collection, regulatory enforcement, and legal systems. The resulting systemic failure could undermine government effectiveness and societal stability. "Core functions of government could simply become overwhelmed by the pace of change."
* The need for deregulation: Deregulating and streamlining government processes are necessary to adapt institutions to the rapid changes brought by AI. Traditional regulatory frameworks are incompatible with the pace and scale of AI’s impact. "We need a kind of regulatory jubilee. Removing a regulation takes as much time as it does to add a regulation."
* Securing models and labs: There needs to be a deeper focus on securing AI models and increasing security in AI labs, especially as capabilities become tempting targets for other nations. "As we get closer to these kind of capabilities, they're going to be very tempting for other nation state actors to try to steal. And right now the labs are more or less wide open."
* The need for export controls and better security: To maintain a technological edge, tighter export controls and advanced monitoring systems are required to prevent adversaries from acquiring sensitive technologies and resources. Investments in technology for secure supply chain management are critical. "Anything that can deny or delay the development of China’s ecosystem is imperative."
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Ajeya Cotra works at Open Philanthropy, a leading funder of efforts to combat existential risks from AI. She has led the foundation’s grantmaking on technical research to understand and reduce catastrophic risks from advanced AI. She is co-author of Planned Obsolescence, a newsletter about AI futurism and AI alignment.
Although a committed doomer herself, Cotra has worked hard to understand the perspectives of AI safety skeptics. In this episode, we asked her to guide us through the contentious debate over AI safety and—perhaps—explain why people with similar views on other issues frequently reach divergent views on this one. We spoke to Cotra on December 10.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Nathan Lambert is the author of the popular AI newsletter Interconnects. He is also a research scientist who leads post-training at the Allen Institute for Artificial Intelligence, a research organization funded by the estate of Paul Allen. This means that the organization can afford to train its own models—and it’s one of the only such organizations committed to doing so in an open manner. So Lambert is one of the few people with hands-on experience building cutting-edge LLMs who can talk freely about his work. In this December 17 conversation, Lambert walked us through the steps required to train a modern model and explained how the process is evolving. Note that this conversation was recorded before OpenAI announced its new o3 model later in the month.
Links mentioned during the interview:
The Allen Institute's Tülu 3 blog post
The Allen Institute's OLMo 2 model
The original paper that introduced RLHF
Nathan Lambert on OpenAI's reinforcement fine-tuning API
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Jon Askonas, an Assistant Professor of Politics at the Catholic University of America, is well connected to conservatives and Republicans in Washington DC. In this December 16 conversation, he talked to Tim and Dean about Silicon Valley’s evolving relationship to the Republican party, who will be involved in AI policy in the second Trump administration, and what AI policy issues are likely to be prioritized—he predicts it won’t be existential risk.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org