Episodes

  • The five short chapters in this episode are the conclusion of the main body of Better Without AI. Next, we'll begin the book's appendix, Gradient Dissent.
    Cozy Futurism - If we knew we’d never get flying cars, most people wouldn’t care. What do we care about?
    https://betterwithout.ai/cozy-futurism
    Meaningful Futurism - Likeable futures are meaningful, not just materially comfortable. Bringing one about requires imagining it. I invite you to do that!
    https://betterwithout.ai/meaningful-future
    The Inescapable: Politics - No realistic approach to future AI can avoid questions of power and social organization.
    https://betterwithout.ai/inescapable-AI-politics
    Responsibility
    https://betterwithout.ai/responsibility
    This is about you
    https://betterwithout.ai/about-you
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • A Future We Would Like - The most important questions are not about technology but about us. What sorts of future would we like? What role could AI play in getting us there, and also in that world? What is your own role in helping that happen?

    https://betterwithout.ai/a-future-we-would-like

    How AI Destroyed The Future - We are doing a terrible job of thinking about the most important question because unimaginably powerful evil artificial intelligences are controlling our brains.

    https://betterwithout.ai/AI-destroyed-the-future

    A One-Bit Future - Superintelligence scenarios reduce the future to infinitely good or infinitely bad. Both are possible, but we cannot reason about or act toward them. Messy complicated good-and-bad futures are probably more likely, and in any case are more feasible to influence.

    https://betterwithout.ai/one-bit-future

    This episode mentions David Chapman's essay "Vaster Than Ideology" for getting AI out of your head.

    Text link: https://meaningness.com/vaster-than-ideology

    Episode link: https://fluidity.libsyn.com/vaster-than-ideology

    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Stop obstructing scientific progress! We already know how to dramatically accelerate science: by getting out of the way.
    https://betterwithout.ai/stop-obstructing-science
    How to science better. What do exceptional scientists do differently from mediocre ones? Can we train currently-mediocre ones to do better?
    https://betterwithout.ai/better-science-without-AI
    Scenius: upgrading science FTW. Empirically, breakthroughs that enable great progress depend on particular, uncommon social constellations and accompanying social practices. Let’s encourage these!
    https://betterwithout.ai/human-scenius-vs-artificial-genius
    Matt Clancy reviews the evidence for scientific progress slowing, with citations and graphs.
    https://twitter.com/mattsclancy/status/1612440718177603584
    "Scenius, or Communal Genius", Kevin Kelly, The Technium.
    https://kk.org/thetechnium/scenius-or-comm/

  • Progress requires experimentation. Suggested ways AI could speed progress by automating experiments appear mistaken.
    https://betterwithout.ai/limits-to-induction
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Forgive the sound quality on this episode; I recorded it live in front of an audience on a platform floating in a lake during the 2024 solar eclipse.

    This is a standalone essay by David Chapman on metarationality.com. How scientific research is like cunnilingus: a phenomenology of epistemology.

    https://metarationality.com/going-down-on-the-phenomenon

    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
  • What Is The Role Of Intelligence In Science?

    Actually, what are “science” and “intelligence”? Precise, explicit definitions aren’t necessary, but discussions of Transformative AI seem to depend implicitly on particular models of both. It matters if those models are wrong.

    https://betterwithout.ai/intelligence-in-science

    Katja Grace, “Counterarguments to the basic AI x-risk case”: https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/

    What Do Unusually Intelligent People Do?

    If we want to know what a superintelligent AI might do, and how, it could help to investigate what the most intelligent humans do, and how. If we want to know how to dramatically accelerate science and technology development, it could help to investigate what the best scientists and technologists do, and how.

    https://betterwithout.ai/what-intelligent-people-do
    Patrick Collison and Tyler Cowen, “We Need a New Science of Progress,” The Atlantic, July 30, 2019: https://www.theatlantic.com/science/archive/2019/07/we-need-new-science-progress/594946/
    Gwern Branwen, “Catnip immunity and alternatives”: https://www.gwern.net/Catnip#optimal-catnip-alternative-selection-solving-the-mdp
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
  • Radical Progress Without Scary AI: Technological progress, in medicine for example, provides an altruistic motivation for developing more powerful AIs. I suggest that AI may be unnecessary, or even irrelevant, for that. We may be able to get the benefits without the risks.

    https://betterwithout.ai/radical-progress-without-AI

    What kind of AI might accelerate technological progress?: “Narrow” AI systems, specialized for particular technical tasks, are probably feasible, useful, and safe. Let’s build those instead of Scary superintelligence.

    https://betterwithout.ai/what-AI-for-progress

  • Recognize that AI is probably net harmful: Actually-existing and near-future AIs are net harmful—never mind their longer-term risks. We should shut them down, not pussyfoot around hoping they can somehow be made safe.
    https://betterwithout.ai/AI-is-harmful
    Create a negative public image for AI: Most funding for AI research comes from the advertising industry. Their primary motivation may be to create a positive corporate image, to offset their obvious harms. Creating bad publicity for AI would eliminate their incentive to fund it.
    https://betterwithout.ai/AI-is-public-relations
    Seth Lazar’s "Legitimacy, Authority, and the Political Value of Explanations": https://arxiv.org/ftp/arxiv/papers/2208/2208.08628.pdf
    Kate Crawford's "Atlas Of AI": https://www.amazon.com/dp/B08WKQ1MTM/?tag=meaningness-20
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • “Apocalypse now” identified the corrosive influence of new viral ideologies, created unintentionally by recommender systems, as a major AI risk. These may cause social collapse if not tackled head-on. You can resist.
    https://betterwithout.ai/spurn-artificial-ideology
    Announcement tweet for the Opening Awareness, Opening Rationality discussion group starting on February 1: https://twitter.com/openingBklyn/status/1751314312415567956
    Document with more details: https://docs.google.com/document/d/1YPaos3zTgdraF9VouWkHUouVHVsrbYBluUO3Kh--Ezs/edit
    Vaster Than Ideology (text): https://meaningness.com/vaster-than-ideology
    Vaster Than Ideology (Fluidity Audiobooks episode): https://fluidity.libsyn.com/vaster-than-ideology
    Coinbase Is A Mission Focused Company: https://www.coinbase.com/blog/coinbase-is-a-mission-focused-company
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Current AI practices produce technologies that are expensive, difficult to apply in real-world situations, and inherently unsafe. Neglected scientific and engineering investigations can bring better understanding of specific risks of current AI technology, and can lead to safer technologies. https://betterwithout.ai/fight-unsafe-AI
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Music is by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • The technologies underlying current AI systems are inherently, unfixably unreliable. They should be deprecated, avoided, regulated, and replaced.
    https://betterwithout.ai/mistrust-machine-learning
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Music is by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Gaining unauthorized access to computer systems is a key source of power in many AI doom scenarios. That is easy now, because there are scant incentives for serious cybersecurity, so nearly all systems are radically insecure. Technical and political initiatives must mitigate this problem.
    https://betterwithout.ai/cybersecurity-vs-AI
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Practical Actions You Can Take Against AI Risks: We can and should protect against current and likely future harmful AI effects. This chapter recommends practical, near-term risk reduction measures. I suggest actions for the general public, computer professionals, AI ethics and safety organizations, funders, and governments. https://betterwithout.ai/pragmatic-AI-safety
    End Digital Surveillance: Databases of personal information collected via internet surveillance are a main resource for harmful AI. Eliminating them will alleviate multiple major risks. Technical and political approaches are both feasible. https://betterwithout.ai/end-digital-surveillance
    José Luis Ricón’s “Set Sail For Fail? On AI risk”: https://nintil.com/ai-safety
    FTC Sues Kochava for Selling Data that Tracks People at Reproductive Health Clinics, Places of Worship, and Other Sensitive Locations: https://www.ftc.gov/news-events/news/press-releases/2022/08/ftc-sues-kochava-selling-data-tracks-people-reproductive-health-clinics-places-worship-other
    Consumer Reports‘ “Security Planner”: https://securityplanner.consumerreports.org/
    Wirecutter‘s “Every Step to Simple Online Security”: https://www.nytimes.com/wirecutter/guides/simple-online-security/
    Narwhal Academy’s Zebra Crossing: https://github.com/narwhalacademy/zebra-crossing
    Privacy Guides: https://www.privacyguides.org/
    Installing an ad blocker is explicitly recommended by the FBI as a way to protect against cybercriminals: https://www.ic3.gov/Media/Y2022/PSA221221
    The Electronic Frontier Foundation's page of actions you can take: https://act.eff.org/
    The European Digital Rights organization (EDRi) page of simple ways you can influence EU privacy legislation: https://edri.org/take-action/our-campaigns/
    You can support the podcast and get episodes a week early, by supporting the Patreon: https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • This concludes the "Apocalypse Now" section of Better Without AI. AI systems may cause near-term disasters through their proven ability to shatter societies and cultures. These might potentially cause human extinction, but are more likely to scale up to the level of the twentieth-century dictatorships, genocides, and world wars. It would be wise to anticipate possible harms in as much detail as possible.
    https://betterwithout.ai/incoherent-AI-apocalypses
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Who is in control of AI? - It may already be too late to shut down the existing AI systems that could destroy civilization.
    https://betterwithout.ai/AI-is-out-of-control
    What an AI apocalypse may look like - Scenarios in which artificial intelligence systems degrade critical institutions to the point of collapse seem to me not just likely, but well under way.
    https://betterwithout.ai/AI-safety-failure

    This episode mentions the short story "Sort By Controversial" by Scott Alexander. Here is the audio version narrated by me:

    https://unsong.libsyn.com/sort-by-controversial

    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.
  • "In this audiobook... A LARGE BOLD FONT IN ALL CAPITAL LETTERS SOUNDS LIKE THIS."
    Apocalypse now - Current AI systems are already harmful. They pose apocalyptic risks even without further technology development. This chapter explains why; explores a possible path for near-term human extinction via AI; and sketches several disaster scenarios.
    https://betterwithout.ai/apocalypse-now
    At war with the machines - The AI apocalypse is now.
    https://betterwithout.ai/AI-already-at-war
    This interview with Stuart Russell is a good starting point for the literature on recommender alignment, analogous to AI alignment: https://www.youtube.com/watch?v=vzDm9IMyTp8
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Superintelligence should scare us only insofar as it grants superpowers. Protecting against specific harms of specific plausible powers may be our best strategy for preventing catastrophes.
    https://betterwithout.ai/fear-AI-power
    For much of the AI safety community, the central question has been “when will it happen?!” That is futile: we don’t have a coherent description of what “it” is, much less how “it” would come about. Fortunately, a prediction wouldn’t be useful anyway. An AI apocalypse is possible, so we should try to avert it.
    https://betterwithout.ai/scary-AI-when
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Many people call the future threat “artificial general intelligence,” but all three words there are misleading when trying to understand risks.
    https://betterwithout.ai/artificial-general-intelligence
    AI may radically accelerate technology development. That might be extremely good or extremely bad. There are currently no good explanations for how either would happen, so it’s hard to predict which, or when, or whether. The understanding necessary to guide the future to a good outcome may depend more on uncovering causes of technological progress than on reasoning about AI.
    https://betterwithout.ai/transformative-AI
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • Thanks for your patience while I ran Fluidity Forum. We now resume "Better Without AI" by David Chapman. Speculations about autonomous AI assume simplistic theories of motivation. They also mistakenly confuse those with ethical theories. Building AI systems on these ideas would produce monsters.
    https://betterwithout.ai/AI-motivation
    Coherent Extrapolated Volition: https://betterwithout.ai/AI-motivation#fn_Turchin:~:text=%E2%80%9C-,Coherent%20Extrapolated%20Volition,-%E2%80%9D%20at%20LessWrong%2C%20undated
    A.I. Alignment Problem: "Human Values" Don't Actually Exist: https://www.lesswrong.com/posts/ngqvnWGsvTEiTASih/ai-alignment-problem-human-values-don-t-actually-exist
    “Can we survive technology?” by John von Neumann: http://geosci.uchicago.edu/~kite/doc/von_Neumann_1955.pdf
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.

  • It’s a mistake to think that human-like agency is the only dangerous kind. That risks overlooking AIs causing agent-like harms in inhuman ways.
    https://betterwithout.ai/diverse-agency#fn_meme_critics
    You can support the podcast and get episodes a week early, by supporting the Patreon:
    https://www.patreon.com/m/fluidityaudiobooks
    If you like the show, consider buying me a coffee: https://www.buymeacoffee.com/mattarnold
    Original music by Kevin MacLeod.
    This podcast is under a Creative Commons Attribution Non-Commercial International 4.0 License.