-
(see Wikipedia: The Purpose Of A System Is What It Does)
Consider the following claims:
The purpose of a cancer hospital is to cure two-thirds of cancer patients. The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia. The purpose of the British government is to propose a controversial new sentencing policy, stand firm in the face of protests for a while, then cave in after slightly larger protests and agree not to pass the policy after all. The purpose of the New York bus system is to emit four billion tons of carbon dioxide.
These are obviously false.
https://www.astralcodexten.com/p/come-on-obviously-the-purpose-of
-
Here’s a list of things I updated on after working on the scenario.
Some of these are discussed in more detail in the supplements, including the compute forecast, timelines forecast, takeoff forecast, AI goals forecast, and security forecast. I’m highlighting these because it seems like a lot of people missed their existence, and they’re what transforms the scenario from cool story to research-backed debate contribution.
These are my opinions only, and not necessarily endorsed by the rest of the team.
https://www.astralcodexten.com/p/my-takeaways-from-ai-2027
-
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.
https://ai-2027.com/
(A condensed two-hour version with footnotes and text boxes removed is available at the above link.)
-
Or maybe 2028, it's complicated
In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years.
The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn’t expect what happened next.
He got it all right.
Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel’s document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel’s blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.
I wasn’t the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized.
Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel, including:
Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting Initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early-stage investment in Anthropic, now worth $60 billion.
Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.
…and me! Since October, I’ve been volunteering part-time, doing some writing and publicity work. I can’t take credit for the forecast itself - or even for the lion’s share of the writing and publicity - but it’s been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we’ll get as lucky as last time, but we still think it’s a valuable contribution to the discussion.
https://www.astralcodexten.com/p/introducing-ai-2027
https://ai-2027.com/
-
In Ballad of the White Horse, G.K. Chesterton describes the Virgin Mary:
Her face was like an open word
When brave men speak and choose,
The very colours of her coat
Were better than good news.
Why the colors of her coat?
The medievals took their dyes very seriously. This was before modern chemistry, so you had to try hard if you wanted good colors. Try hard they did; they famously used literal gold, hammered into ultrathin sheets, to make golden highlights.
Blue was another tough one. You could do mediocre, half-faded blues with azurite. But if you wanted perfect blue, the color of the heavens on a clear evening, you needed ultramarine.
Here is the process for getting ultramarine. First, go to Afghanistan. Keep in mind, you start in England or France or wherever. Afghanistan is four thousand miles away. Your path takes you through tall mountains, burning deserts, and several dozen Muslim countries that are still pissed about the whole Crusades thing. Still alive? After you arrive, climb 7,000 feet in the mountains of Kuran Wa Munjan until you reach the mines of Sar-i-Sang. There, in a freezing desert, the wretched of the earth work themselves to an early grave breaking apart the rocks of Badakhshan to produce a few hundred kilograms per year of blue stone - the only lapis lazuli production in the known world.
Buy the stone and retrace your path through the burning deserts and vengeful Muslims until you’re back in England or France or wherever. Still alive? That was the easy part. Now you need to go through a chemical extraction process that makes the Philosopher's Stone look like freshman chem lab. "The lengthy process of pulverization, sifting, and washing to produce ultramarine makes the natural pigment … roughly ten times more expensive than the stone it came from."
Finally you have ultramarine! How much? I can’t find good numbers, but Claude estimates that the ultramarine production of all of medieval Europe was around the order of 30 kg per year - not enough to paint a medium-sized wall. Ultramarine had to be saved for ultra-high-value applications.
In practice, the medievals converged on a single use case - painting the Virgin Mary’s coat.
https://www.astralcodexten.com/p/the-colors-of-her-coat
-
Asterisk invited me to participate in their “Weird” themed issue, so I wrote five thousand words on evil Atlantean cave dwarves.
As always, I thought of the perfect framing just after I’d sent it out. The perfect framing is - where did Scientology come from? How did a 1940s sci-fi writer found a religion? Part of the answer is that 1940s sci-fi fandom was a really fertile place, where all of these novel mythemes about aliens, psychics, and lost civilizations were hitting a naive population certain that there must be something beyond the world they knew. This made them easy prey not just for grifters like Hubbard, but also for random schizophrenics who could write about their hallucinations convincingly.
…but I didn’t think of that framing in time, so instead you get several sections of why it’s evil cave dwarves in particular, and why that theme seems to recur throughout all lands and ages:
https://www.astralcodexten.com/p/deros-and-the-ur-abduction-in-asterisk
https://asteriskmag.com/issues/09/deros-and-the-ur-abduction
-
People love trying to find holes in the drowning child thought experiment. This is natural: it’s obvious you should save the child in the scenario, but much less obvious that you should give lots of charity to poor people (as it seems to imply). So there must be some distinction between the two scenarios. But most people’s cursory and uninspired attempts to find these fail.
https://www.astralcodexten.com/p/more-drowning-children
-
Jake Eaton has a great article on misophonia in Asterisk.
Misophonia is a condition in which people can’t tolerate certain noises (classically chewing). Nobody loves chewing noises, but misophoniacs go above and beyond, sometimes ending relationships, shutting themselves indoors, or even deliberately trying to deafen themselves in an attempt to escape.
So it’s a sensory hypersensitivity, right? Maybe not. There’s increasing evidence - which I learned about from Jake, but which didn’t make it into the article - that misophonia is less about sound than it seems.
Misophoniacs who go deaf report that it doesn’t go away. Now they get triggered if they see someone chewing. It’s the same with other noises. Someone who gets triggered by the sound of forks scraping against a table will eventually get triggered by the sight of the scraping fork. Someone triggered by music will eventually get triggered by someone playing a music video on mute.
Maybe this isn’t surprising?
https://www.astralcodexten.com/p/misophonia-beyond-sensory-sensitivity
-
Last month, I put out a request for experts to help me understand the details of OpenAI’s for-profit buyout. The following comes from someone who has looked into the situation in depth but is not an insider. Mistakes are mine alone.
Why Was OpenAI A Nonprofit In The First Place?
In the early 2010s, the AI companies hadn’t yet discovered scaling laws, and so underestimated the amount of compute (and therefore money) it would take to build AI. DeepMind was the first victim; originally founded on high ideals of prioritizing safety and responsible stewardship of the Singularity, it hit a financial barrier and sold to Google.
This scared Elon Musk, who didn’t trust Google (or any corporate sponsor) with AGI. He teamed up with Sam Altman and others, and OpenAI was born. To avoid duplicating DeepMind’s failure, they founded it as a nonprofit with a mission to “build safe and beneficial artificial general intelligence for the benefit of humanity”.
But like DeepMind, OpenAI needed money. At first, they scraped by with personal donations from Musk and other idealists, but as the full impact of scaling laws became clearer, Altman wanted to form a for-profit arm and seek investment. Musk and Altman disagree on what happened next: Musk says he objected to the profit focus, Altman says Musk agreed but wanted to be in charge. In any case, Musk left, Altman took full control, and OpenAI founded a for-profit subsidiary.
This subsidiary was supposedly a “capped for-profit”, meaning that their investors were capped at 100x return - if someone invested $1 million, they could get a max of $100 million back, no matter how big OpenAI became - this ensured that the majority of gains from a Singularity would go to humanity rather than investors. But a capped for-profit isn’t a real kind of corporate structure; in real life OpenAI handles this through Profit Participation Units, a sort of weird stock/bond hybrid which does what OpenAI claims the capped for-profit model is doing.
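The cap arithmetic described above can be sketched in a few lines. This is purely illustrative - the function name, the 100x figure applied this simply, and the example numbers are stand-ins, not OpenAI's actual Profit Participation Unit terms:

```python
def capped_payout(invested, gross_multiple, cap=100):
    """Illustrative sketch of a 100x return cap: the investor keeps at
    most `cap` times their investment; gains beyond that are meant to
    flow to the nonprofit. Not OpenAI's actual contractual terms."""
    uncapped = invested * gross_multiple
    payout = min(uncapped, invested * cap)
    excess_to_nonprofit = uncapped - payout
    return payout, excess_to_nonprofit

# A $1 million stake that grows 500x pays the investor $100 million;
# the remaining $400 million of nominal gains falls past the cap.
payout, excess = capped_payout(1_000_000, 500)
assert payout == 100_000_000 and excess == 400_000_000
```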
https://www.astralcodexten.com/p/openai-nonprofit-buyout-much-more
-
Sorry, you can only get drugs when there's a drug shortage.
Three GLP-1 drugs are approved for weight loss in the United States:
Semaglutide (Ozempic®, Wegovy®, Rybelsus®)
Tirzepatide (Mounjaro®, Zepbound®)
Liraglutide (Victoza®, Saxenda®)
…but liraglutide is noticeably worse than the others, and most people prefer either semaglutide or tirzepatide. These cost about $1000/month and are rarely covered by insurance, putting them out of reach for most Americans.
…if you buy them from the pharma companies, like a chump. For the past three years, there’s been a shortage of these drugs. FDA regulations say that during a shortage, it’s semi-legal for compounding pharmacies to provide medications without getting the patent-holders’ permission. In practice, that means they get cheap peptides from China, do some minimal safety testing in house, and sell them online.
So for the past three years, telehealth startups working with compounding pharmacies have sold these drugs for about $200/month. Over two million Americans have made use of this loophole to get weight loss drugs for cheap. But there was always a looming question - what happens when the shortage ends? Many people have to stay on GLP-1 drugs permanently, or else they risk regaining their lost weight. But many can’t afford $1000/month. What happens to them?
Now we’ll find out. At the end of last year, the FDA declared the shortage over. The compounding pharmacies appealed the decision, but last month the FDA confirmed its decision was final. As of March 19 (for tirzepatide) and April 22 (for semaglutide), compounding pharmacies will no longer be able to sell cheap GLP-1 drugs.
Let’s take a second to think of the real victims here: telehealth company stockholders.
https://www.astralcodexten.com/p/the-ozempocalypse-is-nigh
-
Most headlines have said something like New NAEP Scores Dash Hope Of Post-COVID Learning Recovery, which seems like a fair assessment.
I feel bad about this, because during lockdowns I argued that kids’ educational outcomes don’t suffer long-term from missing a year or two of school. Re-reading the post, I still think my arguments make sense.
So how did I get it so wrong?
When I consider this question, I ask myself: do I expect complete recovery in two years? In 2026, we will see a class of fourth graders who hadn’t even started school when the lockdowns ended. They will have attended kindergarten through 4th grade entirely in person, with no opportunity for “learning loss”.
If there’s a sudden switch to them doing just as well as the 2015 kids, then it was all lockdown-induced learning loss and I suck. But if not, then what?
Maybe the downward trend isn’t related to COVID? On the graph above, the national (not California) trend started in the 2017 - 2019 period, ie before COVID. And the states that tried hardest to keep their schools open did little better than anyone else:
https://www.astralcodexten.com/p/what-happened-to-naep-scores
-
I enjoy the yearly book review contest, but it feels like last year’s contest is barely done, and I want to give you a break so you can read more books before we start over. So this year, let’s do something different. Submit an ACX-length post reviewing something, anything, except a book.
You can review a movie, song, or video game. You can review a product, restaurant, or tourist attraction. But don’t let the usual categories limit you. Review comic books or blog posts. Review political parties - no, whole societies! Review animals or trees! Review an oddly-shaped pebble, or a passing cloud! Review abstract concepts! Mathematical proofs! Review love, death, or God Himself!
(please don’t review human races, I don’t need any more NYT articles)
Otherwise, the usual rules apply. There’s no official word count requirement, but previous finalists and winners were often between 2,000 and 10,000 words. There’s no official recommended style, but check the style of last year’s finalists and winners or my ACX book reviews (1, 2, 3) if you need inspiration. Please limit yourself to one entry per person or team.
Then send me your review through this Google Form. The form will ask for your name, email, the thing you’re reviewing, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you’re a finalist. DON’T INCLUDE YOUR NAME OR ANY HINT ABOUT YOUR IDENTITY IN THE GOOGLE DOC ITSELF, ONLY IN THE FORM. I want to make this contest as blinded as possible, so I’m going to hide that column in the form immediately and try to judge your docs on their merit.
https://www.astralcodexten.com/p/everything-except-book-review-contest
-
Intelligence seems to correlate with total number of neurons in the brain.
Different animals’ intelligence levels track the number of neurons in their cerebral cortices (cerebellum etc don’t count). Neuron number predicts animal intelligence better than most other variables like brain size, brain size divided by body size, “encephalization quotient”, etc. This is most obvious in certain bird species that have tiny brains full of tiny neurons and are very smart (eg crows, parrots).
Humans with bigger brains have on average higher IQ. AFAIK nobody has done the obvious next step and seen whether people with higher IQ have more neurons. This could be because the neuron-counting process involves dissolving the brain into a “soup”, and maybe this is too mad-science-y for the fun-hating spoilsports who run IRBs. But common sense suggests bigger brains increase IQ because they have more neurons in humans too.
Finally, AIs with more neurons (sometimes described as the related quantity “more parameters”) seem common-sensically smarter and perform better on benchmarks. This is part of what people mean by “scaling”, ie the reason GoogBookZon is spending $500 billion building a data center the size of the moon.
All of this suggests that intelligence heavily depends on number of neurons, and most scientists think something like this is true.
But how can this be?
https://www.astralcodexten.com/p/why-should-intelligence-be-related
-
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-february-2025
-
Conflict theory is the belief that political disagreements come from material conflict. So for example, if rich people support capitalism, and poor people support socialism, this isn’t because one side doesn’t understand economics. It’s because rich people correctly believe capitalism is good for the rich, and poor people correctly believe socialism is good for the poor. Or if white people are racist, it’s not because they have some kind of mistaken stereotypes that need to be corrected - it’s because they correctly believe racism is good for white people.
Some people comment on my more political posts claiming that they’re useless. You can’t (they say) produce change by teaching people Economics 101 or the equivalent. Conflict theorists understand that nobody ever disagreed about Economics 101. Instead you should try to organize and galvanize your side, so they can win the conflict.
I think simple versions of conflict theory are clearly wrong. This doesn’t mean that simple versions of mistake theory (the idea that people disagree because of reasoning errors, like not understanding Economics 101) are automatically right. But it gives some leeway for thinking harder about how reasoning errors and other kinds of error interact.
https://readscottalexander.com/posts/acx-why-i-am-not-a-conflict-theorist
-
[Original thread here: Tegmark’s Mathematical Universe Defeats Most Arguments For God’s Existence.]
1: Comments On Specific Technical Points
2: Comments From Bentham’s Bulldog’s Response
3: Comments On Philosophical Points, And Getting In Fights
https://www.astralcodexten.com/p/highlights-from-the-comments-on-tegmarks
-
St. Felix publicly declared that he believed with 79% probability that COVID had a natural origin. He was brought before the Emperor, who threatened him with execution unless he updated to 100%. When St. Felix refused, the Emperor was impressed with his integrity, and said he would release him if he merely updated to 90%. St. Felix refused again, and the Emperor, fearing revolt, promised to release him if he merely rounded up one percentage point to 80%. St. Felix cited Tetlock’s research showing that the last digit contained useful information, refused a third time, and was crucified.
St. Clare was so upset about believing false things during her dreams that she took modafinil every night rather than sleep. She completed several impressive programming projects before passing away of sleep deprivation after three weeks; she was declared a martyr by Pope Raymond II.
https://www.astralcodexten.com/p/lives-of-the-rationalist-saints
-
It feels like 2010 again - the bloggers are debating the proofs for the existence of God. I found these much less interesting after learning about Max Tegmark’s mathematical universe hypothesis, and this doesn’t seem to have reached the Substack debate yet, so I’ll put it out there.
Tegmark’s hypothesis says: all possible mathematical objects exist.
Consider a mathematical object like a cellular automaton - a set of simple rules that creates complex behavior. The most famous is Conway’s Game of Life; the second most famous is the universe. After all, the universe is a starting condition (the Big Bang) and a set of simple rules determining how the starting condition evolves over time (the laws of physics).
Some mathematical objects contain conscious observers. Conway’s Life might be like this: it’s Turing complete, so if a computer can be conscious then you can get consciousness in Life. If you built a supercomputer and had it run the version of Life with the conscious being, then you would be “simulating” the being, and bringing it into existence. There would be something it was like to be that being; it would have thoughts and experiences and so on.
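The "simple rules, complex behavior" point about cellular automata is easy to make concrete. Here is a minimal sketch of one step of Conway's Game of Life (the representation as a set of live-cell coordinates is my choice, not anything from the post):

```python
from collections import Counter

def step(live):
    """One Game of Life step. `live` is a set of (x, y) cells.
    Rules: a live cell with 2-3 live neighbors survives;
    a dead cell with exactly 3 live neighbors is born."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" (three cells in a row) oscillates with period 2:
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(step(blinker)) == blinker
```

The whole rule set fits in one function, yet iterating it is Turing complete - which is exactly what makes the analogy between an automaton and a physical universe with simple laws bite.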
https://www.astralcodexten.com/p/tegmarks-mathematical-universe-defeats
-
From the Commerce Department:
U.S. Senate Commerce Committee Chairman Ted Cruz (R-Texas) released a database identifying over 3,400 grants, totaling more than $2.05 billion in federal funding awarded by the National Science Foundation (NSF) during the Biden-Harris administration. This funding was diverted toward questionable projects that promoted Diversity, Equity, and Inclusion (DEI) or advanced neo-Marxist class warfare propaganda.
I saw many scientists complain that the projects from their universities that made Cruz’s list were unrelated to wokeness. This seemed like a surprising failure mode, so I decided to investigate. The Commerce Department provided a link to their database, so I downloaded it, chose a random 100 grants, read the abstracts, and rated them either woke, not woke, or borderline.
Of the hundred:
40% were woke
20% were borderline
40% weren’t woke
This is obviously in some sense a subjective determination, but most cases weren’t close - I think any good-faith examination would turn up similar numbers.
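The audit procedure described above - draw 100 grants uniformly at random, rate each one, tally the shares - can be sketched as follows. The ratings here are toy stand-ins, not the actual database:

```python
import random
from collections import Counter

def audit(grants, rate, n=100, seed=0):
    """Sample n grants uniformly without replacement and return the
    fraction receiving each rating. `rate` maps a grant to a label."""
    sample = random.Random(seed).sample(grants, n)
    counts = Counter(rate(g) for g in sample)
    return {label: counts[label] / n for label in counts}

# Hypothetical database of 3,400 grants with the observed proportions:
grants = ["woke"] * 1360 + ["borderline"] * 680 + ["not woke"] * 1360
shares = audit(grants, rate=lambda g: g)
assert abs(sum(shares.values()) - 1.0) < 1e-9
```

With n = 100, the standard error on a 40% estimate is about 5 percentage points, so a replication could easily land anywhere in the mid-30s to mid-40s without contradicting the result.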
https://readscottalexander.com/posts/acx-only-about-40-of-the-cruz-woke-science
-
In the past day, Zvi has written about deliberative alignment, and OpenAI has updated their spec. This article was written before either of these and doesn’t account for them, sorry.
I.
OpenAI has bad luck with its alignment teams. The first team quit en masse to found Anthropic, now a major competitor. The second team quit en masse to protest the company reneging on safety commitments. The third died in a tragic plane crash. The fourth got washed away in a flood. The fifth through eighth were all slain by various types of wild beast.
https://www.astralcodexten.com/p/deliberative-alignment-and-the-spec
-