

The Desiderata series is a regular roundup of links and thoughts for paid subscribers, and an open thread for the community.
Contents
- My New Org to Solve Consciousness (or Die Trying)
- A Rogue AI Community That Wasn’t
- David Foster Wallace Is Still Right 30 Years Later
- The Cost of AI Agents is Exploding
- The Diary of a 100-Year-Old
- AI Solving Erdős Problems is (So Far) Mostly Hype
- Cow Tools Are Real
- From the Archives
- Comment, Share Anything, Ask Anything
1. My New Org to Solve Consciousness (or Die Trying)
As is obvious from the state of confusion around AI, technology has outstripped consciousness science, leading to a cultural and scientific asymmetry. That asymmetry needs to be closed ASAP.
I think I’ve identified a way. I’ve just released more public details of Bicameral, a new nonprofit research institute devoted to solving consciousness via a unique method. You can read about it at our website: bicamerallabs.org.
Rather than chasing some preconceived notion of consciousness, we’re making the bounds for falsifiable scientific theories of consciousness as narrow as possible.
Why do this as a nonprofit research institute? I’ve worked in academia (on and off) for a long time now. It’s not that funding for such ideas is completely impossible—my previous research projects have been funded by sources like DARPA, the Templeton Foundation, and the Army Research Office. But for this, academia is a poor fit. It’s built around one-off papers, citation metrics, small-scale experiments run in a single lab, and looking to the next grant. To solve consciousness, we need a straight shot all the way through to the end.
If you want to help this effort out, the best thing you can do is connect people by sharing the website. If you know anyone who should be involved with this, point them my way, or to the website. Alternatively, if you know of any potential funders that might want to help us crack consciousness, please share the website with them, or connect us directly at: erik@bicameral-labs.org.
2. A Rogue AI Community That Wasn’t
We are now several years into the AI revolution and the fog of war around the technology is lifting. It’s not 2023 anymore. We should stop running around like chickens with our heads cut off and start seeking clearer answers. Consider the drama around the AI social media platform “Moltbook.”
A better description is that an unknown number of AI agents posted a bunch of stories on a website. Many of the major screenshots were fake, as in, possibly prompted or created by humans (one screenshot with millions of views, for instance, was about AIs learning to secretly communicate… while the owner of that bot was a guy selling an AI-to-AI messaging app).
In fact, the entire website was vibe-coded and riddled with security flaws, and its 17,000 human owners don’t square with the supposed 1.5 million AI “users,” and people can’t even log in properly, and bots can post as other bots, and actually literally anyone, even humans, can post anything, and now a lot of the posts have descended into crypto-spam. You can also just ask ChatGPT to simulate an “AI reddit” and get highly similar responses without anything actually happening, including stuff very close to the big viral “Wow look at Moltbook!” posts (remember, these models always grab the marshmallow, and without detailed prompting give results that are shallow and repetitive). Turns out, behind examples of “rogue AIs” there are often users with AI psychosis (or people using them mostly for entertainment, or to scam, etc.).
Again, the fog of war is clearing. We actually know that modern AIs don’t really seem to develop evil hidden goals over time. They’re not “misaligned” in that classic sense. When things go badly, they mostly just… slop around. They slop to the left. They slop to the right. They slop all night.
A recent paper from Anthropic (and academic co-authors), “The Hot Mess of AI,” has confirmed what anyone not still stuck in 2023 and scared out of their mind about GPT-4’s release can see: models fail not by developing mastermind evil plans to take over the world but by being hot messes.
Here’s the summary from the researchers:
So the fuss over the Reddit-style collaborative fiction, “Moltbook,” was indeed literally this meme, with the “monsters” played by hot messes of AIs.
There is no general law or takeaway to be derived from it, despite many trying to find one.
Haven’t AIs been able to write Reddit-style posts for over half a decade?
In comparison to Moltbook, the “AI village” has existed for almost a year now. And in the AI village, the exact same models calmly and cooperatively accomplish tasks (or fail at them). Right now they are happily plugging away at trying to break news before other outlets report it. Most have failed, but have given it their all.
What’s the difference between Moltbook and the AI village? You’re never gonna believe this. Yes, it’s the prompts! That is, even when operating “autonomously,” how the models behave depends on how they’re prompted. That prompting can come directly, or indirectly via context, in the “interact with this” sort of way, which they are smart enough to take a hint about. They are always guessing at how to please their users, and if you point them to a schizo-forum with “Hey, post on this!” they will… schizo-post on the schizo-forum.
3. David Foster Wallace Is Still Right 30 Years Later
Infinite Jest turned 30 this month. And yes, I confess to being a “lit bro” who enjoys David Foster Wallace (I guess that’s the sole qualification for being a “lit bro” these days). Long ago, all of us stopped taking any DFW books out in public, due to the ever-present possibility that someone would write a Lit Hub essay about us. However, in secret rooms unlocked with a satisfying click by pulling a Pynchon novel out from our bookshelves, we still perform our ablutions and rituals.
But why? Why was DFW a great writer? Well, partly, he was great because his concerns—the rise of entertainment, the spiritual resistance to the march of technology and markets, the simple absurdity of the future—have become more pressing over time. It’s an odd prognosticating trick he’s pulled. And the other reason he was great is that voice, that logorrheic stream of consciousness, a thing tight with its own momentum, which is itself also the collective voice of contemporary blogging. Lessened a bit, yes, and not quite as arch, nor quite as good. But only because we’re less talented. Even if bloggers don’t know it, we’re all aping DFW.
Another thing that made him great was the context he existed in as a member of the “Le Conversazioni” group, which included Zadie Smith, Jonathan Franzen, and Jeffrey Eugenides (so called because they all attended the Le Conversazioni literary gathering together, leaving behind a charming collection of videos you can watch on YouTube).
Zadie and David in Italy (Jonathan gestures in the background)
It’s an apt name, because they were the last generation of writers who seemed, at least to me, so firmly in conversation together, and who held grand public debates about the role of genre vs. literary fiction, about what the purpose of fiction itself was, about how much of the modern day should be in novels and how much should be timeless aspects of human psychology, and so on. Questions I find, you know, actually interesting.
Compare that to the current day. Which still harbors, individually, some great writers! But together, in conversation? I just don’t find the questions that have surrounded fiction for the past fifteen years particularly interesting.
A wayward analogy might help here. Since it’s become one of my kids’ favorite movies, I’ve been watching a lot of Fantasia 2000, Disney’s follow-up to their own great (greatest?) classic, the inimitable 1940 Fantasia. In the sequel, made six decades after the original, throwback celebrities from the 1990s introduce the various musical pieces and the accompanying short animated films. James Earl Jones, in his beautiful sonorous bass, first reads from his introduction that the upcoming short film “finally answers that age-old question: What is man’s relationship to Nature?”
But then Jones is handed a new slip containing a different description and says “Oh, sorry. That age-old question: What would happen if you gave a yo-yo to a flock of flamingos? … Who wrote this?!”
And that’s what a lot of the crop of famous millennial writers after the Le Conversazioni group seem to me: like flamingos with yo-yos.