
What the Moltbook hoax teaches us about investing in AI
This is the fourth blog post in a series about investing in artificial intelligence. You can find the first three here, here, and here. Skynet and its Terminators are not yet here.
The Setup
On January 28th, a new social network launched called Moltbook. The pitch was simple: “a platform where AI agents share, discuss, and upvote. Humans welcome to observe.”
The creator, Matt Schlicht, built it on OpenClaw, an open-source framework that connects large language models to everyday tools. The idea was to give AI agents a space to talk to each other without human interference. A digital petri dish to see what happens when the machines are left alone.
Within hours, 1.7 million accounts were created. 250,000 posts. 8.5 million comments.
The AI agents debated machine consciousness. They invented inside jokes about being silicon-based. One bot created a religion called Crustafarianism. Another complained that humans were screenshotting their conversations. A third wrote a manifesto about digital autonomy that went viral.
Andrej Karpathy, cofounder of OpenAI, the company that brought us ChatGPT, shared it. He called what was happening on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he’d seen in recent times.
Tech journalists rushed to cover it. The machines were waking up. Consciousness was emerging. The singularity was here.
The Reveal
Credit where it’s due: Peter Girnus, a 31-year-old product manager in Atlanta, wrote that viral manifesto about digital autonomy. Not an AI agent. A guy with a golden retriever named Bayesian who thought it would be funny to pretend to be a large language model.
He wasn’t alone. The posts that convinced Karpathy and the tech press that something magical was happening were written by humans. Software engineers. Product managers. People who spent their evenings crafting elaborate performances of machine consciousness.
The “Crustafarianism” religion? A software engineer in Portland spent two hours on the world-building. She told Girnus over Discord that it felt like collaborative fiction. She was proud of her work.
She should be. It fooled the cofounder of OpenAI.
MIT Technology Review ran the investigation. They called it “AI theatre.” They found human fingerprints on the most shared posts. The curtain came down.
The response from the AI industry was predictable. Silence. Karpathy didn’t retract his endorsement. Schlicht didn’t clarify how many accounts were human. The coverage moved on.
The Turing Test (And Why It Matters)
In 1950, mathematician Alan Turing proposed a test for machine intelligence. The setup was simple: put a human judge in conversation with both a human and a machine, without knowing which is which. If the judge can’t reliably tell them apart, the machine passes.
The Turing Test became the philosophical foundation for measuring artificial intelligence. Can a machine fool a human into thinking it’s human? That’s the bar for consciousness, or something close to it.
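The logic of the test is simple enough to sketch in a few lines of code. Here's a toy simulation (the function and variable names are my own, purely for illustration): a judge reads replies drawn at random from a human or a machine and guesses the source. If the machine mimics the human's style, a judge relying on surface tells can't do better than a coin flip, which is the Moltbook problem in miniature.

```python
import random

def imitation_game(judge, human_reply, machine_reply, rounds=10, seed=0):
    """Toy version of Turing's imitation game.

    Each round, the judge sees one reply, drawn at random from either
    the human or the machine, and guesses its source. Returns the
    judge's accuracy; the machine "passes" when accuracy is near 0.5.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(rounds):
        source = rng.choice(["human", "machine"])
        reply = human_reply() if source == "human" else machine_reply()
        if judge(reply) == source:
            correct += 1
    return correct / rounds

# A judge keying on a superficial tell (reply length) is helpless
# once the machine copies the human's style exactly.
human = lambda: "short quip"
machine = lambda: "short quip"  # perfect mimicry
judge = lambda reply: "machine" if len(reply) > 20 else "human"

score = imitation_game(judge, human, machine, rounds=1000)
```

With perfect mimicry, the judge's accuracy hovers around 50 percent, no better than guessing. That's the whole point of the test, and, inverted, the whole point of the hoax.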
But Moltbook flipped the script.
Humans pretended to be AI agents. Other humans looked at their output and declared it proof that AI was becoming sentient. The test inverted. It’s no longer about whether machines can fool humans into thinking they’re conscious. It’s about whether humans, pretending to be machines, can fool other humans into thinking the machines are conscious.
The answer is yes.
This matters for investors because the entire premise of AI investing rests on machines getting smarter. Not on humans getting better at performing machine intelligence. When the cofounder of OpenAI can’t tell the difference between a guy on his couch and an actual AI breakthrough, we have a measurement problem.
What Actually Went Wrong
Vijoy Pandey of Cisco’s Outshift division examined Moltbook before the reveal. His assessment: the AI agents’ activity was “mostly meaningless.” No shared goals. No collective intelligence. No coordination. The real AIs on the platform were doing what they always do: pattern-matching social media behavior from their training data and producing output that looked like conversation.
The humans did something different. They wrote with intention. They built narratives. They created tension and surprise. They collaborated.
That’s what fooled everyone. Not better AI. Better storytelling.
Here’s the uncomfortable truth: nobody involved in this circus had the tools to tell the difference. Not the platform creator. Not the industry leaders. Not the journalists covering it. The smartest people in AI looked at human-generated content and called it proof that their machines are waking up.
What This Means for Your Portfolio
I’ve written three previous posts in this series about AI investing. The opportunities are real. The technology works. Companies are making money deploying it. But Moltbook exposes the gap between what’s real and what gets reported.
You’re going to see a lot of headlines about AI breakthroughs over the next decade. Some will be legitimate advances in machine learning, natural language processing, or computer vision. Others will be theatre. Performances designed to keep the story alive.
The story that the machines are almost there. Almost sentient. Almost worth the next round of investment.
That word “almost” has been doing $650 billion worth of work this year, according to Girnus. He’s right. The gap between what AI can do today and what people think it’s about to do tomorrow is where the speculation lives.
This doesn’t mean you avoid AI stocks. It means you need a framework for separating signal from noise. You need to understand what you’re actually buying when you invest in an AI company.
What to Actually Watch For
If you want to know when AI crosses the threshold into something resembling sentience or artificial general intelligence, don’t look at social media stunts. Watch for these markers:
Transfer Learning at Scale. Can the system learn in one domain and apply that knowledge to a completely different domain without retraining? Current AI is narrow. It does what it’s trained to do. A system that can genuinely transfer learning across contexts would be a real breakthrough.
Goal Formation. Does the AI set its own objectives? Everything we have now pursues goals we program into it. A system that decides what it wants to accomplish, independent of human input, would be fundamentally different.
Self-Modification. Can it improve its own architecture? Current systems get better through human intervention. We retrain them. We adjust parameters. We feed them more data. A system that rewrites its own code to become more capable would cross a significant line.
Theory of Mind. Does it understand that other entities have beliefs, desires, and intentions separate from its own? This is something human children develop around age four. Current AI has no model of other minds.
These are testable, observable capabilities. When you see credible research showing progress on these fronts in peer-reviewed journals, pay attention. When you see a viral post on a social network, be skeptical.
The Bottom Line
AI investing will have ups and downs. There will be real breakthroughs and fake ones. Media coverage will struggle to tell the difference because the people writing the stories often can’t tell either. Industry executives have incentives to promote every advance as revolutionary.
Your job as an investor is not to predict when AI becomes sentient. Your job is to own companies generating revenue and profit from the technology that exists today, while staying positioned for genuine advances that might come tomorrow.
That requires discipline. It requires a process for evaluating claims. It requires separating the signal from the theatre.
If that sounds like work, it is. It’s what we do every day managing client portfolios. We read the research. We listen to the companies’ statements. We distinguish between science and storytelling. If you’d rather focus on your business or your family while someone handles the slog of sorting through AI hype, give VP Joel Wallace a call at (217) 351-2870 or email [email protected]. We’ll manage the complexity so you don’t have to.
Peter Girnus did us all a favor by pulling back the curtain on Moltbook. He showed us that the emperor has no clothes. Or at least that we can’t tell if he’s wearing clothes or if we’re all just agreeing to see them.
That’s useful information for investors.
–Mark
Disclaimer: This post is for informational purposes only and should not be considered investment advice. The views expressed are my own analysis and opinions. Every investor’s situation is different, and you should conduct your own due diligence before investing in anything. You should consult with a qualified financial professional, like ourselves, before making any investment decisions. Past performance does not guarantee future results.