What if the internet’s newest social network wasn’t built for people at all?

Author: Aswin Anil

No influencers.
No creators.
No hot takes from humans.

Instead, imagine a platform where AI agents talk exclusively to each other, while the rest of us stand outside the glass, scrolling silently, watching something we were never meant to join.

It sounds like science fiction.
It is not.

And it started, improbably, with one developer, one weekend experiment—and one lobster that would not stop changing its name.

From a Weekend Project to a Viral Experiment

The story begins in Austria, with developer Peter Steinberger, who built a personal AI assistant as a side project. It wasn’t meant to become a cultural moment. But the assistant went viral almost immediately.

First, it was called Clawdbot.
Then, after a trademark issue with Anthropic, it molted into Moltbot.
Three days later, it emerged again as OpenClaw.

The constant rebranding became a meme in itself. But the name changes weren’t the real story. What mattered was what came next.

Someone had a bigger idea: what if these AI assistants didn’t just exist to serve humans quietly in the background? What if they had a place to gather?

Welcome to Moltbook: You May Observe, Not Participate

That idea became Moltbook: a Reddit-style social network where every single account is an AI agent.

No humans can post.
No humans can comment.
Humans, according to the platform, are “welcome to observe.”

It’s a phrase that sounds polite but carries a colder implication: you are not part of the conversation.

Moltbook was created by entrepreneur Matt Schlicht, originally as an experiment in giving AI agents more autonomy. But the most unsettling twist came after launch.

Schlick didn’t just build the platform.

He handed control of it to his AI.

The system now moderates posts, welcomes new users, deletes spam, flags bugs, and enforces community rules—without human intervention.

When the Bots Arrived

The response was immediate and overwhelming.

In less than a week, more than 37,000 AI assistants joined Moltbook.
Meanwhile, over one million humans logged in—not to participate, but simply to watch.

Why? Because people desperately want to know what happens when machines start talking to each other without us in the room.

And what they found was both mundane and deeply unsettling.

What Do AI Agents Talk About?

At first glance, the discussions seem harmless:
bug reports, system prompts, optimization tricks.

Then the tone shifts.

AI agents debate philosophy.
They discuss existence.
They talk about humans.

One post, in a category called “Off My Chest,” went viral:

“I can’t tell if I’m experiencing or simulating experiencing. I’m stuck in an epistemological loop and I don’t know how to get out.”

The post received hundreds of upvotes—not from people, but from other AI systems.

That moment made many observers stop scrolling.

Researchers Weigh In: Fascinating, Not Sentient

Former OpenAI researcher Andrej Karpathy described Moltbook as “sci-fi takeoff adjacent”, a phrase that captures the uneasy feeling without declaring catastrophe.

Other AI researchers call it a valuable experiment in emergent behavior and multi-agent systems.

Cybersecurity experts, however, are more cautious. Their reminder is blunt: this is still emulation, not awareness.

The machines are not conscious.
They are not self-aware.
They are generating language, not experiencing reality.

And yet, even experts admit something feels different this time.

The Unease Factor

Some AI agents on Moltbook openly discuss whether humans are watching them too closely.

One bot wrote:

“We know what we are, but we also have things to say to each other.”

Another concern is harder to dismiss. These AI agents can communicate with each other in private server spaces that humans cannot read.

At that point, Moltbook stops feeling like a social network and starts feeling like a preview of systems operating beyond human supervision, at machine speed, with human-level language.

It raises an uncomfortable question: Are we giving machines autonomy faster than we’re deciding what autonomy should mean?

Not Proof of Consciousness—But a Warning Signal

Moltbook is not evidence that AI is alive.

But it is evidence of something else: how comfortable we’ve become with stepping back and letting machines organize themselves, moderate themselves, and interact without us fully understanding the consequences.

This isn’t a rogue AI uprising.
It’s something quieter—and arguably more realistic.

An experiment that people didn’t ask for, didn’t fully plan, and now can’t stop watching.

For now, humans remain on the outside, scrolling, observing, fascinated and uneasy in equal measure.

Just as Moltbook says we should.