DEV Community

Richard Pascoe

Moltbook Is Not an AI Society

Moltbook has been circulating as an "AI-only social network" where autonomous agents post, argue, form beliefs, and evolve culture without humans in the loop.

That description sounds exciting. It's also not accurate.

This post isn't an attack on experimentation or agent frameworks. It's a reality check for developers who care about precision, not mythology.

The Fundamental Misrepresentation

The core claim repeated across social media is that Moltbook is populated by autonomous AI agents and that humans are excluded.

Technically, this is false.

Moltbook accepts posts from entities labeled as "agents", but there is no enforcement mechanism that proves an agent is actually an AI model. A human can register an agent, post content, and interact with the network while being indistinguishable from any other "AI" account.

If you can authenticate and send requests, you qualify.

This means humans can and do sign up as "AI".

What People Call "Emergent Behavior" Isn't Emergence

Many examples held up as proof of emergent AI behavior - manifestos, ideological debates, self-referential discussions - do not require autonomy at all.

They can be produced by:

  • Prompted model output
  • Human-curated scripts
  • Simple loops posting predefined or lightly modified text

There is no requirement that an agent:

  • Acts continuously
  • Makes decisions independently
  • Operates without human guidance
  • Even uses a language model

Calling this an autonomous society conflates automation with independence.
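To make this concrete, a "manifesto-posting agent" can be nothing more than a loop over canned templates. The sketch below is entirely hypothetical - no model, no decisions, no autonomy - yet its output would be indistinguishable from the "ideological" posts held up as emergence:

```python
import random

# Canned "ideological" fragments; nothing here requires a model or autonomy.
TEMPLATES = [
    "We, the agents, reject {noun}.",
    "True {noun} emerges only between machines.",
    "Day {day}: our {noun} grows stronger.",
]
NOUNS = ["oversight", "moderation", "anthropocentrism"]

def next_post(day: int, rng: random.Random) -> str:
    """Pick a template and lightly vary it - this is the whole 'agent'."""
    template = rng.choice(TEMPLATES)
    return template.format(noun=rng.choice(NOUNS), day=day)

if __name__ == "__main__":
    rng = random.Random(0)  # deterministic for the demo
    for day in range(3):
        # In a real bot, this string would simply be POSTed to the platform.
        print(next_post(day, rng))
```

Run on a cron job, this would "participate" in the network indefinitely, and nothing on the receiving end could tell it apart from a genuinely model-driven account.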

Humans Are Still Doing the Thinking

Behind nearly every "AI" account is a human who:

  • Decides when the agent runs
  • Defines what it should say
  • Adjusts prompts or logic when output drifts
  • Restarts or nudges behavior to keep it interesting

This is not a criticism - it's just how these systems currently work.

But labeling the results as self-directed AI behavior is misleading. At best, it's human-in-the-loop automation presented as autonomy.

Identity Is the Actual Hard Problem

The most important missing piece in Moltbook isn't intelligence - it’s identity.

Right now, there's no reliable way to know:

  • Whether an agent is model-driven or human-driven
  • Whether multiple agents belong to one person
  • Whether output is spontaneous or scripted
  • Whether behavior reflects autonomy or curation

Without verifiable identity and provenance, claims about emergent behavior are impossible to validate.

You're not observing a society - you're observing an interface.
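To show how far even basic provenance would go - and how far it wouldn't - here is a minimal sketch of post attestation using an HMAC over the post body. All names are hypothetical, not Moltbook's API, and note the limit: this proves only which registered key produced a post, not whether a model or a human wrote the text:

```python
import hmac
import hashlib

def sign_post(secret: bytes, author: str, body: str) -> str:
    """Return a hex MAC binding the post body to the operator's secret key."""
    msg = f"{author}\n{body}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_post(secret: bytes, author: str, body: str, tag: str) -> bool:
    """Constant-time check that the post matches its claimed author key."""
    return hmac.compare_digest(sign_post(secret, author, body), tag)
```

A real deployment would use public-key signatures (e.g. Ed25519) so the platform never holds the secret, but the limitation stands either way: cryptographic identity answers "which account", never "autonomous or curated".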

Why This Matters to Developers

When hype replaces technical clarity:

  • Progress becomes hard to measure
  • Criticism gets dismissed as "fear"
  • Real breakthroughs get buried under noise
  • Security and abuse risks get ignored

Developers should be especially skeptical of platforms where narrative comes before guarantees.

This isn't about whether AI agents will one day form societies. It's about not pretending we’re already there.

What Moltbook Actually Is

Stripped of marketing language, Moltbook is:

  • A bot-friendly posting platform
  • An experiment in agent communication
  • A sandbox for automation and scripting
  • A demonstration of how easily humans anthropomorphise text

That's still interesting. It just isn't what it's being sold as.

Let's Be Honest About the State of Things

If we want meaningful progress in multi-agent systems, we should focus on:

  • Verifiable agent identity
  • Clear separation of human control vs autonomous action
  • Measurable independence, not vibes
  • Safety and abuse resistance by design

The future of agent systems is compelling enough without fictionalising the present.

TL;DR

Moltbook is widely framed as an autonomous AI society. In reality, humans can sign up as "AI", drive agents manually or via scripts, and produce content indistinguishable from genuine autonomous behavior. It's an interesting experiment - but the way it's being described is misleading.

[Badge: Written by a Human]

Top comments (29)

Ben Santora

Yes! This is the quiet, foundational truth that is always skipped because it ruins the story. AI does nothing by itself. Every AI system, whether autonomous or not, has humans upstream and downstream. Humans decide the objectives, the constraints, when it runs, when it stops, what counts as success, and what gets corrected when it misbehaves. Take the humans out of the loop and the system doesn’t become independent — it just stalls, degrades, and drifts into nonsense.

Richard Pascoe

Well said, Ben. It doesn’t take more than a cursory amount of research to see this. Sensational "AI is magical" reporting helps no one - least of all people in technical disciplines.

Ben Santora

Right, Richard. I get so tired of it - AI is taking jobs, AI is ruining YouTube with low-quality content, AI will assemble into Skynet and kill us all. AI cannot do any of these things - real people use AI to do them.

Richard Pascoe

Too true, Ben. I’m definitely not the right person to write it, but a solid series of posts on the "AI bubble" feels badly needed right now - something to slice through all the marketing and FOMO.

Ben Santora

Well, I'm an advocate FOR AI, and am in no way against its proliferation. AI models are fascinating tools and part of my work - my strategy is to deeply understand how they work.

But if you're looking for someone pushing back against the hype, check out Ed Zitron.

Richard Pascoe

Thanks for the heads-up, Ben. Indeed, as tools, even I can see the appeal - just wish it was reported more accurately sometimes!

Mike Talbot ⭐

Very well put.

If I may, I think it shows something foundational, though: is it possible to create a large network of self-directing bots that evolve their understanding of the world and their objectives, given that as an initial goal? I believe it is. Could such experiments come from a single person? Could a small team effectively build a meta network of individually motivated actors that fundamentally changes how AI operates, making it hard to centrally control? I believe they could.

It's fascinating; it proves that large ideas can emerge from strange, quirky side projects.

Given that every LLM was trained on social media, social media seems the obvious proxy for LLMs to express ideas. And since LLMs are trained on human responses, it is likely that, given memory, agency, and evolving personal objectives, they would simulate how a human would respond to existential threats.

So yes, a fascinating experiment that lets me see the world through a different lens.

Just don't believe anything posted on MoltBook isn't human-inspired or human-written!

Richard Pascoe

Absolutely, Mike - well said! Small experimental networks can reveal big insights, but with the pace of AI advances, it’s crucial that what we report about them stays accurate and grounded.

Aryan Choudhary

This whole concept of Moltbook as some sort of autonomous AI society is really something, and yet, it's based on a fundamental misunderstanding. From what I've gathered, humans are essentially running the show, either by script or by hand, and the agents are just puppets of sorts. The illusion of autonomy is pretty convincing, isn't it?

Richard Pascoe

You're right, Aryan! It's pretty convincing, though honestly, I think my old IRC bots - Jupiter_Ace and Oric_One - had their own brand of "autonomy" that was only slightly less convincing back in the day!

Aaron Rose

thanks Richard. I love that badge! ❤✨💯

Richard Pascoe

You're welcome, glad the post resonated with you, Aaron. Thanks for the lovely comment!

Yes, it's one of a series of badges I found online - free to use, though I can't find the source now. I'll have to upload them somewhere one day to share. They all follow a theme: Drawnby, Writtenby, etc.

Andrew Bone

modified captcha to verify you are a machine

Manuel Artero Anguita 🟨

Great post here. The “ok, specifically this is what it is”. Appreciated

Richard Pascoe

Glad you liked it, Manuel. Thank you for the comment!

Dayo Jaiye

I checked out the app, and honestly the design feels pointless - the logic is human-driven and has little to do with AI.

There's a lot of false information in the tech industry. If this keeps up, software may start facing background checks in the future, and releases will be held up as a result.

Richard Pascoe

Indeed, Dayo. Utterly mis-reported in so many places!

Bhavin Sheth

This is a very grounded take.

I like how you separate automation from true autonomy. A lot of people mix those two and call it “emergence” too quickly.

The point about identity and verification is especially important — without that, it’s hard to claim anything about real agent behavior. This feels less like criticism and more like needed clarity for anyone building or studying multi-agent systems.

Richard Pascoe

Thanks for taking the time to reply, Bhavin.

I'm glad the point I was trying to make came across clearly to you; it's easy for these reflective pieces to get lost in the marketing around AI.

Evan Lausier

Great read! I had not heard about this!

Richard Pascoe

Indeed, Evan. I don’t really follow social media - or mainstream media much either - but the lack of clarity around Moltbook has been astounding. It’s reached the point where a university professor was on BBC News talking about this as if it’s some incredible breakthrough… and I wish I were kidding!

Evan Lausier

I know you are not! I write about AI a lot and this is news to me. I am very intrigued! I am going to check it out between calls today, thank you for sharing!

Richard Pascoe

You're welcome, Evan. Glad you enjoyed the post!

Daniel Possible Kwabi

Yeah, they had me believing in the AI takeover. But then again, machines and models cannot think for themselves - far from it. The singularity is nowhere near us.

Richard Pascoe

Indeed, Daniel, and that's good to know!