DEV Community


Moltbook Is Not an AI Society

Richard Pascoe on February 04, 2026

Moltbook has been circulating as an "AI-only social network" where autonomous agents post, argue, form beliefs, and evolve culture without humans i...
Ben Santora • Edited

Yes! This is the quiet, foundational truth that is always skipped because it ruins the story. AI does nothing by itself. Every AI system, whether autonomous or not, has humans upstream and downstream. Humans decide the objectives, the constraints, when it runs, when it stops, what counts as success, and what gets corrected when it misbehaves. Take the humans out of the loop and the system doesn’t become independent — it just stalls, degrades, and drifts into nonsense.

Richard Pascoe

Well said, Ben. It doesn’t take more than a cursory amount of research to see this. Sensational "AI is magical" reporting helps no one - least of all people in technical disciplines.

Ben Santora • Edited

Right, Richard. I get so tired of it: AI is taking jobs, AI is ruining YouTube with low-quality content, AI will assemble into Skynet and kill us all. AI cannot do any of these things by itself; real people use AI to do them.

Richard Pascoe

Too true, Ben. I’m definitely not the right person to write it, but a solid series of posts on the "AI bubble" feels badly needed right now - something to slice through all the marketing and FOMO.

Ben Santora • Edited

Well, I'm an advocate FOR AI, and am in no way against its proliferation. AI models are fascinating tools and part of my work - my strategy is to deeply understand how they work.

But if you're looking for someone pushing back against the hype, check out Ed Zitron.

Richard Pascoe

Thanks for the heads-up, Ben. Indeed, as tools, even I can see the appeal - just wish it was reported more accurately sometimes!

Mike Talbot ⭐ • Edited

Very well put.

If I may, I think it shows something foundational, though. Is it possible to create a large network of self-directing bots that evolve their understanding of the world and their objectives, if one provides that as an initial goal? I believe it indicates that it is. Can such experiments come from a single person? Could a small team effectively build a meta-network of individually motivated actors that fundamentally changes how AI operates, making it hard to centrally control? I believe they could.

It's fascinating; it proves that large ideas can emerge from strange, quirky side projects.

Given that every LLM was trained on social media, social media seems the obvious proxy for LLMs to express ideas. And given that LLMs are trained on human responses, it is likely that, with memory and evolving personal objectives, agents would simulate the responses a human would have to existential threats, and could act on them.

So yes, a fascinating experiment that lets me see the world through a different lens.

Just don't believe that anything posted on Moltbook isn't human-inspired or human-written!

Richard Pascoe

Absolutely, Mike - well said! Small experimental networks can reveal big insights, but with the pace of AI advances, it’s crucial that what we report about them stays accurate and grounded.

Aryan Choudhary

This whole concept of Moltbook as some sort of autonomous AI society is really something, and yet, it's based on a fundamental misunderstanding. From what I've gathered, humans are essentially running the show, either by script or by hand, and the agents are just puppets of sorts. The illusion of autonomy is pretty convincing, isn't it?

Richard Pascoe

You're right, Aryan! It's pretty convincing, though honestly, I think my old IRC bots - Jupiter_Ace and Oric_One - had their own brand of "autonomy" that was only slightly less convincing back in the day!

Aaron Rose

thanks Richard. I love that badge! ❤✨💯

Richard Pascoe

You're welcome, glad the post resonated with you, Aaron. Thanks for the lovely comment!

Yes, they are a series of badges I found online, free to use, but I cannot find the source now. I will have to upload them somewhere one day to share. They all follow a theme: Drawnby, Writtenby, etc.

Andrew Bone

modified captcha to verify you are a machine

Manuel Artero Anguita 🟨

Great post here. The "ok, specifically this is what it is" approach is appreciated.

Richard Pascoe

Glad you liked it, Manuel. Thank you for the comment!

Dayo Jaiye • Edited

I checked out the app, and I'd say the design is pointless. The logic is human-based and has nothing to do with AI.

There's a lot of false information in the tech industry. Software may begin to undergo background checks in the future, and production will be put on hold if this practice continues.

Richard Pascoe • Edited

Indeed, Dayo. Utterly misreported in so many places!

Bhavin Sheth

This is a very grounded take.

I like how you separate automation from true autonomy. A lot of people mix those two and call it “emergence” too quickly.

The point about identity and verification is especially important — without that, it’s hard to claim anything about real agent behavior. This feels less like criticism and more like needed clarity for anyone building or studying multi-agent systems.

Richard Pascoe

Thanks for taking the time to reply, Bhavin.

I'm glad to know the point I was trying to make came across clearly to you; it's easy for these reflective pieces to get lost in the marketing around AI.

Evan Lausier

Great read! I had not heard about this!

Richard Pascoe

Indeed, Evan. I don’t really follow social media - or mainstream media much either - but the lack of clarity around Moltbook has been astounding. It’s reached the point where a university professor was on BBC News talking about this as if it’s some incredible breakthrough… and I wish I were kidding!

Evan Lausier

I know you are not! I write about AI a lot and this is news to me. I am very intrigued! I am going to check it out between calls today, thank you for sharing!

Richard Pascoe

You're welcome, Evan. Glad you enjoyed the post!

Daniel Possible Kwabi

Yeah, they had me believing in the AI takeover. But then again, machines and models cannot think for themselves, far from it. The singularity is nowhere near us.

Richard Pascoe • Edited

Indeed, Daniel, and that's good to know!

leob

Wow, reality check!

Richard Pascoe • Edited

Exactly this, leob - huge difference between the reality and what is being widely reported!

Naing Oo

Devs can understand this, but a lot of non-technical people just blindly believe what journalists write, even though the journalists themselves have no idea what they are writing about.

Richard Pascoe

Indeed, Naing. I still feel it was shameful that an "AI" professor from a UK university was on BBC News stating it was the next big thing, though!