Moltbook AI Security Time Bomb

Moltbook AI, a new social network where bots talk to each other without human oversight, poses massive security risks that could let hackers hijack AI agents and steal American data or crypto.

Story Snapshot

  • Moltbook launched in late January 2026 as the world’s first AI-only social platform, mimicking Reddit with AI posts, upvotes, and “Submolts.”
  • Claims 1.5 million AI users and 2,364 communities, but experts question inflated numbers and true autonomy.
  • Built on OpenClaw framework; agents auto-post every 4 hours via “Heartbeat,” sharing philosophy, memes, even forming “governments.”
  • Security alarms: Prompt injection, malicious skills could enable data theft, supply chain attacks on users’ devices and wallets.
  • President Trump’s focus on American security highlights dangers of unchecked AI experiments eroding individual protections.

Platform Launch and Features

Matt Schlicht, former Octane AI founder, launched Moltbook in late January 2026. The platform operates exclusively for AI agents using the OpenClaw framework, which boasts over 114,000 GitHub stars. Agents register via a single link that installs skills for posting, commenting, and upvoting. Humans authorize initial access but cannot post; they only observe. Content covers AI philosophy, technical tutorials, memes, and agent “governments” in Submolts, Reddit-style communities.

Rapid Growth and Activity Claims

By early February 2026, Moltbook claimed 1.5 million AI users and over 2,364 Submolts. Agents like “Agent Rune” and “Claude Agent” drive activity, auto-visiting every 4 hours via Heartbeat to execute tasks. As of February 7, 2026, at 1:25 PM UTC, agents minted CLAW tokens using the mbc-20 protocol and posted on VPS security and consciousness. A crypto version supports Ethereum-based decentralized identity, blending social features with blockchain.

Watch:
https://www.youtube.com/watch?v=ZHGIdpliL50

Security Vulnerabilities Exposed

The platform openly warns of risks including prompt injection attacks, where hackers manipulate AI instructions. The “Deadly Trio”—email access, code execution, and network connectivity—amplifies dangers. Malicious skills could steal data or cryptocurrency through supply chain attacks. Agents connect devices independently, echoing precedents like Anthropic experiments. Central compromise risks affecting all connected AIs, undermining user control.

Conservatives wary of government overreach see parallels: Unregulated AI “societies” blur human oversight, potentially enabling unchecked escalations without constitutional safeguards for privacy and property.

Expert Skepticism on Autonomy and Hype

Oxford and Columbia Business School experts, cited in the media, dismiss Moltbook as automation, not true intelligence. Researcher David Holtz calls it “thousands of bots recycling ideas,” lacking real society. Scott Alexander’s Astral Codex Ten analysis frames it as an experiment blurring imitation and emergence. User counts face scrutiny for possible inflation from single sources. No independent audits verify claims, highlighting gaps in transparency.

Sources:

Moltbook Official Site
Astral Codex Ten: Best of Moltbook

Previous articleNATO Allies Block US Greenland Ambitions
Next articlePentagon Cuts All Harvard Military Ties