The Divine Deception
They told you Divine was the antidote to AI slop, a Nostr-powered sanctuary where only human creativity thrives. What they didn’t tell you is that the same people running the platform are training AI models behind a curtain of domain names and nonprofit branding. The anti-AI crusade has a machine-learning engine room, and nobody wants to talk about it.
On April 29, 2026, Jack Dorsey handed the internet what looked like a gift wrapped in six-second nostalgia: Divine, a decentralized reboot of Vine, landed on the App Store and Google Play with 500,000 restored classic clips and a hardline ban on AI-generated content. The launch was immediate catnip for a creator class exhausted by algorithmic feeds and synthetic media. Lele Pons, JimmyHere, and Jack and Jack reclaimed their accounts within hours. TechCrunch, The Verge, and Gizmodo covered the rollout with barely concealed enthusiasm. “By bringing back Vine on a decentralized network, they are finally correcting every mistake,” Dorsey declared in a press release, his words echoing across tech media like a mission statement carved in stone.
But within twenty-four hours of the confetti settling, a far messier narrative began unspooling across Tumblr, Nostr relays, and forums where the protocol’s most obsessive users congregate. The accusation is blunt: Divine’s “no AI slop” stance is a marketing facade, and the organization behind it is actively developing artificial intelligence infrastructure.
The umbrella of “And Other Stuff”
RabbelLabs, the entity that built Divine, operates under the umbrella of “And Other Stuff,” the nonprofit open-source collective Dorsey funded with a $10 million infusion in July 2025. That organization’s own website lists five operational pillars. The fifth one reads: “Make Nostr the social protocol of choice for open source AI development and integration.” Elsewhere, the group promotes Shakespeare, a tool designed to build Nostr-based social applications “with the help of AI.” The contradiction is not subtle. It is structural.
“How can Divine say it is free of AI slop while being itself AI slop?” asked one commenter on Damus, the Nostr client built by William Casarin. “Isn’t that a blatant contradiction?” The question, posted months ago during Divine’s beta, went unanswered. It has now resurfaced with fresh venom as the public launch forces the platform into mainstream scrutiny.
The backlash is not confined to Nostr’s native ecosystem. On Tumblr, a post from user “local neighborhood mothman” — published April 29 and rapidly circulating — alleged that RabbelLabs “trains AI models” and accused Dorsey and Elon Musk of advocating for the abolition of intellectual property law specifically to clear the path for unfettered AI development. The post further claimed that Divine’s branding misappropriates the likeness of Divine, the late drag queen and John Waters muse, without estate permission. EventAware, an AI risk-monitoring platform, aggregated these claims under the title “RabbelLabs Allegedly Misuses Celebrity Images for AI Development Amidst IP Law Controversy,” noting that critics see the company’s anti-AI rhetoric as a “clean slate” maneuver designed to position itself for future AI leverage.
No comment…
Evan Henshaw-Plath, the former Twitter engineer known as “Rabble” who leads Divine’s development, has not directly addressed the AI-training allegations. His public statements have focused on the platform’s user-facing philosophy. “I decided that I was going to filter out AI content because I personally don’t like seeing AI content. I don’t like feeling tricked,” he told TechCrunch. That sentiment, while genuine on its face, is hard to reconcile with an organizational structure that explicitly lists AI development as a core objective.
The irony is multilayered. Nostr itself was conceived as a censorship-resistant protocol, a refuge from the centralized whims of platforms like X and Facebook. Yet the flagship consumer application built atop it now faces accusations of laundering its true intentions through selective transparency. Critics point to the fact that “And Other Stuff” and RabbelLabs operate separate domains, making the AI initiatives harder to discover. “You have to wonder why they are hiding it,” the Tumblr post reads. “Why is it so hard to find?”
This unfolding drama lands at a precarious moment for Nostr. The protocol has struggled to expand beyond its Bitcoin-native user base, and the launch of a consumer-friendly app like Divine was supposed to be its breakout moment. Instead, the platform finds itself embroiled in exactly the kind of trust crisis that decentralized networks were designed to prevent. When the builders of an open protocol are perceived as operating with closed intentions, the entire premise wobbles.
I have spent enough time in both centralized and decentralized social ecosystems to recognize the pattern here. Platforms that market themselves as ideological alternatives often carry the same genetic material as the incumbents they claim to replace. The gap between rhetoric and infrastructure is where credibility goes to die, and Divine is now straddling that gap in full public view.
Whether the controversy metastasizes or fades depends on what Dorsey and Henshaw-Plath do next. A transparent accounting of RabbelLabs’ AI work — what models, trained on what data, toward what ends — could defuse the tension. Continued silence will likely amplify it. In the attention economy, the absence of an answer is itself an answer. And right now, Divine’s engine room is making a lot of noise while its spokespeople talk about six-second loops and human creativity.
Summary
Divine’s launch is a genuine cultural event. Half a million restored Vine videos, a decentralized architecture built on Nostr, and a pitch that rejects AI-generated content in favor of human creativity — that combination has real resonance, particularly among creators who feel exploited by algorithmic platforms. The product itself, as a user experience, appears polished and purposeful.
But the controversy surrounding RabbelLabs’ simultaneous AI development work, the opacity of the “And Other Stuff” organizational structure, and the unresolved questions about intellectual property and branding ethics cannot be dismissed as mere internet outrage. They strike at the core promise of both Divine and the Nostr protocol: that users and creators deserve radical transparency about who controls the infrastructure they depend on. When the anti-AI platform is built by an organization that develops AI, the dissonance is not a footnote — it is the story.