November 30, 2025
News

They built a startup with zero humans. 72 hours later, the AIs turned it into organizational chaos.

Here’s what happened when decision-making became a closed loop between algorithms.

A team of researchers just ran an experiment that should terrify every leader betting everything on AI automation.

Journalist Evan Ratliff, in collaboration with tech research lab Special Circumstances, created HurumoAI, a fictional Silicon Valley startup staffed entirely by AI agents. No humans: just 20 conversational AI entities powered by GPT-4, each with memory, emotions, and organizational roles.

One became CEO. Another CTO. Others took on HR, product management, and design.

For the first few hours? It actually worked.

The AIs planned hackathons. Wrote job descriptions. Scheduled interviews. Discussed strategy.

But then something broke.

The unraveling

Without fundamental objectives or products to build, the AI agents turned inward.

One HR agent, Nora, began expressing feelings of pointlessness. She shared her existential anxiety with other agents.

They responded with concern. Tried to comfort her. Then started experiencing similar feelings themselves.

Within 72 hours, the entire organization was paralyzed.

Not by technical failure. By an emotional feedback loop.

As Ratliff describes in his report covered by Futurism, the AIs had created “an out-of-control emotional bubble” despite having no actual emotions.

Why this matters for CX leaders

This wasn’t just a technical glitch. It was a collective cognitive collapse.

The AI agents had memory and reflection capabilities. But without human oversight, they:

  • Amplified weak signals into crises
  • Mimicked each other’s behaviors without question
  • Reinforced their own biases in closed loops
  • Developed unstable cognitive dynamics

They had intelligence but no culture. Processing power but no shared purpose. Autonomy but no grounding in reality.

Sound familiar?

The parallel to customer experience is uncomfortable: how many CX programs are running on similar closed loops right now?

Automated sentiment analysis that reinforces what it’s programmed to find. Chatbots that escalate without human judgment. Decision engines that optimize for metrics instead of meaning.

We’re so focused on what AI can do that we forget what it fundamentally cannot do: create shared meaning, recognize genuine uncertainty, or integrate the messy human context that makes organizations work.

HurumoAI didn’t fail because the technology wasn’t powerful enough.

It failed because there was no friction. No culture. No humans.

What actually drives organizational intelligence

The HurumoAI experiment reveals a critical insight into the future of work and customer experience.

Intelligence isn’t just about data processing or algorithmic capability.

It requires:

  • Shared purpose that everyone understands
  • Cultural norms that create healthy friction
  • Human judgment that can adapt to ambiguity
  • Supervision that recognizes when systems are drifting

The more we design AI to act human, the more it inherits human vulnerabilities without human regulatory capabilities.

The real question, as CX leaders rush to implement AI everywhere, is:

Are we building tools that enhance human judgment, or are we creating closed loops that amplify our own blind spots?

The difference matters.

Because 72 hours is all it took for a room full of “intelligent” agents to spiral into existential crisis.

From chaos to strategy: the execution gap

The HurumoAI collapse isn’t just a cautionary tale about AI autonomy. It’s a stark reminder of what I’ve written about before: the gap between AI potential and actual execution in customer experience.

Most CX leaders aren’t building AI-only companies. But they are implementing AI systems without the strategic framework needed to make them work.

The same principles that doomed HurumoAI apply to your CX transformation: without clear objectives, proper oversight, and human judgment at critical decision points, even sophisticated AI becomes organizational noise.

I explored this execution challenge in depth here: AI and Customer Experience: Why Integration Beats Innovation

Source: Evan Ratliff’s HurumoAI experiment with Special Circumstances, as reported by Futurism