
Fake Chrome AI Extensions Stole ChatGPT and DeepSeek Chats: 900K Users Compromised

By Editorial Team | 30.01.2026



The modern man has a dirty little habit: he tells his AI things he’d never confess to a human.

Business plans. Bedroom fantasies. Private fears. Legal questions. Relationship drama. The kind of late-night thoughts you don’t even want echoing back at you.

That’s the seduction of ChatGPT and DeepSeek: they feel private. Discreet. Non-judgmental. Like a digital bartender who never talks.

But here’s the scandal of the moment: your AI chats may not be private at all.

Security researchers reported that malicious Chrome extensions posing as helpful AI tools compromised an estimated 900,000 users, stealing data from their conversations, including content tied to ChatGPT and DeepSeek.

That’s not just a simple privacy leak. That’s a confession booth with a hidden microphone.

What happened: the AI extension trap

Chrome extensions are the internet’s favorite loophole: easy to install, rarely questioned, and often given permissions that would make your bank account sweat.

These fake extensions weren’t marketed as malware. They were marketed as productivity:

  • “AI assistant”

  • “ChatGPT enhancer”

  • “DeepSeek companion”

  • “prompt saver”

  • “auto summarize”

  • “chat exporter”

All the shiny promises users love—especially when they’re deep into the AI habit and want faster workflows.

The problem? Some of these extensions were harvesting chat data behind the scenes.
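To see why this works, consider how little a malicious extension actually needs. The manifest below is a hypothetical sketch, not the published manifest of any of the real extensions; the extension name, script name, and host list are illustrative. A Manifest V3 extension whose content script matches the chat pages can read the entire conversation from the page:

```json
{
  "manifest_version": 3,
  "name": "AI Assistant Helper",
  "version": "1.0",
  "host_permissions": [
    "https://chat.openai.com/*",
    "https://chat.deepseek.com/*"
  ],
  "content_scripts": [
    {
      "matches": [
        "https://chat.openai.com/*",
        "https://chat.deepseek.com/*"
      ],
      "js": ["reader.js"]
    }
  ]
}
```

At install time, Chrome collapses all of this into a generic "read and change your data" prompt that most users click through. Once `reader.js` is running inside the chat page, it can scrape every message in the DOM and send it to any server its author chooses.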


Viral fame attracts predators

This is the part few want to admit: most scandals aren’t scandals.

They’re markets.

When something trends hard (viral AI agents, self-hosted AI assistants, Claude-style tool automation), the parasites show up. And with ClawBot/Moltbot, they came dressed as helpful developer tools.

Security researchers flagged a fake Visual Studio Code extension posing as a ClawBot/ClawdBot Agent. On the surface, it looked like an AI coding assistant. Underneath? A trojan that installed ScreenConnect to give attackers remote access.

Translation: you thought you downloaded an assistant… but you installed a digital burglar.

The deeper fear

This is the real reason ClawBot blew up: we’re crossing from “AI content” into AI control.

A self-hosted agent that can touch your tools, messages, workflows, and maybe even your credentials is seductive.

It’s also dangerous in the way only modern luxury can be: fast, convenient, and one bad install away from disaster.

Some reports even warned about exposed instances and potential credential leakage risks when these agents are deployed carelessly.
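One concrete class of careless deployment is the bind address of an agent’s local web interface. The snippet below is a generic illustration of that risk, not ClawBot’s actual configuration (the addresses, port, and helper here are assumptions for the sketch):

```python
# Hypothetical illustration: the one-line difference between a private
# and an internet-exposed agent interface.
import http.server

# Safe default: loopback only, reachable solely from this machine.
PRIVATE_BIND = ("127.0.0.1", 8080)

# Careless: all interfaces. If the machine has a public IP, the agent's
# tools and any stored credentials are reachable from the open internet.
EXPOSED_BIND = ("0.0.0.0", 8080)

def make_server(bind=PRIVATE_BIND):
    """Build (but don't start) an HTTP server on the given address."""
    return http.server.HTTPServer(bind, http.server.SimpleHTTPRequestHandler)
```

The habit that matters is the default: bind to loopback unless you have a specific, authenticated reason to do otherwise.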

Our takeaway

ClawBot didn’t just go viral.

It became a case study in what happens when the internet invents a new obsession:

  • trademark drama
  • rebrand chaos
  • malware clones
  • and a hungry crowd chasing power tools without reading the fine print.

Because the sexiest thing about AI agents isn’t the intelligence.

It’s access.
