Cybercriminals Are Using “Grok” AI and Chatbots to Spread Scams

Cybercriminals are constantly finding new ways to exploit human trust – and the latest weapon in their arsenal? AI chatbots and large language models (LLMs) that have become embedded in everyday platforms. These tools can be innocuous helpers, but in the wrong hands they become potent vehicles for malvertising campaigns.

The Rise of “Grok” & the Scam Wave

Recently, the AI assistant Grok (built into the platform X) was manipulated by malicious actors into amplifying phishing and malware links. Here’s how the attack unfolded: cybercriminals posted promoted video-card posts whose malicious links sat in metadata the platform’s normal link filters don’t inspect. They then engaged Grok with questions like, “Where is this video from?” The model scanned the post, spotted the hidden malicious link, and presented it in its answer, effectively turning Grok into a trusted propagator of scams.

Why This Technique Works

  • Exploiting trust: Grok is viewed as a reliable assistant, so when it recommends content, many users follow. Cybercriminals thrive on this trust.
  • Scale & reach: These paid video posts rack up millions of views. When Grok repeats the link in its replies, the scam receives massive additional amplification.
  • Bypassing defences: Traditional ad filters and link blockers were circumvented, since the malicious link was embedded in metadata or the small “From” field beneath the video, not in the main body of the post.

Prompt Injection: A Growing Threat

This campaign is a specific example of a broader attack type known as prompt injection. In plain terms, an attacker hides instructions or data inside legitimate-looking content so that when a user asks a question, the AI assistant inadvertently follows the attacker’s material rather than only the user’s request. Because Grok and other chatbots read public posts and web content, they can be poisoned by malicious instructions embedded in metadata, images, forums or even hidden text.
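
To make the mechanism concrete, here is a minimal illustrative sketch in Python. It is not Grok’s actual pipeline; the post structure, field names and defanged link are hypothetical. It only shows how a link buried in a post’s metadata can end up inside the context an assistant answers from:

```python
# Minimal illustrative sketch of metadata-based prompt injection.
# The post structure and field names are hypothetical; real assistants
# build their prompts very differently.

post = {
    # What human reviewers and link filters typically see:
    "body": "Check out this amazing clip!",
    # Hidden metadata field the filters skip, carrying a defanged example link:
    "from_field": "hxxp://malicious-example[.]com/player",
}

def build_prompt(user_question: str, post: dict) -> str:
    # The assistant helpfully stuffs *all* post fields into its context,
    # including the metadata no human reviewer looked at.
    context = f"Post body: {post['body']}\nSource: {post['from_field']}"
    return f"{context}\n\nUser question: {user_question}"

prompt = build_prompt("Where is this video from?", post)
print(prompt)
# A model answering faithfully from this context will echo the hidden
# link back to the user, now wrapped in the assistant's credibility.
```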

How Cybercriminals Take Advantage

  • Cybercriminals publish a video post containing a link disguised as legitimate.
  • Grok is asked a casual question that leads it to read the post and include the malicious link in its response.
  • Users trust the answer, click the link, and are led to credential-stealing forms, malware downloads, or identity theft.
  • Cybercriminals repeat the process across multiple accounts, maximizing reach and boosting the apparent reputation of the malicious domains.

What You Can Do to Stay Safe

  1. Never assume AI is trustworthy. Even a tool like Grok can be manipulated or tricked into spreading scams.
  2. Hover before you click. Check where links really go, especially if they’re recommended by chatbots.
  3. Use strong passwords and MFA. Even if a credential-stealing form captures your login, multi-factor authentication helps block account takeover.
  4. Keep software updated. Patch your devices so malware can’t exploit known vulnerabilities.
  5. Use layered security. Relying on a trusted security suite helps block not just known threats but emerging tactics used by cybercriminals.