Cybercriminals are constantly finding new ways to exploit human trust – and the latest weapon in their arsenal? AI chatbots and large language models (LLMs) that have become embedded in everyday platforms. These tools can be innocuous helpers, but in the wrong hands they become potent vehicles for malvertising campaigns.
The Rise of “Grok” & the Scam Wave
Recently the AI model known as Grok (used on Platform X) was manipulated by malicious actors to amplify phishing and malware links. Here’s how the attack unfolded: cybercriminals posted video-card style content on the platform that bypassed normal link filters. Then they engaged Grok by asking questions like, “Where is this video from?” The model scanned the post, spotted the hidden malicious link, and presented it in its answer – effectively turning Grok into a trusted propagator of scams.
Why This Technique Works
- Exploiting trust: Grok is viewed as a reliable assistant, so when it recommends content, many users follow. Cybercriminals thrive on this trust.
- Scale & reach: These paid video posts reach millions of views. When Grok re-posts, the scam link receives massive amplification.
- Bypassing defences: Traditional ad filters and link blockers were circumvented, since the malicious link was embedded in metadata or small “from” fields, not in the main body of the post.
Prompt Injection: A Growing Threat
This campaign is a specific example of a broader attack type known as prompt injection. In plain terms, it means giving an AI assistant instructions hidden inside legitimate-looking content so that when a user asks a question, the system inadvertently executes harmful commands. Because Grok and other chatbots rely on public data, they can be poisoned with malicious instructions embedded in metadata, images, forums or even hidden text.
How Cybercriminals Take Advantage
- A video post containing a link disguised as legitimate.
- “Grok” is asked a casual question that leads it to read the post and include the malicious link in its response.
- The answer is then trusted by users, clicked, and leads to credential-stealing forms, malware downloads, or identity theft.
- Cybercriminals repeat this process across multiple accounts, maximizing reach and increasing domain reputation for the malicious URLs.
What You Can Do to Stay Safe
- Never assume AI is trustworthy. Even a tool like Grok can be manipulated or tricked into spreading scams.
- Hover before you click. Check where links really go, especially if they’re recommended by chatbots.
- Use strong passwords and MFA. Even if a credential-stealing form captures your login, multi-factor authentication helps block account take-over.
- Keep software updated. Patch your devices so malware can’t exploit known vulnerabilities.
- Use layered security. Relying on a trusted security suite helps block not just known threats but emerging tactics used by cybercriminals.




