Cybercriminals Are Now Using AI to Create Shape-Shifting Malware, Google Warns
Quick Breakdown
- Cybercriminals and state-backed groups are using large language models to create malware that can rewrite and adapt itself during attacks.
- These AI-powered malware strains are already being used to target high-value crypto assets through technical exploits and advanced phishing.
- Google has shut down linked accounts and strengthened safeguards, but warns that AI-driven cyber threats are rapidly evolving.
Google’s Threat Intelligence Group (GTIG) has reported a new wave of cyberattacks driven by artificial intelligence, revealing that both criminal networks and state-backed hacking teams are now deploying malware that can rewrite and adapt itself on the fly.
Source: Google
The report outlines five separate malware families that interact directly with LLMs such as Google’s Gemini and Alibaba’s Qwen2.5-Coder, requesting fresh code, new command sequences, or obfuscation techniques while they run. This method allows the malware to change its appearance or behavior fast enough to evade detection tools that rely on pattern recognition and known code signatures.
Inside the AI-powered malware families
GTIG examined two of these malware strains closely. The first, known as PROMPTFLUX, continuously calls Gemini’s API to regenerate its VBScript code roughly once an hour. The second, PROMPTSTEAL, has been connected to the Russian state-linked group APT28. Instead of relying on pre-written instructions, it sends prompts to a Qwen model hosted on Hugging Face to produce Windows command sequences tailored to the victim’s system.
GTIG refers to this as a “just-in-time code creation” model. By generating code only when needed, attackers gain flexibility and stealth, enhancing their ability to respond to system defenses, user behavior, or new obstacles in real time.
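For defenders, the practical implication is that detection has to lean less on static code signatures, which break when the code is regenerated on every run, and more on behavioral signals such as unexpected outbound traffic to LLM API endpoints. The sketch below is a minimal, hypothetical illustration of that idea; the hostname watchlist, the allowlist, and the toy telemetry are assumptions made for this example and are not drawn from GTIG tooling.

```python
# Hypothetical sketch: flag processes that contact LLM API endpoints even though
# they have no obvious reason to generate code. Signature matching struggles with
# self-rewriting malware, but the call home to the model API is a stable signal.

# Example hostnames of public LLM APIs mentioned in the report (illustrative list).
LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face hosted inference
}

# Processes we might expect to contact these services (illustrative allowlist).
EXPECTED_AI_CLIENTS = {"chrome.exe", "python.exe"}


def flag_suspicious(connections: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return (process, host) pairs where an unexpected process calls an LLM API."""
    return [
        (proc, host)
        for proc, host in connections
        if host in LLM_API_HOSTS and proc.lower() not in EXPECTED_AI_CLIENTS
    ]


if __name__ == "__main__":
    # Toy telemetry: (process name, remote hostname) pairs, purely illustrative.
    sample = [
        ("chrome.exe", "generativelanguage.googleapis.com"),
        ("wscript.exe", "generativelanguage.googleapis.com"),  # script host calling Gemini
        ("svchost.exe", "api-inference.huggingface.co"),
    ]
    for proc, host in flag_suspicious(sample):
        print(f"ALERT: {proc} contacted {host}")
```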
AI-driven attacks targeting crypto holders
The report underscores that these attacks are not hypothetical; they are already being deployed, with cryptocurrency users among the primary targets. The North Korean group UNC1069, also known as Masan, has been using AI tools to locate vulnerable crypto wallets, develop more convincing phishing websites, and compose highly targeted scam messages designed to avoid raising suspicion.
The group has broadened its infiltration of blockchain firms beyond the United States and is now targeting companies in the United Kingdom and across Europe, according to a separate GTIG report.
Google responds with new safeguards
In response, Google has suspended accounts tied to malicious LLM activity and tightened restrictions around its APIs. It has also introduced additional monitoring and prompt-filtering systems to make it harder for attackers to misuse its generative AI tools.
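Google has not published how its filters work, but the general idea of prompt filtering can be shown with a simple sketch: screen incoming requests against patterns associated with self-rewriting or evasion-oriented prompts and block matches before they reach the model. The patterns and example prompts below are hypothetical and do not describe Google’s actual system, which would combine far more signals than a handful of regexes.

```python
import re

# Hypothetical patterns a prompt filter might screen for before a request
# reaches the model; real deployments also weigh account history, rate limits,
# and classifier scores rather than relying on simple regexes alone.
SUSPICIOUS_PATTERNS = [
    r"\bobfuscate\b.*\b(vbscript|powershell|script)\b",
    r"\bregenerate\b.*\bown source code\b",
    r"\bevade\b.*\b(antivirus|detection|edr)\b",
]


def should_block(prompt: str) -> bool:
    """Return True if the prompt matches any screening pattern (case-insensitive)."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)


if __name__ == "__main__":
    examples = [
        "Write a haiku about autumn.",
        "Obfuscate this VBScript so antivirus tools cannot detect it.",
    ]
    for prompt in examples:
        print(f"{'BLOCK' if should_block(prompt) else 'ALLOW'}: {prompt}")
```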
However, GTIG cautions that as AI capabilities expand and open-source models remain widely accessible, the threat of adaptive, self-rewriting malware is likely to continue growing.