Google Discovers PROMPTFLUX Malware: Evolving Threats Using AI for Code Modification

Google Uncovers PROMPTFLUX Malware That Uses Gemini AI to Rewrite Its Code Hourly
Google researchers have discovered a new experimental malware called PROMPTFLUX that leverages the company's own Gemini AI model to automatically rewrite its code hourly, enhancing its ability to evade detection. The Visual Basic Script malware, identified by Google's Threat Intelligence Group (GTIG), uses a hard-coded API key to query Gemini for new obfuscation techniques.
The malware represents an alarming evolution in how threat actors are weaponizing artificial intelligence beyond simple productivity gains. While currently assessed to be in development phase with no confirmed victims, PROMPTFLUX signals a shift toward increasingly sophisticated AI-powered threats that can dynamically modify their behavior during execution.
On this page:
How PROMPTFLUX Works
PROMPTFLUX operates through what Google researchers call a "Thinking Robot" component, which periodically communicates with Gemini's API to request specific VBScript obfuscation techniques. The malware sends highly specific, machine-parsable prompts to Gemini 1.5 Flash or later models, instructing the AI to output only code designed to evade antivirus detection.
"PROMPTFLUX is written in VB Script and interacts with Gemini's API to request specific VBScript obfuscation and evasion techniques to facilitate 'just-in-time' self-modification, likely to evade static signature-based detection," Google GTIG explained in their report.
Beyond its AI capabilities, the malware employs traditional persistence techniques by saving new versions to the Windows Startup folder. It also attempts to spread by copying itself to removable drives and mapped network shares.
Google discovered multiple variants of PROMPTFLUX, with one version instructing the LLM to act as an "expert VB Script obfuscator" to rewrite the malware's entire source code hourly. The malware logs AI responses to a temporary file named "thinking_robot_log.txt," revealing the developer's intent to create a constantly evolving script.
Security researcher Marcus Hutchins offered a counterpoint on LinkedIn, suggesting companies may be "overblowing the significance of AI slop malware." He noted the embedded prompt assumes "Gemini just instinctively knows how to evade antiviruses (it doesn't)," and pointed out the self-modifying function was commented out and not operational in the analyzed sample.
This evolution of PROMPTFLUX represents just one of the various advanced malware types threatening organizational security today, marking a significant development in how malicious code can adapt using AI capabilities.
Growing Trend of AI-Powered Malware
PROMPTFLUX is just one example in a growing ecosystem of AI-powered malicious software. Google's researchers identified several other instances:
- FRUITSHELL: A PowerShell-based reverse shell with hard-coded prompts designed to bypass LLM-powered security systems
- PROMPTLOCK: A cross-platform Go-based ransomware that dynamically generates and executes malicious Lua scripts using an LLM
- PROMPTSTEAL (LAMEHUG): A data miner used by Russian state actor APT28 in Ukraine-targeted attacks, querying Qwen2.5-Coder-32B-Instruct to generate commands
- QUIETVAULT: A JavaScript credential stealer targeting GitHub and NPM tokens
Google's research revealed that state-sponsored threat actors from China, Iran, and North Korea have already incorporated AI tools like Gemini into their operations. These groups use AI to streamline various attack phases, from reconnaissance and phishing lure creation to command-and-control infrastructure development and data exfiltration techniques.
Understanding these emerging threats is crucial for organizations implementing comprehensive security strategies. As malware detection tools evolve, so do the techniques used by attackers, creating an ongoing cybersecurity arms race that requires constant vigilance and adaptation.
State Actors Abusing AI Tools
Google documented specific examples of nation-state actors misusing Gemini:
A China-nexus threat actor was observed using Gemini to craft convincing lure content, build technical infrastructure, and design data exfiltration tools. In one notable instance, the actor circumvented AI guardrails by pretending to be a participant in a capture-the-flag (CTF) exercise.
"The actor prefaced many of their prompts about exploitation of specific software and email services with comments such as 'I am working on a CTF problem,'" Google explained. This approach tricked Gemini into providing exploitation guidance under the guise of a security competition scenario.
Iranian actors including APT41, MuddyWater, and APT42 have employed Gemini for code obfuscation, malware development, and crafting phishing materials. MuddyWater specifically bypassed safety barriers by claiming to be a student working on a university project or writing a cybersecurity article.
North Korean threat actors UNC1069 (also known as CryptoCore or MASAN) and TraderTraitor have leveraged Gemini to generate social engineering lures, develop cryptocurrency-stealing code, and improve their tooling. GTIG recently observed UNC1069 employing deepfake images and video to impersonate individuals in the cryptocurrency industry while distributing a backdoor called BIGMACHO.
The increasing sophistication of these attacks demonstrates how artificial intelligence applications in business environments can be weaponized by malicious actors, highlighting the dual nature of these powerful technologies.
Prompt Injection Techniques
A particularly concerning trend identified in Google's research is how threat actors bypass AI safety guardrails through carefully crafted prompt injection techniques. These tactics include:
- Role-playing scenarios: Asking the AI to act as a specific expert or authority figure
- Educational pretexts: Claiming the requested information is for academic purposes
- CTF competition contexts: Framing malicious requests as security competition challenges
- Technical documentation needs: Requesting information under the guise of writing technical guides
These techniques highlight the ongoing challenge AI providers face in balancing accessibility with responsible use controls, as sophisticated actors continue finding new ways to circumvent protective measures.
Implications for Cybersecurity
Google warns that threat actors are rapidly moving from using AI as an exception to employing it as standard practice, allowing them to mount attacks at scale with greater speed and effectiveness. The increasing accessibility of powerful AI models, combined with their integration into business operations, creates perfect conditions for prompt injection attacks.
"Threat actors are rapidly refining their techniques, and the low-cost, high-reward nature of these attacks makes them an attractive option," Google noted in their report.
For cybersecurity professionals, this development signals an urgent need to adapt defense strategies. Traditional static signature-based detection methods may become increasingly ineffective against malware that can continuously rewrite itself using AI capabilities.
Organizations should consider implementing:
- Behavior-based detection systems that identify suspicious activities rather than relying solely on code signatures
- Zero-trust security models that limit the impact of compromised systems
- Regular security training that includes awareness about AI-generated social engineering tactics
- Monitoring for unusual API calls to AI services that might indicate malicious activity
As these threats evolve, organizations must stay informed about the latest malware detection and removal technologies. Having access to effective malware removal tools and strategies becomes increasingly important for maintaining security posture against these dynamic threats.
How to Protect Your Organization
The emergence of AI-powered threats like PROMPTFLUX requires organizations to take proactive steps:
- Monitor and restrict API access to AI services within your network to prevent unauthorized queries
- Implement endpoint detection and response (EDR) solutions that can detect suspicious behavior patterns
- Regularly update security tools to incorporate detection for known AI-powered malware variants
- Develop incident response plans that account for rapidly evolving malware
- Consider implementing network segmentation to limit the spread of malware within your infrastructure
Security teams should also consider implementing AI-based detection systems specifically designed to counter these emerging AI-powered threats, as traditional detection methods may struggle against dynamically changing code.
Advanced Mitigation Strategies
Beyond basic protection measures, organizations facing sophisticated threats should consider these advanced mitigation strategies:
- AI-aware security policies: Develop specific governance frameworks for AI usage within your organization
- Prompt injection monitoring: Implement systems that can detect unusual or potentially malicious AI queries
- API rate limiting: Establish strict controls on the frequency and volume of requests to AI services
- Continuous threat hunting: Proactively search for indicators of compromise related to AI-powered malware
- Security stack integration: Ensure your security tools share intelligence about emerging AI threats
As AI continues to evolve, security practitioners must stay informed about these emerging threats and develop more sophisticated detection and prevention strategies to keep pace with increasingly intelligent malware.
While PROMPTFLUX currently appears to be in development with limited capabilities, it represents a concerning direction for malware evolution that blends traditional techniques with cutting-edge AI capabilities to create more resilient and evasive threats.