OpenAI’s Disrupting Malicious Uses of AI: June 2025 report offers revealing insight into how generative AI is being exploited in coordinated efforts by criminal groups, private companies, and government-affiliated actors.
I’ve chosen to highlight three operations – from North Korea, the Philippines, and Cambodia – because they demonstrate how large language models (LLMs) are being used as tools across distinct threat areas: cyber intrusion, influence campaigns, and fraud.
Deceptive Employment Scheme (DPRK-linked):
– Actors used ChatGPT to generate fake résumés, interview scripts, and cover letters to infiltrate remote IT jobs, primarily in the U.S.
– They researched tools such as Tailscale VPN and OBS Studio to bypass identity checks and simulate “live” remote work.
– In some cases, U.S.-based individuals were recruited to receive corporate laptops that were remotely accessed, indicating preparations for a deeper compromise.
Operation “High Five” (Philippines):
– A domestic influence campaign linked to a local PR firm used ChatGPT to generate political comments supporting President Marcos and targeting VP Duterte.
– Content – short, upbeat, often emoji-filled – was posted across TikTok and Facebook. Dozens of fake accounts amplified identical videos with tailored captions.
– Real engagement was low, but the goal was clear: distort public perception and simulate grassroots support.
Operation “Wrong Number” (Cambodia):
– This task scam used SMS, WhatsApp, and Telegram to offer high pay for trivial tasks, such as liking social media posts.
– AI translated between Chinese and six languages, generated fake recruiter dialogues, and scripted group chats showing fake earnings.
– Victims were later pressured into deposits or crypto transfers.
– Multiple fake companies were identified, suggesting a broader, coordinated network.
These operations show once more how AI is rapidly becoming a tool of statecraft, used not only by traditional intelligence services but also by proxy actors, commercial front companies, and influence-for-hire networks.
State-affiliated actors well beyond the usual suspects are quick to adopt these tools.
For those of us working at the intersection of cybersecurity and international policy, this is a clear signal: AI is reshaping the tactics and tempo of cyber competition and affecting both national security and our everyday lives.
Understanding how these tools are deployed in gray zone operations is now essential to interpreting intent, attribution, and escalation in the global cyber domain.