AI-Enabled Cybercrime: Beyond the Hype (#1)

Recently, AI has been generating significant discussion, particularly regarding its implications for cybersecurity, cyberattacks, and cybercrime.

Alongside our ongoing research at the University of California, Berkeley, with the support of Fortinet (details in the first comment), several insightful reports have been released that analyze claims about how GenAI is fundamentally transforming cybercrime and examining current trends.

One notable report is “Cybersecurity in the Age of Generative AI Joint Analytic Report: Combating GenAI-Assisted Cyber Threats,” published by the Cyber Threat Alliance.

When addressing the question, “How are malicious actors using GenAI tools?” the report provides some compelling insights:

Malicious actors can exploit GenAI in four formats: text, audio, video, and images. While these formats are not new avenues for cyber threats, GenAI enhances the speed and efficiency of malicious activities within them.

Rather than utilizing AI for novel activities, malicious actors primarily employ GenAI tools to enhance the efficiency and effectiveness of existing activities. GenAI serves as either a “force multiplier,” boosting the effectiveness of an attack, or a “participation enabler,” reducing the barriers to engaging in such activities.

**GenAI in scams:**
One of the most significant ways that malicious actors use GenAI for scams is through deepfakes, especially audio deepfakes. These pose a growing risk in financial fraud, as they can convincingly imitate voices to mislead victims.

**GenAI in misinformation and disinformation:**
These tactics serve various motives, including political agendas, financial gain, and personal rivalries. Deepfakes are among the most commonly exploited techniques in this context. The widespread availability of AI-powered editing tools has significantly lowered the technical skills required to create manipulated content, allowing not only state actors and organized criminals but also everyday individuals to fabricate misleading content.

**GenAI for personal and reputational harm:**
GenAI’s accessibility has empowered individuals who previously would not be considered “malicious actors” to misuse the technology. Non-technical users can now create deepfakes intended to damage personal and professional reputations, demonstrating how AI-driven manipulation increasingly facilitates targeted attacks.

While AI tools have improved the efficiency of certain malicious activities, such as disinformation campaigns, they have not fundamentally altered most attack methods.

And I would add — not yet.
In the not-too-distant future, it will be interesting to watch how cyberattacks and cybercrimes committed with AI agents develop and grow more sophisticated.

If you are interested in learning more about our AI-enabled cybercrime project, please feel free to contact me.

#cybercrime #AI
Center for Long-Term Cybersecurity
Berkeley Risk and Security Lab
