Artificial intelligence (AI) is revolutionizing numerous fields, from science to cybersecurity, and for digital defense, as it empowers defenders and enhances our collective security. However, the same capabilities that make AI a powerful tool for good can also be exploited by malicious actors. While much of the discussion around AI misuse remains theoretical, Google's Threat Intelligence Group (GTIG) has provided a comprehensive analysis of how threat actors are interacting with Google's AI-powered assistant, Gemini. This article dives into GTIG's findings, shedding light on the current reality of AI misuse and its implications for cybersecurity.
The GTIG combines decades of experience tracking threat actors and protecting Google, its users, and customers from various threats, including:
Their analysis aims to understand how threat actors use AI in their operations and whether this activity represents novel AI-enabled attack techniques.
GTIG's research, consistent with industry peers, suggests that while AI can be a valuable tool for threat actors, it is not yet the "game-changer" it's sometimes portrayed to be. The key findings include:
While not enabling disruptive changes, generative AI allows threat actors to move faster and at higher volumes. For skilled actors, AI tools provide a helpful framework similar to the use of Metasploit or Cobalt Strike. For less skilled actors, they offer a learning and productivity tool, enabling them to develop tools and incorporate existing techniques more quickly.
Attackers can use LLMs in two primary ways:
GTIG observed low-effort experimentation with publicly available jailbreak prompts to bypass Gemini's safety controls. These attempts, involving copying and pasting prompts with minor variations, were largely unsuccessful. A jailbreak is a type of Prompt Injection attack, where malicious instructions cause an AI model to behave in ways it was trained to avoid.
GTIG's analysis revealed that APT groups from over 20 countries used Gemini. Here's a breakdown by country:
Google emphasizes the importance of collaboration between the private sector, governments, educational institutions, and other stakeholders to maximize AI's benefits while reducing the risks of abuse. They are committed to responsible AI development, guided by their principles, and share resources and best practices to enable responsible AI development across the industry. Google also continuously improves its AI models to make them less susceptible to misuse.
While AI misuse is not yet creating novel capabilities for threat actors, it is accelerating their operations and increasing their efficiency. As the AI landscape evolves, GTIG anticipates the threat landscape to evolve in stride, with threat actors adopting new AI technologies in their operations. This highlights the ongoing need for vigilance, collaboration, and responsible AI development to mitigate potential risks and ensure a safer digital environment.