As generative AI extends beyond creating artificial images into practical applications, Google is turning its AI capabilities towards cybersecurity. The tech giant announced a new initiative, Google Threat Intelligence, aimed at simplifying and strengthening cybersecurity for organisations.
In a recent blog post, Google unveiled its latest cybersecurity product, Google Threat Intelligence, which combines the expertise of its Mandiant cybersecurity unit with threat data from VirusTotal. Central to the product is the Gemini 1.5 Pro large language model, which Google claims significantly speeds up the analysis and reverse engineering of malware. For instance, Gemini 1.5 Pro analysed the decompiled code of WannaCry in just 34 seconds and identified its kill switch. That ransomware severely impacted hospitals, companies, and other organisations worldwide in 2017.
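As a rough illustration of this kind of LLM-assisted triage, the sketch below sends decompiled code to Gemini 1.5 Pro through Google's public google-generativeai Python SDK. The prompt, placeholder API key, and input file name are illustrative assumptions; Google Threat Intelligence's internal malware-analysis pipeline is not public.

```python
# Sketch: LLM-assisted malware triage via Google's public
# google-generativeai SDK (pip install google-generativeai).
# This is NOT Google Threat Intelligence's internal pipeline.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key, an assumption
model = genai.GenerativeModel("gemini-1.5-pro")

def analyse_sample(decompiled_code: str) -> str:
    """Ask the model to summarise behaviour and flag kill-switch logic."""
    prompt = (
        "You are a malware analyst. Summarise what this decompiled code "
        "does and point out any kill-switch or evasion logic:\n\n"
        + decompiled_code
    )
    return model.generate_content(prompt).text

if __name__ == "__main__":
    # In practice the input would come from a disassembler or decompiler;
    # the file name here is purely hypothetical.
    with open("sample_decompiled.c") as f:
        print(analyse_sample(f.read()))
```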
Broader applications and preemptive measures
Beyond decoding malware, Gemini can condense lengthy threat reports into digestible, natural-language summaries through the Threat Intelligence interface. This feature is designed to help companies gauge the potential impact of a threat accurately, avoiding both overreaction and underestimation. Google Threat Intelligence also draws on a broad network of information sources, sharpening its ability to monitor and preempt potential threats. The Mandiant team, known for its pivotal role in exposing the 2020 SolarWinds cyber attack, contributes human expertise in tracking malicious groups and consulting on defence strategies, while the VirusTotal community continuously updates threat indicators.
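VirusTotal already exposes these community-maintained indicators through a public API. As a small example, the official vt-py client below looks up the aggregate engine verdicts for a file hash; the hash shown is the harmless EICAR test file, and the API key is a placeholder.

```python
# Sketch: pulling community threat indicators from VirusTotal with the
# official vt-py client (pip install vt-py). Requires a VirusTotal API key.
import vt

client = vt.Client("YOUR_VT_API_KEY")  # placeholder key, an assumption
try:
    # The hash below identifies the harmless EICAR test file.
    file_obj = client.get_object("/files/44d88612fea8a8f36de82e1278abb02f")
    # last_analysis_stats aggregates per-engine verdicts,
    # e.g. {'malicious': 59, 'undetected': 8, ...}
    print(file_obj.last_analysis_stats)
finally:
    client.close()
```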
Secure AI Framework and future directions
Recognising the vulnerabilities inherent in AI systems themselves, Google plans to use Mandiant’s expertise to assess and fortify AI-related security defences. Part of this effort is the Secure AI Framework, under which Mandiant will evaluate the defences of AI models and assist in red-teaming exercises to uncover weaknesses. One notable threat to these systems is “data poisoning,” in which attackers plant malicious code or data in the data sets an AI model ingests, degrading or subverting its responses.
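To make the idea concrete, the toy sketch below simulates one simple form of poisoning, label flipping, on a synthetic scikit-learn dataset. Real-world attacks on production training data are far subtler; this example is purely illustrative.

```python
# Toy illustration of data poisoning via label flipping. Real attacks on
# AI training data are far more subtle; this only shows the basic effect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on (possibly poisoned) labels, score on the clean test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print(f"clean accuracy:    {train_and_score(y_train):.3f}")
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

Even this crude attack measurably degrades test accuracy; more careful poisoning can implant targeted backdoors while leaving aggregate accuracy largely intact.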
While Google is not alone in pairing AI with cybersecurity (Microsoft has introduced Copilot for Security, powered by GPT-4 and a cybersecurity-specific AI model), the effectiveness and long-term viability of these AI applications are still being evaluated. Even so, using AI for more than generating visuals marks a significant shift towards practical, impactful applications in the tech industry.