As the technological landscape evolves, artificial intelligence (AI) is becoming increasingly integrated into cybersecurity, raising questions about its role in both defending against and perpetrating cyber threats. During the recent Google Cloud Next 2024 event, this dual nature of AI was a key topic of discussion, emphasised by new security-centric offerings.
AI’s expanding role in security
Kevin Mandia, CEO of Mandiant, which Google acquired in September 2022, shared insights on the pivotal role AI is playing in Google’s security strategy. He explained that AI is now a common element utilised by both protectors and adversaries in cybersecurity: “AI is another technology that’s coming along that good people will use, and bad people will use—it’s just another tool in the toolbox now.”
Google Cloud Next 2024 spotlighted several advancements leveraging AI through Mandiant’s services. Notably, Gemini in Threat Intelligence, part of the new Gemini in Security platform, allows users to engage in conversational search to swiftly pinpoint details about existing threats or malicious actors. It also supports researchers by automating the collection and summarisation of open-source intelligence, streamlining the response to cyber threats.
Additionally, Gemini in Security Operations employs natural language processing to articulate key findings to security administrators, offering guided steps to mitigate detected threats through user-friendly instructions and prompts.
Humans and AI: A collaborative future
Despite AI’s growing capacity to handle significant aspects of threat detection, Mandia emphasised the continuing necessity for human involvement in cybersecurity. He remarked on the evolving innovation cycle where AI supplements rather than replaces human intelligence: “You’ll always need cybersecurity folks, and AI is the sidecar to that for now.”
Highlighting the acceleration in training and response capabilities afforded by AI, Mandia noted, “We can take someone who’s only been doing security for half a year and make them way faster and smarter,” indicating that AI could rapidly scale defences across businesses of varying sizes.
However, challenges remain, particularly in defining ‘normal’ behaviour within business operations so that anomalies can be identified accurately. Mandia pointed out how difficult it is to distinguish genuinely unusual activity, even when most business processes are repetitive and consistent.
Additionally, the increasing proficiency of AI in voice and video spoofing has drawn heightened scrutiny, prompting calls for more robust regulations to mitigate these emerging threats. “Folks that do a lot of business by voice are going to have to start looking into what can be faked, and what can be done about it,” Mandia explained, underscoring the need for ongoing improvements in defensive tactics against such deceptions.
In conclusion, while AI continues to make strides in data analysis and learning, the synergy between human expertise and AI capabilities is crucial for developing a comprehensive approach to thwart cyber attacks. Mandia maintains that, for now, the balance between human-operated security measures and AI-driven enhancements is essential: “Security is too important to just remove a gating factor without knowing and ensuring that whatever you’re replacing it with works.”