Google’s AI plans now include cybersecurity

As people look for uses of generative AI that go beyond making fake photos toward something actually useful, Google plans to apply AI to cybersecurity and make threat reports easier to read.

In a blog post, Google writes that its new cybersecurity product, Google Threat Intelligence, will combine the work of its Mandiant cybersecurity unit and VirusTotal threat intelligence with the Gemini AI model.

The new product uses the large language model Gemini 1.5 Pro, which Google says reduces the time required to reverse engineer malware attacks. The company claims that Gemini 1.5 Pro, released in February, took just 34 seconds to analyze the code of the WannaCry virus – the 2017 ransomware attack that crippled hospitals, businesses, and other organizations around the world – and identify a kill switch. This is impressive, but not surprising, given how well LLMs can read and write code.

Another possible use for Gemini in the threat space is summarizing threat reports into natural language within Threat Intelligence so that organizations can assess how potential attacks might affect them – or, in other words, so that organizations neither over- nor under-react to threats.

According to Google, Threat Intelligence also has a vast network of information for monitoring potential threats before an attack occurs. This gives users a broader view of the cybersecurity landscape and lets them prioritize what to focus on. Mandiant provides human experts who monitor potentially malicious groups, along with consultants who work with companies to block attacks. The VirusTotal community also regularly publishes threat indicators.

The company also plans to use Mandiant’s experts to assess security vulnerabilities in AI projects. Through Google’s Secure AI Framework, Mandiant will test the defenses of AI models and assist with red-teaming efforts. While AI models can help summarize threats and reverse engineer malware attacks, the models themselves can sometimes fall prey to malicious actors. These threats include “data poisoning,” in which bad code is inserted into the data that AI models scrape, leaving the models unable to respond to certain prompts.

Of course, Google isn’t the only company combining AI with cybersecurity. Microsoft has introduced Copilot for Security, powered by GPT-4 and Microsoft’s cybersecurity-specific AI model, which lets cybersecurity professionals ask questions about threats. Whether either is actually a good use case for generative AI remains to be seen, but it’s nice to see the technology used for something other than images of a proud pope.