Google confirms new AI security success
Google’s Cloud Next 2024 is coming to an end, but the news stories keep coming. One announcement that has received little attention so far may turn out to be the most important, at least from a user security perspective: Google is using custom large language models (LLMs) to protect Gmail users from harm.
Google announces evolution of Gmail security using AI
Even though we weren’t able to attend Cloud Next this year, Google insiders still briefed us on the most important updates. And I’m glad they did, because otherwise we might have missed a major security update for Gmail and Google Drive users.
According to Google, the core problem being addressed is that generative AI has improved so rapidly that it has “dramatically lowered the barrier to attack,” producing “high-quality phishing attacks at scale.” Google acknowledges that this has led to a sudden increase in such attacks. As you can imagine, gaining access to Gmail and Drive accounts is a priority for attackers, given the treasure trove of readily actionable data they contain. According to Google, the solution was technically difficult but conceptually simple: “We’ve built custom LLMs to help you fight back.” These LLMs, first introduced in late 2023, are now “seeing great results,” Google said.
These custom LLMs are trained on the “latest and worst spam and phishing” content, since identifying semantically similar content is precisely what LLMs excel at (a rough sketch of that matching technique follows the results below). Given Google Workspace’s user base of 3 billion people, “the results are very impactful and LLMs will continue to improve in this regard,” a Google spokesperson said.
- LLMs block 20% more spam in Gmail
- Google is seeing a 1,000% increase in user-reported Gmail spam every day
- 90% faster response times to new spam and phishing attacks on Drive
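Google hasn’t published how its custom LLMs work under the hood, but the general technique it describes — flagging mail that is semantically close to known-bad examples, rather than matching exact wording — can be illustrated with off-the-shelf tools. The sketch below is purely illustrative: the open-source sentence-transformers embedder, the tiny example corpus, and the 0.6 threshold are my assumptions, not anything Google has disclosed.

```python
# Illustrative sketch of semantic-similarity spam detection via embeddings.
# Assumption: sentence-transformers stands in for whatever Google actually uses.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedder

# Hypothetical stand-ins for the "latest and worst" phishing content.
known_phishing = [
    "Your account has been suspended, verify your password here immediately",
    "You have won a prize, click this link to claim your reward",
]
phishing_vecs = model.encode(known_phishing, normalize_embeddings=True)

def looks_like_phishing(message: str, threshold: float = 0.6) -> bool:
    """Flag a message whose embedding sits close to known phishing examples."""
    vec = model.encode([message], normalize_embeddings=True)[0]
    # With normalized embeddings, the dot product equals cosine similarity.
    similarity = float(np.max(phishing_vecs @ vec))
    return similarity >= threshold

# A reworded attack shares few exact words with the known examples
# but still lands near them in embedding space.
print(looks_like_phishing("Urgent: confirm your login details or lose access"))
```

The appeal of this approach is exactly the property Google highlights: a phishing message reworded to dodge keyword filters still sits close to its relatives in embedding space.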
The good side of the AI security fence
There’s been a lot of talk about Google’s Gemini LLM in recent months, and not all of it has been full of praise; in fact, quite the opposite. My colleague Zak Doffman, a respected contributor on privacy issues for Forbes, recently flagged concerns about Google’s AI-powered message helper. Doffman’s concerns come from a well-placed position of real-world knowledge about the privacy implications of AI, but many commentators have simply jumped on the “AI is bad” bandwagon. It is therefore reassuring to be able to report from the positive side of the security fence when it comes to generative AI LLMs.
According to Google, these AI-powered protections detect twice as much malware as standard third-party antivirus and security products and stop 99.9% of spam. Those are impressive numbers, but a Google spokesperson told me that “within Google Workspace, we’re very focused on innovation to address that last 0.1%.”
Delivering new AI security tools to 10 million paying customers
In addition to the built-in security advances for over 3 billion Google Workspace users and 10 million paying Gmail and Drive customers, Google also announced new optional AI security add-ons. In response to a common request from Workspace customers to protect sensitive information in their files, Google built a tool that automatically classifies and protects such data. As Google puts it, “Protecting obviously sensitive information is easy, but unexpectedly sensitive data is much harder to protect.” Demand for the tool is high because many customers currently perform the same classification manually. The new AI tools “find hidden pockets of sensitive data and recommend additional protections, which can be automatically implemented in a few easy clicks,” Google said. As for pricing, I’ve heard that the tool can be fine-tuned to any customer’s needs for $10 per user per month and can be added to most Workspace plans.
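Google hasn’t said how the add-on’s classification works internally. As a purely illustrative sketch of the distinction it draws, the Python below contrasts rule-based detection of “obviously sensitive” identifiers with a placeholder for the harder, context-dependent case; every pattern, function name, and example document here is hypothetical.

```python
# Illustrative contrast: obvious vs. "unexpectedly" sensitive data.
# Nothing here reflects Google's actual Workspace add-on internals.
import re

OBVIOUS_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_obvious(text: str) -> list[str]:
    """Flag well-structured identifiers that simple rules can find."""
    return [label for label, pattern in OBVIOUS_PATTERNS.items()
            if pattern.search(text)]

def classify_contextual(text: str) -> bool:
    """Placeholder for the hard case: data that is only sensitive in context,
    such as an internal codename. A real system would use a trained model
    here, not a keyword list."""
    return any(hint in text.lower() for hint in ("confidential", "codename"))

doc = "Project codename Falcon: payment card 4111 1111 1111 1111 on file."
print(classify_obvious(doc))     # ['credit_card']
print(classify_contextual(doc))  # True
```

The rule-based half is cheap and reliable for structured identifiers; the contextual half is where customers currently do manual work, and where Google says its new AI tools step in.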