AI Traffic Is Up, and So Is Security Risk
Enterprise users are diving into AI applications even as they face a wave of new AI-based cyberthreats.
That’s a core finding of the 2024 AI Security Report issued by the ThreatLabz security research team at Zscaler (Nasdaq: ZS). Using the Zscaler Zero Trust Exchange cloud security platform, the group tracked 18.09 billion AI and machine learning (ML) transactions from April 2023 to January 2024. Here are some findings:
- Enterprise use of AI/ML tools shot up 594.82%, going from 521 million AI/ML-driven transactions in April 2023 to 3.1 billion in January 2024.
- Most of this traffic was generated by a small number of AI tools. Topping the list was ChatGPT, which accounted for 52.23% of traffic during the timeframe, representing growth of 634.1%. The chatbot was the first choice for users in manufacturing, healthcare, financial services, education, and government. ChatGPT was also the most blocked of all AI applications.
- Enterprises are blocking 18.5% of all AI/ML transactions. That represents a 577% increase in blocked transactions over the timeframe, which Zscaler attributes to concerns about the security of AI data. Companies are also blocking AI-enriched domains such as bing.com.
- Manufacturing companies generated the most AI traffic. Within the timeframe, 20.9% of all AI/ML transactions passing through Zscaler's cloud came from industrial concerns. Finance and insurance companies came in second, with a general "services" category third.
- The U.S. and India led the world in AI/ML transactions within the timeframe, with U.S. enterprises accounting for 40.9% of all traffic and India for 16%.
Top AI applications by transaction volume. Source: Zscaler ThreatLabz 2024 AI Security Report
Here Comes Trouble
While enterprise AI use is skyrocketing, so are the threats associated with it. According to the report, organizations face exposure of intellectual property and confidential data as they build and use AI applications. And as AI applications proliferate, it's getting harder to control how the data they generate is used. There's also the risk of poor data quality, the classic "garbage in, garbage out" conundrum.
In addition to these threats, cyber-villains are using AI to manipulate public opinion with disinformation, to steal identities, and to create intricate phishing, malware, and ransomware schemes. Zscaler gives a chilling example in the report:
“In a high-profile example, attackers using AI deepfakes of a company CFO convinced an employee at a Hong Kong-based multinational firm to wire the equivalent of US$25 million to an outside account. While the employee suspected phishing, their fears were calmed after joining a multi-person video conference that included the company CFO, other staff, and outsiders. The call’s attendees were all AI fakes.”
Zscaler’s report reflects the company’s broader ambition to stake out a position in the emerging GenAI security market. The company has been adding AI to its zero trust network access (ZTNA) platform and recently bought Avalor, an Israeli startup specializing in AI-driven security, for a reported $310 million to bolster its AI capabilities.
Zscaler's not alone. Rivals such as Cisco, Fortinet, and Palo Alto Networks are also touting the AI in their security products. As AI traffic increases, expect that trend to continue.