Waymo, the Alphabet entity that was previously Google’s self-driving car project, recently filed a lawsuit against Otto, the self-driving truck company, and its parent company Uber for patent infringement and theft of trade secrets. Waymo claims that Anthony Levandowski, an engineer instrumental in building Google’s self-driving car project, downloaded 14,000 technical files from a company server before leaving Google. Waymo alleges that these files contained sensitive information that substantially reduced the time and expense of building the self-driving technology at Otto, a company Uber acquired in 2016.

This story would not have seen the light of day had it not been for a mistake by one of Waymo’s suppliers, who accidentally copied a Waymo employee on an email chain intended for the Uber team. The email carried an attachment containing a drawing of a circuit board for a Lidar system. Lidar, short for light detection and ranging, serves as the eyes of an autonomous vehicle by creating a 3D map of the area around the car.

After a forensic investigation of Levandowski’s former company computer and an analysis of correspondence between Otto, the Nevada Governor’s Office of Economic Development, and the DMV, Waymo claims that Otto is using Lidar sensors that bear a striking resemblance to its own, and that Levandowski misappropriated a crucial and highly confidential piece of the company’s intellectual property.

As the case plays out, we may or may not learn whether Anthony Levandowski actually stole proprietary information owned by Waymo. But the story is a wake-up call to companies about the threats posed by insiders acting maliciously or inadvertently, at a time when cloud services are used to exchange large volumes of sensitive data. According to Skyhigh’s Cloud Adoption and Risk Report, 18.1% of files uploaded by enterprise users to cloud-based file sharing and collaboration services contained sensitive data such as passwords, credit card numbers, social security numbers, health information, and confidential company IP such as product schematics or source code.
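
To make the detection side of this concrete, below is a minimal, hypothetical sketch of the kind of pattern-based scan a data loss prevention engine might run against uploaded files to flag such content. The patterns and function names are illustrative assumptions, not Skyhigh’s implementation; production DLP relies on validated detectors (Luhn checks for card numbers, keyword proximity, document fingerprinting) rather than bare regular expressions.

```python
import re

# Hypothetical, simplified detectors; a production DLP engine would use
# validated checks (e.g., Luhn validation for card numbers), keyword
# proximity, and document fingerprinting rather than bare regexes.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_text(text: str) -> dict:
    """Count matches per sensitive-data category in a file's text."""
    return {name: len(pattern.findall(text)) for name, pattern in PATTERNS.items()}

if __name__ == "__main__":
    sample = "password: hunter2\nEmployee SSN on file: 123-45-6789"
    print(scan_text(sample))  # {'credit_card': 0, 'ssn': 1, 'password': 1}
```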

As businesses increasingly treat data as a primary revenue driver, insider threats represent a clear and present danger. The average company experiences 10.9 insider threats each month. Here are some commonly observed examples of insider threats within large enterprises.

1. Exfiltration of company IP by departing employees

A Fortune 500 financial services company experienced an uptick in competitive losses when members of its sales teams moved on. A top 10 casino saw its ‘whales’, or high rollers, move on after it made staffing changes. Both are instances where company insiders downloaded confidential sales or customer data before leaving to join a competitor. 94% of companies experience at least one insider threat each month, in which insiders either deliberately or unknowingly put company data at risk by exposing it to unauthorized access. As the Google-Uber lawsuit shows, insider threats can have a significant impact on a company’s market standing and competitive position.

2. Misuse of privileges by administrators

The adverse impact of a rogue administrator is amplified by their unfettered access to data and systems within their applications. Yet, according to a Ponemon survey, 49% of respondents have no policies for assigning privileged user access. While practices are improving, a large number of companies remain vulnerable to this threat. In 2013, a US-based energy company lost three quarters of its revenue, half of its workforce, and more than $1 billion in market value after a privileged user was enticed by a foreign company to steal source code and other IP.

3. Corporate data uploaded to risky shadow services

The average company uses 1,427 cloud services, and only 8.1% of them meet enterprise data security and privacy requirements. Unlike IT departments, which vet applications before approving them, regular employees usually do not examine security controls before adopting a cloud service, and they end up uploading corporate data to high-risk services just to get their jobs done. A large conglomerate faced this problem when members of its legal team uploaded contracts to online PDF converters, applications that are high risk for multiple reasons, including terms of service that claim complete ownership of all uploaded documents and the right to distribute the data to any third party.

4. Unauthorized data access via compromised employees’ credentials

In an era where breaches have become commonplace, obtaining compromised credentials is shockingly simple. Per recent Skyhigh research, 92% of companies have stolen credentials for sale on the darknet. While this is not an insider threat per se, employees often make it easy for hackers to steal credentials by setting weak passwords, sharing them with colleagues, or storing them in a publicly shared document on a cloud-based file sharing service (the average organization has 1,739 files containing passwords in the cloud). They don’t realize that they are opening the company up to a devastating attack, because an attacker using a compromised account bypasses the security controls put in place to detect malicious outsiders.

Insider threats and compromised accounts are a reality that enterprises have to deal with as they adopt cloud services for core business functions. To address them, companies are turning to Cloud Access Security Brokers (CASBs), control points that sit between users and cloud services and provide threat protection against insiders, privileged users, and compromised accounts.

CASBs analyze cloud activity across multiple heuristics, developing an accurate, continuously updated model of behavior for a specific user, group, and time period. Using this model, they can detect anomalous usage across multiple cloud services and correlate those signals into credible insider and privileged-user threats that IT teams can remediate. CASBs can also detect compromised accounts based on parameters such as location and login attempts. For instance, if a user logs in from multiple locations within a window too short to travel between them (so-called impossible travel), or if an account receives a burst of failed login attempts, the CASB flags these events as anomalies indicative of a compromised account for further investigation.
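
As an illustration of the impossible-travel heuristic described above, here is a minimal, hypothetical sketch in Python. It compares two consecutive logins by the same user and flags the pair when the implied travel speed between their geolocations exceeds a plausible maximum. The threshold, data model, and function names are assumptions made for the example, not a description of any particular CASB’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Login:
    user: str
    timestamp: datetime
    lat: float
    lon: float

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev: Login, curr: Login, max_speed_kmh: float = 900.0) -> bool:
    """Flag two consecutive logins whose implied travel speed is implausible."""
    distance = haversine_km(prev.lat, prev.lon, curr.lat, curr.lon)
    hours = (curr.timestamp - prev.timestamp).total_seconds() / 3600
    if hours <= 0:
        return distance > 1.0  # near-simultaneous logins from distinct locations
    return distance / hours > max_speed_kmh

# A login from San Francisco followed 30 minutes later by one from London
# implies a travel speed far beyond any commercial flight, so it is flagged.
sf = Login("alice", datetime(2017, 3, 1, 9, 0), 37.77, -122.42)
london = Login("alice", datetime(2017, 3, 1, 9, 30), 51.51, -0.13)
print(impossible_travel(sf, london))  # True
```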