
Is a Data Breach Lurking in Your Software Supply Chain?

How automating data compliance can support a Zero Trust strategy and protect sensitive data in DevOps environments

Lenore Adam

Aug 31, 2021

Organizations are becoming increasingly aware of the software supply chain as an emerging attack vector. This was painfully evident in the SolarWinds intrusion—the most sophisticated hack of government and corporate computer networks in U.S. history. Hackers gained access to numerous public and private organizations around the world through a trojanized software update from SolarWinds’ IT monitoring and management software, according to cybersecurity provider FireEye.

Researchers revealed that SolarWinds’ DevOps pipeline was the point of compromise; the attackers didn’t even need to hack production systems. When SolarWinds customers installed the trojanized update, the threat actors gained access to those customers’ networks using compromised credentials. From there, their activity focused on lateral movement and data theft.

Application Test Environments Contain Vast Amounts of PII

The software supply chain is clearly an increasing target for intrusion. And that’s exactly where a lot of sensitive data resides. The unfortunate reality is that sensitive information in non-prod environments goes largely unprotected. Our research shows 56% of enterprise customers don’t anonymize sensitive data in test environments.

In a continuous delivery model, there is tremendous demand for test beds configured with data copied from the production instance. One of our customers deploys 1,700 test instances a day! This creates an enormous attack surface for bad actors inside the organization, as well as for hackers infiltrating IT systems to steal sensitive information. Extortionware is also on the rise, in which attackers use stolen sensitive data to force ransom payments.

Organizations often find the process of anonymizing or masking sensitive data to be at odds with the speed of DevOps workflows. This level of security in non-prod environments is viewed as a barrier to innovation. As a result, sensitive data is left exposed.

We do see many companies attempt a homegrown solution, manually combing through hundreds, if not thousands, of tables to discover sensitive data and then using brittle scripts to execute some form of anonymization. We describe this as a cracked dam: these poorly executed processes leave organizations at high risk of inference attacks and sensitive data leakage.
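
For illustration only, here is a minimal sketch of what such a homegrown approach often looks like in practice. The table, column names, and patterns are hypothetical, and the example deliberately shows the weakness: any sensitive column the name patterns miss is silently left unmasked.

```python
import re
import sqlite3

# Hypothetical, simplified homegrown masking script. Columns are
# "discovered" purely by name patterns, so any sensitive column that
# doesn't match (e.g. "cust_contact") is silently skipped.
PII_NAME_PATTERNS = [r"ssn", r"email", r"phone", r"birth", r"name"]

def looks_sensitive(column_name: str) -> bool:
    return any(re.search(p, column_name, re.IGNORECASE) for p in PII_NAME_PATTERNS)

def mask_table(conn: sqlite3.Connection, table: str) -> None:
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    for col in cols:
        if looks_sensitive(col):
            # Blunt overwrite: destroys realism and referential integrity,
            # which is part of why these scripts are described as brittle.
            conn.execute(f"UPDATE {table} SET {col} = 'MASKED'")
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, full_name TEXT, email TEXT, cust_contact TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'Ada Lovelace', 'ada@example.com', '555-0100')")
    mask_table(conn, "customers")
    print(conn.execute("SELECT * FROM customers").fetchall())
    # cust_contact (a phone number) is left untouched -- the cracked dam.
```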

What is Zero Trust?

A Zero Trust model is based on the idea that any user may pose a threat and cannot be implicitly trusted.

The National Institute of Standards and Technology (NIST) defines Zero Trust as an “evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources.”

Enterprise networks have grown in complexity, challenging traditional methods for security and access control. Collaboration now occurs across enterprise boundaries as remote employees, contractors, and third-party vendors connect using a proliferation of both managed and unmanaged devices. A NIST publication last fall stated that this complexity “has outstripped legacy methods of perimeter-based network security as there is no single, easily identified perimeter for the enterprise.”

Infrastructure is no longer 100% on-premises, yet cloud services are still treated as if they sit inside the perimeter, making it difficult to determine where the perimeter is when deciding whether a connection request should be trusted. Because the cloud has moved applications and data out of the perceived safety of on-premises systems, the traditional enterprise perimeter has essentially dissolved.

The increasing adoption of microservices architectures in the cloud adds further complexity to network environments, since services communicate solely via APIs, which brings its own security challenges. With a disappearing, or at least poorly defined, perimeter, there is a need to move away from network-based controls and weak identities: once an attacker breaches the perimeter, lateral movement is unhindered, because anyone on the network is trusted.

As a result, businesses and government agencies are shifting away from traditional VPNs and perimeter defense tactics. Instead, they are adopting an identity-focused approach to protecting access to internal resources and data.

In a Zero Trust model, every user and transaction must be validated before access is granted. Authentication and authorization for both user and device are discrete functions performed before a session to an enterprise resource is established.
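
As a rough sketch of that flow (with entirely hypothetical types and checks, not any particular product’s API), authentication of the user, verification of the device, and authorization of the specific request are evaluated as separate steps, and none of them is inferred from network location:

```python
from dataclasses import dataclass

# Illustrative types only; a real deployment would back these with an
# identity provider, a device-management service, and a policy engine.
@dataclass
class User:
    id: str
    mfa_verified: bool
    roles: frozenset

@dataclass
class Device:
    id: str
    managed: bool
    patched: bool

def authenticate(user: User) -> bool:
    # Identity is verified explicitly (e.g. via MFA), never inferred
    # from being "inside" the network perimeter.
    return user.mfa_verified

def verify_device(device: Device) -> bool:
    # Device posture is a separate, discrete check.
    return device.managed and device.patched

def authorize(user: User, resource: str, action: str) -> bool:
    # Least privilege: grant only the specific action on the specific resource.
    return f"{resource}:{action}" in user.roles

def establish_session(user: User, device: Device, resource: str, action: str) -> bool:
    # "Never trust, always verify": every check must pass for every request.
    return authenticate(user) and verify_device(device) and authorize(user, resource, action)

if __name__ == "__main__":
    analyst = User("u-42", mfa_verified=True, roles=frozenset({"reports:read"}))
    laptop = Device("d-7", managed=True, patched=True)
    print(establish_session(analyst, laptop, "reports", "read"))   # True
    print(establish_session(analyst, laptop, "payroll", "write"))  # False: not authorized
```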

Data Requires Collective Data Stewardship

A Zero Trust model essentially eliminates trust in systems, nodes, and services. Security designs establish a “never trust, always verify” policy to enforce the “least privilege” concept.

The concept extends beyond networks and devices, though. Defense-in-depth tactics encompass people and workloads to protect a company’s most precious resource: data. Organizations are beginning to focus, for example, on defending their data in its various states—at rest, in transit, and in use. Zero Trust should stretch across not just IT functions, but also business functions like finance and HR.

The strategic value of business data has created a growing need for more data-ready environments where sensitive data is increasingly accessed for analytics and decision making.

All this means data is constantly on the move. Data may be extracted from a repository on-prem and loaded into an analytics workload in the cloud. And datasets are often moved from inside the business to outsourced development teams or third-party vendors for additional processing.

Unless this data is anonymized, businesses are basically distributing more and more sensitive data to more and more non-production environments every day. Everyone must understand that data is a strategic asset that requires collective data stewardship, making data-centric security an important component of the Zero Trust journey.

How to Support Zero Trust Strategy with Automated Data Masking

A comprehensive Zero Trust strategy dictates anonymizing sensitive data everywhere except production workflows. Manual processes simply aren’t a sustainable solution for all the sensitive data in lower-level environments, where data is copied and distributed over and over.

There's a crucial need for automated data operations that discover and mask sensitive data at scale. PII masking irreversibly transforms the data, making it useless to hackers, yet the data remains useful for app dev and other use cases because the process replaces sensitive values with realistic but fictitious data.
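
A minimal sketch of what consistent, “realistic but fictitious” masking can look like, assuming a simple in-memory dataset and a hypothetical pool of replacement values. A production masking engine does far more (automated discovery, referential integrity across systems, many data types), but the core idea is that the same input always masks to the same fictitious output, so joins across tables still line up:

```python
import hashlib

# Small pools of fictitious but realistic-looking replacement values.
# In practice these pools would be much larger and locale-aware.
FAKE_NAMES = ["Jordan Lee", "Priya Shah", "Marcus Webb", "Elena Ruiz"]
FAKE_DOMAINS = ["example.com", "example.org", "example.net"]

def _bucket(value: str, size: int, salt: str = "demo-salt") -> int:
    # Deterministic mapping so repeated values mask consistently;
    # a real masking engine uses stronger, truly irreversible transformations.
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return int(digest, 16) % size

def mask_name(value: str) -> str:
    return FAKE_NAMES[_bucket(value, len(FAKE_NAMES))]

def mask_email(value: str) -> str:
    local = f"user{_bucket(value, 10_000):04d}"
    return f"{local}@{FAKE_DOMAINS[_bucket(value, len(FAKE_DOMAINS))]}"

if __name__ == "__main__":
    customers = [
        {"id": 1, "name": "Ada Lovelace", "email": "ada@lovelace.io"},
        {"id": 2, "name": "Alan Turing", "email": "alan@bletchley.uk"},
    ]
    orders = [{"order_id": 100, "customer_email": "ada@lovelace.io"}]

    for row in customers:
        row["name"] = mask_name(row["name"])
        row["email"] = mask_email(row["email"])
    for row in orders:
        row["customer_email"] = mask_email(row["customer_email"])

    print(customers)
    print(orders)  # masked email still matches the masked customers row
```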

By prioritizing automated static data masking within the Zero Trust model, businesses can establish comprehensive data governance and ensure compliance is not a roadblock to innovation.

Watch this webinar, “Data Compliance in a Zero Trust World,” or read more about our Data Compliance, Privacy, and Security solutions.