BLOG

Three Factors for DSPM Success

BY JEREMY FIELDS
March 26, 2024

Successful Data Security Posture Management (DSPM) rests on at least three key practices:

  1. Discover all PII.
  2. Remediate selectively.
  3. Classify automatically.

The following sections outline how to succeed with each of these core DSPM practices, with the understanding that any supplemental activity should be built on top of accurate scan results, actionable decisions, and automated outcomes.

ONE: Data at Rest is Data at Risk 

Sensitive data is inevitable. Left unmanaged, the information collected as the lifeblood or byproduct of business activity becomes a ticking time bomb of financial and reputational damage.

Most organizations recognize this risk and task security teams with operationalizing governance, risk, and compliance (GRC) policies. Yet too many rely on traditional Data Loss Prevention (DLP) or Data Access Governance (DAG) tools that share two common limitations:

  1. Discovery as a Secondary (and Inefficient) Feature: When searching for sensitive data is treated as a means to an end – usually to inform control mechanisms (encryption, redaction, classification, etc.) about when and where to trigger – the discovery process scales poorly due to inaccurate findings or the technology’s inability to scan every target repository.

Finding where risk resides cannot be where corners are cut. DSPM success is only sustainable if scanning for sensitive information encompasses all types of data across all target locations, with minimal need for customization and oversight. Privacy-Grade data protection requires purpose-built discovery software (a minimal sketch of such a scan follows this list).

  2. Access and Traffic Controls as a (Misplaced) Priority: It is completely understandable that security teams gravitate toward systems fixated on the movement of data. This way of thinking closely resembles how physical spaces are protected: with guard gates, door locks, security cameras, and the like surrounding boundaries and perimeters.

These tools matter in safeguarding digital spaces as well, but proper information security comes from understanding an environment’s data at rest and, ideally, remediating it prior to any active use. With discovery as the first step, the attack surface shrinks, making it far easier to direct the downstream controls tasked with managing access or the flow of traffic.
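To make that discovery-first approach concrete, the following is a minimal sketch of an at-rest scan over a local fileshare. The target path and regex detectors are hypothetical and intentionally simple; a production-grade engine adds validated detectors, context keywords, and connectors for every repository type.

```python
import re
from pathlib import Path

# Illustrative patterns only; real DSPM detectors use checksums and
# context keywords to keep false positives low.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_location(root: str) -> list[dict]:
    """Walk a target repository and record every file containing a match."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # a real scanner logs unreadable targets instead of skipping them
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append({"path": str(path), "type": label})
    return findings

if __name__ == "__main__":
    for finding in scan_location("/srv/fileshare"):  # hypothetical target location
        print(finding)
```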

TWO: Reduce Risk with Remediation 

Overcoming these two constraints – shifting emphasis from “in-motion DLP” to “at-rest remediation” – requires being proactive about unnecessary information records. It isn’t realistic to suggest that all sensitive data can be remediated in broad strokes, but a few universal questions help determine whether the results returned by a discovery scan are truly necessary:

  • Is there a retention policy to enforce? If so, consider the secure disposal (via multi-pass digital shredding) of redundant, obsolete, or trivial (ROT) file locations.
  • Are there systems or platforms where this data should not reside? Shredding or relocating an out-of-bounds record immediately reduces its threat potential.
  • Is there content that could be removed or otherwise obfuscated from an offending record to rid it of the risk it represents? The answer is often yes; however, a successful data redaction policy requires appropriate handling of a wide variety of file formats and data types.

If any of the above apply, remediation is warranted. Acting on it requires automation that evaluates both the content and the context of each location to execute the appropriate outcome; it is context, after all, that lets administrators detect quick wins such as the stale or out-of-bounds records mentioned above.
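As a minimal sketch of that content-plus-context evaluation, the decision logic below maps a discovery finding to one of the outcomes discussed above. The thresholds, paths, and field names are hypothetical stand-ins for values a real GRC policy would supply.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy inputs; real values come from the GRC policy.
RETENTION_LIMIT = timedelta(days=7 * 365)        # retention window for this data class
OUT_OF_BOUNDS = ("/tmp", "/srv/public_share")    # locations where PII must never rest

def choose_action(finding: dict) -> str:
    """Map a finding's content and context to a remediation outcome."""
    age = datetime.now(timezone.utc) - finding["last_modified"]
    if age > RETENTION_LIMIT:
        return "shred"       # past retention: secure multi-pass disposal
    if finding["path"].startswith(OUT_OF_BOUNDS):
        return "relocate"    # out-of-bounds record: quarantine or move it
    if finding["redactable"]:
        return "redact"      # strip the offending content, keep the file
    return "classify_only"   # necessary data: label it for downstream controls

# Example finding, shaped the way a discovery scan might report it:
finding = {
    "path": "/srv/public_share/export.csv",
    "last_modified": datetime(2010, 1, 15, tzinfo=timezone.utc),
    "redactable": True,
}
print(choose_action(finding))  # -> "shred" (the retention check fires first)
```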

Extensibility is another central element, alongside selectivity, when building a remediation strategy. Almost every DSPM solution on the market ships with search parameters and pre-defined remediation actions for common causes of concern – PII, PHI, etc. – but some products support customization more readily than others:

  • Expand with Custom Data Types: defining non-standard discovery scans, such as searches for intellectual property or internal system IDs, can be as vital as using out-of-the-box search terms (see the sketch after this list).
  • Interoperate with Third-party Tools: native platform capabilities can and should be enhanced by additional measures driven by metadata tagging, APIs, and user-defined scripts.
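Here is a small sketch combining both ideas: a hypothetical custom data type for internal system IDs, plus a metadata payload a third-party tool could consume through its API. The ID format and field names are invented for illustration.

```python
import re

# Hypothetical custom data type: an internal system ID like "SYS-AB-12345".
INTERNAL_ID = re.compile(r"\bSYS-[A-Z]{2}-\d{5}\b")

def tag_for_downstream(path: str, matches: list[str]) -> dict:
    """Build a metadata payload for a third-party tool to ingest."""
    return {
        "path": path,
        "labels": ["Internal-ID"] if matches else [],
        "match_count": len(matches),
    }

sample = "Ticket references SYS-QA-10422 and SYS-OP-99031."
print(tag_for_downstream("/srv/tickets/1042.txt", INTERNAL_ID.findall(sample)))
# -> {'path': '/srv/tickets/1042.txt', 'labels': ['Internal-ID'], 'match_count': 2}
```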

THREE: Classification Matters 

Lastly, classification tags provide an essential layer of data attribution that users and automated systems alike can easily reference. Labels consolidate similar indicators of sensitivity into a unified taxonomy, allowing organizations to identify records granularly while also categorizing them broadly for visibility and subsequent orchestration.

Without classification, difficulties arise when trying to differentiate between variations of comparable match types, a constraint that significantly hinders selective remediation.

Getting Started with Classifications 

It can be difficult to begin classifying when a set of labels isn’t already defined. For organizations not yet operating under any compliance obligations – internal, regulatory, or otherwise – a good starting point is the guidance in NIST SP 800-53, which calls for tagging information based on the potential harm if it were involved in a data breach.

NIST’s three-tiered recommendation qualifies risk according to information’s capacity to disrupt an organization’s primary function, assessing whether a compromise would have a high, moderate, or low impact on operations.

This creates an intuitive “stoplight” label system: “red” confidential records are handled most conservatively, “green” public file locations least so, and “yellow” internal data receives additional scrutiny in between.
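Expressed as a minimal sketch in code, the mapping might look like the following. The impact values and label names are illustrative, not prescribed by NIST.

```python
# Three-tier "stoplight" taxonomy: assessed operational impact -> label.
IMPACT_TO_LABEL = {
    "high": "Red / Confidential",
    "moderate": "Yellow / Internal",
    "low": "Green / Public",
}

def label_for(impact: str) -> str:
    # Unknown or unassessed impact stays unlabeled for later review.
    return IMPACT_TO_LABEL.get(impact, "Unclassified")

print(label_for("moderate"))  # -> "Yellow / Internal"
```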

Persistent labeling should occur in tandem with most (if not all) remediation actions, except perhaps for standalone actions like shredding or encrypting. Not only does it ensure the visibility and auditability of DSPM activity; metadata classification is also a widely supported method of interoperating with leading information security solutions.

Because tags are so versatile, many organizations benefit from assigning multiple tags to a single record location. A PDF document with medical billing details, for instance, could be considered both PHI and PCI data, belong to the HR department, and ultimately receive a visual marking of “Confidential” to promote safe usage.
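A record structure along those lines is easy to sketch; the file path and tag names below simply restate the hypothetical example above.

```python
from dataclasses import dataclass, field

@dataclass
class RecordLocation:
    """A scanned record location carrying several classification tags at once."""
    path: str
    tags: set[str] = field(default_factory=set)

# One PDF that is PHI and PCI data, owned by HR, and visually marked Confidential.
record = RecordLocation(path="/hr/medical_billing/claim_0042.pdf")
record.tags.update({"PHI", "PCI", "Dept:HR", "Confidential"})
print(sorted(record.tags))  # -> ['Confidential', 'Dept:HR', 'PCI', 'PHI']
```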

Note that multiple classification schemas accumulate gradually – there is no need to anticipate every label type upfront; rather, they are incorporated into search strategies as a DSPM implementation matures. It is critical, however, that labeling engines accommodate evolving requirements.

Stay Tuned 

Whether in pursuit of regulatory compliance or simply out of due diligence toward the private and potentially sensitive data hosted across structured and unstructured information repositories, discovery scanning is the beginning of a journey best concluded with the persistent and selective application of classification labels and remediation actions.

This is no small undertaking. Considering the wide variety of workstations, fileservers, cloud storage, SaaS platforms, and database infrastructure crucial to any given organization’s ongoing operations, searching for sensitive data can be a burden for security and privacy teams if not conducted with scalability in mind. The only way for this activity to scale is if it starts with accurate and automated discovery across all locations where data might be stored.  

Besides establishing a “single source of truth” for sensitive data, centralizing the discovery process lets first- and third-party remediation offload accuracy concerns to a purpose-built solution. Without it, administrators must rely on disparate functionality scattered across walled gardens and technology investments, where at-rest DLP controls are implemented as better-than-nothing alternatives.

Around the Corner… 

In a future blog post, we will look closer at remediation strategies that take full advantage of interoperability to create powerful-yet-flexible solutions made possible by a foundation of accurate, automated, and actionable sensitive data discovery.