Unstructured Data Management: Plan Your Security and Governance
Storage engineers and IT infrastructure managers have more on their plates than ever. Beyond managing the performance of on-premises and cloud storage and backup technologies, they must manage the data within these systems. Data needs to be accessible to users, fast to retrieve, and moved to colder tiers and archives after its active use ends to control costs. Yet data protection requirements continue to expand due to ongoing ransomware threats, sophisticated cyberterrorism and cybercriminal organizations, and an increase in natural disasters. AI innovations in the past year, especially ChatGPT and its many competitors, have also introduced new threats and risks to corporate data.
Storage is just one area of cybersecurity, but protecting data at its source is critical. Unstructured data, which today constitutes at least 80% of all data created and stored, has become difficult to protect with traditional backup and storage methods because of its sheer volume. Most enterprises have petabytes of data under management. Backup software typically creates three copies of data and therefore consumes the lion's share of IT spending -- commonly at least 30% of the IT budget. Disaster recovery and ransomware threats add to these costs, as additional offline copies may be needed to protect against them. Organizations need affordable data protection for non-critical data, as the cost of traditional backups would be prohibitive.
Compliance is also a growing concern. In IDC's 2023 research, "What Every Executive Needs to Know About Unstructured Data," 46% of respondents noted that compliance with regulations, including data privacy in all regulatory jurisdictions, is a top challenge for their organization. Over half (51%) reported non-compliance with data regulations in the past 12 months, with an average total cost of $1.03 million.
IT and business executives who responded to the Komprise 2023 State of Unstructured Data Management survey identified the following high-priority capabilities:
- Monitoring and alerting for anomalous activity in storage systems
- Policy-based automation, such as copying file data to low-cost object storage for ransomware protection and moving data to cold storage or confinement for deletion
- Data protection and compliance/legal hold, two of the top new use cases for unstructured data management
Creating Your Security and Governance Program
One-size-fits-all management for unstructured data no longer works. The following tactics can help IT create a cost-effective security and governance program.
Know your data. Although it may sound obvious, you need a holistic understanding of all data in storage. Gaps in visibility, hidden applications, and obscure data silos in branch offices all contribute to higher risk if the data is not managed properly. Assume that protected data will end up in places where it shouldn't, such as forgotten or underutilized file servers and shadow IT cloud services. Employees unwittingly copy sensitive data to noncompliant locations more often than you'd think. You'll need a way to see all your data in storage and search across it to find the files to segment for security and compliance needs.
You can use the data management capabilities in your NAS/SAN/cloud storage products to search for file types such as HR and IP data, but if you use more than one vendor's solution, you'll need integrated visibility across all storage vendors and clouds. Knowing how much data is cold, which data is obsolete, and which data should be deleted is equally important for eliminating unwarranted exposure and risk.
Set cold data thresholds with security and business leads. IT infrastructure teams must collaborate with security and network teams to procure, install, and manage new storage and data management technology, but a more formal process centered around the data itself is required. This may involve stakeholders from legal, compliance, risk management, finance, and IT directors in key business units. The goal is to create requirements and guidelines for data management security and governance and enhance those already in place.
Cross-functional teams can also create policies for data tiering and archiving, which reduces the footprint of data residing on primary storage where the 3x backup copy standard is in place. Anywhere from 60% to 80% of data in storage is rarely used and can be moved to cheaper storage, such as object storage in the cloud; if the cold data is kept on resilient, immutable storage, additional backups of it may not be necessary. Tiering data older than one year is a reasonable goal, but these policies will differ depending on the data type and the department that owns it.
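As a rough illustration of how a tiering threshold can be applied, the sketch below flags files not accessed in over a year as candidates for cold storage. This is a minimal example, not any vendor's feature, and it assumes last-access time (`st_atime`) is a usable proxy for activity, which depends on your filesystem's mount options.

```python
import os
import time

COLD_AGE_SECONDS = 365 * 24 * 3600  # tier files not accessed in one year


def find_cold_files(root, now=None):
    """Walk a directory tree and return paths not accessed in a year."""
    now = now if now is not None else time.time()
    cold = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if now - os.stat(path).st_atime > COLD_AGE_SECONDS:
                    cold.append(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
    return cold
```

In practice, commercial data management tools query this kind of metadata at scale rather than walking file trees, but the threshold logic is the same.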
Use AI/automation to tag and find sensitive data. With massive volumes of unstructured data spread across enterprise silos, it can be painstaking work to find the data sets that need a higher level of protection. Start by enriching file metadata with custom tags that indicate regulated or sensitive PII and IP. This classification is also useful in a regulatory audit or for use cases such as legal discovery. AI tools such as Amazon Macie can help by analyzing the content of millions of files for sensitive data (such as customer contact information or credit card numbers) and then tagging them. IT can use the output of such scans to segregate those data sets, move them to the most secure storage location, or delete them altogether if corporate rules require it.
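As a toy stand-in for content scanners like Amazon Macie, a few regular expressions can flag text containing common PII patterns. The patterns and tag names below are invented for this example; real classifiers use much more sophisticated detection.

```python
import re

# Illustrative patterns only; production scanners validate far more carefully.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def classify_text(text):
    """Return the set of PII tags whose patterns match the text."""
    return {tag for tag, pattern in PII_PATTERNS.items() if pattern.search(text)}
```

The resulting tags would then be written into file metadata or an external index so that policy engines can act on them.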
Create policies for automated data movement across vendors. Such policies could dictate, for example, that files containing financial data move to encrypted cold storage after one year, customer files move to immutable cloud object storage for a set period once an account is closed or inactive, or ex-employee data be deleted 30 days after an employee's last day. Automated policy features in storage and data management technologies make this easier for small IT teams to execute. The idea is to lower the risk of data being in the wrong place at the wrong time, where it creates security loopholes that a bad actor can exploit. Getting rid of unnecessary data and/or moving it to archival storage is also a great way to save money on expensive primary storage.
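A minimal sketch of such a policy engine might pair each data class with an age threshold and an action. The classes, thresholds, and action names below are hypothetical examples modeled on the policies described above, not any product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class FileRecord:
    path: str
    data_class: str        # e.g. "financial", "customer", "ex_employee"
    last_modified: datetime


# Hypothetical (data class, age threshold, action) rules.
POLICIES = [
    ("financial",   timedelta(days=365), "move_to_encrypted_cold_storage"),
    ("customer",    timedelta(days=90),  "move_to_immutable_object_storage"),
    ("ex_employee", timedelta(days=30),  "delete"),
]


def evaluate(record, now):
    """Return the first matching policy action for a file, or None."""
    age = now - record.last_modified
    for data_class, threshold, action in POLICIES:
        if record.data_class == data_class and age > threshold:
            return action
    return None
```

Real policy engines also handle exclusions, legal holds, and approval workflows, but the core evaluation loop looks much like this.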
Leverage monitoring and alerting features in IT systems. IT and data management applications today provide alerts and notifications that can help you proactively identify threats. Use these tools to monitor storage and backup systems for anomalies, such as excessive file retrievals from one user account or excessive writes to a storage location, which can indicate a security incident. Monitoring features can also surface orphaned or duplicate data that may unnecessarily increase liability, as well as metrics that indicate potential performance problems, such as a file server or NAS device reaching capacity. Ensure you have a process in place to review alerts and escalate and fix issues; in newer technologies, AI is already doing this automatically.
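One simple anomaly heuristic along these lines is to compare each user's read count against a precomputed baseline and alert on spikes. This sketch assumes you already have access-log events and per-user baselines; the threshold factor and defaults are arbitrary examples.

```python
from collections import Counter


def detect_excessive_reads(events, baseline, factor=5, min_reads=100):
    """Flag user accounts whose read count exceeds `factor` times their baseline.

    events: iterable of (user, operation) tuples from storage access logs.
    baseline: dict of typical daily read counts per user (assumed precomputed).
    """
    reads = Counter(user for user, op in events if op == "read")
    alerts = []
    for user, count in reads.items():
        expected = baseline.get(user, min_reads)
        if count > factor * expected:
            alerts.append((user, count, expected))
    return alerts
```

Production monitoring uses richer statistics (seasonality, peer-group comparison), but a baseline-and-multiplier rule is often the first line of defense.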
Leverage affordable ransomware protection in the cloud. An immutable copy of data in a location separate from storage and backups provides a way to recover data after a ransomware attack. However, keeping multiple copies of data can get prohibitively expensive. Determine whether the data is being actively modified or used: if it is cold or inactive, you don't necessarily need multiple copies of it. An effective strategy is to tier cold data from expensive storage and backups into a resilient, immutable destination such as Amazon S3 Infrequent Access with Object Lock, which prevents deletion or alteration. By moving cold data to object-locked storage and eliminating it from active storage and backups, you create a logically isolated recovery copy while drastically cutting storage and backup costs.
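Back-of-the-envelope arithmetic shows why tiering cold data pays off. The per-GB monthly prices below are invented for illustration only; actual pricing varies by vendor, region, and tier.

```python
# Hypothetical monthly per-GB prices, for illustration only.
PRIMARY_NAS_PER_GB = 0.30   # primary NAS storage (assumed)
BACKUP_PER_GB = 0.05        # per backup copy (assumed)
S3_IA_PER_GB = 0.0125       # infrequent-access object storage (assumed)


def monthly_cost(total_gb, cold_fraction, tier_cold=True):
    """Compare keeping all data on primary (with 3 backup copies) vs tiering cold data."""
    if not tier_cold:
        return total_gb * (PRIMARY_NAS_PER_GB + 3 * BACKUP_PER_GB)
    hot = total_gb * (1 - cold_fraction)
    cold = total_gb * cold_fraction
    # Cold data lives once on immutable object storage; no extra backup copies.
    return hot * (PRIMARY_NAS_PER_GB + 3 * BACKUP_PER_GB) + cold * S3_IA_PER_GB
```

With 1 PB of data and 70% of it cold, tiering cuts the hypothetical monthly bill by roughly two-thirds in this model, which is why the 60-80% cold-data figure matters so much.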
Incorporate data auditing and tracking for generative AI. There is much to consider when it comes to safely and ethically adopting generative AI in the workplace, and the role of data storage and data governance specialists is multifaceted. Strategies may include developing employee guidelines for which data is sanctioned to send to generative AI tools and for what kinds of research and use cases. Conversely, IT must lock down sensitive data (such as software code, proprietary information, customer information, and HR data) that individuals should not be able to feed into AI tools.
It is vital to request documentation from vendors that incorporate AI in their products about how they handle your data and how they can help mitigate data risk from their tools. For instance, ChatGPT now allows users to disable chat history so that chats won't be used to train its models. Maintain an audit trail of all corporate data that has fed AI applications and track who commissioned derivative works from generative AI tools; this can protect your organization against copyright infringement lawsuits. This two-part blog series goes into more depth on AI data governance.
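An audit trail can be as simple as an append-only log of which data sets were sent to which AI tool, by whom, and why. This is a minimal sketch; the record fields are assumptions for illustration, not a standard schema.

```python
import json
import time


def log_ai_data_use(log_path, dataset, user, tool, purpose):
    """Append an audit record of corporate data sent to a generative AI tool."""
    record = {
        "timestamp": time.time(),
        "dataset": dataset,
        "user": user,
        "tool": tool,
        "purpose": purpose,
    }
    with open(log_path, "a") as f:  # append-only: never rewrite history
        f.write(json.dumps(record) + "\n")
    return record


def who_used(log_path, dataset):
    """Return the users who fed a given dataset to any AI tool."""
    users = set()
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            if rec["dataset"] == dataset:
                users.add(rec["user"])
    return sorted(users)
```

Even a simple log like this answers the key discovery questions: what left the building, when, and on whose authority.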
The more insight you have into your data across on-premises data centers and clouds, the better. Knowing where PII and other sensitive data is stored, which data is cold and can be stored and protected at much lower cost, and which user access patterns are anomalous gives you the data intelligence to create a watertight plan for data in storage.
Krishna Subramanian is COO, president, and co-founder of Komprise. In her career, Subramanian has built three successful venture-backed IT businesses and was named a "2021 Top 100 Women of Influence" by Silicon Valley Business Journal. You can reach the author via email, Twitter, or LinkedIn.