How DataOps Is Transforming Industries
In parts 1 and 2 of this series, I discussed how to build a DataOps team and why adopting agile mindsets, skill sets, and toolsets is crucial. In this third installment of the four-part series, I'll look at two examples of how customers are putting these principles into practice and making impressive strides toward transforming their businesses.
Case Study #1: DataOps in Financial Services
Financial services is one industry where a complete DataOps approach is having a big impact. One of our customers -- a large financial institution -- has built a data lab that invents solutions harnessing data and advanced analytics to understand more than 60 million customers more deeply, and to translate that understanding into simpler, more intuitive, and more intelligent products and customer experiences. The main goal is to encourage businesses to transact more with one another using the bank's cards.
The data lab has taken a holistic approach to achieving this goal, starting with mingling disciplines such as human-centered design, full-stack engineering, and data science -- and continuously working to build out an interdisciplinary team. The lab then marries this with an entrepreneurial approach to develop successful solutions that deliver real impact quickly. The team includes a project manager who oversees the entire end-to-end pipeline (and also serves as the subject matter expert who helps train the data unification solution) as well as several DevOps team members and data scientists.
Data Unification Central to the Pipeline
Behind the scenes, raw data is first cleaned and deduplicated. It is then fed into a data unification system for classification and training. Through bulk matching, the bank can determine whether a supplier/vendor list from its master data source overlaps with the supplier/vendor list collected from the customer. For those that come back as matches, the data lab already knows if the supplier or vendor accepts the bank's credit cards. For those that don't match, the bank ingests data into its model from alternate sources to enrich information about the vendor or supplier and obtain more matches.
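The bulk-matching step described above can be sketched in a few lines. This is a minimal illustration, not the bank's actual system: the `normalize` and `bulk_match` functions and the suffix list are my own assumptions about how canonicalizing vendor names before comparing lists might look.

```python
def normalize(name: str) -> str:
    """Canonicalize a vendor name so trivial spelling variants still match.
    The suffix list here is illustrative, not exhaustive."""
    key = name.lower().strip(" .,")
    for suffix in (" inc", " llc", " ltd", " co", " corp"):
        if key.endswith(suffix):
            return key[: -len(suffix)].strip(" .,")
    return key

def bulk_match(master: list[str], customer: list[str]) -> tuple[set[str], set[str]]:
    """Split the customer's vendor list into matched and unmatched names
    against the bank's master data source."""
    master_keys = {normalize(m) for m in master}
    matched = {c for c in customer if normalize(c) in master_keys}
    return matched, set(customer) - matched

master = ["Acme Corp", "Globex LLC", "Initech"]      # bank's master list
customer = ["ACME corp.", "Hooli", "Initech Inc"]    # customer's vendor list
matched, unmatched = bulk_match(master, customer)
# Matched vendors can be checked for card acceptance; unmatched ones
# go on to the enrichment step, which pulls in alternate data sources.
```

In practice a system like this would use probabilistic rather than exact key matching, but the split into "already known" and "needs enrichment" is the same.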
Throughout the process, subject matter experts familiar with the data act as curators to improve the accuracy of the machine learning models used to predict matches. Classified and trained data is continually enriched and returned to data consumers to improve decision making. In addition to improving customer experiences, the solutions from the data lab are developing new approaches to high-value analytical problems, such as risk prediction and fraud detection.
Case Study #2: DataOps in Big Pharma
Another of our customers -- a large pharmaceutical company -- realized that its R&D data environment wasn't up to par when compared to competitors. This was preventing the company from developing new drugs with the level of innovation and speed required. Increasingly, pharma companies compete on the basis of their analytics capabilities, so this company knew it had to make a DataOps transformation.
The main goal was to make it easier to access and use data for exploratory analysis and decision making about new medicines. The company had been relatively good at making decisions with data, but executives felt that the data within R&D was too siloed and fragmented to be used effectively for exploratory purposes. In particular, R&D data was kept within silos created for particular scientists, experiments, or clinical trials. Secondary analysis of it was almost impossible.
Uniting Silos of Information Was Key
The first step was to conduct a far-reaching survey with questions such as how easy it was to share data across the organization, whether scientists could get data from other departments, and whether it was possible to perform analytics on data across the organization. The survey responses were virtually unanimous: it was very difficult or impossible to work with data outside a personal or departmental silo.
It was evident that integrating diverse data was the top priority. The DataOps team identified the top 10 use cases, judged by their value and their ability to inform important decisions and answer scientific questions. Instead of focusing solely on a specific type of data such as DNA sequencing or electronic health records, the company wanted to work within and across data domains.
MDM Not Enough
The company had millions of different data elements to rationalize, so instead of taking a traditional master data management approach, which would have taken too much time and effort, the DataOps team turned to machine learning. The team used a probabilistic matching approach to combine data across the organization into a single Hadoop-based data lake with three domains -- experiments, clinical trials, and genetic data. The team accomplished this within three months -- an unheard-of achievement using traditional data management approaches.
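Probabilistic matching works by weighing the evidence that two records refer to the same entity rather than requiring exact keys. The sketch below illustrates the idea in the classic Fellegi-Sunter style; the field names, weights, and threshold are hypothetical, not the pharma company's actual configuration.

```python
# Per-field agreement weights: roughly, the log-odds that agreement on
# this field implies the records refer to the same entity. (Illustrative.)
WEIGHTS = {"compound_id": 4.0, "site": 1.5, "visit_date": 1.0}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Sum the weights of the fields on which two records agree."""
    return sum(w for field, w in WEIGHTS.items()
               if rec_a.get(field) is not None
               and rec_a.get(field) == rec_b.get(field))

def is_match(rec_a: dict, rec_b: dict, threshold: float = 4.5) -> bool:
    """Declare a match when the combined evidence clears a threshold."""
    return match_score(rec_a, rec_b) >= threshold

a = {"compound_id": "CMP-001", "site": "Boston", "visit_date": "2017-03-01"}
b = {"compound_id": "CMP-001", "site": "Boston", "visit_date": "2017-03-02"}
# compound_id and site agree (4.0 + 1.5 = 5.5), clearing the 4.5 threshold,
# so the two records are linked despite the differing dates.
```

A strong identifier like a compound ID carries more weight than a field that often agrees by chance, which is what lets this approach scale to millions of elements without hand-written reconciliation rules.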
To work across the three domains, the R&D data team created an integrated layer on top of them with standardized categories or ontologies; this was the only way to solve the use cases.
In the clinical trials domain, for example, the DataOps team believed there were many opportunities to gain insight beyond the original goals of a particular trial. Combining trial data was difficult because of the great variance in how trials are conducted and how their results are recorded. However, the data could be ingested and mapped to industry-standard formats, and machine learning models learned this process. The team fed in the source trial data along with examples of what the target format should look like -- and then let the machine go to work.
Outcomes initially were 50 to 60 percent accurate; now in some domains they are at 100 percent. After the models were developed and refined, they could be applied to other data with relatively little human intervention -- just some occasional judgments from an expert team.
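The "feed in examples and let the machine generalize" step can be illustrated with a toy nearest-neighbour mapper: given past examples of source-to-standard column mappings, an unseen column is assigned the target of its most similar known example. The training pairs and target names below are invented for illustration (a real pipeline would target an industry standard such as CDISC, and use a far richer model than string similarity).

```python
from difflib import SequenceMatcher

# Hypothetical training examples: source column name -> standardized name.
TRAINING = {
    "subj_id": "SUBJECT_ID",
    "patient_no": "SUBJECT_ID",
    "visit_dt": "VISIT_DATE",
    "adverse_event": "AE_TERM",
}

def map_column(source_col: str) -> str:
    """Map an unseen source column to the target of its most
    similar trained example, by string similarity."""
    best = max(TRAINING, key=lambda known:
               SequenceMatcher(None, source_col.lower(), known).ratio())
    return TRAINING[best]

map_column("subject_id")  # resembles "subj_id", so maps to SUBJECT_ID
```

The article's accuracy trajectory -- 50 to 60 percent at first, approaching 100 percent in some domains -- corresponds to refining models like this one with expert feedback on the cases they get wrong.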
More Innovation in Less Time
The company is benefiting from this approach. Scientists are beginning to see what an asset they now have, and the number of use cases has expanded from 10 to 250. Many projects that use the new data environment are underway, and the time needed to answer an ad hoc question has dropped significantly. Now that the company has rationalized its clinical trial data, a team is focused on clinical trial diversity, making sure the company's trials match the demographics of its patients. Real-world evidence from more than 30 sources is now rationalized to the industry standard instead of remaining a catch-all category, as it is at many pharma firms.
The company's R&D data environment is something that one often hears about in start-ups, but is rarely found in large enterprises whose roots go back over 300 years. It's great news for all of us humans who will benefit from the scientific advances it is likely to engender.
A Final Word
Both of these examples illustrate how a holistic approach to DataOps that involves people, processes, and technology can transform the way an organization innovates, interacts with customers, solves high-value problems, and improves its competitive position. At the heart of the transformation is the ability to unite data from various disparate sources, curate it with the help of machine learning and subject matter experts, and gain invaluable insights that the business can act on.
Mark Marinelli is head of product with Tamr, which builds innovative solutions to help enterprises unify and leverage their key data. A 20-year veteran of enterprise data management and analytics software, Mark has held engineering, product management, and technology strategy roles at Lucent Technologies, Macrovision, and most recently at Lavastorm, where he was chief technology officer.