The Case for Operationalizing Self-Service Data Prep
Imagine empowering more of your employees by using self-service data prep tools to address both exploratory and operational use cases.
- By Steve Swoyer
- April 27, 2016
The traditional business intelligence (BI) tools model created an ambiguous class of disenfranchised person -- the non-user. In any given organization, BI usage rarely exceeded 30 percent of all potential consumers.
Over a decade-long span, according to BI Scorecard's Successful BI Survey, BI uptake tended to hover between 20 and 25 percent of potential users, on average. Most of the people in any given organization were neither users nor consumers of BI.
The self-service model may only perpetuate this split between users and non-users. The ugly truth is that self-service tools haven't done much to improve BI adoption. Self-service data discovery achieved success largely by catering to the 25 percent who were already using BI tools.
The market for self-service data prep is following the same trajectory. This makes some sense, because self-service data prep is often used to complement self-service front-end tools, which typically lack rich data prep or data integration capabilities of their own.
There's a strong case, however, for operational self-service data prep. This case can't be made for self-service data discovery, chiefly because the pool of potential self-serving discoverers is just a fraction of the pool of potential information consumers.
To put it bluntly, an overwhelming majority of the people in any given organization don't want to discover insights by exploring a data set, much less by blending data together from multiple data sources. They want discoveries served up to them: pre-fab insights. Can you blame them?
The case for operationalizing self-service data prep is much stronger. Imagine a model in which data prep tools are used to address one-off and exploratory use cases -- e.g., the business analyst or data scientist who needs to engineer data for analysis -- and operational use cases. In the latter case, data flows (the products of self-service data wrangling) are instantiated and reused to support operational requirements.
There's another wrinkle. In the traditional, IT-provisioned BI model, data integration (DI), of which data prep is a subset, was the single biggest barrier to success. We've all heard the adage about how much time the average data scientist spends preparing data for analysis. According to legend, it's anywhere from 60 to 80 percent. (This claim is repeated uncritically, usually without supporting evidence. Clearly it has a grip on the imagination.)
In the IT-provisioned BI model, data prep and DI continue to account for an outsized proportion of squandered IT productivity -- to say nothing of missed opportunities. Many a BI or analytics project has withered on the vine because it posed significant (tedious, time-consuming, or unknown) DI and data-prep challenges.
Self-service data prep can remedy this dysfunction, particularly in connection with operational use cases. Consider, for example, a model in which savvy users provision and prepare data for themselves or for less savvy users, and in which common, frequently recurring, or critical data flows are identified and instantiated as repeatable data integration processes. It makes a lot of sense, doesn't it? It sounds a lot like the ideal usage model for self-service discovery.
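The pattern described above can be sketched in a few lines of code: an exploratory, one-off cleanup step gets codified as a named, parameterized function so the same logic can be rerun as a scheduled, repeatable data flow. The field names and cleanup rules below are hypothetical, invented purely for illustration.

```python
from datetime import datetime

def prep_shipments(raw_rows):
    """Repeatable prep step: normalize dates, trim IDs, drop incomplete rows.

    Once codified like this, the same function serves both the analyst's
    one-off exploration and a scheduled operational run.
    """
    prepared = []
    for row in raw_rows:
        if not row.get("shipment_id") or not row.get("ship_date"):
            continue  # discard rows missing required fields
        prepared.append({
            "shipment_id": row["shipment_id"].strip(),
            # normalize U.S.-style dates to ISO 8601
            "ship_date": datetime.strptime(row["ship_date"], "%m/%d/%Y")
                         .date().isoformat(),
            "weight_kg": float(row.get("weight_kg", 0) or 0),
        })
    return prepared

raw = [
    {"shipment_id": " S-1001 ", "ship_date": "04/27/2016", "weight_kg": "12.5"},
    {"shipment_id": "", "ship_date": "04/27/2016"},  # incomplete -> dropped
]
print(prep_shipments(raw))
```

The point is not the particular transformations but the shift in posture: once the prep logic lives in a named, versionable artifact rather than in someone's desktop session, it can be documented, audited, and reused.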
That's the rub: how much repeatability can we expect in self-service data prep given how little is actually present in the self-service data discovery paradigm? In some contexts -- e.g., QlikView and Qlik Sense -- there is a good amount of repeatability, but these tools were designed with just that usage model in mind. In the Tableau environment, by contrast, instantiation and reuse are still far from commonplace.
In a March presentation to the Portland Tableau Users Group, Tableau guru Dan Murray, director of strategic innovation with Interworks Inc., spoke at some length about the advantages of instantiating demanding and/or widely used dashboards, visualizations, and other analytics as persistent structures in a database. (Murray -- author of Tableau Your Data! -- was careful not to use the term "data warehouse," which is anathema to many in the Tableau community.)
He emphasized the performance and manageability benefits of instantiation and reuse. He re-emphasized them. The sense of ambivalence among the 40 or so assembled Tableau users was palpable. On the other hand, the case for operational self-service data prep pretty much sells itself.
"It's the ability to take work that's traditionally been done manually, on the desktop, [such that] everything gets automatically described and captured, [such that] you can see all of the data sources that were used, all of the modifications that were made to that data source, [such that] it becomes a load plan for enterprise automation ... or you share it with another analyst," says Dan Potter, chief marketing officer with self-service data prep specialist DataWatch Inc.
"When analysts take that workbook you created, [they're able to determine] here's how [this person] cleaned the data, here's the calculated fields they made to transform or modify the data."
"Our expertise is being able to access data that's stuck in operational systems. This started as mainframe information, but [has grown to include] PDFs, JSON [objects], reports, [as well as] other structured and unstructured content," Potter continued, arguing that data prep lends itself to reuse to a degree that data discovery doesn't. "Because you're doing it in a centralized fashion, you're able to address some of the other [data management] requirements [such as] data retention, auditing, [and] SOX Compliance, because these are documented, repeatable processes."
A senior analyst with a prominent global logistics provider says his company's use of self-service data prep technology -- in this case, software from DataWatch -- comports with both the classic self-service use case (that of the savvy business analyst or data scientist) and the operational variant.
This analyst, who isn't authorized to speak publicly about his company's use of DataWatch, said he and his team started with the classic self-service use case. They quickly discovered, however, that the fruits of self-service data prep could be codified, instantiated, and reused to support a large base of consumers. "A lot of our month-end financial reporting comes out of mainframe reports. We still have a lot of older mainframe systems that have static reports, and they come in [EBCDIC] text files, and parsing it out using macros is incredibly tedious and error-prone," he said.
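To make the analyst's pain point concrete: mainframe report extracts are typically fixed-width text, so parsing them by hand means hard-coding column offsets for every field -- exactly the tedious, error-prone macro work he describes. The report layout below is invented for illustration; real layouts vary by system, and genuine EBCDIC files would first need a decode step (e.g., Python's "cp037" codec) before slicing.

```python
# Hypothetical fixed-width layout: (start, end) column offsets, 0-based.
FIELDS = {
    "account": (0, 8),
    "posted":  (8, 18),
    "amount":  (18, 30),
}

def parse_report_line(line):
    """Slice one fixed-width report line into a typed record."""
    rec = {name: line[start:end].strip()
           for name, (start, end) in FIELDS.items()}
    rec["amount"] = float(rec["amount"])  # convert the numeric column
    return rec

line = "AC100234" + "2016-04-27" + "     1234.56"
print(parse_report_line(line))
```

A one-character shift in any offset silently corrupts every downstream number, which is why codifying this parsing as a documented, repeatable process -- rather than redoing it in ad hoc macros each month -- pays off quickly.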
"Now we're also trying to utilize it for some of our operational activity on the product end," the analyst continued. "For example, we now have operational reports that show how many shipments we're producing on a given day, and this helps us to strategize tactically about how we're going to employ the workforce on a daily basis. If we only have so many shipments on Monday but more on Tuesday, we're going to want more people on Tuesday. This one example just scratches the surface."
Stephen Swoyer is a technology writer with 20 years of experience. His writing has focused on business intelligence, data warehousing, and analytics for almost 15 years. Swoyer has an abiding interest in tech, but he’s particularly intrigued by the thorny people and process problems technology vendors never, ever want to talk about. You can contact him at firstname.lastname@example.org.