RESEARCH & RESOURCES

New Report Examines Enterprise Hadoop Adoption

A new report from TDWI Research tackles the burning question: what does it take to deploy, manage, and scale Hadoop in the enterprise?

It's a key issue for enterprises. After all, Hadoop was developed a decade ago to address a range of unprecedented problems in building and scaling distributed applications. To that end, Hadoop was designed by programmers, primarily for programmers. Its application to traditional data management (DM) is a comparatively new phenomenon.

Likewise, Hadoop was designed as a platform for batch data processing, not for interactive query processing. The bulk of day-to-day business intelligence (BI) and decision support workloads consists of query processing; it was only very recently -- with the debut of YARN, Hadoop's resource management layer, 18 months ago -- that the Hadoop platform was retrofitted to support interactive (as distinct from bulk or batch) data processing. The upshot is that Hadoop's DM feature set is still relatively primitive, and its suitability for traditional DM applications -- i.e., as a cost-effective, scalable, manageable, and feature-rich platform for BI and decision support -- is not yet a slam dunk.
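To make the batch-versus-interactive distinction concrete, here is a minimal Python sketch of the two access patterns, assuming a running Hadoop cluster, HiveServer2, and the pyhive package; the jar path, host name, HDFS paths, and the web_logs table are illustrative placeholders, not details from the report.

# Minimal sketch, assuming a working Hadoop cluster and the pyhive package.
# All hosts, paths, and table names below are placeholders.
import subprocess
from pyhive import hive

# 1) Batch processing: submit a Hadoop Streaming job that scans everything
#    under /data/raw/logs and writes its result to HDFS. Latency is measured
#    in minutes to hours, not seconds.
subprocess.run([
    "hadoop", "jar", "/opt/hadoop/share/hadoop/tools/lib/hadoop-streaming.jar",
    "-input", "/data/raw/logs",
    "-output", "/data/out/line_counts",
    "-mapper", "/bin/cat",          # pass each line through unchanged
    "-reducer", "/usr/bin/wc -l",   # count the lines that reach the reducer
], check=True)

# 2) Interactive query processing: a seconds-scale SQL request through
#    HiveServer2 -- the kind of query BI tools issue against YARN-era Hadoop.
conn = hive.Connection(host="hadoop-edge.example.com", port=10000)
cursor = conn.cursor()
cursor.execute("SELECT level, COUNT(*) FROM web_logs GROUP BY level")
for level, cnt in cursor.fetchall():
    print(level, cnt)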

Philip Russom, research director for data management with TDWI Research, takes a look at just what Hadoop can and can't do in his new report, Hadoop for the Enterprise: Making Data Management Massively Scalable, Agile, Feature-Rich, and Cost-Effective.

"Hadoop is … challenged to prove its worth, this time by satisfying the stringent requirements that traditional IT departments and business units demand of their platforms for enterprise data and business applications," Russom writes. He cites an array of technology and business drivers that are helping to spur Hadoop adoption in the enterprise. On the technology tip, Russom argues, enterprises are adopting Hadoop because it comprises a practically elastic compute and storage platform that's able to scale to address ever increasing data volumes.

Russom's report includes this quote from a Hadoop user who requested anonymity: "The high-end relational databases are really expensive in configurations big enough to deal with big data. Data warehouse appliances are almost as expensive. We all need a more economical platform, which is the main reason we're all considering Hadoop."

What's more, enterprise adopters are compelled by Hadoop's ability to complement or extend traditional data warehousing (DW), archiving, and content management solutions, he indicates. Finally, there's Hadoop's ability to store, manage, and analyze multi-structured data, which the data warehouse itself can't do cost-effectively.

"Early adopters have shown that Hadoop excels at storing, managing, and processing unstructured data [e.g., human language text], semi-structured data [XML and JSON files], and data with evolving schema [such as sensor, log, and social data]. For organizations with these forms of data in large quantities … Hadoop can make the analysis of it more affordable, scalable, and valuable," he says.

Other technology drivers include Hadoop's evolving real-time capabilities. (Real-time is always expensive, but Hadoop's economics are compelling here, too.)

Hadoop adoption in the enterprise is also being spurred by business needs, Russom points out.

"[E]veryone wants to get business value and other organizational advantages out of big data instead of merely managing it as a cost center," he says. "Analytics has arisen as the primary path to business value from big data, and that's why the two come together in the term 'big data analytics.' Hadoop is not just a storage platform for big data; it's also a computational platform for business analytics. This makes Hadoop ideal for firms that wish to compete on analytics, as well as retain customers, grow accounts, and improve operational excellence via analytics."

This is all well and good, you might say, but to what degree is Hadoop a credible enterprise DM platform?

According to TDWI's survey data, Hadoop's enterprise-readiness is largely a settled question: enterprises are already using it for many common DM workloads. For example, nearly half of all survey respondents (46 percent) are using Hadoop to complement (or extend) a traditional data warehouse environment, while 39 percent say they're shifting their data staging and/or data landing workloads to Hadoop, too. TDWI's data is based on a survey of 247 IT professionals conducted late last year; of these, 70 percent self-identified as "corporate IT or BI professionals," 21 percent as consultants, and 9 percent as business sponsors or users.

TDWI's sample had a diverse representation of company sizes: 52 percent of respondents are affiliated with companies that generate $1 billion or more in annual revenues; of these, 25 percent generate $10 billion or more in revenue. At the other extreme, 41 percent of respondents are affiliated with companies that generate less than $1 billion in annual revenues. Of these, 21 percent generate less than $100 million in annual revenues.

In other words: enterprise Hadoop, DM-certified or no, is already in extensive use. How extensive?

According to Russom and TDWI Research, Hadoop is used as:

  • A complementary extension of a data warehouse (46 percent)
  • A platform for data exploration and data discovery (46 percent)
  • A data staging area for data warehousing and data integration (39 percent)
  • A data lake (36 percent)
  • A queryable archive for non-traditional data (36 percent)
  • A computational platform and/or sandbox for advanced analytics (33 percent)
  • An enterprise data hub for both new and traditional data (28 percent)
  • A platform for BI reporting, dashboards, and visualizations (27 percent)

Hadoop is also used for a half-dozen other enterprise use cases, Russom notes.

This doesn't mean it's smooth sailing for Hadoop in the enterprise. Russom flags several issues that enterprise Hadoop adopters will likely have to manage -- starting with an acute skills shortage. More than two-fifths of respondents (42 percent) cited a lack of Hadoop skills, he notes.

"This is natural because Hadoop is still quite new, but it's not a show stopper. Determined users tend to learn Hadoop on their own without looking to hire rare personnel who have Hadoop experience," he argues.

Other barriers include weak business support -- e.g., lack of a business case, lack of business sponsorship -- as well as security concerns, tool deficiencies, cost containment, and, of course, Hadoop's data management shortcomings.

"Enterprise data management professionals used to mature relational database management systems are often deterred by Hadoop's lack of metadata management [cited by 28 percent of respondents), immature support for ANSI-standard SQL [19 percent], and limited interoperability with existing systems or tools [also 19 percent]," Russom concedes. "However, as these professionals work with Hadoop and understand its methods, they see that metadata is managed at run time (not a priori), and Hadoop has its own approach to queries and relational data structures."

Concludes Russom: "These methods have advantages with the unstructured and schema-free data that Hadoop typically manages, so users adopt them. Furthermore, Hadoop's approaches to metadata, SQL, and standard interfaces improve regularly."
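To ground the "metadata at run time" point, here is a minimal schema-on-read sketch, assuming HiveServer2 is reachable via the pyhive package and that raw delimited files already sit in HDFS; the host, path, table, and column names are invented for illustration, not drawn from the report.

# Minimal sketch of schema-on-read (metadata applied at run time), assuming
# HiveServer2 is reachable and raw pipe-delimited files already exist under
# /landing/sales/ in HDFS. Host, path, and columns are placeholders.
from pyhive import hive

conn = hive.Connection(host="hadoop-edge.example.com", port=10000)
cursor = conn.cursor()

# No schema was declared when the files landed; an EXTERNAL table simply
# projects one onto the existing files, and it can be dropped or redefined
# later without touching the underlying data.
cursor.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS sales_raw (
        sale_ts  STRING,
        region   STRING,
        amount   DOUBLE
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    LOCATION '/landing/sales/'
""")

# The schema is applied as the query runs, not when the data was written.
cursor.execute("SELECT region, SUM(amount) FROM sales_raw GROUP BY region")
print(cursor.fetchall())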

You can download a complete copy of Russom's report here.
