
SQL, NoSQL, or Hadoop? Determining the Best Platform for Your Purpose

An increasing variety of data types, analytics needs, and business uses drive data architecture decisions today. Follow this real-world, informal evaluation of SQL, NoSQL, and Hadoop solutions that address a business use case.

Until recently, determining the best architecture for a given purpose was relatively easy. For one thing, not many choices were available. Each information management platform was bred for a singular purpose. Each platform -- such as SQL relational data warehouses, NoSQL document-oriented and graph databases, and Hadoop -- had clear strengths that distinguished it from its counterparts on the data landscape. You might argue this is still true.

However, as my fellow information professionals and architects and I are discovering, these decisions are more difficult today. The variety and volume of data, current business problems, and analytics use cases are blurring the lines between what used to be cut-and-dried architectural choices.

Case Study: Working Simultaneously with Structured and Unstructured Data

A recent case I was involved in offers a good example. An organization needed to mesh structured and unstructured data for side-by-side analysis -- that is, they wanted to perform quantitative and qualitative analysis at the same time. The free-form text sources contained keywords and other data indicating causation, intent, and sentiment, and the structured relational sources held the discrete times, events, and outcomes -- the "facts" of the situation.

To work with both types of data, my team faced many architectural considerations.

First, there was already a data warehouse appliance in place, but it had a row length limit. This solution handled the structured data wonderfully, but we were left holding the bag on the unstructured data. Obviously, we had to consider other options.

The other concern we had was volume. This organization was processing millions of these transactions every day, and the unstructured data for each averaged nearly half a kilobyte. Do the math.
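To spell that out: at roughly five million transactions a day and about half a kilobyte of free-form text each, that is on the order of 2.5 GB of new unstructured data per day -- close to a terabyte a year before any replication or indexing overhead.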

All told, we built three proof-of-concept scenarios:

  1. SQL data warehouse loaded with a proprietary ETL tool
  2. NoSQL document-oriented database loaded with its native tool
  3. Hadoop loaded with an open source ETL tool

I'm not mentioning the names of the actual vendor platforms because the results published here are strictly unofficial, and I'm only sharing them to illustrate our thought process in making our selection.

Option 1: SQL Data Warehouse Loaded with Proprietary ETL

For the data warehouse solution, we loaded the required structured data into a modeled relational schema from the source system using a conventional, proprietary ETL tool that was already in-house. The unstructured data was scanned for keywords of interest, and those counts were aggregated for each document. The structured data was then combined with the aggregated keyword counts into delimited rows, and an R-ready output file was generated for deeper analysis.
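As a rough sketch only -- not the proprietary ETL tool's actual job, and with hypothetical keywords, column names, and delimiter -- the keyword-aggregation and output step could be expressed in Python along these lines:

    import csv
    from collections import Counter

    # Hypothetical keywords of interest; the real list came from the business
    KEYWORDS = ["delay", "refund", "complaint", "escalation"]

    def keyword_counts(text):
        """Count occurrences of each keyword in one free-form document."""
        words = Counter(text.lower().split())
        return [words[k] for k in KEYWORDS]

    def write_r_ready_file(rows, out_path):
        """Combine structured columns with per-document keyword counts
        into a delimited file that R can read directly."""
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f, delimiter="|")
            writer.writerow(["event_id", "event_time", "outcome"] + KEYWORDS)
            for event_id, event_time, outcome, free_text in rows:
                writer.writerow([event_id, event_time, outcome] + keyword_counts(free_text))

Because the keyword list is fixed inside the job, discovering new keywords later means reworking the pipeline -- a limitation that comes up again below.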

To test this concept we processed a sample of five million rows, and the ETL job took 25 minutes to complete. This simulated about a day's worth of data and fit within our overnight extraction and integration window.

The upside of this method was that it was fairly simple and direct. It performed well at this scale, but scalability was a concern. Scaling the data warehouse appliance is not cheap, and we were unsure whether its valuable data real estate should be consumed by data with relatively low-density business value.

Also, the unstructured data was severely truncated. What if, upon analysis, the business discovered a new set of keywords to add to the mix? The solution would have to be reconfigured and rescaled. The text output offered some consolation, but it would have to be loaded into an analytical platform such as R before it could be queried and analyzed.

Option 2: NoSQL Document-Oriented Database Loaded with Its Native Tool

Next, we considered a document-oriented database. We loaded the documents into the database with a "schema on load." Loading the same five million structured rows took a little more than an hour and a half.

One benefit of this approach was that the structured data (20 columns in all) and the unstructured data could be combined in the same "document" and marked up with tags to identify columns. Scalability was no problem either because the document "height" and volume are limited only by the disk space of the virtual machine on which the database sits.
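Purely for illustration -- the actual product, its native load tool, and these field names are not from the engagement -- a single combined document might look something like this:

    # One combined "document": the structured columns and the free-form text
    # live side by side, each tagged with a field name (all names invented here).
    combined_doc = {
        "event_id": 100234,
        "event_time": "2016-03-14T09:27:00Z",
        "outcome": "resolved",
        # ...the rest of the 20 structured columns...
        "free_text": "Customer called about a delayed shipment and asked for a refund.",
    }

The database's native tool ingested documents in this shape directly, which is what "schema on load" bought us.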

We had both data sets fully loaded together and queryable. Unfortunately, very few people in the company knew how to query the database using JSON or XQuery. The slower load performance was also a concern.

Option 3: Hadoop Loaded with an Open Source ETL Tool

Finally, we also tried loading both data types into Hadoop using an open source ETL tool. The proprietary ETL tool we used for the SQL case had only limited Hadoop-licensed features, so we turned to the open source tool. The text of both the structured and unstructured sources was loaded with a flat file schema similar to the output file of the SQL case. We also considered Hive as a possibility.
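The load itself was built as a job in the open source ETL tool rather than hand-coded, but a minimal PySpark sketch of the equivalent step -- with assumed paths, delimiter, and file names -- would look roughly like this:

    from pyspark.sql import SparkSession

    # Sketch only: paths, delimiter, and names are assumptions, not the actual job.
    spark = SparkSession.builder.appName("combined-load").getOrCreate()

    # Read the flat file of combined structured columns and truncated text
    combined = (spark.read
                .option("sep", "|")
                .option("header", "true")
                .csv("/staging/combined_events.txt"))

    # Land it on HDFS as a delimited flat file, mirroring the SQL case's output;
    # a Hive external table over the same location was the variation we considered.
    (combined.write
     .mode("overwrite")
     .option("sep", "|")
     .csv("hdfs:///data/combined_events"))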

The five million records of combined data were loaded into the Hadoop distributed file system in just over an hour -- nearly halfway between the SQL and the NoSQL trials. This solution was scalable and fit for purpose, and the commodity hardware used for the Hadoop cluster was relatively inexpensive. The downside was, once again, introducing a platform unfamiliar to the business.

Conclusion: The Client Played It Safe

Eventually, the business elected to go with the SQL solution because they were most comfortable with it. Getting business and IT to stretch outside their comfort zones and embark on new architectural directions is sometimes a challenge.

I suspect that the limited scalability and total cost of this solution will eventually catch up with them and become unwieldy and unsustainable. For now, however, it seemed to be their best option.

Still, the results of these proof-of-concept experiments were interesting. The value we gleaned from them lies not in the particular performance numbers but in the case study they provide of the variety of architectural decisions one must make today.

The various platform choices suit different business use cases and different amounts of data variety and volume. If you face a similar crossroads, I hope our unofficial experiments prove informative.

About the Author

Jake Dolezal

Dr. Jake Dolezal is practice leader of Analytics in Action at McKnight Consulting Group Global Services, where he is responsible for helping clients build programs around data and analytics. You can contact the author at [email protected].
