Why It’s Time to Consider a Hyperscale Approach to Data Analytics and Operational Intelligence

The explosion of machine-generated data is driving enterprises to adopt strategies and tools to harness data and leverage operational intelligence at hyperscale.

The amount of data generated by an increasingly digital and digitized world continues to grow at an unprecedented rate. According to Matt Aslett, VP and research director at Ventana Research, “Through 2024, six in 10 organizations will re-examine their current operational database suppliers with a view to supporting more agile and intelligent operational applications and improving fault tolerance.”

With an ever-growing number of sensors and smart devices generating data while in motion, many organizations must not only scale the amount of data they ingest and analyze but also adapt to harness more dynamic and complex data sets coming from a fast-growing number of sources.

This explosion in machine-generated data is creating opportunities to unlock more insights than ever before. It simultaneously creates structural, operational, and organizational challenges, as companies must adapt and innovate quickly to make smarter business decisions and stay ahead.

For this reason, it’s time for enterprises to consider moving beyond big data and adopting strategies and tools to harness data and leverage operational intelligence at hyperscale. How can organizations make this move quickly while managing (or even cutting) costs? To dive a level deeper, I’ll examine the opportunities on the horizon and share three distinct ways in which hyperscale data analytics is shaping the future.

How Hyperscale Data Analytics Is Changing the Game

Today, terabytes to petabytes of data are common in a hyperscale data center, but that data often remains difficult to analyze in a meaningful way. Google, for example, has reported that its data centers handle about 1 million requests per second (RPS) and process more than 20 petabytes of data per day -- and that’s just from search queries. At this scale, it can be difficult to know where to start analyzing all of that valuable information, let alone how to do it quickly. Even when you find what you’re looking for, making sense of it is a challenge of its own.
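To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted above (no additional reported data):

```python
# Back-of-the-envelope math using only the figures quoted above.
PB = 10**15                        # bytes per petabyte (decimal convention)
bytes_per_day = 20 * PB            # ~20 PB processed per day
seconds_per_day = 86_400

throughput = bytes_per_day / seconds_per_day
print(f"sustained throughput: {throughput / 10**9:.0f} GB/s")    # ~231 GB/s

requests_per_second = 1_000_000
per_request = throughput / requests_per_second
print(f"data touched per request: {per_request / 1_000:.0f} KB")  # ~231 KB
```

Sustaining hundreds of gigabytes per second, around the clock, is what “hyperscale” means in practice.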

When it comes to hyperscale data analytics, not every organization has the resources of a tech giant such as Google to scale and manage powerful computational solutions that continuously work with its data. Architecting high-performance solutions cost-effectively is therefore quite challenging, particularly when data sets begin to reach petabyte scale. Yet most organizations recognize that effective data analysis is critical to their business.

For example, vehicle manufacturers could better understand driving patterns and behaviors to develop new in-car entertainment services or determine where to place their next dealership. Intelligent disaster response teams may leverage hyperscale data analytics to better assess the impact of earthquakes through patterns in social media usage. The opportunities are limitless and demand for these kinds of capabilities will only increase as more individuals and companies jump into cloud computing and more 5G-enabled devices come online.

To pursue these opportunities, organizations should consider adopting new strategies and tools to harness the vast amount of data created by an increasingly digital world. Within the field of data management and analytics, we see three key trends leading organizations to adopt a hyperscale approach to data analytics and operational intelligence.

Trend #1. The systems are getting smaller and more cost-effective

One area where we see drastic disruption from hyperscale data solutions is in how small and cost-effective systems are becoming. The trick is to multiply the amount of data ingested, stored, and analyzed without increasing the footprint or cost of the solution.

We’ve seen this type of disruption before in the evolution of the computing industry over a relatively short period. The first computers took up entire rooms; today, people carry computers in their backpacks and back pockets. The consolidation of computing power, system scale, and sheer volume of applications is mind-boggling when you consider where computing started. Hyperscale data analytics is now catapulting the industry forward along a similar trajectory.

Gaining significant structural advantages for hyperscale data analytics begins with extracting every ounce of performance from modern, industry-standard hardware. The database or data warehouse has to be fully optimized to benefit from the transformative gains of the industry’s latest high-core-count processors and super-fast networking. In the near term, these systems are heading toward supercomputer-like performance from a single cluster of servers; in the not-so-distant future, we may be able to deliver a hyperscale data analytics system in a briefcase or backpack.
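As a toy illustration of why core counts matter (not how any particular engine is implemented), scanning data partitions in parallel rather than serially is what lets throughput grow with the hardware. The partitions here are hypothetical in-memory lists; a real engine parallelizes over its own storage format:

```python
from multiprocessing import Pool

def scan_partition(partition):
    # Count rows matching a filter within one partition.
    return sum(1 for value in partition if value > 0.9)

if __name__ == "__main__":
    # Eight hypothetical data partitions of 1,000 values each.
    partitions = [[i / 1000 for i in range(1000)] for _ in range(8)]
    with Pool() as pool:                # one worker per available core
        counts = pool.map(scan_partition, partitions)
    print(f"matching rows: {sum(counts)}")
```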

This fundamental shift in system sizing and cost is a game changer. It is enabling a new class of intelligent applications fueled by incredible amounts of data, providing insights and intelligence that weren’t previously feasible with legacy technologies.

Trend #2. Data movement and access are no longer at odds with security and compliance

One of the core challenges for any data organization is balancing the tradeoff between keeping data secure and moving it into elastic workloads and environments. A key shift we see with the emergence of hyperscale data analytics is the ability to move data “at hyperscale” into a core platform, gain near-real-time access, and leverage that data across multiple workloads and users without moving it further into disparate environments. Most systems leveraging data at this scale are not looking at data at rest. They’re plugged into a firehose of data that may come from a fleet of connected vehicles, a telecommunications provider’s network activity, an ad exchange, or another source with trillions of events, logs, and records that need to be ingested and analyzed quickly.

Moving data at hyperscale is critical to enabling analytics at hyperscale. Effective solutions streamline the data path by addressing an organization’s hyperscale ETL (extract, transform, and load) and ELT (extract, load, and transform) requirements, which have typically demanded separate tools and investment. Addressing the challenge of data ingestion within a single solution enables organizations to accelerate their time to market and see value right away.
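As a minimal sketch of the ELT pattern described above, raw records land first and are transformed inside the database with SQL. This uses Python’s built-in sqlite3 as a stand-in for a real hyperscale platform, with hypothetical sensor events, and assumes a SQLite build with the JSON functions enabled:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a hyperscale data platform

# Extract + Load: land raw events as-is in a staging table.
conn.execute("CREATE TABLE raw_events (payload TEXT)")
events = [{"device": "sensor-1", "temp_c": 21.5},
          {"device": "sensor-2", "temp_c": 38.2}]
conn.executemany(
    "INSERT INTO raw_events VALUES (?)",
    [(json.dumps(e),) for e in events],
)

# Transform: reshape inside the database with SQL, not in a separate ETL tool.
conn.execute("""
    CREATE TABLE readings AS
    SELECT json_extract(payload, '$.device') AS device,
           json_extract(payload, '$.temp_c') AS temp_c
    FROM raw_events
""")
print(conn.execute("SELECT device, temp_c FROM readings").fetchall())
```

The design point is that extract-and-load is decoupled from transformation, so ingestion can run at full speed while transformations happen where the compute already is.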

Another aspect of data movement is directly related to access for various lines of business. Because hyperscale data analytics systems can maintain performance at scale, they enable organizations to consolidate multiple workloads within the same platform. This, in turn, lowers the amount of data movement required once data is in the system. Additional intra-database capabilities that support machine learning, data science modeling, and other tasks ensure raw data stays within a single platform while making it accessible to multiple users. The sheer ability to stream and transform high-volume data while simultaneously supporting thousands of concurrent users across the enterprise is a critical capability enabled by hyperscale data analytics solutions.
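The same principle can be sketched in miniature (again using sqlite3 as a hypothetical stand-in for the shared platform): rather than each consumer exporting raw data into its own environment, workloads push their computation to the data and pull back only small result sets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the shared platform
conn.execute("CREATE TABLE readings (device TEXT, temp_c REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("sensor-1", 21.5), ("sensor-1", 22.0), ("sensor-2", 38.2)],
)

# Each consumer sends a query to the data and gets back a summary;
# the raw rows never leave the platform.
summary = conn.execute("""
    SELECT device, COUNT(*) AS n, AVG(temp_c) AS avg_temp
    FROM readings
    GROUP BY device
""").fetchall()
print(summary)  # [('sensor-1', 2, 21.75), ('sensor-2', 1, 38.2)]
```

A dashboard, a data science notebook, and a machine learning pipeline can all issue queries like this against the same copy of the data, which is what eliminates the extra movement described above.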

Trend #3. Organizations are moving beyond data silos

A third game-changing trend we see in hyperscale data analytics is an organizational shift away from data fragmentation and toward consolidated data pipelines for integration, analysis, and enablement. Although addressing data-specific requests and projects one by one may have been a reasonable way to start tackling data management across the enterprise, organizations can no longer embrace digital transformation with such a fragmented and siloed approach. The reason? It gets costly and inefficient, particularly at petabyte scale, not to mention the security concerns of keeping data in silos that must be individually protected and audited.

As data starts to come from everywhere in the field and across the enterprise, we see organizations prioritizing streamlined efforts to manage data strategically, in a way that enables various lines of business to maximize value from a single source of raw data. Whether through a central DataOps team or a data-focused center of excellence, improved communication, integration, and automation of data flows between data managers and data consumers are changing the way organizations take advantage of their data.

Working this way requires a solid strategy, clear governance, and an ecosystem of partners who can support the organization’s goals. For this reason, we see more organizations offloading some of the engineering-intensive work to capable partners while upskilling their own teams to focus on strategic data problems, holistic operational and new business opportunities, and scaling the number of projects they can take on as a whole. The opportunity for skilled, specialized partners to collaborate with large enterprises to harness more customer and operational intelligence from hyperscale data sets has never been greater.

Organizations That Adopt a Hyperscale Approach Will Change the Game Forever

I’ve examined several structural, operational, and organizational benefits of moving towards a hyperscale approach to data analytics and operational intelligence, but we all know handling ever-growing amounts of data while maintaining or increasing performance is not easy to execute. Today, data analytics providers are tackling hyperscale data analytics in a fundamentally different way, making solutions accessible to organizations that need to migrate from legacy technologies and upgrade existing systems quickly.

Organizations that can put hyperscale data analytics to work will make smarter business decisions across every aspect of the business and drastically scale their ability to harness data in near real time. They’ll double, triple, or quadruple their ability to process and analyze data and open vast new market opportunities without doubling, tripling, or quadrupling costs. They’ll improve their strategic ability to operate more efficiently and effectively, and potentially even sleep better at night.
