GPUs Help BI and AI Converge

If your enterprise needs to move to real-time analytics and incorporate the benefits of AI, these three trends are worth your attention.

Organizations across industries are turning to GPUs to accelerate deep learning, analytics, and software applications. From government labs to universities to small and medium-sized businesses around the world, GPUs are playing a major role in accelerating applications in areas such as numerical analytics, medical imaging, computational finance, bioinformatics, and data science, to name just a few.

In the past, organizations were constrained by the limits of compute power and human capabilities. Now, GPU hardware acceleration is revolutionizing high performance computing. For example, organizations can replace their 300-node database clusters with just 30 nodes of a GPU-accelerated database. The benefits are obvious: one-tenth the footprint, performance significantly faster than other in-memory analytics solutions, substantial data center operating cost savings, and greater deployment flexibility. In short, companies can harness the power of GPUs for unprecedented performance to ingest, explore, and visualize data in motion and at rest.

Three converging technology trends are enabling organizations to deliver both real-time analytics and machine learning/artificial intelligence in a simple and cost-effective way.

Trend #1: The Shift to GPUs

The rise of graphics processing units (GPUs) in the data center, in vehicles, and in cell phones is one of the latest technology trends, bringing orders of magnitude more compute power with a significantly smaller hardware and energy footprint. These processors, popularized by chip makers such as NVIDIA, were originally designed for rendering graphics. They are well suited to broader applications because they pack thousands of cores into each device -- perfect for distributed, scalable, parallel processing applications such as high-performance databases.

GPUs differ significantly from standard CPUs in that today's GPUs have around 4,500 cores (computational units) per device, compared with a CPU that typically has 8 to 16 cores. This brings compute-hungry, big-data applications within the reach of most enterprises for the first time.
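To make the parallelism concrete, here is a minimal sketch (not from the article) that runs the same data-parallel computation on a handful of CPU cores with NumPy and across thousands of GPU cores with CuPy. It assumes a CUDA-capable GPU and the cupy package are installed.

```python
# Minimal sketch: the same data-parallel work on CPU (NumPy) and GPU (CuPy).
# Assumes a CUDA-capable GPU and the cupy package are available.
import numpy as np
import cupy as cp

n = 50_000_000
cpu_data = np.random.rand(n).astype(np.float32)   # array held in host (CPU) memory
gpu_data = cp.asarray(cpu_data)                   # copy the same array into GPU memory

cpu_result = np.sqrt(cpu_data).sum()              # executes on a handful of CPU cores
gpu_result = cp.sqrt(gpu_data).sum()              # executes across thousands of GPU cores

print(float(cpu_result), float(gpu_result))
```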

Organizations can also deploy substantially less hardware than before thanks to the GPU's highly concentrated compute power. In fact, organizations report that they are deploying between one-tenth and one-fortieth of the server hardware when GPUs are added to the mix.

The GPU has come along at just the right time, too: CPU designs are no longer keeping up with the strides enjoyed over the past 52 years under Moore's Law (which, in practice, has meant a doubling of CPU performance every 18 to 24 months). Traditional CPU-only solutions are also falling further behind the data explosion, with some studies suggesting that CPUs' physical scaling limits would be reached as early as 2017. Businesses can mitigate this by adding "assistive" technologies such as GPUs to the mix.

What types of enterprises are using GPUs today? Many banks have been innovating with GPUs for more than five years and were among the first businesses to realize their value. Companies such as JPMorgan Chase have deployed enormous clusters of thousands of GPUs to run algorithms such as Monte Carlo simulations against rapidly changing, streaming trading data to compute risk -- essential for regulatory compliance.
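As a rough illustration of the kind of workload described above, the following toy sketch estimates one-day value-at-risk with a Monte Carlo simulation on the GPU via CuPy. The portfolio figures and distribution parameters are invented for the example and bear no relation to any bank's actual models.

```python
# Illustrative only: a toy Monte Carlo value-at-risk (VaR) calculation of the kind
# banks run at far larger scale on GPU clusters. All numbers below are made up.
import cupy as cp

portfolio_value = 1_000_000.0        # hypothetical portfolio worth $1M
mu, sigma = 0.0005, 0.02             # assumed daily return mean and volatility
n_scenarios = 10_000_000             # simulated one-day market scenarios

returns = cp.random.normal(mu, sigma, size=n_scenarios)   # simulate returns on the GPU
losses = -portfolio_value * returns                        # translate returns into dollar losses
var_99 = cp.percentile(losses, 99)                         # 99th-percentile loss = 1-day 99% VaR

print(f"1-day 99% VaR: ${float(var_99):,.0f}")
```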

Trend #2: In-memory Processing

As memory costs continue to tumble and server memory capacities increase, it makes sense to use semiconductor main memory as your primary data store, instead of or in addition to spinning magnetic disk or SSD storage. Main memory offers the CPU (and GPUs, too) far higher bandwidth and lower latency than any other storage tier, making it the best place to keep and process data.
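A minimal way to see the principle: the sketch below uses Python's built-in sqlite3 module with a ':memory:' database, so the table and every query live entirely in RAM and never touch disk. GPU-accelerated, in-memory databases apply the same idea at far greater scale; the table and rows here are purely illustrative.

```python
# Minimal sketch of in-memory processing: the database lives entirely in RAM.
import sqlite3

conn = sqlite3.connect(":memory:")   # database held in main memory, not on disk
conn.execute("CREATE TABLE events (device_id INTEGER, lat REAL, lon REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, 51.5, -0.1), (2, 40.7, -74.0), (1, 51.6, -0.2)],   # toy location events
)
rows = conn.execute(
    "SELECT device_id, COUNT(*) FROM events GROUP BY device_id"
).fetchall()
print(rows)
```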

One organization that's taking advantage of in-memory processing and GPUs is the United States Postal Service, the largest logistics operation in the U.S. The enterprise moves more individual items in four hours than UPS, FedEx, and DHL combined move in a year. With 200,000 devices emitting their location every minute -- over a quarter of a billion events captured daily -- USPS turned to a GPU-accelerated database to meet its requirements for analytics and route optimization.

Trend #3: BI and AI Convergence

The third recent trend I'd like to discuss is the intermingling of relational database workloads with machine learning (ML) and artificial intelligence (AI) applications in a single solution, running on a single copy of the data. It is made possible by combining the first two trends with open source software libraries for machine learning across a range of tasks. These libraries include TensorFlow from Google, Caffe (a deep learning framework), and Torch (a machine learning and neural network framework). They can be extremely compute-hungry, especially on massive datasets, and are specifically designed to benefit from GPU horsepower and in-memory processing.
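As a hedged sketch of how such frameworks exploit GPU horsepower, the snippet below uses PyTorch (the successor to Torch) to move a tiny model and a batch of synthetic data onto a GPU when one is available and run a few training steps. The model, data, and hyperparameters are placeholders, not anything described in the article.

```python
# Sketch: offloading training to a GPU with PyTorch, falling back to CPU if none exists.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
data = torch.randn(10_000, 64, device=device)     # synthetic feature rows on the GPU
targets = torch.randn(10_000, 1, device=device)   # synthetic labels on the GPU

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for _ in range(5):                                # a few training steps, all on the device
    optimizer.zero_grad()
    loss = loss_fn(model(data), targets)
    loss.backward()
    optimizer.step()
```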

The Trends in Practice

Who is leveraging all three of these trends?

Numerous innovators deploying GPU-accelerated, in-memory database solutions can be found in utilities and energy, healthcare, genomics research, automotive, retail, telecommunications, and many other industries. Adopters are combining traditional and transactional data, streaming data, and data from blogs, forums, and social media, along with orbital imagery and other IoT sources, in a single, distributed, scalable solution.

Enterprises can run sophisticated data science workloads in the same database that houses the rich information needed to run the business and drive day-to-day decisions. This neatly solves the data movement challenge, because there is no data movement at all, which leads to simpler Lambda architectures and more efficient ML and AI workloads. Quants, data scientists, and analysts can deploy a model from a deep learning framework via a simple API call or train their models on all the data; users get the benefits of GPUs and in-memory processing without having to learn new programming languages.
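What "deploy a model via a simple API call" can look like in practice is sketched below: an analyst posts a handful of records to a model served behind a REST endpoint and gets scores back, without moving data into a separate ML stack. The URL, payload shape, and field names are hypothetical stand-ins, not any specific vendor's API.

```python
# Hypothetical sketch: scoring rows against a model exposed behind a REST endpoint.
import requests

rows = [
    {"device_id": 1, "lat": 51.5, "lon": -0.1},
    {"device_id": 2, "lat": 40.7, "lon": -74.0},
]

response = requests.post(
    "http://example.com/models/route-optimizer/score",   # placeholder endpoint
    json={"records": rows},
    timeout=10,
)
print(response.json())
```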

Organizations dealing with burgeoning data volumes, unacceptable latency, high costs, complexity, or skills gaps are turning to GPU-based, in-memory databases to capture and better understand their data, visualize insights, and prepare for tomorrow's challenges and opportunities for data-driven innovation.

About the Author

James Mesney is principal solutions engineer at Kinetica. Mesney has 25 years of experience working with numerous vendors in the analytics, BI, data warehousing, and Hadoop spaces. He has a degree in computer science from Staffordshire University and lives in Hampshire with his wife and two daughters.

