
Executive Q&A: Understanding In-Memory Computing

In-memory computing brings computation together with data, leveraging data-aware execution and data locality to dramatically improve performance and scalability for streaming analytics and machine learning. John DesJardins, CTO of Hazelcast, explains.

What is in-memory computing? What can it do for businesses, from e-commerce to fleet management? What can we look for in the future? Upside asked John DesJardins, CTO of Hazelcast, for his perspective.

Upside: What is in-memory computing and what's driving enterprises to adopt it?

John DesJardins: In-memory computing begins with a distributed, shared memory layer and then brings computation logic and data together in a data-aware distributed architecture. The result is both transactional and analytical data processing at very low latency, in real time, with zero downtime and linear scalability. Low latency means the results of computation are nearly instantaneous; real time means the data is acted on continuously and predictably, as it is created.
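
For readers who want to see the idea in code: a minimal sketch (not from the interview) of the distributed, shared memory layer using Hazelcast's Java API, assuming Hazelcast 5.x in embedded mode. The map name and values here are hypothetical.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class SharedStateSketch {
    public static void main(String[] args) {
        // Start an embedded member; additional members started the same
        // way discover each other and partition the data between them.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "orders" is a hypothetical map name. The IMap is partitioned
        // across the cluster, so each key lives in memory on one member.
        IMap<String, Double> orders = hz.getMap("orders");
        orders.put("order-42", 99.95);
        System.out.println(orders.get("order-42"));

        hz.shutdown();
    }
}

Each member that joins the cluster takes ownership of a share of the map's partitions, which is what produces the linear scalability DesJardins describes.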

Many trends are driving adoption of in-memory computing, the top being microservices, serverless, and cloud-native technologies. Other major drivers include the Internet of Things (IoT), edge computing, and machine learning (ML). All of these create the need for faster processing at cloud scale.

What are the benefits for the enterprise?

At a high level, the primary benefits are shared state and business context across services; resiliency and zero downtime with performance at scale; and the ability to manage complexity at scale. For example, Hazelcast enables businesses to embed analytics into applications to create smarter, more innovative experiences while delivering giga-scale performance with near-zero latency and zero downtime. This removes barriers to customer engagement. It all runs on cost-effective hardware anywhere it's needed -- cloud, data center, or edge -- lowering costs, reducing time to market for innovation, driving revenue growth, and reducing risk.

With the exception of "performance at scale," those all sound like benefits for IT. Are there any benefits that a business analyst would see?

Faster time to insight means you can ask more questions (or more complex ones) and get answers in real time or near real time. For example, banks can run risk calculations during the day, and even ad hoc risk calculations, where in the past they had to run these as end-of-day batch processes.

An analyst can now deliver autonomous experiences, such as real-time personalized recommendations on a retail, travel, or media website; autonomous ad targeting and retargeting; or autonomous customer support driven by smart bots.

What are some of the misconceptions people have about in-memory computing?

Many people assume that in-memory computing is just about faster response times on data access (reads, writes, and queries) and aren't aware of its benefits for computation itself. By bringing computation together with the data and leveraging data-aware execution and data locality, it is possible to dramatically improve performance and scalability for streaming analytics and machine learning, enabling real-time, low-latency processing.
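
To make "bringing the computation to the data" concrete, here is a hedged sketch using Hazelcast's EntryProcessor, assuming the Hazelcast 5.x Java API. The processor runs on the cluster member that owns the key; the map name and increment logic are hypothetical.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class DataLocalitySketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "page-views" is a hypothetical map of counters.
        IMap<String, Long> pageViews = hz.getMap("page-views");

        // The entry processor is shipped to the member that owns the
        // key "home" and executes there, so only the small processor
        // and its result cross the network -- never the stored value.
        Long updated = pageViews.executeOnKey("home", entry -> {
            Long current = entry.getValue();
            long next = (current == null ? 0L : current) + 1;
            entry.setValue(next);
            return next;
        });
        System.out.println("views: " + updated);

        hz.shutdown();
    }
}

The design point is that shipping a small piece of logic to the data is far cheaper than shipping the data to the logic, which is what enables the performance and scalability gains described above.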

What tasks are best suited for in-memory computing?

Any application can take advantage of in-memory computing. However, some tasks are better suited than others, particularly high-throughput applications and those requiring low-latency, real-time data processing. Top use cases include e-commerce sites in retail and travel; online banking, including consumer banking, wealth management, payments processing, and real-time trading; online gaming; digital advertising; online media; and logistics.

Other use cases include IoT and telemetry applications such as connected vehicles, fleet management, predictive maintenance, and industrial automation. A growing driver for in-memory computing is real-time machine learning inference for building AI-driven autonomous applications such as robotic process automation.
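
As a sketch of what real-time scoring over a telemetry stream can look like on such a platform, here is a hedged example using the Jet streaming engine built into Hazelcast 5.x. The score function is a hypothetical stand-in for a real ML model, and TestSources.itemStream supplies mock telemetry.

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class StreamingScoringSketch {
    // Hypothetical stand-in for calling a real ML model.
    static double score(long reading) {
        return (reading % 100) / 100.0;
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.itemStream(10))    // mock telemetry, 10 events/sec
         .withIngestionTimestamps()
         .map(event -> score(event.sequence()))   // score each event as it arrives
         .filter(s -> s > 0.9)                    // keep only anomalous scores
         .writeTo(Sinks.logger());

        // The Jet streaming engine is disabled by default in Hazelcast 5.
        Config config = new Config();
        config.getJetConfig().setEnabled(true);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getJet().newJob(p).join();
    }
}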

What does it take to get started with in-memory computing?

It is easy to get started with in-memory computing. In-memory platforms such as Hazelcast are available as open source, in the cloud, or via Docker, Kubernetes, and other container-based platforms such as Red Hat OpenShift. The platforms can be installed with a simple command line, Helm, or Maven. There are many blogs and tutorials available, most languages are supported, and there is free online training.
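
For example, a hedged "first program," assuming the Hazelcast 5.x Java client (the com.hazelcast:hazelcast artifact) and a member already running locally with default settings:

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class HelloHazelcastClient {
    public static void main(String[] args) {
        // Assumes a member is already running locally with defaults,
        // e.g.: docker run -p 5701:5701 hazelcast/hazelcast
        HazelcastInstance client = HazelcastClient.newHazelcastClient();

        // "greetings" is a hypothetical map name.
        IMap<String, String> greetings = client.getMap("greetings");
        greetings.put("hello", "world");
        System.out.println(greetings.get("hello"));

        client.shutdown();
    }
}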

Where do you see in-memory going in the next, say, 1-2 years?

We see in-memory architectures continuing to grow in adoption with new applications in the coming years. There are technology trends that will contribute to the evolution of these platforms, including Intel Optane and other 3D XPoint memory technologies, as well as DDR5 DRAM becoming available as primary memory, and improvements in SSDs. Leading vendors will incorporate these technologies to enable processing more data while delivering peak performance, scalability, and resilience.

About the Author

James E. Powell is the editorial director of TDWI publications, including research reports, the Business Intelligence Journal, and the Upside newsletter. You can contact him via email.

