

How In-Memory Databases Have Transformed in the Last Decade

The path to speedier and more powerful analytics may be in-memory databases.

Ten years ago, enterprises looked far different than they do today, with information contained within the traditional four walls of the organization. To find information, businesses relied on search platforms such as Google and Bing. It was a stark contrast to today's world, where businesses have the tools and intelligence to think smarter and act faster. These advancements are all the result of data: how it is stored, the speed at which it is processed, and the interactions that result from it.

Looking back, 2010 was an inflection point for the computing industry: it finally had the power to evolve beyond the traditional relational databases that had been the backbone of the enterprise for years. As with any technology in its early days, there was skepticism about this evolution, fueled by worries about data storage and data security.

Now, a decade later, in-memory databases have transformed how businesses operate with fast, integrated intelligence. In-memory databases use columnar rather than row-based storage, so queries over large data volumes complete faster. Unlike traditional row-oriented storage, columnar storage is optimized for analytics and compresses extremely well, storing much more data in much less memory. In this article, I outline the evolution of in-memory databases and explore their future.
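
To make the row-versus-column distinction concrete, here is a minimal Python sketch; the table, values, and run-length encoder are purely illustrative and are not how any particular product implements columnar storage. It simply shows why an analytical aggregate needs to touch only one column, and why repeated column values compress so well.

    # Illustrative only: a toy comparison of row- and column-oriented layouts.

    rows = [  # row store: each record's fields are kept together
        {"region": "EMEA", "product": "A", "revenue": 120},
        {"region": "EMEA", "product": "B", "revenue": 80},
        {"region": "APJ",  "product": "A", "revenue": 200},
    ]

    columns = {  # column store: each attribute is kept contiguously
        "region":  ["EMEA", "EMEA", "APJ"],
        "product": ["A", "B", "A"],
        "revenue": [120, 80, 200],
    }

    # A row-oriented scan reads every field of every record ...
    total_from_rows = sum(r["revenue"] for r in rows)

    # ... while a column-oriented scan reads only the one array it needs.
    total_from_columns = sum(columns["revenue"])

    def run_length_encode(values):
        """Collapse repeated values; one reason columnar data compresses well."""
        encoded = []
        for v in values:
            if encoded and encoded[-1][0] == v:
                encoded[-1][1] += 1
            else:
                encoded.append([v, 1])
        return [(v, n) for v, n in encoded]

    print(total_from_rows, total_from_columns)   # 400 400
    print(run_length_encode(columns["region"]))  # [('EMEA', 2), ('APJ', 1)]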

The Skeptics and the Possibilities

Years ago, the idea that a database could run on memory alone was considered far-fetched; many described it as "a complete fantasy." Data lakes and the cloud were still things of the future, and artificial intelligence and machine learning were barely part of the enterprise conversation, much less in everyday use. For many people, it was difficult to imagine data being stored and accessed in intangible environments, especially because information had long been kept on premises in relational databases.

Despite this discomfort with not being able to see where data would be stored, the world soon entered a new decade ripe for disruption. It was an opportune time for the database community to learn of a new possibility: in-memory.

With the reliability of in-memory databases still unknown, there were many questions about data loss. What would happen if there were a power failure? Would the data survive?

Beyond this uncertainty, there were also cost challenges. Because in-memory was a new type of infrastructure, costs remained high early on, slowing initial enterprise adoption.

Even with these challenges, however, the benefits of in-memory databases far outweighed the drawbacks, chief among them speed, response time, and application support. Unlike databases of the past, in-memory meant speed; response times could be cut to mere seconds. Easing skeptics' concerns, even if data were lost, it could be reloaded in very little time.

With in-memory databases, everything from predictive analytics to simulations and multistep queries could be accomplished at human speeds. As a result, businesses could operate faster, see stronger results, and ultimately drive higher customer satisfaction and revenue.

In-Memory Databases at Work

With the level of data production the world began to see a decade ago, in-memory databases went from a nice-to-have to a must-have. Enterprises produced information at unprecedented levels across a variety of new devices and platforms (such as smartphones and social media), and in-memory databases helped organizations keep up with and access that information.

Fast forward to today and it's clear that the amount of data will only continue to grow. In fact, a 2018 IDC whitepaper predicted that the global datasphere would grow by 142 zettabytes between 2018 and 2025.

Today, when persistent memory is combined with in-memory databases, enterprises gain even more powerful analytics and, as a result, can run more transactions in real time. Persistent memory (PMEM) extends in-memory computing further: it costs less per gigabyte than dynamic random access memory (DRAM) and retains its contents across power cycles, so more data can stay in memory instead of being offloaded to disk. The combination of persistent memory and in-memory databases supports online analytical processing (OLAP) and online transaction processing (OLTP) at previously inconceivable speeds.
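
As a rough illustration of a single in-memory store serving both kinds of work, the Python sketch below uses the standard library's sqlite3 module with an in-memory database as a stand-in for an enterprise in-memory platform; the table and values are hypothetical. It commits transactional (OLTP-style) writes and immediately runs an analytical (OLAP-style) aggregate over the same live data, with no export to a separate warehouse.

    import sqlite3

    # Stand-in only: sqlite3's ":memory:" database illustrates the idea of one
    # in-memory store handling both transactional and analytical work; it is
    # not an enterprise in-memory column store.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
    )

    # OLTP-style: small, frequent transactional writes, committed atomically.
    with conn:
        conn.executemany(
            "INSERT INTO orders (region, amount) VALUES (?, ?)",
            [("EMEA", 120.0), ("EMEA", 80.0), ("APJ", 200.0)],
        )

    # OLAP-style: an analytical aggregate over the same live, in-memory rows.
    for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
    ):
        print(region, total)  # APJ 200.0, then EMEA 200.0

    conn.close()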

The Future

Within just a decade, we've seen industries transform and the flow of data skyrocket to levels never before thought possible. We've expanded beyond the four walls of the enterprise. It's clear that data can no longer be kept only on premises -- it's all about integration and access from any device, in any environment, at any time. In-memory databases have changed the way that businesses operate -- taking the enterprise from static to intelligent.

In the future, we anticipate only more change, at even greater speed. Organizations need both to dramatically expand data use through self-service access and to ensure the business's applications have access to real-time data. At the same time, they must maintain governance and security over their company's precious data assets. Enterprises that can pull this off will possess a data superpower that competitors will find extremely tough to copy.

About the Author

Neil McGovern is VP of marketing at SAP, where he is responsible for product marketing for HANA. You can reach the author via LinkedIn.

