In-memory database management systems have matured to the point where they reliably deliver accelerated application performance. By adopting storage layouts designed for in-memory processing, these systems make efficient use of available memory to reduce or even eliminate the latencies typically associated with far slower disk-based storage media.
Yet when reporting and compute-intensive analytics applications remain dissociated from the database management system, the benefit of keeping data in main memory diminishes: latencies are reintroduced as the data moves to the application platform. Conventional wisdom for reporting, business intelligence, and analytics urges architects to extract, transform, and load data into a segregated environment. As a result, even as RAM and cache costs have plummeted and massively multiprocessor systems have entered the mainstream, it rarely occurs to us that, instead of continually moving data among different systems, it might be smarter to leverage in-memory computing to fully integrate transaction processing, operational processing, reporting, and analytics within the same platform.
In this talk we will consider the historical motivations for segregating the data warehouse and how architectural evolution has made them moot. We then discuss approaches to in-memory database management, and consider ways to extend the in-memory paradigm to encompass a broader set of functionality for more robust in-memory computing.
You will learn: