LESSON - Evaluating Data Warehouse Appliances Based on Cost-per-Statement
By Jesse M. Fountain, Vice President, Pre-Sales Services, DATAllegro
As more and more organizations turn to data warehouse appliances (DWAs) to control the spiraling costs of their very large database (VLDB) data warehouse implementations, they are looking for easier ways to evaluate vendors.
As little as two years ago, pioneering organizations were sold on the price/performance of DWAs but focused mainly on single-run query speeds as the primary justification for the investment. Factors such as system throughput for 200+ concurrent users were not considered, let alone throughput for mixed-workload queries run while simultaneously loading and updating large tables.
Consider that your data loads (time and volume), your database design, the complexity (or simplicity) of your queries, your concurrency/mixed workload, etc., are very specific to your environment and culture. What you really need is an easy way to incorporate these nuances into the evaluation criteria so that you can effectively compare and contrast vendors.
The evaluation criteria for DWAs have matured to include query speeds, data load speeds, concurrency, and backup/restore time. However, some organizations are adding another measure to the mix that provides a more common denominator for evaluating DW appliances.
This new yardstick is often referred to as the cost-per-statement, or CPS. CPS is a more accurate measurement, as it represents not only queries, but DML and load statements as well. It can also include other operations such as system backup and data export.
Here is the simplified equation:

CPS = total system cost ÷ statements-per-year (SPY)
To build your own customized CPS model:
A. Determine your expected physical system requirements and costs. These should include hardware, database licensing, operating expenses, implementation/training costs, etc.
B. Determine statements-per-hour (SPH): Select 20–30 of your most representative queries. There are various techniques for identifying these; essentially, a query stream from your busiest workday can be pulled and analyzed.
- Set up a concurrency model of the representative queries where each query is parameterized with varying values (dates, geographies, etc.). The concurrency model should simulate the number of threads needed to handle your particular workload of users.
- A separate model should be set up to represent your particular data loading requirements. In addition to data loads, this model should include DML statements (CTAS, updates, deletes, etc.).
- Run each of the models over a period of time and capture average timings to calculate the SPH. Note that if real-time or near-real-time data loading/updating is required, then both models should be run simultaneously.
C. Extrapolate SPH into statements-per-year (SPY), taking into consideration your particular operating window. Example: for a service level agreement where the system is available 24 hours a day, seven days a week, and queries and DML statements run concurrently, the factor is the number of operating hours per year (24 x 365 = 8,760).
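The three steps above can be sketched as a small model. Here is a minimal example in Python; the cost figure, runtimes, and concurrency levels are hypothetical placeholders for your own measured values, not vendor pricing:

```python
# Hedged sketch of a CPS model; all input numbers are illustrative.

def statements_per_hour(avg_runtime_sec, concurrency):
    """Average statements completed per hour by `concurrency`
    parallel threads, given the mean runtime of one statement."""
    return concurrency * 3600 / avg_runtime_sec

def cost_per_statement(total_system_cost, sph, operating_hours_per_year):
    """CPS = total system cost / statements-per-year (SPY)."""
    spy = sph * operating_hours_per_year
    return total_system_cost / spy

# Step A: hypothetical total cost (hardware, database, operations, training).
total_cost = 2_000_000.0

# Step B: mixed workload -- a query stream plus a load/DML stream,
# run simultaneously for real-time loading requirements.
query_sph = statements_per_hour(avg_runtime_sec=45, concurrency=50)
load_sph = statements_per_hour(avg_runtime_sec=120, concurrency=4)
sph = query_sph + load_sph

# Step C: 24x7 operating window -> 8,760 operating hours per year.
cps = cost_per_statement(total_cost, sph, operating_hours_per_year=24 * 365)
print(f"SPH = {sph:.0f}, CPS = ${cps:.4f}")
```

Swapping in your own measured timings, thread counts, and system costs turns this into the customized model described above.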
Table 1 is a simple CPS analysis of a 30TB DATAllegro appliance for a retail organization running intensive market analysis queries, simultaneously loading 50 million rows per hour and applying updates to two five-billion–row fact tables.
Table 1: A simple CPS analysis of a 30TB DATAllegro appliance.
Somewhat akin to miles per gallon for automobiles, CPS allows you to compare the price tag of various data warehousing infrastructure solutions. While CPS takes into consideration your particular environment to determine a common factor that can be used across all vendors, it does not consider the predictability of run times (minimum and maximum query execution times) in a mixed workload environment. Therefore, CPS should not be the only factor used to evaluate DWAs. Nonetheless, CPS should be part of an evaluation—and, of course, your mileage may vary.
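As with miles per gallon, CPS is most useful side by side. A short sketch comparing two hypothetical vendors (the names and figures are invented for illustration), using the same cost-over-SPY formula:

```python
# Hypothetical comparison; vendors and figures are invented for illustration.
vendors = {
    "Vendor A": {"total_cost": 2_000_000, "sph": 4_120},
    "Vendor B": {"total_cost": 1_500_000, "sph": 2_600},
}

HOURS_PER_YEAR = 24 * 365  # 8,760 for a 24x7 operating window

results = {}
for name, v in vendors.items():
    spy = v["sph"] * HOURS_PER_YEAR  # statements-per-year
    results[name] = v["total_cost"] / spy
    print(f"{name}: CPS = ${results[name]:.4f}")
```

In this made-up example the cheaper system actually has the higher CPS, which is exactly the kind of trade-off the metric is meant to surface.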