Five Steps to Increase Business Insight Velocity
Speed is not a vanity metric when it comes to business intelligence because data is only as valuable as it is fresh, accurate, and actionable.
When delivered into the hands of the right people in a timely and usable manner, analytics and business intelligence (BI) can completely transform how organizations make decisions. In fact, according to a Deloitte report, organizations with CEOs who use data to drive decisions are 77 percent more likely to achieve their business goals. On the flip side, a lack of data or incomplete or inaccurate data can have all sorts of negative consequences, from misleading reports to incorrect conclusions to slowed decision-making.
Of course, the big challenge with big data is that most employees don't have the skills or the time to make use of it. Sales, marketing, HR, and every other critical function in an organization can and should make use of data to drive decisions, but without the ability to access a single source of truth, that's not an option. IT team members (data scientists, business analysts, engineers, and others) who can handle data at a technical level often don't have the tools they need to convert raw data into insights for their colleagues -- certainly not at speed or at scale.
Success with BI depends on speed. In an ideal world, business insights should be limited only by the ability to frame the right questions. Speed and accuracy should not be a concern -- but that is easier said than done.
Based on my experience, here are the five steps necessary to advance your organization's data strategy and make business intelligence faster and more accessible. These touch all of the traditional "3 Vs" of data: volume, velocity, and variety.
Step 1: Accelerate access to live data via virtualization
It's impossible to get reliable, fresh data with serial processing of data pipelines and one-and-done data analysis efforts. Accelerating access to live data is the first step to reducing the time it takes to deliver business insights. The data ingestion stack is the keystone of successful big data analytics and of delivering business insights. In our view, it's not realistic to have all of your data in one place. That's why intelligent data virtualization is a crucial data integration style to augment traditional ETL/ELT and provide up-to-date access to a variety of data sources.
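To make the idea concrete, here is a minimal sketch of what virtualized access looks like: one query spans two live sources where they sit, with no copy step in between. The table names and files are hypothetical, and SQLite's ATTACH mechanism stands in for a real virtualization engine.

```python
import sqlite3, tempfile, os

# Hypothetical sources: an "orders" warehouse file and a separate app database.
tmpdir = tempfile.mkdtemp()
warehouse = os.path.join(tmpdir, "warehouse.db")
appdb = os.path.join(tmpdir, "app.db")

con = sqlite3.connect(warehouse)
con.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?,?,?)", [(1, 1, 100.0), (2, 2, 50.0)])
con.commit()
con.close()

con = sqlite3.connect(appdb)
con.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
con.executemany("INSERT INTO customers VALUES (?,?)", [(1, "Acme"), (2, "Globex")])
con.commit()
con.close()

# The "virtualized" query: join across both sources in place -- the app data
# is never extracted, staged, or loaded into the warehouse first.
con = sqlite3.connect(warehouse)
con.execute("ATTACH DATABASE ? AS app", (appdb,))
rows = con.execute(
    "SELECT c.name, SUM(o.amount) FROM orders o "
    "JOIN app.customers c ON c.id = o.customer_id "
    "GROUP BY c.name ORDER BY c.name"
).fetchall()
print(rows)  # [('Acme', 100.0), ('Globex', 50.0)]
```

A real virtualization layer does the same thing at enterprise scale -- pushing queries down to each source and stitching the results -- which is why it pairs with, rather than replaces, ETL/ELT.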
Step 2: Invest in scalable infrastructure and processing
Provisioning on-premises infrastructure for processing data generally takes far too long to approve, acquire, and get running. This can significantly delay getting analytics and insights to end users. It's critical to invest in scalable and elastic data infrastructure -- generally via the cloud.
Large businesses and those with large data volumes need highly elastic, horizontally scalable capacity to grow. Plus, if you are dealing with semistructured data (think: variety), you need to be able to unfold nested data in a scalable fashion to gain velocity.
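As a small illustration of what "unfolding" nested data means, the sketch below flattens a nested event record into flat columns. The event shape is invented for the example, and a production pipeline would also need to explode arrays into rows, which this sketch skips.

```python
def flatten(record, prefix=""):
    """Recursively flatten nested dicts into dotted column names.

    Arrays are left as-is here; a real pipeline would explode them into rows.
    """
    out = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, name + "."))
        else:
            out[name] = value
    return out

# Hypothetical semistructured clickstream event
event = {"user": {"id": 7, "geo": {"country": "US"}}, "action": "click"}
flat = flatten(event)
print(flat)  # {'user.id': 7, 'user.geo.country': 'US', 'action': 'click'}
```

The point for velocity: once records are flat, they can be processed with the same horizontally scalable, columnar machinery as structured data.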
Step 3: Remove data-movement constraints
Physical cubes and data marts can be fragile and error-prone, creating considerable maintenance overhead and risk, especially when data must be physically moved from one location to another. If the process fails, you can easily have an outage that lasts hours or even days -- a data blackout.
To remove data movement constraints, a semantic layer backed by data virtualization can be a great solution. With a universal semantic layer in place, you can minimize the latency between IT and business users, reduce data preparation time from weeks to minutes, and break the physical constraints of moving data from point A to point B.
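A database view is the simplest illustration of this logical-over-physical idea: it behaves like a data mart but holds no copy of the data, so there is no refresh job to fail and no blackout window. The table and figures below are invented for the sketch.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?,?)",
    [("EMEA", 10.0), ("EMEA", 5.0), ("APAC", 7.0)],
)

# A view is a logical "data mart": defined once, computed at query time,
# with no physical copy of the data to build, move, or rebuild.
con.execute(
    "CREATE VIEW sales_by_region AS "
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)
print(con.execute("SELECT * FROM sales_by_region ORDER BY region").fetchall())

# New rows appear in the view immediately -- no reload step, no outage risk.
con.execute("INSERT INTO sales VALUES ('APAC', 3.0)")
print(con.execute("SELECT total FROM sales_by_region WHERE region='APAC'").fetchone())
```

A virtualization-backed semantic layer generalizes this pattern across many engines and sources instead of one database.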
Step 4: Plan for volume growth
Data is growing at an unprecedented rate. Many businesses can easily see 100 percent year-over-year growth in the volume of data under management. This is why organizations need a plan -- to handle the amount of data they have today and the amount of data they'll have next year and the year after that.
With a realistic, future-focused plan for data volume growth, it will be easier to analyze large data sets and drive business initiatives with useful intelligence. This means that choosing a data storage and processing architecture that can elastically grow with your data needs is crucial.
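The arithmetic of compound growth is worth making explicit, because 100 percent year-over-year growth means an 8x footprint in just three years. A quick sketch, with a hypothetical 50 TB starting point:

```python
def projected_volume(current_tb, annual_growth, years):
    """Project storage needs under compound growth (annual_growth=1.0 means 100% YoY)."""
    return current_tb * (1 + annual_growth) ** years

# Hypothetical baseline: 50 TB today, doubling every year
for year in range(4):
    print(f"year {year}: {projected_volume(50, 1.0, year):.0f} TB")
# year 0: 50 TB ... year 3: 400 TB
```

Running the planning numbers this way makes the case for elastic cloud capacity largely self-evident: provisioning cycles measured in months cannot keep pace with doubling measured in years.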
Step 5: Build self-service analytics
Finally, the absolute best move you can make when it comes to decreasing the time to insight is offering self-service analytics to all business users. No one should need to know how to use SQL or run a complex data query to get the insights they need to make critical (or even mundane) decisions.
Offering self-service tooling means providing access no matter what tool the user prefers (from Power BI to Excel to Tableau). It also means developing a single, shared vocabulary for what each metric means (to humans and to machines). This is the purpose of the semantic layer. If the product team and the sales team both know exactly what "net sales" means in your organization, then shared insights across teams have real meaning and can enable cross-functional collaboration. Ultimately, this leads to better, more data-driven decision making.
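One way to picture a semantic layer's shared vocabulary is as a registry where each metric has exactly one governed definition, and every tool asks for the metric by name rather than re-implementing the formula. The registry, metric formulas, and helper below are illustrative inventions, not a specific product's API.

```python
# Hypothetical semantic-layer registry: one governed definition per metric,
# so "net sales" means the same thing to Power BI, Excel, Tableau, and people.
METRICS = {
    "net_sales": "SUM(gross_amount) - SUM(refund_amount)",
    "order_count": "COUNT(DISTINCT order_id)",
}

def metric_sql(metric, table, group_by=None):
    """Expand a metric name into SQL; consumers request the name, not the formula."""
    expr = METRICS[metric]
    select = f"{group_by}, {expr} AS {metric}" if group_by else f"{expr} AS {metric}"
    sql = f"SELECT {select} FROM {table}"
    return sql + (f" GROUP BY {group_by}" if group_by else "")

print(metric_sql("net_sales", "orders", "region"))
# SELECT region, SUM(gross_amount) - SUM(refund_amount) AS net_sales
#   FROM orders GROUP BY region
```

Because the definition lives in one place, changing what "net sales" means is a single edit that every dashboard and spreadsheet inherits automatically.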
Dave Mariani is the founder and chief technology officer of AtScale. Prior to AtScale, Dave ran data and analytics for Yahoo!, where he pioneered the use of Hadoop for analytics and created the world's largest multidimensional analytics platform. He also served as CTO of Bluelithium, where he managed one of the first display advertising networks, delivering 300M ads per day powered by a multiterabyte behavioral targeting data warehouse. Dave is a big data visionary and serial entrepreneur. You can contact the author on LinkedIn.