
TDWI Upside - Where Data Means Business

Supercharge Your BI Program with Machine Learning

Have you already optimized your BI program but still aren’t getting the user engagement you’re looking for? Applying machine learning in five key areas can improve the performance of your BI program.

When car enthusiasts want to take their high-performance vehicles to the next level, they may adjust the engine to change the physical nature of how it works. Your company has probably spent a considerable amount of time and effort building out your BI program, optimizing key performance indicators (KPIs) and developing reports, charts, and graphs to meet your company objectives, address your executives’ most burning questions, and keep your environment running at its best.

What if, by adjusting the engine that serves up your BI content, you could make it more engaging and more effectively consumed by your user base? With machine learning, you can do just that. The secret is to optimize the performance of the context around your BI content. Even without an overhaul of the content itself, using machine learning to optimize this context can increase user engagement.

Machine learning is the process by which a set of mathematical algorithms learns patterns from historical data and uses those patterns to operate. Unlike the rule-based programming of the past, where developers coded the rules into a process, machine learning extrapolates what those rules need to be by identifying statistical correlations between the inputs and the desired output, then applies those patterns as the rules in an operational process.
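The contrast can be sketched in a few lines of Python. Everything below is a hypothetical stand-in for a real training pipeline -- the log, the user names, and the majority-vote "learning" are invented -- but it shows the essential shift: the delivery rule is derived from historical data rather than hand-coded by a developer.

```python
from collections import Counter, defaultdict

# Hypothetical historical log: (user, hour_delivered, was_consumed)
history = [
    ("cfo", 7, True), ("cfo", 7, True), ("cfo", 12, False),
    ("ceo", 5, True), ("ceo", 5, True), ("ceo", 9, False),
]

def learn_delivery_hours(log):
    """Instead of hand-coding 'send the CFO the report at 7:00 a.m.',
    infer each user's best hour from the hours at which past
    deliveries were actually consumed."""
    consumed = defaultdict(Counter)
    for user, hour, was_consumed in log:
        if was_consumed:
            consumed[user][hour] += 1
    return {user: hours.most_common(1)[0][0] for user, hours in consumed.items()}

print(learn_delivery_hours(history))  # {'cfo': 7, 'ceo': 5}
```

A production system would use a proper classifier rather than a majority vote, but the data flow -- labeled history in, rules out -- is the same.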

Imagine that you knew that your CFO arrived every morning at 7:00 a.m., grabbed a cup of coffee, and then sat down to look over the real-time sales numbers. If this were the case, a developer could manually program the rules to send the current report to the CFO at 7:00 a.m. with the up-to-date sales numbers from the previous day.

What would happen if your CEO woke up at 5:00 a.m. and wanted a synopsis of sales delivered to his phone before he even got out of bed? Again, a developer could implement rules that would do just this.

The problem is that your code now has a growing list of rules for each user to deliver the content at the opportune time and must be manually configured with each new user to get it just right. Your other option is to send the same report to everyone at 6:00 a.m., which won’t satisfy your CEO or your CFO. Your CEO now must wait to get the information and your CFO doesn’t have the last hour of sales.

With machine learning, you wouldn't have to create the rules manually or settle for generic ones. Instead, you would provide a list of attributes about the different delivery scenarios, labeling which deliveries were successfully consumed, and the machine learning engine would find patterns within that data to build a customized delivery plan for a wide audience. Because the model is based on patterns of similarity, a new user would simply be grouped with similar users and their information delivery adjusted accordingly.
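As a toy illustration of grouping a new user with similar existing users, here is a 1-nearest-neighbor sketch. The profiles, attribute names, and delivery plans are all invented for the example; a real system would use richer features and a trained model.

```python
# Hypothetical user profiles, expressed as numeric attributes.
profiles = {
    "cfo":     {"seniority": 3, "wake_hour": 6, "office_arrival": 7},
    "ceo":     {"seniority": 3, "wake_hour": 5, "office_arrival": 8},
    "analyst": {"seniority": 1, "wake_hour": 7, "office_arrival": 9},
}
plans = {"cfo": "7am-email", "ceo": "5am-push", "analyst": "9am-portal"}

def plan_for_new_user(new_profile):
    """Assign the delivery plan of the most similar existing user
    (1-nearest neighbor on squared attribute distance)."""
    def dist(p):
        return sum((p[k] - new_profile[k]) ** 2 for k in new_profile)
    nearest = min(profiles, key=lambda u: dist(profiles[u]))
    return plans[nearest]

# A new early-rising executive gets grouped with the CEO:
print(plan_for_new_user({"seniority": 3, "wake_hour": 5, "office_arrival": 8}))
```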

Deep learning, neural networks, decision trees, and clustering are types of algorithms used to automatically extract patterns; these algorithms are implemented in multiple tools on the market today.

To leverage machine learning to optimize the delivery of your BI content, you can look in five areas to identify what attributes define successful engagement.

When

The first set of attributes relates to time. Convenience of information is critical. Users are more likely to consume BI content if it arrives at the time it is needed most. In our scenario, the CFO's optimal time is upon arrival at the office, whereas the CEO's optimal time is right after waking up. Attributes that could be used to predict this include the time of day, day of the week, and holidays. Other time attributes are much more fluid but can be even more important in defining a model, such as proximity to major company milestones (month- or quarter-end, or upcoming sales meetings).
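These time attributes can be derived mechanically from a delivery timestamp. A minimal sketch, assuming a simple holiday set (the dates listed are illustrative, not a real calendar):

```python
from datetime import date, datetime, timedelta

HOLIDAYS = {date(2024, 1, 1), date(2024, 7, 4)}  # illustrative holiday set

def time_features(ts: datetime) -> dict:
    """Derive the time attributes described above from a timestamp."""
    q_end_month = ((ts.month - 1) // 3 + 1) * 3  # 3, 6, 9, or 12
    if q_end_month == 12:
        quarter_end = date(ts.year, 12, 31)
    else:
        # Last day of the quarter: first day of next month minus one day.
        quarter_end = date(ts.year, q_end_month + 1, 1) - timedelta(days=1)
    return {
        "hour": ts.hour,
        "day_of_week": ts.weekday(),            # 0 = Monday
        "is_holiday": ts.date() in HOLIDAYS,
        "days_to_quarter_end": (quarter_end - ts.date()).days,
    }

print(time_features(datetime(2024, 3, 29, 7, 0)))
```

Fed into a model alongside consumption labels, features like `days_to_quarter_end` let it learn, for example, that certain users engage heavily in the last days of a quarter.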

Who

Next, you want to identify who the optimal recipients are. Inherent in defining the optimal information consumer is what they will be doing with the data. Too often, BI content gets distributed to a wide audience in the hope that the subset of actual information consumers falls somewhere within this group. Attributes such as level of management or seniority, department, job title, and span of control can all be factors in predicting whether a user should receive a specific piece of content.
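Before a model can use them, categorical "who" attributes such as department or job level must be turned into numeric inputs. One common approach is one-hot encoding; the category lists below are hypothetical examples, not a prescribed taxonomy.

```python
def one_hot(value, categories):
    """Turn one categorical attribute into a list of 0/1 indicators."""
    return [1 if value == c else 0 for c in categories]

DEPARTMENTS = ["finance", "sales", "engineering"]   # illustrative
LEVELS = ["staff", "manager", "executive"]          # illustrative

def who_features(department, level, span_of_control):
    """Build a numeric feature vector describing a recipient.
    span_of_control (number of direct reports) is already numeric."""
    return (one_hot(department, DEPARTMENTS)
            + one_hot(level, LEVELS)
            + [span_of_control])

print(who_features("finance", "executive", 12))
# [1, 0, 0, 0, 0, 1, 12]
```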

What

Third, look at what is being delivered. If you can pinpoint the specific content that catches your users' attention, you can narrow the delivery to just that information and eliminate the excess noise that leads to information overload. If multiple pieces of content meet a user's needs, they can ideally be grouped dynamically and served together so that all the information delivered is engaging and consumed.

If you have already broken down your reports into discrete, autonomous KPIs and are distributing these independently, you can measure the engagement of each piece. If you are sending larger, more comprehensive reports, you need to determine what data is being consumed. You could look at which types of information are most frequently requested, and by whom, or compare report instances that drew heavy engagement against those with little engagement to see what was statistically different between them.
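When KPIs are delivered as discrete pieces, per-KPI engagement is a simple ratio of consumption to delivery. A minimal sketch, using an invented delivery log (the KPI names and open/unopened flags are placeholders for whatever your BI platform records):

```python
from collections import defaultdict

# Hypothetical delivery log: (kpi_name, was_opened)
deliveries = [
    ("daily_sales", True), ("daily_sales", True), ("daily_sales", False),
    ("inventory_aging", False), ("inventory_aging", False),
]

def engagement_rates(log):
    """Engagement rate per discrete KPI: opens divided by deliveries."""
    sent = defaultdict(int)
    opened = defaultdict(int)
    for kpi, was_opened in log:
        sent[kpi] += 1
        opened[kpi] += was_opened
    return {kpi: opened[kpi] / sent[kpi] for kpi in sent}

rates = engagement_rates(deliveries)
print(rates)  # daily_sales ~0.67, inventory_aging 0.0
```

KPIs with persistently low rates are candidates for removal from a user's delivery plan; high-rate KPIs are candidates for dynamic grouping.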

How

Fourth, how the delivery occurs can be critical to developing an optimized delivery pattern. Are your users engaging more through your Web portal on demand or through an outbound medium such as e-mail, mobile push notification, SMS, Slack, WeChat, or WhatsApp? The delivery method itself can serve as a set of attributes in machine learning. Other attributes, such as country and culture, age, device, and browser type, can be used to determine the delivery method most likely to be effective for each user.
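One simple way to fold the delivery method into the optimization is to score each channel by its observed consumption rate per user. The log and channel names below are made up for illustration; a fuller model would combine channel with the time and user attributes above.

```python
from collections import defaultdict

# Hypothetical per-channel log: (user, channel, was_consumed)
channel_log = [
    ("cfo", "email", True), ("cfo", "email", True), ("cfo", "sms", False),
    ("ceo", "push", True), ("ceo", "email", False),
]

def best_channel(user, history):
    """Pick the outbound channel with the highest observed
    consumption rate for this user."""
    sent = defaultdict(int)
    hits = defaultdict(int)
    for u, channel, consumed in history:
        if u == user:
            sent[channel] += 1
            hits[channel] += consumed
    return max(sent, key=lambda c: hits[c] / sent[c])

print(best_channel("cfo", channel_log))  # email
```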

How Much

Like the other four questions, your goal here is to make sure you are delivering just the right amount of information. Even too much of a good thing can become noise and progressively lose value. Looking at engagement rates over time will allow you to predict whether user interest is declining or increasing. Dynamically responding by decreasing message frequency when interest wanes, or increasing it when renewed interest appears, can create an optimal delivery mechanism. Attributes such as how much information has been delivered to a user over the last day, week, or month, and how much of that information was actively consumed, can be used to identify patterns.
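A frequency rule driven by an engagement trend might look like the following sketch. The 20 percent thresholds and the one-message-per-week step are arbitrary placeholders, not a recommendation; in practice these would themselves be tuned or learned.

```python
def adjust_frequency(weekly_rates, current_per_week):
    """Compare the last two weeks' engagement rates to the earlier
    weeks and scale messages-per-week up or down accordingly."""
    recent = sum(weekly_rates[-2:]) / 2
    earlier = sum(weekly_rates[:-2]) / max(len(weekly_rates) - 2, 1)
    if recent < earlier * 0.8:        # interest waning: back off
        return max(1, current_per_week - 1)
    if recent > earlier * 1.2:        # renewed interest: step up
        return current_per_week + 1
    return current_per_week

# Engagement has fallen from ~0.85 to ~0.35, so delivery slows down:
print(adjust_frequency([0.9, 0.8, 0.4, 0.3], current_per_week=5))  # 4
```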

A Final Word

Identifying the when, who, what, how, and how much of your users' experience, and correlating these attributes to when users are most engaged, will allow you to use machine learning to uncover hidden patterns. By applying these patterns to establish rules for personalized delivery, your BI engine can dynamically adjust to your users' needs without manually developing configurations for each new user or treating all users as a single homogeneous group in the hope that someone's needs are being met.

About the Author

Troy Hiltbrand is the chief information officer at Amare Global where he is responsible for its enterprise systems, data architecture, and IT operations. You can reach the author via email.

