Defining Data Analytics Services in Support of Business Process Optimization
As we get better at building, deploying, and managing analytics services, we’ll need to create and maintain these services so end users can maximize BI in their everyday work.
By David S. Linthicum, Founder and CTO, Blue Mountain Labs
The role of business intelligence is to provide core decision makers with the ability to understand the state of the business and to identify what needs to change. These days, BI technology companies give those decision makers good data visualization and analysis tools that offer powerful views of abstract representations of data for human consumption. Humans who are executives, that is.
However, as we get better at developing and leveraging analytical interfaces, we can find new ways to embed data analytics directly in systems, where it may provide even more value. Perhaps it’s time we understood and defined an approach for using this technology. Here is how we can do it.
If you believe this technology already exists, you’re right. There is nothing really new about accessing key analytics data using services. Indeed, we’ve built and leveraged data abstraction interfaces for years, placing data into more meaningful and understandable structures, and we’ve accessed that data through APIs (such as Web services, so let’s just call them services).
These days, we’re finding new opportunities with this approach as big data technology becomes cheaper and easier to leverage. Also, as more systems come online that can consume services, we can build on the service-orientation movement of the last 10 years.
The strongest argument is that the business is demanding automated processes that auto-magically correct business issues. Executives want to access core enterprise data to support operationally oriented decisions using embedded analytics.
The cloud makes this approach more appealing. Cloud services typically speak, and can understand, Web services, so it’s easier to expose and consume analytics as APIs or services.
Moreover, large data sets are beginning to migrate to cloud-based platforms in support of business intelligence operations, and these will have well-defined analytical service interfaces as well. Most IaaS and PaaS providers, including Amazon Web Services (AWS), Microsoft, and Google, already have databases built into their clouds and plan to expand these data services in the short term.
The objective here is to create a set of data services that provide access to key analytical data and that can be consumed from an application, process integration engine, user interface, machine interface, or anything else that can consume a service (e.g., Web services). Examples of analytical services are what you would expect (a rough sketch of one such service follows the list):
- Risk analytics for financial transactions, including adhering to risk limits
- Supplier rankings, based upon past delivery and quality records
- The likelihood that an inventory shortfall will delay production, given forthcoming weather patterns
- The accuracy of a physician’s diagnosis based on past diagnoses for that particular patient profile
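To make this concrete, here is a minimal sketch of what one of these services might look like when exposed as a Web service. It is only an illustration under stated assumptions: the use of Python and Flask, the /analytics/supplier-ranking endpoint, and the response fields and figures are all hypothetical, not a prescribed implementation.

```python
# Sketch of a supplier-ranking analytics service exposed as a Web service.
# Flask, the endpoint path, and the response fields are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

def rank_supplier(supplier_id: str) -> dict:
    # Stand-in for the real analytics: in practice this would query historical
    # delivery and quality records and compute the ranking.
    return {
        "supplier_id": supplier_id,
        "on_time_delivery_rate": 0.94,   # hypothetical figures
        "defect_rate": 0.012,
        "overall_rank": "A",
    }

@app.route("/analytics/supplier-ranking/<supplier_id>")
def supplier_ranking(supplier_id: str):
    # Return a small, discrete answer -- not a full result set.
    return jsonify(rank_supplier(supplier_id))

if __name__ == "__main__":
    app.run(port=8080)
```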
The idea is to provide these very discrete points of analytics in support of key business processes. For instance, a trading system could gather the risk analytics required to complete a trade without sending the trader to another user interface. A supply chain system could automatically evaluate a supplier’s ability to deliver critical supplies to a manufacturing company by embedding this service into the system, rather than forcing the user to consult a data reporting or visualization tool to determine the correct path of the business process.
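On the consuming side, a supply chain process might embed that call in-line, as in the sketch below. The service URL, the two-second timeout, and the decision rule are assumptions for illustration only.

```python
# Sketch of embedding the analytics call inside a supply-chain process so the
# decision is made in-line rather than in a separate reporting tool.
# The URL, timeout, and selection rule are illustrative assumptions.
import requests

ANALYTICS_URL = "http://analytics.example.com/analytics/supplier-ranking"

def choose_supplier(candidate_ids: list) -> str:
    """Pick the supplier the analytics service ranks highest on on-time delivery."""
    best_id, best_rate = None, -1.0
    for supplier_id in candidate_ids:
        resp = requests.get(f"{ANALYTICS_URL}/{supplier_id}", timeout=2)
        resp.raise_for_status()
        ranking = resp.json()
        if ranking["on_time_delivery_rate"] > best_rate:
            best_id, best_rate = supplier_id, ranking["on_time_delivery_rate"]
    return best_id

# The business process consumes the answer directly, e.g.:
# supplier = choose_supplier(["SUP-100", "SUP-200", "SUP-300"])
```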
Those charged with creating these types of interfaces will find that there are many ways to build, deploy, and maintain them. Many traditional BI and data visualization tools can expose preset analytics through service-based interfaces. Some databases can expose data services directly, and many shops rely on traditional approaches to service development (such as conventional system development tools). Data virtualization tools also provide such capabilities, allowing developers to create abstract data services on top of physical data structures. The path to creating a data analytics interface strategy is similar to the process of defining a BI strategy focused on data visualization. However, there are a few new steps, including:
Define the structures of the services
Although many BI analysts like to work from the source data sets to the API or service interface, that approach can be limiting. The way you want to view the data could be different from the way you store the data. Thus, I work from the interface/analytics requirements back to the physical database.
You want to create a well-defined set of structures that represent the answers you’re looking for. Typically, they are simple structures with only a few attributes; you’re getting an answer to a question, not the complete result set.
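For example, a risk-limit check might return something like the structure sketched below. The field names are assumptions made for illustration; the point is how small the answer is relative to the data behind it.

```python
# Sketch of a service response structure: a few attributes that answer one
# question, rather than the full result set. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class RiskCheckResult:
    trade_id: str
    within_risk_limits: bool   # the answer the calling process actually needs
    risk_score: float          # supporting detail, kept deliberately small
    limit_remaining: float
```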
Define the analytics of the service
You need to define the guts of the analytics task, such as culling through gigabytes of operational and decision-support (DSS) data to arrive at the answer defined in your requirements. The use cases vary widely here, but they are typically complex database operations against information that can be gathered, analyzed, and transmitted back fairly quickly.
Analytics that will take a long time to return are not good candidates for analytics services because they typically stop the host application from processing until the answer is provided.
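One way to protect the host application is to run the analytics under an explicit time budget and fall back when it can’t answer in time. The sketch below assumes a hypothetical compute_risk_score query and a two-second budget; it is one possible pattern, not the only one.

```python
# Sketch of enforcing a latency budget on an analytics call so a slow query
# does not stall the host application. The query function and budget are
# illustrative assumptions.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError
from typing import Optional

def compute_risk_score(trade_id: str) -> float:
    # Placeholder for the real work: complex queries over operational/DSS data.
    time.sleep(5)          # simulate a slow analytical query
    return 0.42

_pool = ThreadPoolExecutor(max_workers=4)  # shared worker pool for analytics calls

def risk_score_with_budget(trade_id: str, budget_seconds: float = 2.0) -> Optional[float]:
    future = _pool.submit(compute_risk_score, trade_id)
    try:
        return future.result(timeout=budget_seconds)
    except TimeoutError:
        future.cancel()    # no effect once running, but clears queued work
        return None        # too slow to embed in-line; caller falls back or defers

# Example: risk_score_with_budget("T-1") returns None after ~2 seconds here,
# signaling that this analytic is a poor fit for an in-line service call.
```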
As we get better at building, deploying, and managing analytics services, we’ll need to create and maintain these services as end users look to leverage the power of BI in day-to-day operations. Indeed, as we measure the productivity of BI technology and its return on investment, this direction seems to provide the best bang for the buck.
Much will evolve over the next few years around this approach as both big data and cloud computing continue to rise. Access to commoditized resources at a much lower price point will certainly drive interest in leveraging enterprise data assets in new and more productive ways.
David S. Linthicum is the founder and the CTO of Blue Mountain Labs and an internationally recognized industry expert and thought leader. He is the author or co-author of 13 books on computing, including Enterprise Application Integration (Addison Wesley). You can contact the author at [email protected].