Agility, Speed, and Trust: Driving Business Data Strategies in 2022
In the nearly two years since the beginning of the COVID-19 pandemic, "resilience" has been the most important objective. Amid population health challenges, organizations have had to weather the impact of unforeseen changes to supply chains, inventory, manufacturing, and labor availability, not to mention customer demand. Rules of thumb from past eras no longer apply. As we move into 2022, organizations still face uncertainties; they need good data and analytics to interpret trends so that current strategies fit today's situations and leaders can look forward and seize opportunities.
The importance of resilience is not going away, but three additionally important objectives in 2022 will be agility, speed, and trust. Agility means having the ability to sense change, adjust behavior, and take advantage of unexpected opportunities. Speed means quickness and eliminating delays or unnecessary steps, but also maintaining balance and direction. Trust is essential to internal and external collaboration and culture, customer relationships, and support for ethical and legal behavior regarding sensitive information.
Organizations in 2022 will be focusing on how they can deploy modern technologies, services, methodologies, and practices to improve agility, speed, and trust -- and resilience, because the risks and threats related to the pandemic and related supply chain chaos are not gone yet. Here's a look at what's ahead in business data and analytics strategies and solutions regarding these objectives.
Trend #1: Cutting-edge technologies and practices will improve agility
Underpinning business agility today is data agility. Fluid situations demand innovation; flatter organizational structures empower decision makers at all levels to explore and analyze data for both real-time and predictive insights. Business intelligence tools and analytics applications are increasingly enabling users to interact with varied data types beyond carefully cleansed and structured data to understand performance metrics, collaborate on visualizations and models, and arrive at insights that drive positive outcomes.
Easier self-service capabilities are vital to democratizing data analysis and preparation, but organizations are finding that they need to do more than supply users with tools to realize gains in productivity and agility. The foundation, of course, is data integration and management that delivers current, complete, and in-context data. Cloud data platforms, containers, and APIs enable organizations to set up scalable, elastic platforms that offer the varied data types projects require. Here are three agility-related developments to watch in 2022:
AI-driven augmentation and natural language interaction. BI and analytics solutions are evolving to offer recommendations about data sets users might explore and prepare for analytics, which visualizations to use, and, ultimately, what business decisions or actions to take. Such recommendations can result in faster response and the ability to evaluate more options for action. Some solutions also offer AI-led automation of complex activities such as predictive forecasting and real-time analytics.
In 2022, we will see continued progress with AI-driven augmentation, including through embedded functionality in data-driven applications. AI augmentation will help organizations personalize reports, data quality rules, and data transformation, as well as predefine variables for analytics so that different "personas" do not begin work with a blank slate.
Along with AI augmentation and smarter personalized workspaces, users will benefit from deeper implementation of natural language search and query capabilities. These are important to bridge skills gaps and enable users to interact with data in the language of their business context. Providing natural language interaction and better ease of use for all skill levels is key to increasing adoption and productivity because these capabilities reduce the need for coding expertise.
Data catalogs and semantic layers. Data catalogs make it easier to locate data sets and understand their quality and lineage. With a data catalog, users move faster to discover and explore data relationships and examine variations across data sets, including by examining metadata and other information about semi- and unstructured data in the cloud data lake. In our research, just over half (51 percent) of organizations surveyed say that managing metadata in a data catalog is one of the most important steps they could take to increase success with BI and analytics. [Editor's Note: All TDWI research quoted in this article is from the 2021 Q4 TDWI Best Practices Report: Modernizing Data and Information Integration for Business Innovation.]
In 2022 there will be increased focus on AI-driven automation in data catalog development and maintenance. AI-driven automation will also be key to modernizing BI and analytics systems' semantic layers that are critical to the quality and consistency of business representations of data for reporting, calculations, multidimensional modeling, and developing and testing analytics models.
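To make the catalog concept concrete, the sketch below models a data catalog as a searchable store of metadata entries with ownership, quality, and lineage information. The entries, field names, and tags are illustrative assumptions, not from any specific catalog product.

```python
# Minimal data-catalog sketch: metadata entries keyed by data set name,
# searchable by tag. All entries and field names are hypothetical.

catalog = {
    "orders_2021": {
        "owner": "Sales Ops",
        "quality_score": 0.92,
        "lineage": ["orders_raw", "currency_rates"],  # upstream sources
        "tags": {"sales", "curated"},
    },
    "web_clicks": {
        "owner": "Marketing",
        "quality_score": 0.74,
        "lineage": ["clickstream_raw"],
        "tags": {"behavioral", "semi-structured"},
    },
}

def find_by_tag(tag):
    """Return data set names whose metadata carries the given tag."""
    return [name for name, meta in catalog.items() if tag in meta["tags"]]

print(find_by_tag("curated"))              # ['orders_2021']
print(catalog["orders_2021"]["lineage"])   # lineage: where the data came from
```

A real catalog adds much more (access policies, profiling statistics, usage history), but the core value shown here is the same: users query metadata instead of hunting through sources.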
Knowledge graphs, ontologies, and graph databases for understanding data relationships. To search for and analyze complex data relationships effectively, organizations need models and systems that can store data relationships and make them discoverable. Knowledge graphs, network-based representations, and graph databases specialize in these capabilities. Knowledge graphs, which are built using graph databases and ontologies, capture how different data sets relate to each other and to higher-level entities such as people, places, and things. Graph databases and query languages can manage and retrieve complex data relationships without having to fit them into relational table structures.
In 2022, we will see advances in ease of use and self-service visualization capabilities for knowledge graphs and semantic data integration. Graph databases will also prove useful in data catalogs and semantic layers for capturing deeper information about data relationships for user exploration, data pipeline development, and governance.
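The idea that knowledge graphs capture typed relationships between entities can be sketched in a few lines: store subject-relation-object triples and query them without forcing the data into relational tables. Entity and relationship names below are illustrative assumptions.

```python
# Knowledge-graph sketch: entities linked by typed relationships (triples).
# Names of entities and relations are hypothetical examples.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # triples indexed by subject for fast outgoing-edge lookup
        self._out = defaultdict(list)

    def add(self, subject, relation, obj):
        self._out[subject].append((relation, obj))

    def related(self, subject, relation=None):
        """Return objects linked from subject, optionally filtered by relation."""
        return [o for r, o in self._out[subject] if relation is None or r == relation]

kg = KnowledgeGraph()
kg.add("orders_2021.csv", "derived_from", "orders_raw")
kg.add("orders_2021.csv", "owned_by", "Sales Ops")
kg.add("orders_raw", "stored_in", "cloud_data_lake")

print(kg.related("orders_2021.csv"))               # every typed relationship
print(kg.related("orders_2021.csv", "owned_by"))   # ['Sales Ops']
```

Production graph databases add indexing, traversal languages, and ontology constraints on top of this triple model, but the schema-light shape is what makes new relationships easy to add as they are discovered.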
Trend #2: To accelerate speed to insight, organizations modernize integration technologies and methodologies
Achieving faster data insights involves reducing delays and bottlenecks throughout data life cycles so users and automated applications have the right information at the point of decision. In 2022, organizations will focus on modernizing data pipelines, data loading, data virtualization, change data capture, and other technologies that enable real-time operational dashboards, automated applications, and data science exploration and AI/ML development.
Today's data explosion puts pressure on organizations to modernize data architectures so users can unlock value at the speed of business. Data fragmentation into numerous data silos increases latency because users have to go from source to source to collect and blend relevant data. Reducing the number of data silos through consolidation into a centralized (and increasingly, cloud-based) data warehouse or data lake (or unified data lakehouse) saves time in locating and integrating the data.
In TDWI research, a significant percentage (38 percent) of respondents' organizations regard cloud migration as an opportunity to unify silos and solve data fragmentation. However, for some use cases, organizations should evaluate whether a data virtualization layer could provide a faster path to real-time views and the ability to federate queries to the sources rather than moving and replicating data to a centralized platform.
Our research finds that organizations face problems with dislocated pipeline processes; 36 percent need improvement in end-to-end integration of data pipeline processes and 23 percent require major technology upgrades for better synchronization and flow in sourcing, collecting, cleansing, transforming, and enriching data. Organizations also struggle to monitor, manage, and orchestrate numerous data pipeline processes. This makes it difficult to determine which processes users are applying to which data sets, why some pipelines may be performing poorly, and where dependencies exist between pipeline processes. Reducing delays through better end-to-end integration of data pipeline processes is key to streamlining analysis of new data sets, which 42 percent of organizations surveyed view as a modernization priority.
In the coming year, organizations will demand smarter and more automated tools to eliminate bottlenecks, monitor performance, and spot integration problems in pipelines. To focus modernization investments, many organizations are seeking a bigger, more holistic picture of their data ecosystem so they can effectively address data integration, distributed data querying, and governance.
In 2021, data fabrics and data mesh architectures became hot topics; this will continue in 2022. Data fabrics and data mesh architectures deserve evaluation for establishing connected networks of information that unify environments and make it seamless to view, access, and manage all data. Organizations should consider data fabrics and data mesh architectures for improving response in dynamic situations where they need to perform analytics right away on data in multiple sources.
Data fabrics and data mesh architectures typically depend on data catalogs full of integrated metadata and semantic data knowledge. Knowledge graphs can play an important role for visualizing data relationships across fabrics and meshes. Supporting a data fabric, data virtualization can apply metadata and integration logic to consolidate querying, governance, and user access control in a single layer.
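The virtualization pattern just described can be sketched simply: a federation layer pushes the same predicate to each source at query time and merges the results, rather than replicating data into a central store. Source contents and field names below are illustrative assumptions.

```python
# Query-federation sketch: one logical query answered from multiple sources
# without moving or replicating the data. Sources here are in-memory stand-ins.

crm_source = [
    {"customer": "Acme", "region": "EMEA"},
    {"customer": "Globex", "region": "APAC"},
]
erp_source = [
    {"customer": "Acme", "open_orders": 3},
]

def federate(predicate, sources):
    """Apply the predicate at each source and combine the matching rows."""
    results = []
    for source in sources:
        results.extend(row for row in source if predicate(row))
    return results

rows = federate(lambda r: r.get("customer") == "Acme", [crm_source, erp_source])
print(rows)  # matching rows from both sources, fetched at query time
```

The trade-off this illustrates is the one named above: federation gives real-time views at query cost, while centralization pays the movement cost up front for faster repeated access.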
Along with technology solutions, more organizations in 2022 will apply methodologies such as DevOps and DataOps for improving orchestration, collaboration, and teamwork in developing applications, services, and data pipelines. These methodologies help organizations balance agile, business-driven development with enterprise-level coordination, orchestration, and governance as they scale up to more workloads and larger, faster-moving data. Organizations will also adopt lambda and kappa architectures, each of which (depending on the use case) helps frame efficient stream processing and transformation of huge amounts of data at low latency.
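As a rough illustration of the kappa idea, all data flows through a single event stream and a stateful processor maintains an incrementally updated view; there is no separate batch path. The event shape and names below are illustrative assumptions.

```python
# Kappa-style sketch: fold a single event stream into a continuously
# updated view (running averages per sensor). Event data is hypothetical.

def process_stream(events):
    """Incrementally maintain a running-average view over (sensor, reading) events."""
    totals, counts, view = {}, {}, {}
    for sensor, reading in events:
        totals[sensor] = totals.get(sensor, 0.0) + reading
        counts[sensor] = counts.get(sensor, 0) + 1
        view[sensor] = totals[sensor] / counts[sensor]
    return view

stream = [("temp", 20.0), ("temp", 22.0), ("humidity", 55.0)]
print(process_stream(stream))  # {'temp': 21.0, 'humidity': 55.0}
```

A lambda architecture would pair such a speed layer with a separate batch layer over the full history; kappa's appeal is that reprocessing is just replaying the same stream through the same code.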
Trend #3: Organizations focus on data trust to support expansion in data sharing
Organizations surveyed by TDWI continue to see data quality as one of their biggest challenges -- and not surprisingly, a key focus of improvement through adoption of modern technologies and practices. Along with data governance, data quality is a pillar of data trust, which is about instilling confidence in shared data for each use case. Increasingly, data trust involves adhering to laws and regulations covering data security, privacy, and confidentiality. Thus, data trust requirements need to be top of mind for data governance boards and chief data officers.
Data trust is especially critical for organizations seeking to monetize data through services offered to customers and partners and to participate in data marketplaces and exchanges. Monetization is about identifying data assets, reports, dashboards, and analytics that could be valuable to business partners, customers, or employees located in different internal divisions and subsidiaries. Monetization can be a fundamental component of new, data-driven business models that flexibly leverage data assets to meet diverse and changing requirements. A data marketplace or exchange provides a curated space for buying, selling, and sharing data and analytics, and thus offers a good forum for monetizing data. However, data sharing through monetization and participation in a marketplace or exchange cannot go forward without attention to data trust.
Data literacy training should identify data trust as an important goal of data quality practices and also as part of self-service users' responsibilities as they collect, integrate, prepare, and share data. In our recent Best Practices Report research, we find that training users in data governance is something that 35 percent of TDWI survey respondents regard as important to improving trust; 30 percent say using governance to increase users' confidence in the data is important. Along with data literacy training, organizations need to update data stewardship practices to accommodate expanded data sharing.
What's ahead in technology to improve data trust? Data catalogs will play an important role. Along with an inventory of metadata about data assets, data catalogs can contain governance and security rules that drive constraints in data pipelines and applications. Modern data pipeline solutions enable developers to incorporate governance, user authentication, and security rules and constraints. Looking forward, AI-driven automation can support the notion of governance "policy as code" -- that is, governance constraints embedded in applications so that users receive guidance as they interact with data. Embedded governance constraints reduce pressure on IT personnel to monitor all activity to ensure that users follow governance rules and the organization complies with regulations.
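A simple way to picture "policy as code": a governance rule is written as an executable check and enforced whenever data is served, instead of relying on after-the-fact monitoring. The roles, fields, and policy table below are illustrative assumptions, not a reference to any particular governance product.

```python
# Policy-as-code sketch: restricted fields are masked unless the requesting
# role is on the allow list. Roles, fields, and data are hypothetical.

POLICIES = {
    # field -> roles permitted to see it unmasked
    "email": {"steward", "compliance"},
    "salary": {"compliance"},
}

def apply_policies(row, role):
    """Return a copy of the row with restricted fields masked for this role."""
    return {
        field: (value if role in POLICIES.get(field, {role}) else "***")
        for field, value in row.items()
    }

record = {"name": "J. Smith", "email": "js@example.com", "salary": 98000}
print(apply_policies(record, "analyst"))
# name stays visible; email and salary are masked for the 'analyst' role
```

Because the rule lives in code alongside the pipeline, the same constraint applies everywhere the data is served, which is the pressure-relief for IT monitoring described above.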
David Stodder is director of TDWI Research for business intelligence. He focuses on providing research-based insight and best practices for organizations implementing BI, analytics, performance management, data discovery, data visualization, and related technologies and methods. He is the author of TDWI Best Practices Reports on mobile BI and customer analytics in the age of social media, as well as TDWI Checklist Reports on data discovery and information management. He has chaired TDWI conferences on BI agility and big data analytics. Stodder has provided thought leadership on BI, information management, and IT management for over two decades. He has served as vice president and research director with Ventana Research, and he was the founding chief editor of Intelligent Enterprise, where he served as editorial director for nine years.