
Q&A: Data Fabric Technologies: Stitching Together Disparate Data for Analytics

Data fabrics can incorporate a wide range of analytics capabilities, including data exploration, business intelligence, natural language processing, and machine learning. Jeff Fried from InterSystems tells us what we need to know about the technology.

We asked Jeff Fried, director of product management at InterSystems, to explain the ins and outs of data fabrics and how they fit into the landscape of emerging technologies, such as artificial intelligence and machine learning.


Upside: What is a data fabric? How is it different from other data storage technologies, such as data virtualization?

Jeff Fried: Think of a data fabric as a web stretched across a large network of existing data and technology assets. It connects disparate data and applications, whether they live on premises, with partners, or in the public cloud. A data fabric is a reference architecture that provides the capabilities needed to discover, connect, integrate, transform, analyze, manage, utilize, and store data assets, enabling the business to meet its many goals faster and with less complexity than previous approaches such as data lakes. An enterprise data fabric combines several data management technologies, including database management, data integration, data transformation, pipelining, and API management.

Smart data fabrics take the approach a step further and incorporate a wide range of analytics capabilities, including data exploration, business intelligence, natural language processing, and machine learning. This enables organizations to gain new insights and create intelligent prescriptive services and applications.
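To make that concrete, here is a minimal Python sketch of the pattern a smart data fabric implements: reading from two hypothetical sources with different schemas (a CRM extract and a billing extract, both invented for illustration), harmonizing them into one canonical view at query time, and layering a simple analytic on top. A production fabric does this across many live systems, with governance, security, and far richer analytics.

    from statistics import fmean

    # Two hypothetical source systems with different schemas (invented for illustration)
    crm_records = [
        {"cust_id": "C1", "name": "Acme Corp", "region": "EMEA"},
        {"cust_id": "C2", "name": "Globex", "region": "APAC"},
    ]
    billing_records = [
        {"customerId": "C1", "monthlySpend": 12000.0},
        {"customerId": "C2", "monthlySpend": 7500.0},
    ]

    def harmonized_view():
        """Join the sources into one canonical schema at query time; no copy is staged."""
        spend_by_id = {r["customerId"]: r["monthlySpend"] for r in billing_records}
        for rec in crm_records:
            yield {
                "customer_id": rec["cust_id"],
                "name": rec["name"],
                "region": rec["region"],
                "monthly_spend": spend_by_id.get(rec["cust_id"], 0.0),
            }

    # An analytics layer on the harmonized view: flag above-average spenders
    rows = list(harmonized_view())
    average = fmean(r["monthly_spend"] for r in rows)
    for r in rows:
        r["high_value"] = r["monthly_spend"] > average
        print(r)

The value is in the pattern: the consuming application sees one consistent schema and one analytic result, while the underlying sources remain untouched.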

What are the benefits?

The next generation of innovation and automation must be built on strong data foundations. Emerging technologies, such as artificial intelligence and machine learning, require a large volume of current, clean, and accurate data from different business silos to function. Yet seamless access across a global company’s multiple data silos is extremely difficult without a real-time, consistent, and secure data layer to deliver the required information to the relevant stakeholders and applications at the right time.

Although data lakes have been implemented in an attempt to solve many data management challenges, we're now seeing rapid business change. Enterprise data collection is expected to grow at a 42 percent annual rate over the next two years. In reality, many data lakes have been nothing more than data swamps -- murky with disorganized data that is hard to access and harder to turn into actionable insights.

Data fabrics enable firms to make better use of their existing data architectures without requiring a structural rebuild of every application or data store. Because existing applications and data remain in place, organizations can access, harmonize, and analyze the data in flight and on demand to support a variety of business initiatives.
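As a rough illustration of that access-in-place pattern, here is a toy Python sketch (the source name and adapter are hypothetical) of a fabric layer that registers reader functions over existing systems and filters records in flight as they stream, rather than migrating anything into a new store.

    from typing import Callable, Dict, Iterator

    class FabricRegistry:
        """Toy access layer: data stays in its source systems and is read on demand."""

        def __init__(self) -> None:
            self._sources: Dict[str, Callable[[], Iterator[dict]]] = {}

        def register(self, name: str, reader: Callable[[], Iterator[dict]]) -> None:
            # 'reader' wraps an existing system (database, API, file); nothing is copied
            self._sources[name] = reader

        def query(self, name: str, predicate=lambda rec: True) -> Iterator[dict]:
            # Records are filtered in flight as they stream from the source
            return (rec for rec in self._sources[name]() if predicate(rec))

    # Hypothetical adapter standing in for a live trading system
    fabric = FabricRegistry()
    fabric.register("trades", lambda: iter([
        {"symbol": "XYZ", "qty": 100},
        {"symbol": "ABC", "qty": 5},
    ]))

    for trade in fabric.query("trades", lambda rec: rec["qty"] >= 50):
        print(trade)  # only the 100-share trade passes the in-flight filter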

Moreover, data fabrics enable valuable real-time insights, such as on-demand access to risk data and analytics, which means organizations can adapt to market developments as they happen. This benefits both capital and liquidity management, which is especially critical during times of extreme intraday price volatility. Decision-making across internal functions is easier when organizations have a current and accurate view of enterprise risk.

Leading organizations leverage smart data fabrics to stitch together distributed data from across the enterprise and to power a wide variety of mission-critical initiatives, from business management reporting, scenario planning, and enterprise risk and liquidity modeling to regulatory compliance and portfolio optimization. This gives financial organizations a comprehensive view of what has happened in the past, what is happening now, and what is likely to happen in the future, so they can be proactive and prescriptive rather than reactive to market changes.

What are the drawbacks?


Organizational silos, technical silos, and the complexity of implementation can make data fabrics difficult to deploy in practice.

As I mentioned, enterprise data collection will keep increasing over the next two years, and technical and organizational silos will persist and grow with it. Amid this influx of data, the hundreds or even thousands of existing applications make it a serious challenge to integrate data from internal and external sources and use it to power business decisions.

For smart enterprise data fabric initiatives to be successful, organizations need to tackle technical and organizational challenges. Appointing a chief data officer (CDO) is one strategy to foster top-down data governance and provide necessary organizational support for a cohesive data strategy.

The complexity of implementation can also be a challenge with data fabrics. Building one requires exposing and integrating the data and systems that will provide immediate and significant value to the organization, such as real-time cash management that spans multiple parts of the business. When building a data fabric for the first time, data interoperability is essential: because disparate systems format data differently, this lack of native interoperability adds friction, slows time-to-value for data stakeholders, and creates the need to harmonize, deduplicate, and cleanse data.

Thus, an organization needs to understand its data consumption and regulatory compliance needs to make proper use of its data fabric. A lack of understanding often creates complexity or, worse, points of failure.
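To show what that harmonization work looks like at the smallest possible scale, here is a Python sketch with invented schemas: two sources format the same customer data differently, so each record is mapped onto one canonical shape, cleansed, and deduplicated on the harmonized key. Real fabrics apply far more sophisticated matching and survivorship rules.

    # Two hypothetical sources that format the same customer data differently
    source_a = [{"CustID": " c1 ", "Email": "PAT@EXAMPLE.COM"}]
    source_b = [
        {"customer_id": "C1", "email": "pat@example.com"},
        {"customer_id": "C2", "email": " lee@example.com "},
    ]

    def harmonize(rec: dict) -> dict:
        """Map either source schema onto one canonical schema, cleansing as we go."""
        cust = rec.get("CustID") or rec.get("customer_id")
        email = rec.get("Email") or rec.get("email")
        return {"customer_id": cust.strip().upper(), "email": email.strip().lower()}

    seen, canonical = set(), []
    for rec in source_a + source_b:
        clean = harmonize(rec)
        if clean["customer_id"] not in seen:  # deduplicate on the harmonized key
            seen.add(clean["customer_id"])
            canonical.append(clean)

    print(canonical)  # two canonical records survive: C1 and C2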

How do you know you need a data fabric?

COVID-19 has upended the "business-as-usual" approach at financial institutions. Ongoing market volatility has caused significant increases in trading volume, which has put pressure on front-, middle-, and back-office teams to keep pace. Though the market is no stranger to volatility, the scale of recent events, such as the unforeseen surge in trading of GameStop stock, represents increasingly challenging circumstances.

Previous crises saw organizations experience serious operational and data challenges driven by volume spikes, but price volatility has magnified these challenges for global organizations. It has left many firms on the back foot when it comes to assessing risk, because markets now move within minutes. The result is business lost to newer, more agile competitors.

In grappling with this volatility, three main pain points signal the need for a data fabric: a lack of real-time visibility; manual, inefficient processes that cause delays when new services come online; and the inability to access the data needed to adjust quickly to unpredictable market changes with machine learning and predictive analytics.

Capital markets firms are paying the price for delaying core investments in their data architectures and are now struggling to keep pace with ongoing market dynamics. When data isn't accessible across systems, heads of businesses struggle to form an accurate picture of the market and of the opportunities presented by ongoing client and market developments. Firms that have been slow to move to cloud-enabled or cloud-hosted environments and to adopt digital processes along the trade life cycle face incompatibility between next-generation platforms and the legacy technologies that remain.

Sunsetting legacy applications takes a lot of time and effort, but firms shouldn't be held back by these limitations. Investment in modern data management technologies such as enterprise data fabrics enables firms to keep running their legacy systems, stitch together distributed data from across the enterprise, and gain analytical capabilities and insights from the source data in real time. In turn, modern data management can greatly simplify architectures by reducing the number of different products needed to build and maintain a smart data fabric.

What mistakes do enterprises make when they adopt and implement a data fabric and what best practices can you recommend to avoid these?

From an architecture standpoint, many enterprises try to assemble several separate point solutions. Although possible, this strategy for building a data fabric adds complexity, delays time-to-value, and increases total cost of ownership. Modern data platform software provides a broad and deep set of the needed functionality, spanning integration, database management, analytics, and API management, which greatly reduces the number of moving parts, simplifies architectures, lowers total cost of ownership, and speeds time-to-value.

A mistake some enterprises make when adopting and implementing a data fabric is trying to do everything at once. Don’t try to boil the ocean. Start small. Measure and quantify the benefits and learn as you go. It’s a process and a journey. Learn, adjust, and get value at every step along the way.

Successful implementations also require buy-in from executive management from the start. As I mentioned, appointing a CDO is one way to remedy these implementation challenges as it fosters top-down data governance and provides the necessary organizational support for a cohesive data strategy.
