On Demand
Two technological advances are driving significant changes in the way we think about data integration: the increasing consumption of external streaming data and the growing reliance on cloud computing as an acceptable alternative to on-premises computing. The broadening acceptance of the Internet as a platform for distributing data and computing means that conventional approaches to data exchange and ingestion are yielding to more sophisticated approaches to data integration that, paradoxically, rely on a simplified development architecture.
David Loshin
Sponsored by
Liaison Technologies
Organizations can gain powerful, actionable insights by combining maps, geographical data, and relevant “big data” sources such as customer behavior or sensor data. Leading firms in a variety of industries—including retail, real estate, energy, telecommunications, land management, and law enforcement—are today engaged in projects involving geospatial analytics, and broader interest is growing. In a recent TDWI Research survey on emerging technologies, the number of respondents expecting to use geospatial analytics will double over the next three years.
Fern Halper, Ph.D., David Stodder
Sponsored by
Hewlett Packard Enterprise
Without design principles, swimming in circles in a big data lake can make your arms tired. Fortunately, the data lake concept has evolved sufficiently that best practices have emerged. In an open discussion, these big data experts will shed light on how a data lake changes the data storage, data processing, and analytic workflows in data management architectures.
Wayne Eckerson
Sponsored by
MapR, Teradata
Some emerging technologies (ETs) are so new that they are truly just emerging—for example, technologies for agile BI and analytics, data visualizations, BI on clouds or SaaS, event processing, Hadoop, Apache Spark and Shark, mashups, mobile BI, NoSQL, social media, the Internet of things, solid-state drives, and streaming data. Other ETs have been around for a few years, but are just now seeing appreciable user adoption—for example, appliances, competency centers, collaborative BI, columnar databases, data virtualization, open source, in-database analytics, in-memory databases, MDM, real-time operation, predictive analytics, and unstructured data.
Philip Russom, Ph.D., Fern Halper, Ph.D., David Stodder
Sponsored by
Hewlett Packard Enterprise, MicroStrategy, Qlik®, Snowflake, Trifacta, Voltage Security, Striim
When a new technology or platform enters IT, we often see it applied first with operational applications and their servers. Then BI platforms and data warehouses adopt the new technology, followed by data management tools. We’ve seen this with various technologies, including Java and services. We’re now seeing the same sequence with clouds (whether public, private, or hybrid).
Philip Russom, Ph.D.
Sponsored by
Liaison Technologies
Changes are occurring in how businesses make decisions. Successful companies are not willing to wait a week or even a day for insight from IT. They want it on demand, close to real time, more frequently, and embedded into business processes. Organizations want easy-to-use analytics software for both traditional BI and more advanced analytics. The need for speed, flexibility, and agility in decision making is becoming a business imperative.
Fern Halper, Ph.D.
Sponsored by
SAP and Intel
Apache Spark is a parallel processing engine for big data that achieves high speed and low latency by leveraging in-memory computing and cyclic data flows. Benchmarks show Spark to be up to 100 times faster than Hadoop MapReduce for in-memory operations and up to 10 times faster for disk-bound ones. High performance aside, interest in Spark is rising rapidly because it offers a number of other advantages over Hadoop MapReduce, while also aligning with the needs of enterprise users and IT organizations.
Philip Russom, Ph.D.
Sponsored by
Think Big, a Teradata company