Freddie Mac adopted data virtualization to deliver relevant data to business users faster, at lower cost, and with more self-service functionality. Our data pipeline is very ETL-heavy: it is expensive to maintain, slow to develop against, and by the time it delivers data to business users, that data is already stale. Another rapidly growing problem is excessive data redundancy and its associated costs. And as data volumes continue to skyrocket, we need to support a wider variety of data formats. While we still need to keep a historical record in a data warehouse, we are using data virtualization to augment that history in real time with current data, semi-structured data (JSON and XML), and unstructured data (documents, photos, etc.). This relieves us of our heavy reliance on ETL-based data pipelines, which reduces costs while simplifying the overall architecture.
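The pattern described above can be sketched in miniature: a "virtual view" federates historical warehouse rows with live semi-structured records at query time, so consumers see one combined result without a new ETL-produced copy. This is a hypothetical illustration, not Freddie Mac's actual implementation; the `loans` table, the JSON feed, and all field names are invented for the example, and SQLite stands in for the warehouse.

```python
import json
import sqlite3

def historical_rows(conn):
    """Historical record kept in the warehouse (SQLite stands in here)."""
    return [
        {"loan_id": lid, "balance": bal, "source": "warehouse"}
        for lid, bal in conn.execute("SELECT loan_id, balance FROM loans")
    ]

def live_rows(feed):
    """Current data arriving as semi-structured JSON from an upstream system."""
    return [
        {"loan_id": r["loan_id"], "balance": r["balance"], "source": "live"}
        for r in json.loads(feed)
    ]

def virtual_view(conn, feed):
    """Federate both sources at query time -- nothing is copied or persisted."""
    return historical_rows(conn) + live_rows(feed)

# Demo with in-memory stand-ins for the two sources.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (loan_id TEXT, balance REAL)")
conn.execute("INSERT INTO loans VALUES ('L1', 1000.0)")
feed = '[{"loan_id": "L2", "balance": 250.0}]'

rows = virtual_view(conn, feed)
print(rows)
```

In a real deployment the federation layer is a data virtualization product rather than hand-written Python, but the design idea is the same: the join across the warehouse and the live feed happens on demand, so no redundant copy of the data is created or maintained.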