
Launches Early Access to Metacloud Platform

AI developers gain the flexibility to run end-to-end AI workflows on any mix of cloud or on-premises compute and storage they choose.

Note: TDWI’s editors carefully choose vendor-issued press releases about new or upgraded products and services. We have edited and/or condensed this release to highlight key features but make no claims as to the accuracy of the vendor's statements.

, an operating system for artificial intelligence (AI) and machine learning (ML) built by data scientists, has announced the exclusive early release of Metacloud, a new managed service that gives AI developers the flexibility to run AI/ML workloads on a mix of infrastructure and hardware choices, even within the same AI/ML workflow or pipeline. Available platform integrations include Intel, AWS, Azure, GCP, Dell, Red Hat, VMware, and Seagate. Exclusive early access to Metacloud is now available upon request on our website.

Many AI projects stall because the existing IT infrastructure (whether in the cloud or on-premises) cannot meet the growing demands of AI workloads. AI developers are often locked into one infrastructure architecture, leaving them little flexibility to try new ML/AI infrastructure options. To experiment with a new environment, data scientists must re-instrument a completely new stack, which can take months to set up. AI developers need the ability to choose the best-of-breed compute and cloud solution for each workload based on each architecture's cost/performance trade-offs, instantly, without the burden of a long-term commercial commitment.

With the early release of Metacloud, AI developers now have full flexibility to run any AI architecture for any AI workload on demand. Together with the end-to-end operating system for machine learning, AI developers can now instantly manage data and develop, train, and deploy models on any infrastructure. Metacloud introduces a new, flexible interface for running AI workloads instantly, BYOC (Bring Your Own Compute) and BYOS (Bring Your Own Storage), delivered through a developer-friendly portal for setting up and launching AI/ML workflows. Metacloud works with any AI infrastructure provider because it is designed with cloud-native technologies such as containers and Kubernetes. Developers simply create an account, select the AI/ML infrastructure on which to run their project (public cloud, on-premises, co-located, dev cloud, pre-release hardware, and more), and run the workload. Metacloud is provided as part of, a Kubernetes-based full-stack machine learning operating system that includes everything data scientists and developers need to build and deploy AI applications.
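The release does not document Metacloud's API, but the per-workload cost/performance trade-off it describes can be sketched generically. The following minimal Python sketch (all backend names, prices, and speed figures are hypothetical illustrations, not Metacloud data) picks the cheapest backend for a job by estimating cost as baseline runtime divided by relative speed, times hourly price:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    """A hypothetical compute backend a developer might 'bring' (BYOC)."""
    name: str
    cost_per_hour: float   # illustrative USD/hour (amortized for on-prem)
    relative_speed: float  # throughput multiplier vs. an arbitrary baseline

def cheapest_for(baseline_hours: float, backends: list[Backend]) -> Backend:
    # Estimated job cost = (baseline hours / relative speed) * hourly cost.
    return min(
        backends,
        key=lambda b: (baseline_hours / b.relative_speed) * b.cost_per_hour,
    )

# Hypothetical options for a training job that takes 10 hours on the baseline.
backends = [
    Backend("on-prem-gpu", cost_per_hour=1.20, relative_speed=1.0),  # cost 12.00
    Backend("cloud-a100", cost_per_hour=3.00, relative_speed=3.0),   # cost 10.00
    Backend("cloud-spot", cost_per_hour=0.90, relative_speed=0.8),   # cost 11.25
]

choice = cheapest_for(10.0, backends)
print(choice.name)  # → cloud-a100
```

The point of the sketch is that the cheapest hourly rate is not always the cheapest job: the faster cloud backend wins here despite the highest hourly price, which is the kind of per-workload decision the release says developers need to make instantly.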

A Metacloud preview is now available for early access upon request at
