

Rafay’s New Platform-as-a-Service Empowers Customers to Quickly Leverage the Power of AI

Extends core PaaS offering to address enterprise GPU consumption requirements, along with MLOps- and LLMOps-focused capabilities for data scientists.

Note: TDWI's editors carefully choose vendor-issued press releases about new or interesting research and services. We have edited and/or condensed this release to highlight key study results or service features but make no claims as to the accuracy of the vendor's statements.

Rafay Systems, a platform-as-a-service (PaaS) provider for modern infrastructure and accelerated computing, has extended the capabilities of its enterprise PaaS for modern infrastructure to support graphics processing unit- (GPU-) based workloads. This makes compute resources for AI instantly consumable by developers and data scientists with the enterprise-grade guardrails Rafay customers leverage today. The company also launched a new AI Suite with standards-based pipelines for machine learning operations (MLOps) and large language model operations (LLMOps) to help enterprise platform teams quicken the development and deployment of AI applications for developers and data scientists. 

The AI landscape has rapidly transformed, with AI and accelerated computing now evolving from an area of focus for small, specialist teams to permeating every aspect of application development and delivery for all businesses. Moreover, as the global GPU-as-a-Service market expands, organizations actively seek scalable solutions to quickly and easily connect their data scientists and developers to expensive, in-short-supply accelerated computing infrastructure. 

Rafay’s enterprise customers have long leveraged Rafay’s PaaS for modern infrastructure to rapidly give developers access to central processing unit- (CPU-) based infrastructure on premises and in all the major public clouds, with guardrails included. The same issues that needed to be addressed for CPU-based workloads—environment standardization, self-service consumption of compute, secure use of multi-tenant environments, cost optimization, zero-trust connectivity enforcement, and auditability—now have to be addressed with GPU-based workloads. Aspects such as cost are even more critical to control in the new age of AI. 

In addition to applying its existing capabilities to GPU-based workloads, Rafay has extended its enterprise PaaS with features and capabilities that specifically support GPU workloads and infrastructure. Rafay makes AI-focused compute resources instantly consumable by developers and data scientists, enabling customers to empower every developer and data scientist to accelerate the speed of AI-driven innovation—and do it within the guidelines and policies set forth by the enterprise.

"Rafay has extended our enterprise PaaS offering to support GPU-based workloads in data centers and in all major public clouds,” said Haseeb Budhani, co-founder and CEO of Rafay Systems. “Beyond the multicluster matchmaking capabilities and other powerful PaaS features that deliver a self-service compute-consumption experience for developers and data scientists, platform teams can also make users more productive with turnkey MLOps and LLMOps capabilities available on the Rafay platform.”

To address challenges associated with building and deploying AI-based applications, Rafay’s newly added support for GPU workloads helps enterprises and managed service providers power a new GPU-as-a-service experience for internal developers and customers, respectively. This provides developers and data scientists with:

  • Developer and data scientist self-service: An easy-to-use, self-service experience for requesting GPU-enabled workspaces
  • AI-optimized user workspaces: Pre-configured workspaces for AI model development, training, and serving with the necessary AI tools, including Jupyter Notebook and Visual Studio Code (VS Code) integrated development environment (IDE) integrations
  • GPU matchmaking: As with CPUs, dynamically match user workspaces with available GPUs or pools of GPUs based on criteria such as proximity, cost efficiency, GPU type, and more to improve utilization
  • GPU virtualization: Time slicing and multi-instance GPU sharing to virtualize GPUs across workloads and lower the cost of running GPU hardware, with dashboards to visualize GPU usage
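The matchmaking idea in the list above can be illustrated with a minimal sketch. This is not Rafay's implementation; the `GpuPool` type, field names, and scoring rule are hypothetical, showing only how a request might be matched to a pool by GPU type, proximity, and cost:

```python
from dataclasses import dataclass

@dataclass
class GpuPool:
    # Hypothetical description of a pool of available GPUs.
    name: str
    gpu_type: str      # e.g., "A100" or "H100"
    region: str        # used as a simple proximity signal
    cost_per_hour: float
    free_gpus: int

def match_pool(pools, gpu_type, region, gpus_needed):
    """Pick a pool with the requested GPU type and capacity,
    preferring the requester's region, then the lowest hourly cost."""
    candidates = [p for p in pools
                  if p.gpu_type == gpu_type and p.free_gpus >= gpus_needed]
    if not candidates:
        return None
    # False sorts before True, so same-region pools win ties on proximity
    # before cost is compared.
    return min(candidates, key=lambda p: (p.region != region, p.cost_per_hour))
```

A real scheduler would weigh many more criteria (interconnect topology, quota, reservations), but the shape of the decision is the same: filter by hard constraints, then rank by soft preferences.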

Rafay’s new AI Suite adds to Rafay’s existing portfolio of suites, which consists of the company's Standardization Suite, Public Cloud Suite, and Private Cloud Suite. New capabilities include:

  • Pre-configured LLMOps playgrounds: Help developers experiment with generative AI by rapidly training, tuning, and testing generative AI apps with approved models, vector databases, inference servers, and more
  • Turnkey MLOps pipeline: Deliver an enhanced developer experience with an all-in-one MLOps pipeline, complete with GPU support, a company-wide model registry, and integrations with Jupyter Notebooks and VSCode IDEs
  • Central management of LLM providers and prompts: Built-in prompt compliance and cost controls on public LLM use (such as OpenAI and Anthropic) to ensure developers consistently comply with internal policies
  • AI data source integrations and governance: Leverage pre-configured integrations with enterprise data sources such as Databricks and Snowflake while controlling usage for AI application development and deployments
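The prompt-compliance item above can be sketched in a few lines. This is a generic illustration, not Rafay's mechanism: the blocked patterns and the `check_prompt` helper are hypothetical stand-ins for the kind of policy gate an enterprise might place in front of external LLM providers:

```python
import re

# Hypothetical policy: patterns an enterprise might block in outbound prompts.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US SSN-like numbers
    re.compile(r"(?i)\binternal use only\b"),  # confidentiality marker
]

def check_prompt(prompt: str):
    """Return (allowed, violations) for a prompt bound for an external provider.

    A policy gateway would run a check like this before forwarding the
    prompt to OpenAI, Anthropic, or another LLM API.
    """
    violations = [p.pattern for p in BLOCKED_PATTERNS if p.search(prompt)]
    return (not violations, violations)
```

In practice such a gateway would also log the decision for auditability and attribute token costs per team, which is the cost-control half of the capability described above.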

The new GPU-based capabilities in Rafay’s PaaS, along with the AI Suite, are now generally available to customers.
