
Kong AI Gateway Adds Enterprise Capabilities

Kong customers can more easily manage AI costs, build AI agents and chatbots, get real-time AI responses, and ensure content safety.

Note: TDWI's editors carefully choose vendor-issued press releases about new or upgraded products and services. We have edited and/or condensed this release to highlight key features but make no claims as to the accuracy of the vendor's statements.

Kong Inc., a developer of cloud API technologies, has released a suite of open source AI plugins as part of the Kong Gateway 3.7 release and has made Kong AI Gateway generally available.

AI developers can now focus on building AI-specific use cases, such as LLM RAG chatbots or AI integrations, without having to build the underlying infrastructure to establish a secure and observable life cycle for AI applications in production. This is fully supported by Kong at scale on both Kong Konnect and Kong Gateway Enterprise.

Kong AI Gateway can also be provisioned entirely in the cloud as a dedicated SaaS service with Kong’s new Konnect Dedicated Cloud Gateways offering, and it supports a wide range of use cases to help accelerate the adoption and rollout of new AI applications into production.

Kong Gateway Open Source (OSS) 3.7, now generally available, features AI Gateway updates including new streaming plugins, performance and security improvements, and several capabilities that enable organizations to secure, scale, and observe AI traffic. The release includes improvements and new plugins such as the following.

  • Support for the existing OpenAI SDK: Kong AI Gateway exposes every LLM it supports through the standardized OpenAI API specification, giving developers a familiar interface so they can onboard quickly. Because the OpenAI SDK client library is supported natively, teams can build AI agents and applications against the gateway and start using LLMs behind it simply by redirecting requests to a URL that points to a gateway route. The same OpenAI SDK can then be reused to consume every LLM Kong AI Gateway supports, with no code changes (see the OpenAI SDK sketch after this list).
  • Introducing AI streaming support (OSS + Enterprise): The “ai-proxy” plugin now streams natively from every supported LLM, unlocking more real-time experiences instead of waiting for the LLM to process the full response before returning it to the client. Responses are sent token by token in HTTP response chunks (server-sent events). Streaming is enabled in the “ai-proxy” plugin configuration, allowing clients to request streamed responses and build more compelling, interactive AI experiences (see the streaming sketch after this list).
  • New plugin: AI token rate-limiting advanced (Enterprise): A new enterprise-only capability to rate-limit use of any LLM by the number of request tokens rather than by raw HTTP requests. Organizations can better manage AI spend by specifying different consumption levels for different teams, and teams hosting their own LLMs can scale their AI infrastructure more predictably as traffic grows across applications. The new “ai-rate-limiting-advanced” plugin counts the AI tokens requested regardless of how many raw HTTP requests carry them; customers who want to rate-limit both raw requests and AI tokens can combine it with the standard Kong rate-limiting plugin (see the rate-limiting sketch after this list).
  • New plugin: AI Azure Content Safety (Enterprise): Allows customers to seamlessly integrate with the Azure AI Content Safety service to validate prompts sent through the AI Gateway across every supported LLM (not only Azure AI).
  • Dynamic URL-sourced LLM model in ai-proxy (OSS + Enterprise): The requested model can now be configured dynamically from the URL path the client requests, rather than hard coding a model name in the plugin configuration. This lets Kong AI Gateway scale across teams that want to experiment with a wide variety of models without pre-configuring each one in the “ai-proxy” plugin: apply the plugin once, and all models available from the underlying AI provider are supported by parsing the requested URL path (see the dynamic-model sketch after this list).
  • Support for Anthropic Claude 2.1 Messages API (OSS + Enterprise): Kong’s unified API interface for consuming models across both cloud and self-hosted providers now also supports the Anthropic Claude 2.1 Messages API, which is typically used to create chatbots or virtual assistant applications. The API manages the conversational exchanges between a user and an Anthropic Claude model (the assistant) (see the Messages API sketch after this list).
  • Updated AI analytics format (OSS + Enterprise): An updated analytics logging format now covers all AI requests processed by Kong, so users can measure consumption across every model requested through “ai-proxy,” “ai-request-transformer,” and “ai-response-transformer.”
  • Performance improvement: Improvements to Kong’s cache key generation algorithm make Kong Gateway 3.7 faster; details are available on Kong’s blog.
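
To make the SDK reuse concrete, here is a minimal OpenAI SDK sketch in Python. The gateway host and route (kong-gateway.example.com/ai) are hypothetical, and the gateway is assumed to manage the real provider key; the only client-side change from calling OpenAI directly is the base_url.

    # Minimal sketch: point the standard OpenAI Python SDK at a Kong AI
    # Gateway route instead of api.openai.com. The host, route, and key
    # handling here are illustrative assumptions, not documented values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://kong-gateway.example.com/ai",  # hypothetical gateway route
        api_key="placeholder",  # the gateway may inject the real provider key
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # any model exposed behind the gateway
        messages=[{"role": "user", "content": "Summarize this quarter's API traffic."}],
    )
    print(response.choices[0].message.content)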
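
The streaming sketch below builds on the same assumptions (hypothetical gateway route, gateway-managed key). On the client side, streaming is a single stream=True argument, after which the response arrives as server-sent-event chunks.

    # Streaming sketch: same hypothetical gateway route as above, with
    # stream=True so the response arrives token by token as SSE chunks
    # instead of one buffered body.
    from openai import OpenAI

    client = OpenAI(base_url="https://kong-gateway.example.com/ai", api_key="placeholder")

    stream = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Explain token rate limiting briefly."}],
        stream=True,  # request chunked streaming output
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks carry only role metadata, no text
            print(delta, end="", flush=True)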
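
The rate-limiting sketch below shows how such a plugin might be enabled on a route through the Kong Admin API. The plugin config field names (llm_providers, limit, window_size) are assumptions for illustration, not a documented schema; consult the plugin documentation for the actual fields.

    # Hypothetical sketch: enable token-based rate limiting on a route via
    # the Kong Admin API (default port 8001). The config shape below is an
    # assumption made for illustration.
    import requests

    resp = requests.post(
        "http://localhost:8001/routes/ai-route/plugins",
        json={
            "name": "ai-rate-limiting-advanced",
            "config": {
                # assumed shape: cap consumption at 50,000 LLM tokens per hour
                "llm_providers": [
                    {"name": "openai", "limit": 50000, "window_size": 3600}
                ],
            },
        },
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # id of the newly created plugin instance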
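
The dynamic-model sketch below imagines a route where the gateway parses the model name out of the requested URL path; the path pattern shown is an assumption, not Kong's documented route format.

    # Hypothetical sketch: select the model ("gpt-4o-mini") via the URL path
    # rather than a hard-coded plugin config value. The path pattern is an
    # assumption for illustration.
    import requests

    resp = requests.post(
        "https://kong-gateway.example.com/ai/openai/gpt-4o-mini/chat/completions",
        json={"messages": [{"role": "user", "content": "Hello!"}]},
        headers={"Authorization": "Bearer placeholder"},  # gateway-managed auth assumed
        timeout=30,
    )
    print(resp.json())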
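
Finally, the Messages API sketch shows an Anthropic-style request proxied through a hypothetical gateway route; the request body follows Anthropic's documented Messages format (model, max_tokens, messages).

    # Sketch: Anthropic Messages-style request sent through a hypothetical
    # gateway route. The body fields follow Anthropic's Messages API format.
    import requests

    resp = requests.post(
        "https://kong-gateway.example.com/ai/anthropic/messages",
        json={
            "model": "claude-2.1",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Draft a friendly support reply."}],
        },
        timeout=30,
    )
    print(resp.json())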

For a complete rundown of features, fixes, and updates, check out the Kong Gateway 3.7 OSS CHANGELOG. Download Kong Gateway 3.7 directly, or try it through Kong Konnect, the quickest route to get started with the release.
