Comet’s Suite of Tools, Integrations Accelerate Large Language Model Workflow for Data Scientists

Company boosts productivity and performance with introduction of cutting-edge LLMOps capabilities.

Note: TDWI’s editors carefully choose vendor-issued press releases about new or upgraded products and services. We have edited and/or condensed this release to highlight key features but make no claims as to the accuracy of the vendor's statements.

Comet, a platform for managing, visualizing, and optimizing machine learning models, announced a new suite of tools designed to revolutionize the workflow surrounding Large Language Models (LLMs). These tools mark the beginning of a new market category, known as LLMOps. With Comet's MLOps platform and LLMOps tools, organizations can effectively manage their LLMs and enhance their performance in a fraction of the time.

Comet’s new suite of tools debuts at a moment when data scientists working on NLP are no longer training their own models; instead, they’re spending days trying to generate the right prompts (i.e., prompt engineering, or prompt chaining, in which data scientists build new prompts from the output of a previous prompt to solve more complex problems). Until now, however, data scientists have lacked tools to adequately manage and analyze the performance of these prompts. Comet's offering enables them to reach higher levels of productivity and performance; its tools address the evolving needs of the ML community to build production-ready LLMs and fill a gap in an underserved market.
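For readers unfamiliar with the pattern, the sketch below illustrates prompt chaining in the generic sense described above: the output of one prompt is fed into the next. The complete() helper is a hypothetical stand-in for whatever LLM client a team uses; it is not part of Comet's product.

```python
# Minimal sketch of prompt chaining: the output of one prompt becomes
# part of the next prompt. `complete()` is a hypothetical stand-in for
# any LLM client call and would need to be wired to a real provider.

def complete(prompt: str) -> str:
    """Hypothetical helper that sends a prompt to an LLM and returns its text output."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")


def summarize_then_classify(document: str) -> str:
    # Step 1: summarize the raw document.
    summary = complete(f"Summarize the following document in two sentences:\n\n{document}")

    # Step 2: chain the first response into a second, more focused prompt.
    return complete(
        "Based on this summary, classify the document as "
        f"'bug report', 'feature request', or 'other':\n\n{summary}"
    )
```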

"Previously, data scientists required large amounts of data, significant GPU resources, and months of work to train a model," noted Gideon Mendels, CEO and co-founder of Comet. "Today they can bring their models to production more rapidly. However, the new LLM workflow necessitates dramatically different tools, and Comet's LLMOps capabilities were designed to address this crucial need."

Comet LLMOps Tools in Action

Comet’s LLMOps tools are designed to let users leverage the latest advances in prompt management and model querying to iterate quickly, identify performance bottlenecks, and visualize the internal state of prompt chains.

The new suite of tools serves three primary functions:

  • Prompt Playground: Comet’s Prompt Playground lets prompt engineers iterate quickly with different prompt templates and understand their impact across different contexts.
  • Prompt History: This debugging tool records prompts, responses, and chains, tracing experimentation and decision-making through chain visualization (a minimal logging sketch follows this list).
  • Prompt Usage Tracker: Teams can now track prompt usage at the project and experiment level for a granular view of how prompts are consumed.
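The release itself includes no code, but the following hedged sketch shows what logging a prompt/response pair for later inspection in a tool like Prompt History can look like. It assumes Comet's comet_llm Python SDK and its log_prompt call, with a COMET_API_KEY set in the environment; exact package, function, and argument names may differ from the product surface announced here.

```python
# Sketch of logging a prompt/response pair to Comet so it can be revisited
# later. Assumes the `comet_llm` SDK is installed and COMET_API_KEY is set;
# argument names may differ from the exact tools described in this release.
import comet_llm

prompt = "Translate the following sentence to French: 'The model is training.'"
response = "Le modèle est en cours d'entraînement."  # output returned by your LLM

comet_llm.log_prompt(
    prompt=prompt,
    output=response,
    metadata={"model": "gpt-3.5-turbo", "temperature": 0.2},  # anything useful for debugging
    tags=["translation-demo"],
)
```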

Integrations with Leading Large Language Models and Libraries

Comet also announced integrations with OpenAI and LangChain. Comet’s integration with LangChain allows users to track, visualize, and compare chains so they can iterate faster. The OpenAI integration lets data scientists tap the full potential of OpenAI's GPT-3 while capturing usage data, prompts, and responses so that they never lose track of past experiments.

"The goal of LangChain is to make it as easy as possible for developers to build language model applications. One of the biggest pain points we've heard is around keeping track of prompts and prompt completions," said Harrison Chase, creator of LangChain. "With Comet, users can easily log their prompts and LLM outputs and compare different experiments to make decisions faster. This integration allows LangChain users to streamline their workflow and get the most out of their LLM development."

For more information on the new suite of tools and integrations, please visit http://comet.com/site/products/llmops.
