Kinetica Launches Native LLM for Language-to-SQL on Enterprise Data
Embedded large language model allows organizations to run SQL-GPT with enhanced privacy, security, and fine-tuning control.
Note: TDWI’s editors carefully choose vendor-issued press releases about new or upgraded products and services. We have edited and/or condensed this release to highlight key features but make no claims as to the accuracy of the vendor's statements.
Kinetica, the speed layer for generative AI and real-time analytics, has released a native large language model (LLM) combined with Kinetica’s innovative architecture that allows users to perform ad hoc data analysis on real-time, structured data quickly using natural language. Unlike with public LLMs, no external API call is required and data never leaves the customer’s environment. This announcement follows Kinetica’s earlier integration of its analytics database with OpenAI.
In the wake of the excitement around LLMs, enterprises and government agencies are exploring innovative ways to automate certain business tasks while keeping secure the sensitive information that might be revealed through fine-tuning or prompt augmentation. Public LLMs, such as OpenAI’s GPT-3.5, raise privacy and security concerns that can be addressed by native offerings integrated into the Kinetica deployment and contained within the customer's network perimeter.
In addition to being more secure, Kinetica’s native LLM is tailored to Kinetica's syntax and to industry-specific data definitions in sectors such as telecommunications, financial services, automotive, and logistics, producing more reliable and accurate SQL generation. This capability extends beyond standard SQL, ensuring efficient handling of intricate time-series, graph, and spatial questions for enhanced decision-making. Kinetica's approach to fine-tuning prioritizes optimizing SQL generation for consistent, accurate results, in contrast to more common techniques that prioritize creativity and yield diverse but unpredictable responses. This ensures ongoing functionality for businesses and users, offering peace of mind in SQL query outcomes.
“At Kinetica, we believe in fostering openness and embracing the diversity of generative AI models,” said Amit Vij, co-founder and president of Kinetica. “We expect there will be different LLM platforms that emerge and we want to provide our customers with choice. We currently support two models, and our commitment lies in continuously expanding our offerings to accommodate client-driven preferences and seamlessly integrate with a wide array of future models. Towards that end, Kinetica will roll out integration with other LLM platforms such as NVIDIA NeMo later this year for language to SQL as new state-of-the-art models emerge."
The Kinetica database converts natural language queries to SQL and returns answers within seconds, even for complex and novel questions. Further, Kinetica converges multiple modes of analytics (such as time series, spatial, graph, and machine learning), broadening the types of questions that can be answered.
Kinetica can deliver conversational query thanks to its use of native vectorization. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel rather than on individual data elements. This allows the query engine to process multiple data elements simultaneously, resulting in radically faster query execution on a smaller compute footprint. Vectorization is made possible by GPUs and the latest advancements in CPUs, which perform simultaneous calculations on multiple data elements, greatly accelerating computation-intensive tasks by allowing them to be processed in parallel across multiple cores or threads.
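The contrast between element-at-a-time and vectorized execution described above can be illustrated with a toy example. The sketch below is not Kinetica code; it uses NumPy to show how applying a filter and an aggregation to whole blocks of values at once replaces a per-element loop, which is the same principle a vectorized query engine applies on GPUs and modern CPUs.

```python
import numpy as np

def scalar_filter_sum(values, threshold):
    """Process one element at a time, as a non-vectorized engine would."""
    total = 0.0
    for v in values:
        if v > threshold:
            total += v
    return total

def vectorized_filter_sum(values, threshold):
    """Apply the predicate and the aggregation to the whole vector at once.
    NumPy evaluates the comparison and the sum over contiguous blocks of
    elements in parallel, analogous to a vectorized query engine."""
    return float(values[values > threshold].sum())

# Both produce the same answer; the vectorized form is dramatically
# faster on large arrays because many elements are processed per step.
data = np.random.default_rng(0).random(1_000_000)
assert abs(scalar_filter_sum(data, 0.5) - vectorized_filter_sum(data, 0.5)) < 1e-6
```

In a real engine the same idea applies to SQL operators: a `WHERE` predicate or a `SUM` aggregate runs over fixed-size vectors of column values rather than row by row.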
Kinetica’s native LLM is immediately available to customers in a containerized, secure environment either on premises or in the cloud.