Enterprises need more from LLMs: On coherence vs knowledge

Eddie Zhou

Engineering

Arvind Jain

CEO

AI chatbots and question answering systems have been around for decades, but never before have they taken the world by storm like this. ChatGPT, the main culprit for the craze, saw over a million users interact with it within just the first week of availability. Microsoft, now a $10 billion investor in OpenAI, announced GPT-powered features for Bing and the Edge browser in a bid to upend the consumer search industry. Google shortly followed by announcing its own AI companion for search, “Bard” – although to mixed reception after it sported a factual error during its grand reveal. 

So, what’s causing all the recent commotion?

Coherent at last

One of the most pivotal breakthroughs in recent generative AI is the coherence of model output. ChatGPT’s predecessors (like GPT-2) and older generative models often struggled to produce sensible, easy-to-follow text with adequate fluency, making their output difficult to use in real applications.

In recent months, that’s changed. The current generation of models behind ChatGPT (GPT-3 and beyond) can produce largely coherent, fluent responses that lend a believable credibility to their answers. These models have also become far more capable of synthesis – the ability to parse and summarize complex information and conversations. 

This combination of response coherence and synthesis exposes an interface to the vast volume of implicit world knowledge stored within these models. General use of ChatGPT over the past few weeks has thoroughly convinced the public of its potential to revolutionize the way we discover, learn, and work. 


Confidence in knowledge

When models haven't been trained on or exposed to data in a given domain, though, the implicit world knowledge in the model can be counterproductive. Example domains include medicine, law, or the internal lingo you use at work. While the vast training data behind modern LLMs covers some of these domains (try asking ChatGPT for legal advice!), most enterprise data sits in walled gardens. As a result, the downsides of conflating coherence with knowledge become especially clear in the enterprise environment.

For example, the general definition of "Scholastic" refers to something related to education and schooling (and/or the education company). For Glean employees, “Scholastic” (a play on the popular search engine stack “Elastic”) refers to our learned, vector-based retrieval and scoring system. 

The best models are ones that solve for domain adaptation – systems set up to learn from domain-specific information and augment their base knowledge, allowing them to disambiguate between general and company-specific senses of the same term.
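As a rough illustration of the idea (not Glean's actual system), consider a toy vector-retrieval setup where a company-specific embedding for a term overrides the general one. The vectors and names below are entirely hypothetical, hand-picked so that the in-domain sense of "Scholastic" lands near "retrieval" rather than "education":

```python
from math import sqrt

# Toy 3-dimensional embeddings, hand-picked for illustration only.
# In a real system these would come from a learned encoder.
GENERAL_EMBEDDINGS = {
    "scholastic": [0.9, 0.1, 0.0],  # general sense: close to "education"
    "education":  [0.8, 0.2, 0.0],
    "retrieval":  [0.0, 0.1, 0.9],
}

# Domain adaptation, sketched as an override table: company-specific
# senses take precedence over the general ones.
COMPANY_EMBEDDINGS = {
    "scholastic": [0.1, 0.0, 0.9],  # in-domain sense: close to "retrieval"
}

def embed(term, domain_adapted=True):
    """Look up a term's vector, preferring the in-domain sense if enabled."""
    term = term.lower()
    if domain_adapted and term in COMPANY_EMBEDDINGS:
        return COMPANY_EMBEDDINGS[term]
    return GENERAL_EMBEDDINGS[term]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Without adaptation, "Scholastic" scores high against "education";
# with adaptation, it scores high against "retrieval" instead.
general = cosine(embed("Scholastic", domain_adapted=False), embed("education"))
adapted = cosine(embed("Scholastic"), embed("retrieval"))
```

Here, a query about "Scholastic" would retrieve education-related documents under the general embeddings, but internal retrieval-system documents once the company-specific vector is in place – the same resolution of ambiguity that a properly domain-adapted model performs implicitly.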

These types of LLMs are essential to delivering a search and knowledge discovery system refined enough for the enterprise environment. This quality search experience, capable of maintaining coherence in the face of new knowledge, is key to unlocking the potential of generative AI and delivering its full value to knowledge workers collaborating in complex digital environments. 

The future of company-specific generative models 

Leveraging generative models for enterprise question answering is just the tip of the iceberg. In the future, generative models equipped with your company’s knowledge will be capable of doing far more. Trained in-domain and fine-tuned, these models can augment our workflows and completely transform the way we work. Think of a world where these systems are drafting technical design documents, automatically filling out RFPs, or summarizing your weekly inbox for review. 

We believe that expanding generative AI experiences to facilitate information access and discovery is the first step towards unlocking that full potential for enterprise environments. Glean is at the forefront of training models in-domain and fine-tuning LLMs to power that progress. If you’re interested in learning more about how we’re helping transform knowledge discovery and enterprise search, sign up for a demo today. 
