Why personalization is essential to getting AI like ChatGPT workplace-ready

Arjun Landes

Engineering

AI Summary by Glean
  • Generative AI solutions need to be highly personalized and relevant for each user to be truly effective in the workplace, ensuring that the answers provided are tailored to the user's specific context and needs.
  • An enterprise-ready generative AI solution depends on the quality of the retrieval system over company documents, requiring a deep understanding of user activity and company knowledge to deliver relevant and personalized answers.
  • Building a nuanced retrieval system involves understanding the variety of datasources and user interactions, ensuring privacy and relevance by capturing meaningful engagement and integrating it into a ranking algorithm for better search results.

Sometimes, the responses you receive from generative AI just aren’t quite right. You’ve probably experienced it before – the generated response is well-written and factually correct, but it lacks the context and depth of detail you were looking for. After all, the model may have all the world’s knowledge at its fingertips, but that doesn’t include the sensitive information and user activity specific to your enterprise. After a few more responses that either lack important context or provide too much unnecessary detail, you decide it’s probably best to go looking for the information yourself.

At the end of the day, did you save any time at all? 

Most generative AI solutions on the market are excellent tools that provide great utility. Need a paragraph pared down? A quick checklist or plan made for a trip? A summary of a news article? They’ve got you covered.

On the other hand, there are plenty of everyday workplace questions they’ll struggle to answer. Can they build out a press release for your latest product launch without leaking information as training data? A plan for your latest campaign that integrates the many documents, conversations, and assets your team has recently collaborated on? A quick summary of the biggest user pain points collected from the customer calls your team conducted over the past two weeks?

To elevate generative AI into true workplace assistants, the answers they provide need to be tailored to both the user and the enterprise they work in. If you’re looking to build an enterprise-ready ChatGPT, you need to ensure that every answer is highly personalized and relevant for each user – and that the first few answers are all they’ll need to move forward with their workday.

In this blog, I’ll go over some essential components and considerations that we’ve integrated into our solution – Glean Chat – that ensure highly relevant and personalized answers every time, for every user.

Retrieval quality is the key

An enterprise-ready generative AI solution depends critically on the quality of the retrieval system over company documents. Just because an answer is factually correct doesn’t mean it’s useful. To deliver true relevance and personalization, an assistant needs to understand the activity of all users across various applications. By doing so, it develops a deep understanding – almost an intuition – of what’s happening at your company, who’s doing what, and which pieces of information are most important.

Contrary to what some might think, that deep understanding of company knowledge can't be achieved by just fine-tuning an LLM over company documentation. You can't train on any document that has non-global permissions (which is most documents). Thus, you have to first search over documents that a user has access to, find snippets of text likely to contain the answer to the user’s question, and then feed these into an LLM to synthesize answers. 
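
To make that concrete, here’s a minimal sketch of the retrieve-then-synthesize flow. The `search_index` and `llm` objects are hypothetical stand-ins rather than Glean’s actual interfaces; the point is that permissions are enforced at query time, and the model only ever sees snippets the user is already allowed to read:

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    doc_id: str
    text: str

def answer_question(user_id: str, question: str, search_index, llm) -> str:
    """Retrieve permission-filtered snippets, then have an LLM synthesize the answer."""
    # 1. Search only over documents this user can access. Permissions are enforced
    #    by the index at query time, never baked into model training.
    snippets = search_index.search(query=question, acting_user=user_id, limit=8)

    # 2. Build a grounded prompt from the retrieved snippets.
    context = "\n\n".join(f"[{s.doc_id}] {s.text}" for s in snippets)
    prompt = (
        "Answer the question using only the snippets below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. The model only ever sees text the user could have read themselves.
    return llm.complete(prompt)
```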

Datasource and activity variety requires nuance

However, building a nuanced, high-quality retrieval system is a challenging endeavor. In any given corpus, there’s an incredible volume and diversity of documents, and each user has a different propensity to visit different datasources. Simply tracking file engagement doesn’t translate into relevance for every user, either – within a single datasource, for the exact same query, the resulting documents can have wildly different relevance for different users!

For example, let’s say a company called Alpha is a Glean customer. For the query “alpha”, different departments at Glean will expect totally different search pages. An engineer might expect engineering Jiras related to the customer’s deployment, while a customer support agent might expect user experience complaint Jiras. 

A system that correctly tracks activity data and applies personalization for each user can capture these differences in relevance between departments, orgs, teams, and users, and apply them appropriately to search results. It’s far from easy, though – some datasources offer "native" activity data that you have to connect to and process in a timely manner, while others don’t expose it at all, so you’ll have to capture activity regularly through a browser extension to get an accurate picture. Then you’ll have to synthesize all of that information, filter out the spam, and figure out how to integrate it into a ranking algorithm, trading it off against more traditional information retrieval signals like BM-25. Exceptional search can’t be achieved by simply implementing BM-25 and hoping for the best – it needs to go further to provide the best possible results in enterprise environments.
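
As a rough illustration of that trade-off, here’s a minimal scoring sketch. The fields, signals, and weights are invented for the example and aren’t Glean’s ranking function; the idea is simply that a lexical score like BM-25 gets blended with activity-derived personalization signals:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    team: str
    # Share of the user's recent activity in each datasource, e.g. {"jira": 0.6}.
    datasource_affinity: dict = field(default_factory=dict)

@dataclass
class Document:
    datasource: str
    # Recent, spam-filtered engagement with this document, keyed by team.
    engagement_by_team: dict = field(default_factory=dict)

def blended_score(bm25: float, doc: Document, user: UserProfile,
                  weights: tuple = (1.0, 0.6, 0.4)) -> float:
    """Blend a traditional lexical signal with activity-based personalization signals."""
    w_lex, w_user, w_team = weights
    user_affinity = user.datasource_affinity.get(doc.datasource, 0.0)
    team_engagement = doc.engagement_by_team.get(user.team, 0.0)
    return w_lex * bm25 + w_user * user_affinity + w_team * team_engagement
```

Under a scheme like this, identical BM-25 scores for the query “alpha” still produce different rankings: an engineer’s Jira affinity and their team’s recent activity on deployment tickets pull those tickets up, while a support agent sees the complaint tickets surface instead.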

Doing all that while preventing individual user data and personalization datapoints from leaking into other users’ profiles is the (not-so-optional!) cherry on top. Privacy thresholds need to ensure data is only incorporated when a common datapoint is spotted across multiple users.
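
One simple way to enforce that kind of threshold, sketched here with invented names rather than Glean’s implementation, is to drop any signal that hasn’t been observed for a minimum number of distinct users:

```python
from collections import defaultdict
from typing import Hashable, Iterable, Set, Tuple

def thresholded_signals(events: Iterable[Tuple[str, Hashable]],
                        min_distinct_users: int = 3) -> Set[Hashable]:
    """Keep only signals (e.g. query-document associations) seen across enough distinct users."""
    users_per_signal = defaultdict(set)
    for user_id, signal in events:
        users_per_signal[signal].add(user_id)

    # Anything observed for fewer than `min_distinct_users` people stays out of the
    # shared ranking model, so one person's private activity can't surface in a
    # co-worker's results.
    return {s for s, users in users_per_signal.items()
            if len(users) >= min_distinct_users}
```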

Determining personalization and relevance

Understanding which datasources a user and their co-workers interact with most, what the sharing patterns of content look like across different applications, and how user activity changes over time is critical to properly interleaving documents from disparate sources.

Developing a system that deeply understands this, and that can distinguish between meaningful engagement (a user reading and commenting on a design document) and superficial engagement (repeatedly opening an irrelevant document, or a personal note being weighted as if it were a widely relevant asset), helps offer a more complete and relevant knowledge discovery experience that’s truly ready for use in enterprise environments (a quick sketch of that weighting follows below). If you’re looking for a better idea of what it takes to nail personalization for your generative AI and search solution, check out our on-demand webinar!
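
To make that distinction concrete, here’s one hypothetical way to weight a single activity event; the fields and constants are illustrative, not Glean’s actual model:

```python
import math
from dataclasses import dataclass

@dataclass
class ActivityEvent:
    kind: str             # "view", "comment", "edit", ...
    dwell_seconds: float  # time actually spent in the document
    visit_count: int      # recent visits by this user to the same document
    doc_is_personal: bool

def engagement_weight(event: ActivityEvent) -> float:
    """Score one activity event by how meaningful it is, not just how frequent."""
    # Substantive interactions (commenting, editing) count far more than a quick open.
    base = {"comment": 3.0, "edit": 2.5, "view": 1.0}.get(event.kind, 0.0)

    # Longer dwell time suggests the document was actually read, capped so a tab
    # left open all afternoon isn't rewarded.
    dwell = min(event.dwell_seconds / 60.0, 3.0)

    # Repeat visits add diminishing value, so re-opening an irrelevant document
    # over and over can't dominate the signal.
    repetition = 1.0 + math.log1p(event.visit_count)

    score = base * dwell / repetition

    # Personal notes shouldn't be boosted as if they were widely relevant assets.
    if event.doc_is_personal:
        score *= 0.2
    return score
```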

If you’re more interested in getting started with truly enterprise-ready generative AI today – not tomorrow – sign up for a Glean demo!
