AI and Search: Part II

What role can generative AI play in information retrieval? A few months ago, I set out to learn what was possible using the current generation of AI models. I found some surprises, both good and bad. In this 3-part series, I’ll delve into the lessons I learned along the way.

Recap

In part 1, I tried a straight chatbot as an interface to search a product catalog. I learned two big things along the way: 

  1. Generative AI can be a game-changer when the user is looking for ideas rather than specifics.
  2. The chatbot pattern invites people to ask anything, which is a problem when your agent can’t answer everything.

Part 2: Restricted Replies

Could we focus on what LLMs do well and steer users away from the problem areas? Several users of the first chatbot iteration got great results when looking for gift ideas for someone without being overly specific: the generative AI was coming up with truly novel suggestions they hadn’t thought of on their own.

In the second phase, I decided to focus on that. Rather than an all-purpose shopping assistant, what about a gift-finding assistant?

But how can we do this while keeping the user from thinking they can ask *anything*?

To start, I chose a different personification for the assistant. My earlier chatbot presented as an AI sales assistant, which led users to assume it was omniscient about all product questions. After some consideration, I settled on a face with much less implied intelligence: Gifty the Gift Retriever. A Golden Retriever who “fetches” potential gifts, he carries far less implied expertise about product details.
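
As a rough illustration (and not the actual implementation), the persona change boils down to a system prompt. The sketch below also defines a generic `call_llm` helper, a hypothetical stand-in for whatever LLM client is used, which the later sketches in this post reuse:

```python
# A minimal sketch, assuming a generic chat-completion helper.
# `call_llm` is hypothetical; wire it to your actual LLM client.
def call_llm(system: str, user: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError("connect your LLM client here")

# Illustrative wording only, not the exact prompt from this project.
GIFTY_SYSTEM_PROMPT = (
    "You are Gifty, a friendly Golden Retriever who fetches gift ideas. "
    "You only fetch ideas and explain why each might suit the recipient. "
    "Politely decline questions about product details, pricing, or shipping."
)
```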

This had some effect, but wasn’t a silver bullet. Some users stopped assuming he’d be omniscient, but not everyone. More was needed. 

In addition to changing the assistant’s personification, I put some structure around the user’s side of the conversation. Rather than an open-ended prompt, I led the user through a brief questionnaire about the person they were finding a gift for. After gathering that information, the assistant was instructed to find an idea and describe only why it might be a good fit for the person.
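
Here’s a sketch of that guided flow, reusing the hypothetical `call_llm` helper and persona prompt from above; the questionnaire wording and function names are my own illustration:

```python
# Sketch of the questionnaire-then-fetch flow described above.
QUESTIONNAIRE = [
    "Who is the gift for, and how do you know them?",
    "Roughly how old are they?",
    "What do they do for fun?",
    "Anything they definitely would not want?",
]

def gather_profile(ask_user) -> str:
    """Run the questionnaire; join the answers into one profile blurb."""
    return " ".join(ask_user(q) for q in QUESTIONNAIRE)

def fetch_idea(profile: str) -> str:
    """Ask for one gift idea, described only in terms of fit."""
    return call_llm(
        GIFTY_SYSTEM_PROMPT,
        f"Recipient profile: {profile}\n"
        "Fetch exactly one gift idea. Describe only why it might be a "
        "good fit for this person. Do not discuss price, availability, "
        "or product specifications.",
    )
```

At a terminal, `gather_profile(input)` would run the questionnaire interactively.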

Most importantly, while the chatbot pattern was still in place, the user wasn’t given an open-ended input box. Instead, after each idea was ‘fetched’, they were given starting templates for reactions (sketched in code below):

  1. Good choice, have a treat!
  2. This isn’t a good option because…
  3. Just OK, fetch another idea.
  4. Let me tell you more about them…

The hope was that this would keep users from asking about topics beyond the assistant’s grasp, while still letting them provide free-form feedback to direct the gift hunt.
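
In code terms, the reaction step amounts to a small dispatch over those fixed templates. This is a sketch under my own naming, reusing `fetch_idea` from the earlier sketch; only options 2 and 4 opened a text field for the completion:

```python
# Sketch of the templated-reaction loop. The keys mirror the four
# templates above; the handler behavior is my assumption about the flow.
REACTIONS = {
    "1": "accept",       # "Good choice, have a treat!"
    "2": "reject_why",   # "This isn't a good option because..." + text
    "3": "fetch_again",  # "Just OK, fetch another idea."
    "4": "add_detail",   # "Let me tell you more about them..." + text
}

def handle_reaction(choice: str, profile: str, feedback: str = ""):
    """Map a templated reaction to the next step of the gift hunt.

    Returns the (possibly updated) profile and the next idea,
    or None for the idea when the user accepts one.
    """
    action = REACTIONS.get(choice, "fetch_again")
    if action == "accept":
        return profile, None  # the hunt is over
    if action in ("reject_why", "add_detail") and feedback:
        # Fold the free-form completion into the running profile.
        profile = f"{profile} {feedback}"
    return profile, fetch_idea(profile)
```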

Problem 1: Details, Details

The biggest challenge to overcome in this phase was getting users to provide a lot of detail. After all, if they only came to say:

 “I need to find a gift for an adult female who is outdoorsy”

…generative AI doesn’t provide much value vs. precompiled lists. Where it starts to shine is when you get deeper: 

“I need to find a gift for my 55-year old mother. She likes low-impact outdoor activities, and in the summer does a lot of kayaking and hiking. She has two dogs and lives in Montana on a ranch. She’s into R&B and motown, and has a record player and collects vinyl. She doesn’t cook for herself much, and eats out a lot.” 

OK, now we’re talking! Generative AI does a good job with this level of detail, where traditional gift lists or keyword search would struggle.

Getting users to write this level of detail was tricky, though. Open-ended prompts often left users unsure how to answer; other times, they would simply enter what they’d have typed into a search box.

Solution 1: Memory

I created a small memory model for the assistant. At each stage, it would summarize what it knew about the person so far. That summary gave it a basis for ‘interviewing’ the user – crafting specific follow-up questions about parts of their personality it hadn’t heard about yet.
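
A minimal sketch of that loop, again assuming the hypothetical `call_llm` helper from earlier; the prompt wording is illustrative:

```python
# Sketch of the memory model: keep a running summary of the recipient,
# then have the model target an uncovered area with its next question.

def update_memory(summary: str, latest_answer: str) -> str:
    """Fold the user's latest answer into the running summary."""
    return call_llm(
        "You maintain a short profile of a gift recipient.",
        f"Current profile: {summary}\n"
        f"New information: {latest_answer}\n"
        "Rewrite the profile to include the new information, "
        "in three or four sentences.",
    )

def next_question(summary: str) -> str:
    """Ask about a part of the recipient's life the summary doesn't cover."""
    return call_llm(
        "You are interviewing a shopper about a gift recipient.",
        f"Known so far: {summary}\n"
        "Ask one specific follow-up question about an aspect of their "
        "personality or lifestyle that is not covered above.",
    )
```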

After a few iterations, this worked well – new users were clear about what to expect, and started providing much more detailed descriptions, spaced out over several questions.

Problem 2: And Another Thing…

The response templates didn’t completely stop users from asking anything they wanted. They did curtail *some* freeform Q&A, but because the UX presented the statements as a chat dialog, people would still ask for more.

“This isn’t a good option for them because…”

… would get completed…

 “they don’t follow sports. …Also, can you tell me if the lantern you recommended before comes with a charger?” 

The chatbot was closed off and knew not to engage with that question, but the experience was still frustrating because users felt ignored.

This approach also made for a very slow trickle of ideas. After each one, the user had to select a template and sometimes enter text, and having to make decisions and give written feedback at every step was far too slow! We had the worst of both worlds: the slower, more thinking-heavy burden of the chatbot-style interface with the limited flexibility of a more conventional search application.

Bring in the UX

It was here that I engaged with a user experience colleague, John Choquette. We talked about the good and the bad at some length, and noted a few observations:

  1. The generative AI provided value in the ‘imagination’ stage – coming up with off-the-wall gift ideas that were hyper-specific to the person. It *wasn’t* providing much value in the dialog with the user *about* what it came up with.
  2. Presenting the information-gathering stage as a chat interface also wasn’t necessary. While having the AI write good follow-up questions was critical, the dialog UX was superfluous.

In short, the value being provided by ChatGPT had very little to do with the ‘chat’ part. Maybe we shouldn’t be using ‘chat’ at all! 

Check back here for part 3, where we’ll dive into the final phase: eliminating chat from our chatbot to make it a better chatbot!
