If you had to teach an alien from outer space to make a peanut butter and jelly sandwich, how would you write the instructions? My guess is it might look something like this:
Reasonable enough for an adult human to follow. But what about the alien, who has none of the context required to make sense of these instructions? Would they know to unwrap the loaf and take out two slices? Or to use a butter knife to spread the peanut butter? And how would they know that they need to remove the lids from the peanut butter and jelly jars?
If you’ve ever taken an entry-level coding class, you know where I’m going with this. It’s a classic exercise that helps orient students to the level of precision required to instruct a computer to execute a task. Check out this teacher's attempts to make a sandwich based on some students’ written instructions – you’ll quickly see where things go wrong.
Profitero’s eCommerce AI Assistant, Ask Profitero, doesn’t require any programming knowledge to use, but I keep returning to the peanut butter sandwich concept as my team refines the tool and as I work with new users. We can set the coding lesson aside and focus on the “Aha!” moment the exercise creates: watching someone enact a literal interpretation of our instructions brings the assumptions baked into our everyday communication into the foreground.
There’s often one user in a group of Ask Profitero testers who gets better results than the others right off the bat. It can look like a magic trick, but this person simply has a knack for seeing their own assumptions clearly, and they can usually spot the assumptions lurking in their colleagues’ failing prompts as well.
Take the following question a user posed to Ask Profitero as an example:
“What are the top out of stock products for Cat Food?”
The response they got? Mainly competitor products, and historical rather than current data. The answer wasn’t relevant, and it frustrated the user. A colleague who was having better luck with the tool jumped in to edit the prompt. They spotted the assumptions in the original and rewrote it as:
“What are my top out of stock products for Cat Food in the last 14 days at Walmart?”
The first user had assumed Ask Profitero knew that they were referring to their own brand’s products and wanted to focus on current inventory issues. The rewritten prompt made these assumptions explicit.
You can see how this played out in the Ask Profitero app below. The second response provides a list of the client’s own products at a specific retailer (Walmart) where they have an opportunity to take action to improve stock rates:
Note: These Ask Profitero conversations reflect actual user prompts run on fake data to protect client privacy. No actual data for any brand is shown here.
If you look closely at the examples above, you’ll notice that in the first image Ask Profitero actually made an assumption of its own. The user didn’t specify a date range, so Ask Profitero defaulted to the last 30 days’ worth of data.
This is because although a precise prompt is a reliable path to good results, the best experience with an AI assistant lets the user speak as naturally as possible. When we speak naturally (as in the peanut butter sandwich exercise), we make lots of assumptions. That’s fine when we’re speaking to another human, who can ask us clarifying questions, but the large language models (LLMs) behind AI assistants can’t always do so. For this reason, the Ask Profitero team needed to make our artificial intelligence more, well, intelligent.
To do this, we built default assumptions into Ask Profitero’s interpretations of user prompts. These provide guardrails that allow the user to ask a broader question and still receive an informed, intelligent response.
The first assumption we added was that if the user doesn’t specify a brand, they’re interested in analyzing both their own products and competitor products. However, if the user’s question specifies “my” or “our,” Ask Profitero assumes the user is only interested in data about their own brand(s).
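If you’re curious what a default like that might look like under the hood, here’s a minimal Python sketch. To be clear, this is not Ask Profitero’s actual implementation; the function name and the list of signal words are assumptions invented purely for illustration:

```python
def infer_brand_scope(prompt: str) -> str:
    """Hypothetical sketch of the brand-scope default described above."""
    ownership_signals = {"my", "our"}  # assumed signal words
    words = {word.strip(".,?!").lower() for word in prompt.split()}
    if words & ownership_signals:
        # "my"/"our" narrows the scope to the user's own brand(s)
        return "own_brands_only"
    # otherwise, include competitor products by default
    return "own_and_competitor_brands"


# The rewritten Cat Food prompt contains "my", so the scope narrows:
print(infer_brand_scope("What are my top out of stock products for Cat Food?"))
# -> own_brands_only
print(infer_brand_scope("What are the top out of stock products for Cat Food?"))
# -> own_and_competitor_brands
```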
Another common problem we saw in failing user prompts was a missing date range. This is a natural way for our users to speak, since they might normally direct their questions to an analyst who would make an informed decision about what date range to consider. Or they might be used to the Profitero application, where their default filter could be set to the trailing month or month-to-date. Either way, unspecified date ranges led to unhelpful responses from Ask Profitero.
So, we added a default date range assumption. For many prompts, Ask Profitero will assume that filtering for the last 30 days of data will be a useful place to start. For other use cases, Ask Profitero may use a more applicable date range, such as the last three months.
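Purely as an illustration, a fallback like this could be sketched as follows. The use-case names here are hypothetical stand-ins, though the 30-day and roughly three-month windows mirror the defaults described above:

```python
from datetime import date, timedelta
from typing import Optional, Tuple

# Hypothetical mapping from use case to default window;
# the real defaults live inside Ask Profitero.
DEFAULT_WINDOW_DAYS = {
    "out_of_stock": 30,    # last 30 days as a useful starting point
    "trend_analysis": 90,  # longer window for use cases like trend analysis
}

def resolve_date_range(
    user_range: Optional[Tuple[date, date]],
    use_case: str,
    today: Optional[date] = None,
) -> Tuple[date, date]:
    """Prefer the user's explicit range; otherwise fall back to a default."""
    if user_range is not None:
        return user_range
    today = today or date.today()
    days = DEFAULT_WINDOW_DAYS.get(use_case, 30)  # 30 days if use case is unknown
    return (today - timedelta(days=days), today)

# No date range in the prompt: assume the trailing 30 days.
print(resolve_date_range(None, "out_of_stock", today=date(2024, 6, 30)))
# -> (datetime.date(2024, 5, 31), datetime.date(2024, 6, 30))
```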
Check out the examples below. Ask Profitero’s built-in assumptions allow it to generate useful responses to broad or exploratory user prompts:
Note: These Ask Profitero conversations reflect actual user prompts run on fake data to protect client privacy. No actual data for any brand is shown here.
We’ve also taken further steps to make the experience conversational by replicating the back-and-forth exchange of clarifying questions that comes naturally between humans. As you can see in the example below, the vaguer my prompt gets, the more Ask Profitero tries to clarify what I’m looking for:
Note: These Ask Profitero conversations reflect actual user prompts run on fake data to protect client privacy. No actual data for any brand is shown here.
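Conceptually, you can think of this clarification step as a completeness check: if a prompt is too vague to fill in the details the assistant needs, it asks rather than guesses. Here’s a hypothetical Python sketch of that loop; the “slot” names and the wording of the question are invented for illustration, not taken from Ask Profitero’s code:

```python
# Hypothetical "slots" the assistant needs before it can run a query.
REQUIRED_SLOTS = ("metric", "product_scope")

def next_turn(parsed_slots: dict) -> str:
    """Ask a clarifying question if the prompt is too vague; otherwise proceed."""
    missing = [slot for slot in REQUIRED_SLOTS if not parsed_slots.get(slot)]
    if missing:
        # Mirror a human analyst: ask rather than guess when key details are absent.
        return f"Could you clarify which {missing[0].replace('_', ' ')} you're interested in?"
    return "Running query..."

print(next_turn({}))  # a vague prompt fills no slots, so the assistant asks
print(next_turn({"metric": "out-of-stock rate", "product_scope": "Cat Food"}))
# -> Running query...
```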
If you ask someone a question, it’s natural for them to seek clarification, and just as natural for you to ask how they arrived at their answer. That transparency was the final addition we made to Ask Profitero to create a truly interactive user experience.
In early versions of Ask Profitero, users had no visibility into how the bot interpreted their question or what kind of query it generated to pull the data for a response. This made failed prompts hard to diagnose. The user in our first example, whose prompt “What are the top out of stock products for Cat Food?” failed, had no way to troubleshoot. Was it the lack of a date range? Did they need to specify a retailer? Was the data they were looking for simply not available?
To solve this problem, we decided to expose Ask Profitero’s backend “thought process” directly to the user, giving them immediate feedback on how their prompt was interpreted.
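In spirit, this works like streaming each interpretation step to the user as it happens, rather than returning only the final answer. The sketch below is a hypothetical illustration of that idea, with invented step names; it’s not how Ask Profitero is actually built:

```python
from typing import Iterator

def think_aloud(prompt: str) -> Iterator[str]:
    """Hypothetical sketch: yield each interpretation step as it completes,
    so the user sees how their prompt was understood before the answer arrives."""
    yield f"Interpreting prompt: {prompt!r}"
    yield "Identified metric: out-of-stock rate"              # assumed parse result
    yield "No date range given; defaulting to last 30 days"   # assumption made visible
    yield "Querying retailer data..."
    yield "Formatting response"

for step in think_aloud("What are my top out of stock products for Cat Food?"):
    print(step)
```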
After being asked to clarify my vague prompt above, I followed up with more specific parameters. As Ask Profitero began thinking, I was able to follow along with the steps it took to interpret and then locate the answer to my question. See how this plays out in the screenshot below:
Note: These Ask Profitero conversations reflect actual user prompts run on fake data to protect client privacy. No actual data for any brand is shown here.
Ask Profitero’s thoughts might sometimes sound a little silly, but they provide insight into the flipside of the peanut butter sandwich concept. Live streaming the thought process allows you, the user, to understand whether your instructions were clear enough, and what it looks like when a computer interprets them literally.
Ask Profitero’s transparency closes the feedback loop between the computer and the user, allowing each to coach the other. The user becomes a better prompter, and Ask Profitero becomes a more intelligent analyst, able to interact naturally and provide increasingly nuanced and actionable insights.
Want to start experimenting with Ask Profitero? You can enable it by navigating to this icon in the lower right corner of your Profitero app, or simply reach out to your account manager. Happy querying!