Copilot analytics

Copilot is an embedded user assistant. We often use a human analog to describe it: imagine you employed an army of human user assistants. They would co-browse with all of your users whenever they log in to your product, watching their sessions, available to answer questions at a moment's notice at any time of day.

You’d have to give those human assistants a performance review. What are they doing well? Where could they be more helpful to users? What additional training do they need to be successful?

As part of that review, you’d also engage them in conversation about the kind of behavior they’re seeing from users and what the rest of the team could do to improve user experience (Kaizen style).

Our Copilot analytics dashboard is designed to let you do the equivalent of the performance review for your Copilot. But without the part where Copilot asks for a raise.

Overview page

Copilot analytics dashboard

Opens

This chart gives you a pulse on how often users are turning to Copilot during the period you have selected in the selector in the top right. Most of our customers see a rhythmic increase during the workweek and a dropoff during the weekend (though Copilot is oblivious and doggedly waits all weekend in case users need assistance).

Note that an Open refers to a situation in which Copilot is opened (revealing its welcome message), whether or not the user submits a query.

Success rate

Success rate is the rate at which Copilot answers a user query without resorting to a fallback message. When Copilot resorts to a fallback message, it is usually because it doesn’t believe it has sufficient information to respond to the user with a better answer.

The denominator of this number is the total number of user queries made to Copilot during the selected period, not the number of Copilot sessions (a single session can include multiple queries).
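The per-query calculation described above can be sketched as follows (the `Query` record and its field names are illustrative assumptions, not CommandBar's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    fell_back: bool  # True if Copilot answered with its fallback message

def success_rate(queries: list[Query]) -> float:
    """Share of individual queries answered without a fallback.

    The denominator is the total number of queries, not sessions --
    a single chat session can contribute several queries.
    """
    if not queries:
        return 0.0
    answered = sum(1 for q in queries if not q.fell_back)
    return answered / len(queries)

queries = [
    Query("How do I reset my password?", fell_back=False),
    Query("What's on the webinar agenda?", fell_back=True),
    Query("Where do I find billing?", fell_back=False),
    Query("How do I export data?", fell_back=False),
]
print(success_rate(queries))  # 3 of 4 queries answered -> 0.75
```

Note that a chat with three queries and one fallback still lowers the rate, even though the session as a whole may have felt successful to the user.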

If this number is going down, it’s likely that your users are asking Copilot questions that aren’t related to any source content you’ve trained it on. This can often happen when:

  • You release a new feature or product and haven’t supplied relevant knowledge base information (or Answers) to Copilot
  • Your user demographic shifts (e.g. because you launched a new marketing channel) and Copilot isn’t yet prepared to answer the types of questions these users are asking
  • A point-in-time event occurs, like a marketing webinar or an outage, that Copilot isn’t aware of

You can view Copilot chats in the table below, and filter the AI Answer category to Fallback to see the types of queries that are generating a fallback. The solution to a declining success rate is likely one of:

  • Connect a new source that contains information relevant to the queries that are currently generating a fallback response
  • Edit content of an existing source
  • Create answers in CommandBar (this is best for information that likely shouldn’t live in any of the source content management systems you’ve connected).

If you aren’t sure why Copilot is falling back for specific queries where it seems like it should have sufficient information to answer the user, please reach out to us.

Unique users

This is another way of understanding how many users are engaging with Copilot. The Opens chart could record multiple opens from a single extremely eager user who just wants to make friends with Copilot.

Chats table

This table lists all chats that users have begun with Copilot in the specific period.

  • The Chat topic is the user’s first query (which usually represents the topic of the rest of the chat, though the user can often start a completely different topic within the same session)
  • AI Answer = whether Copilot responded with a personalized answer (“Generated”), its fallback message (“Fallback”), or an Error. The latter should occur very infrequently, only when CommandBar’s AI infrastructure is experiencing an outage (likely due to an outage with one of our vendors, such as OpenAI). Note that this filter is inclusive; many conversations contain both a generated answer and a fallback response.
  • Feedback = indicates whether the user provided positive or negative feedback during the session. If the user leaves both positive and negative feedback, we’ll show thumbs in both directions in this column 👍👎
  • User ID = the ID of the user corresponding to this chat. Reminder that the format of the User ID is determined by the way CommandBar was installed in your environment (i.e. by your team). CommandBar does not automatically assign user IDs.
  • Date = when the chat occurred

How to use this table:

  • Scan through to get a pulse of the types of questions users ask.
  • Look for deadends by filtering for chats that contain the fallback response.
  • Look for poor responses by filtering for chats that contain negative feedback.
  • Give yourself an ego boost by filtering for chats that contain positive feedback.
  • Identify chats where Copilot made a suggestion to a user. These can help you understand what types of suggestions Copilot is making frequently, which could give you ideas for other nudges to provide to Copilot or actions to wire up.
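The filtering workflows above amount to slicing chat records by their AI Answer and Feedback values. A minimal sketch, assuming a list of dicts whose field names mirror the table's columns (these names are illustrative, not an actual API):

```python
# Hypothetical chat records mirroring the table's columns.
chats = [
    {"topic": "How do I invite a teammate?", "ai_answer": "Generated", "feedback": "positive"},
    {"topic": "Pricing for the new add-on?", "ai_answer": "Fallback",  "feedback": None},
    {"topic": "Export my data",              "ai_answer": "Generated", "feedback": "negative"},
]

# Dead ends: chats that contain a fallback response.
dead_ends = [c for c in chats if c["ai_answer"] == "Fallback"]

# Poor responses: chats with negative feedback.
poor_responses = [c for c in chats if c["feedback"] == "negative"]

# Ego boost: chats with positive feedback.
wins = [c for c in chats if c["feedback"] == "positive"]

print(len(dead_ends), len(poor_responses), len(wins))  # 1 1 1
```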

Note that you can always click on a row in the table to pull up a simulation of the chat the user experienced. From here, you can copy a link (e.g. to share with a colleague, or brag about on Slack) or add an answer (most frequently used to plug a scenario where Copilot is falling back).

Copilot chat simulation