Copilot analytics

Copilot is an embedded user assistant. We often use a human analog to describe it: imagine you employed an army of human user assistants. They would co-browse with all of your users whenever they log in to your product, watching their sessions, available to answer questions at a moment’s notice at any time of day.

You’d have to give those human assistants a performance review. What are they doing well? Where could they be more helpful to users? What additional training do they need to be successful?

As part of that review, you’d also engage them in conversation about the kind of behavior they’re seeing from users and what the rest of the team could do to improve user experience (Kaizen style).

Our Copilot analytics dashboard is designed to let you do the equivalent of the performance review for your Copilot. But without the part where Copilot asks for a raise.

Overview page

Copilot analytics dashboard


This chart gives you a pulse on how often users are turning to Copilot during the period you have selected in the selector in the top right. Most of our customers see a rhythmic increase during the workweek and a dropoff during the weekend (though Copilot is oblivious and doggedly waits all weekend in case users need assistance).

Note that an Open refers to a situation in which Copilot is opened (revealing its welcome message), whether or not the user submits a query.

Success rate

Success rate is the rate at which Copilot answers a user query without resorting to a fallback message. When Copilot resorts to a fallback message, it is usually because it doesn’t believe it has sufficient information to respond to the user with a better answer.

The denominator of this number is the total number of user queries made to Copilot during the selected period, not the number of Copilot sessions (which could include multiple queries).
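As a rough sketch of the arithmetic (the query records below are hypothetical and only illustrative, not CommandBar’s actual schema), the success rate divides non-fallback answers by total queries:

```python
# Hypothetical sketch of the success-rate arithmetic; the data shape
# is illustrative, not CommandBar's actual data model.
def success_rate(queries):
    """Each query dict records whether Copilot fell back on it."""
    if not queries:
        return 0.0
    answered = sum(1 for q in queries if not q["fallback"])
    # Denominator is total queries in the period, not chat sessions.
    return answered / len(queries)

# Three answered queries, one fallback -> 0.75
print(success_rate([
    {"fallback": False},
    {"fallback": False},
    {"fallback": True},
    {"fallback": False},
]))
```

Because the denominator is queries rather than sessions, a single chat with several questions contributes several data points to this metric.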

If this number is going down, it’s likely that your users are asking Copilot questions that aren’t related to any source content you’ve trained it on. This can often happen when:

  • You release a new feature or product and haven’t supplied relevant knowledge base information (or Answers) to Copilot
  • Your user demographic shifts (e.g. because you launched a new marketing channel) and Copilot isn’t yet prepared to answer the types of questions these users are asking
  • A point-in-time event occurs, like a marketing webinar or an outage, that Copilot isn’t aware of

You can view Copilot chats in the table below, and filter the AI Answer category to Fallback to see the types of queries that are generating a fallback. The solution to a declining success rate is likely one of:

  • Connect a new source that contains information relevant to the queries that are currently generating a fallback response
  • Edit content of an existing source
  • Create answers in CommandBar (this is best for information that likely shouldn’t live in any of the source content management systems you’ve connected).

If you aren’t sure why Copilot is falling back for specific queries where it seems like it should have sufficient information to answer the user, please reach out to us.

Unique users

This is another way of understanding how many users are engaging with Copilot. The Opens chart could record multiple opens from a single extremely eager user who just wants to make friends with Copilot.
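As an illustration of the difference between the two metrics (the event records here are hypothetical, not CommandBar’s actual analytics schema): Opens counts every open event, while Unique users deduplicates by user ID:

```python
# Illustrative only: the event records are hypothetical, not
# CommandBar's actual analytics schema.
open_events = [
    {"user_id": "u1"},
    {"user_id": "u2"},
    {"user_id": "u1"},  # the same eager user, opening Copilot again
    {"user_id": "u1"},
]

opens = len(open_events)                                 # every open event
unique_users = len({e["user_id"] for e in open_events})  # deduplicated by ID
print(opens, unique_users)  # 4 opens, 2 unique users
```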

Chats table

This table lists all chats that users have begun with Copilot in the selected period.

  • The Topic is the user’s first query (which usually represents the topic of the rest of the chat, though the user can often start a completely different topic within the same session)
  • Answer = indicates whether Copilot responded with a personalized answer ("Generated"), its fallback message ("Fallback"), or an Error. The latter should occur very infrequently, only when CommandBar’s AI infrastructure is experiencing an outage (likely due to an outage with one of our vendors, such as OpenAI). Note that this filter is inclusive; many conversations contain both a generated answer and a fallback response.
  • Suggestion = indicates whether Copilot suggested any nudge experiences at some point during the chat.
  • Feedback = indicates whether the user provided positive or negative feedback during the session. If the user leaves both positive and negative feedback, we’ll show thumbs in both directions in this column 👍👎
  • Escalation = indicates whether a fallback message was provided and whether the user initiated an agent handoff (using one of our supported integrations)
  • Tags = any tags that have been attached to the chat by your team
  • User ID = the ID of the user corresponding to this chat. Reminder that the format of the User ID is determined by the way CommandBar was installed in your environment (i.e. by your team). CommandBar does not automatically assign user IDs.
  • Sentiment = CommandBar automatically tags chats with one of three sentiments based on the user's messages: positive, neutral, negative.
  • Message count = the number of messages the user sent during the chat. Longer threads typically mean a more frustrating experience for the user (often because the user had to clarify their original question). But this isn’t always true: we frequently see long chats from a user digging deep into a topic with Copilot, or asking several unrelated questions in the same chat session.
  • Date = when the chat began
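To make the columns above concrete, here is one way a single row might be modeled. This is a hypothetical sketch for illustration, not CommandBar’s actual data model or API:

```python
# Hypothetical shape of one row in the chats table (illustrative only;
# field names and types are assumptions, not CommandBar's real schema).
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class ChatRow:
    topic: str                # the user's first query
    answer: str               # "Generated", "Fallback", or "Error"
    suggestion: bool          # did Copilot suggest a nudge experience?
    feedback: Optional[str]   # "positive", "negative", "both", or None
    escalation: bool          # did the user initiate an agent handoff?
    user_id: str = ""         # format determined by your installation
    sentiment: str = "neutral"  # "positive", "neutral", or "negative"
    message_count: int = 0
    tags: list = field(default_factory=list)       # tags added by your team
    date: datetime = field(default_factory=datetime.now)  # when chat began

row = ChatRow(
    topic="How do I export my data?",
    answer="Fallback",
    suggestion=False,
    feedback=None,
    escalation=True,
)
```

A record like this maps one-to-one onto the filters described above: filtering the table for fallbacks, negative feedback, or suggestions is just selecting rows by these fields.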

How to use this table:

  • Scan through to get a pulse of the types of questions users ask.
  • Look for deadends by filtering for chats that contain the fallback response.
  • Look for poor responses by filtering for chats that contain negative feedback.
  • Give yourself an ego boost by filtering for chats that contain positive feedback.
  • Identify chats where Copilot made a suggestion to a user. These can help you understand what types of suggestions Copilot is making frequently, which could give you ideas for other nudges to provide to Copilot or actions to wire up.

Note that you can always click on a row in the table to pull up a simulation of the chat the user experienced. From here, you can copy a link (e.g. to share with a colleague, or brag about on Slack) or add an answer (most frequently used to plug a scenario where Copilot is falling back).

Because there are many columns to view, you can choose which columns are visible in your chats table.

Copilot analytics table


Chats can be searched by keyword or by cited source (which could be a document, passage, or answer).

Searching Copilot chats


Every chat can be tagged with any text string. This creates a way for your team to categorize chats. Some common use cases of tags include:

  • Tagging chats which suggest new content should be created (such as a chat that included a fallback for a topic where there was no relevant passage)
  • Tagging chats for which you want to create an answer in the future
  • Tagging chats that involve certain topics for later analysis

You can filter the chats table for any tag you've created.

Tagging Copilot chats