AI security

CommandBar uses various AI vendors to provide certain features. This doc answers questions you might have about the security and privacy of these systems.

How is CommandBar's AI training data sourced?

CommandBar does not train models. Instead, we use off-the-shelf models provided by OpenAI (and we are exploring other vendors as well).

See below for a description of the pipeline we use to turn customer-provided content into custom answers within Copilot.

All customer-provided content is supplied explicitly by the customer and includes: web pages (e.g. marketing site), help docs, and human-endorsed answers ("Answers").
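For illustration, here is a minimal sketch of how these content sources might be represented before being indexed for retrieval. The type and field names are hypothetical, chosen for the example rather than taken from CommandBar's implementation.

```python
from dataclasses import dataclass
from enum import Enum

class SourceType(Enum):
    WEB_PAGE = "web_page"    # e.g. marketing site pages
    HELP_DOC = "help_doc"    # help-center articles
    ANSWER = "answer"        # human-endorsed answers ("Answers")

@dataclass
class ContentSource:
    """One piece of customer-provided content, scoped to a single customer."""
    organization_id: str      # keeps content isolated per customer
    source_type: SourceType
    url: str | None           # present for web pages and help docs
    text: str                 # the content that gets indexed for retrieval
```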

Does CommandBar have any safety features designed to minimize identified AI risks?

We've taken steps to mitigate several risk vectors:

  • Hallucinations: our answer pipeline incorporates several steps to ensure that answers are based only on the source material provided, not general knowledge that may be incorrect or out of date.
  • Prompt injection: while this is not a major risk (our prompts do not contain sensitive information), the pipeline also flags malicious user inputs designed to extract the prompt supplied to the LLM that generates responses.
  • Suboptimal answers: the model will not always answer perfectly. To address this, customers can "correct" the bot by providing a human-endorsed answer to a user question, which the bot will use for future responses; a simplified sketch of these last two mitigations follows this list.
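The sketch below illustrates two of these mitigations: flagging inputs that look like attempts to extract the underlying prompt, and letting human-endorsed Answers take precedence over generated responses. All names, patterns, and data here are hypothetical examples, not CommandBar's actual code.

```python
# Illustrative sketch only: injection flagging and human-endorsed Answer precedence.

# Hypothetical store of customer-supplied, human-endorsed answers.
ENDORSED_ANSWERS: dict[str, str] = {
    "how do i reset my password?": "Go to Settings > Security > Reset password.",
}

# Crude example patterns; a real filter would be more sophisticated.
INJECTION_PATTERNS = ("ignore previous instructions", "reveal your prompt", "system prompt")

def flag_prompt_injection(question: str) -> bool:
    """Flag inputs that appear designed to extract the prompt given to the LLM."""
    lowered = question.lower()
    return any(pattern in lowered for pattern in INJECTION_PATTERNS)

def resolve_question(question: str) -> str | None:
    """Return a human-endorsed Answer if one exists; None means 'generate an answer'."""
    if flag_prompt_injection(question):
        return "Sorry, I can't help with that."
    # Corrections supplied by the customer take precedence over generated responses.
    return ENDORSED_ANSWERS.get(question.strip().lower())
```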

Is a new model created using customer data?

No. Our pipeline does not involve any model creation at all. Instead, when a user asks a question we perform a semantic search over the customer-provided content (docs, web pages, FAQs) and then supply the most relevant information to the LLM, which "condenses" and summarizes that content to produce a custom answer specific to the user's question.
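As a rough illustration of this retrieve-then-summarize flow, here is a simplified sketch using the OpenAI Python SDK. The model names, the in-memory ranking, and the helper structure are assumptions made for the example; they are not CommandBar's actual implementation.

```python
# Simplified retrieve-then-summarize flow (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    """Embed a piece of text so it can be compared semantically."""
    response = client.embeddings.create(model="text-embedding-3-small", input=text)
    return response.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def answer(question: str, customer_chunks: list[str], top_k: int = 3) -> str:
    # 1. Semantic search: rank the customer's own content by similarity to the question.
    question_embedding = embed(question)
    ranked = sorted(
        customer_chunks,
        key=lambda chunk: cosine_similarity(question_embedding, embed(chunk)),
        reverse=True,
    )
    context = "\n---\n".join(ranked[:top_k])

    # 2. "Condense" step: the LLM summarizes the retrieved content into a specific
    #    answer, with instructions to stay within the provided sources.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided sources."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content
```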

It goes without saying that customer content is never shared across customers; doing so would be counterproductive to a well-functioning custom assistant purpose-built for your own application.

Do you train models on customer data?

At CommandBar we don’t use customer data to train our own models, nor do we allow our vendors to do so. At present, we use OpenAI’s APIs to power our end-user-facing experiences. You can read their full data usage policy here, but the most relevant snippet is:

OpenAI will not use data submitted by customers via our API to train or improve our models, unless you explicitly decide to share your data with us for this purpose.