Connect to your Large Language Model (LLM)
ThoughtSpot provides a fully managed "ThoughtSpot Enabled Auto Select" LLM configuration that gives you access to the latest, high-performance models with no setup required. We strongly recommend this option for the best performance and experience.
For organizations with strict compliance or governance requirements that prevent the use of a managed service, ThoughtSpot also supports connecting to your own private LLM endpoints. This "Bring Your Own LLM Key" (BYOLLM Key) option allows you to exercise full control over your AI stack.
When you connect to your own LLM, you are responsible for its performance, cost, and maintenance.
Enabling your own LLM (BYOLLM Key)
Enabling a BYOLLM Key connection is a guided process managed by our Support team to ensure compatibility and performance.
To get started, contact ThoughtSpot Support. The team will run a compatibility check with you and walk you through setup.
Prerequisites for setup
Before contacting ThoughtSpot support, gather the required credentials for your specific LLM provider. This will expedite the setup and compatibility check.
We currently support connections to Azure OpenAI, Google Vertex AI, and custom LLM gateways.
For Azure OpenAI
You will need:
- Your endpoint URL.
- An API key.
- The deployment names for your Default models.
For steps on how to create an Azure OpenAI resource, see this article.
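To sanity-check these values before contacting Support, you can send a small test request yourself. The sketch below assumes the openai Python package; the endpoint, API key, API version, and deployment name are placeholders for your own values, not ThoughtSpot-specific settings.

```python
# A minimal sketch for verifying Azure OpenAI credentials before setup.
# Requires the "openai" Python package; all values below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com",  # your endpoint URL
    api_key="YOUR_API_KEY",                                   # your API key
    api_version="2024-02-01",                                 # an API version your resource supports
)

# For Azure, "model" is the deployment name you created, not the base model name.
response = client.chat.completions.create(
    model="your-deployment-name",
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(response.choices[0].message.content)
```

If this call returns a completion, the endpoint URL, API key, and deployment name are all valid.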
For Google Vertex AI
You will need:
- The JSON key file for a service account with the Vertex AI User role.
- Your Google Cloud Project ID.
- The Region for your models (for example, us-central1).
- The Model IDs for your models.
For steps on how to create Service account keys, see this article.
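You can verify the key file, project ID, region, and model ID together with a short test call. The sketch below assumes the google-cloud-aiplatform Python package; every value shown, including the model ID, is a placeholder for your own configuration.

```python
# A minimal sketch for verifying Vertex AI access with a service-account key.
# Requires "google-cloud-aiplatform"; all values below are placeholders.
import vertexai
from google.oauth2 import service_account
from vertexai.generative_models import GenerativeModel

credentials = service_account.Credentials.from_service_account_file(
    "path/to/service-account-key.json",  # key for a service account with the Vertex AI User role
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

vertexai.init(
    project="your-gcp-project-id",  # your Google Cloud Project ID
    location="us-central1",         # the region where your models are served
    credentials=credentials,
)

# Use one of the model IDs you plan to register with ThoughtSpot.
model = GenerativeModel("your-model-id")
print(model.generate_content("Reply with OK if you can read this.").text)
```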
For a custom LLM gateway
You will need:
- Your Gateway URL (the base URL of your gateway endpoint).
- The Default Model Name your gateway expects.
- Authentication details:
  - For API Key Auth: Your API key.
  - For OAuth 2.0 Auth: Your Token Endpoint URL, Client ID, and Client Secret.

Your gateway must be fully compatible with the OpenAI /v1/chat/completions API specification and have tool calling enabled; one way to verify this is sketched below.
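The sketch below is one way to check both requirements before contacting Support. It assumes the requests Python package; every URL, model name, and credential is a placeholder, and the get_weather tool is a dummy definition used only to confirm that the gateway accepts a tools array.

```python
# A minimal sketch for checking that a gateway speaks the OpenAI
# /v1/chat/completions dialect and accepts a "tools" array.
# Requires "requests"; every URL, name, and credential below is a placeholder.
import requests

GATEWAY_URL = "https://gateway.example.com"   # your gateway's base URL
MODEL_NAME = "your-default-model"             # the default model name your gateway expects

# Option A: API key auth.
headers = {"Authorization": "Bearer YOUR_API_KEY"}

# Option B: OAuth 2.0 client-credentials auth. Fetch a token from your
# token endpoint, then send it in the same Authorization header.
# token = requests.post(
#     "https://auth.example.com/oauth/token",  # your token endpoint URL
#     data={
#         "grant_type": "client_credentials",
#         "client_id": "YOUR_CLIENT_ID",
#         "client_secret": "YOUR_CLIENT_SECRET",
#     },
# ).json()["access_token"]
# headers = {"Authorization": f"Bearer {token}"}

payload = {
    "model": MODEL_NAME,
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    # A dummy tool definition: a compatible gateway must accept this field
    # and may respond with a tool call instead of plain text.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

resp = requests.post(f"{GATEWAY_URL}/v1/chat/completions", headers=headers, json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"])  # expect either content or tool_calls
```

A compatible gateway returns an OpenAI-style choices array; if the message contains tool_calls, tool calling is working end to end.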
Supported models
For the best results when connecting to your own LLM, we recommend using the following models or their equivalents:
- GPT-4.1
- Claude Sonnet 4
You must ensure that the models you configure are available at your specified endpoint.
Migration policy for existing connections
If you already have Spotter enabled with an existing LLM configuration, it will continue to function without interruption. Previous configurations are migrated as follows:
- Azure OpenAI configurations are automatically migrated to ThoughtSpot Enabled: Auto Select. If you wish to continue using your specific Azure OpenAI instance, set the LLM configuration to ThoughtSpot Enabled Azure OpenAI Connection.
- Google Gemini configurations remain as Google Gemini.
Best practices
For the best experience and the lowest maintenance overhead, we strongly recommend ThoughtSpot Enabled Auto Select. This ensures you always have access to the latest models, fully managed and optimized by ThoughtSpot.
The BYOLLM Key path is a specialized option intended for organizations with specific, non-negotiable compliance or governance requirements that prevent the use of a managed LLM.