Load and manage data
The fastest and easiest way to load a new table is to import it through the Web browser. This works best for one-time loads of small tables that do not have complex relationships to other tables, and it is limited to tables under 50 MB (megabytes) in size.
Using ThoughtSpot Loader (tsload), you can script recurring loads and work with multi-table schemas.
If your data already exists in another database with the schema you want to use in ThoughtSpot, you can pull the schema and data in using the ODBC or JDBC driver.
There are several methods for loading data:
Use ThoughtSpot Connections to read directly from external databases. This is an easy way to set up and enable a connection between ThoughtSpot and external databases: users can send live queries to the external databases without having to replicate the data in ThoughtSpot.
Use ThoughtSpot DataFlow to import data from a large variety of databases, file systems, and apps. You can schedule automated data updates, configure validation rules, and specify custom mappings between tables and columns in the original storage and in internal ThoughtSpot storage.
Use the ThoughtSpot Web interface to upload an Excel or CSV (comma-separated values) file from your local machine. This method provides a quick and easy way to complete one-time data loads of small files, under 50 MB. Users can upload their own data and explore it quickly.
Use TQL and tsload to load data directly into the backend database that ThoughtSpot uses. This is a programmatic approach to loading large amounts of data or a schema with multiple tables. You can script all the necessary commands and use them in recurring data loading jobs; for example, to upload monthly sales results or daily logs. This approach can also be integrated with an ETL solution for automation.
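A recurring load of this kind might be scripted roughly as follows. The database, table, and file names here are hypothetical, and the exact tsload flags can vary by ThoughtSpot release, so treat this as a sketch rather than a definitive recipe:

```shell
# Create the schema once; sales_schema.sql would contain the
# CREATE DATABASE / CREATE TABLE statements (hypothetical file name).
tql < sales_schema.sql

# Load this month's results, replacing the table's existing contents.
tsload --target_database sales_db \
       --target_table monthly_results \
       --source_file /data/monthly_results.csv \
       --field_separator "," \
       --has_header_row \
       --empty_target
```

Dropping --empty_target appends rows instead of replacing the table's contents, which suits incremental jobs such as daily log loads.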
You can use ODBC and JDBC drivers to connect to ThoughtSpot. ODBC and JDBC clients work well with your favorite ETL tool, so you can make use of established ETL processes. You can also connect to ThoughtSpot using third-party tools like SSIS; with this approach, you do not need to define a schema in advance to accept the data load.
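As a sketch of the ODBC path: once the ThoughtSpot ODBC driver is registered as a data source in odbc.ini, a generic client such as unixODBC's isql can issue queries. The DSN name, credentials, and table below are all hypothetical placeholders:

```shell
# Assumes a DSN named "ThoughtSpot" is defined in odbc.ini;
# user and password are placeholders for your own credentials.
echo "SELECT COUNT(*) FROM sales_db.monthly_results;" | isql -b ThoughtSpot tsadmin secret
```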
Use secondary disks or your NAS storage for data loads. Do NOT use the primary disk, at locations such as