Data caching
ThoughtSpot performs all analysis against data in memory, which enables fast results across millions or even billions of records. ThoughtSpot caches data as relational tables in memory; the tables can be sourced from different data sources and joined together. ThoughtSpot supports several approaches for getting data into the cluster.
JDBC and ODBC Drivers
ThoughtSpot provides JDBC and ODBC drivers that can be used to write data to ThoughtSpot. This is useful for customers who have an existing ETL process or tool and want to extend it to populate the ThoughtSpot cache.
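The sketch below shows what such an ETL extension might look like using standard JDBC batch inserts. The connection URL, credentials, and the `daily_sales` table are hypothetical placeholders; the actual URL format and driver class come from the ThoughtSpot JDBC driver documentation, and the driver jar must be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ThoughtSpotLoad {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; the exact JDBC URL format comes from the
        // ThoughtSpot JDBC driver documentation for your release.
        String url = "jdbc:thoughtspot://ts-cluster.example.com:12345/sales_db";

        try (Connection conn = DriverManager.getConnection(url, "tsadmin", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO daily_sales (sale_date, store_id, amount) VALUES (?, ?, ?)")) {
            conn.setAutoCommit(false);

            // In a real ETL flow, rows would come from the upstream source;
            // two literal rows stand in for that here.
            ps.setDate(1, java.sql.Date.valueOf("2024-01-15"));
            ps.setInt(2, 42);
            ps.setBigDecimal(3, new java.math.BigDecimal("199.95"));
            ps.addBatch();

            ps.setDate(1, java.sql.Date.valueOf("2024-01-15"));
            ps.setInt(2, 7);
            ps.setBigDecimal(3, new java.math.BigDecimal("54.10"));
            ps.addBatch();

            // Send all buffered rows in one round trip, then commit.
            ps.executeBatch();
            conn.commit();
        }
    }
}
```

Batching the inserts and committing once reduces round trips to the cluster, which matters for recurring daily loads.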
JDBC and ODBC drivers are appropriate under the following circumstances:
- You have an existing ETL tool, such as Informatica or SSIS
- You have available resources to create and manage ETL processes
- You have smaller daily loads
tsload
You can use the tsload command-line tool to bulk-load delimited data with very high throughput.
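A typical invocation looks like the following sketch. The database, table, and file names are hypothetical, and the flag names should be verified against the tsload reference for your release:

```bash
tsload --target_database sales_db \
       --target_table daily_sales \
       --source_file /data/daily_sales.csv \
       --field_separator "," \
       --has_header_row \
       --empty_target
```

Here --empty_target replaces the existing contents of the target table, which suits an initial load; omit it when appending incremental data.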
In addition, individual users can upload smaller (less than 50 MB) spreadsheets or delimited files.
We recommend the tsload approach in the following cases:
- You are performing an initial data load
- JDBC or ODBC drivers are not available
- You have large recurring daily loads
- You need higher throughput, although this can add I/O costs
Choosing a Data Caching Strategy
The approach you choose depends on your environment and data needs. There are, of course, tradeoffs between different data caching options.
Many implementations combine approaches. For example, a solution with a large amount of initial data and smaller daily increments might use tsload for the initial load, and then use the JDBC driver with an ETL tool for incremental loads.