# Table engines for integrations
ClickHouse provides various means for integrating with external systems, including table engines. As with all other table engines, the configuration is done using `CREATE TABLE` or `ALTER TABLE` queries. From the user's perspective, the configured integration looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, such as dictionaries or table functions, which require custom access methods on each use.
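As a sketch of how such an integration is configured, the example below creates a table backed by the MySQL engine. The host, database, table, and credentials are placeholders, not real endpoints:

```sql
-- Hypothetical example: expose a remote MySQL table inside ClickHouse.
-- 'mysql-host:3306', 'shop', 'orders', and the credentials are placeholders.
CREATE TABLE mysql_orders
(
    id UInt64,
    amount Decimal(18, 2),
    created_at DateTime
)
ENGINE = MySQL('mysql-host:3306', 'shop', 'orders', 'user', 'password');

-- The table now behaves like any other ClickHouse table;
-- this query is transparently proxied to the MySQL server.
SELECT id, amount
FROM mysql_orders
WHERE created_at > now() - INTERVAL 1 DAY;
```

Once created, the table can be used in joins, views, and `INSERT ... SELECT` statements like any local table, which is what distinguishes this approach from calling a table function on each query.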
| Page | Description |
|---|---|
| AzureBlobStorage table engine | This engine provides an integration with the Azure Blob Storage ecosystem. |
| DeltaLake table engine | This engine provides a read-only integration with existing Delta Lake tables in Amazon S3. |
| EmbeddedRocksDB table engine | This engine allows integrating ClickHouse with RocksDB. |
| ExternalDistributed table engine | The ExternalDistributed engine allows performing SELECT queries on data stored on remote MySQL or PostgreSQL servers. It accepts the MySQL or PostgreSQL engine as an argument, so sharding is possible. |
| TimeSeries table engine | A table engine storing time series, i.e. a set of values associated with timestamps and tags (or labels). |
| HDFS table engine | This engine provides integration with the Apache Hadoop ecosystem by allowing you to manage data on HDFS via ClickHouse. This engine is similar to the File and URL engines, but provides Hadoop-specific features. |
| Hive table engine | The Hive engine allows you to perform SELECT queries on Hive tables stored in HDFS. |
| Hudi table engine | This engine provides a read-only integration with existing Apache Hudi tables in Amazon S3. |
| Iceberg table engine | This engine provides a read-only integration with existing Apache Iceberg tables in Amazon S3, Azure, HDFS and locally stored tables. |
| JDBC table engine | Allows ClickHouse to connect to external databases via JDBC. |
| Kafka table engine | The Kafka table engine works with Apache Kafka and lets you publish or subscribe to data flows, organize fault-tolerant storage, and process streams as they become available. |
| MaterializedPostgreSQL table engine | Creates a ClickHouse table with an initial data dump of a PostgreSQL table and starts the replication process. |
| MongoDB table engine | The MongoDB engine is a read-only table engine that allows reading data from a remote MongoDB collection. |
| MySQL table engine | The MySQL engine allows SELECT and INSERT queries on data stored on a remote MySQL server. |
| NATS table engine | This engine allows integrating ClickHouse with NATS to publish or subscribe to message subjects, and process new messages as they become available. |
| ODBC table engine | Allows ClickHouse to connect to external databases via ODBC. |
| PostgreSQL table engine | The PostgreSQL engine allows SELECT and INSERT queries on data stored on a remote PostgreSQL server. |
| RabbitMQ table engine | This engine allows integrating ClickHouse with RabbitMQ. |
| Redis table engine | This engine allows integrating ClickHouse with Redis. |
| S3 table engine | This engine provides integration with the Amazon S3 ecosystem. Similar to the HDFS engine, but provides S3-specific features. |
| AzureQueue table engine | This engine provides an integration with the Azure Blob Storage ecosystem, allowing streaming data import. |
| S3Queue table engine | This engine provides integration with the Amazon S3 ecosystem and allows streaming imports. Similar to the Kafka and RabbitMQ engines, but provides S3-specific features. |
| SQLite table engine | The engine allows importing and exporting data to SQLite, and supports querying SQLite tables directly from ClickHouse. |
| YTsaurus table engine | Table engine that allows importing data from a YTsaurus cluster. |
| ArrowFlight table engine | The engine allows querying remote datasets via Apache Arrow Flight. |
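The engines above follow the same pattern shown for MySQL. As one more hedged sketch, this example wires a table to data in object storage via the S3 engine; the bucket URL is a placeholder:

```sql
-- Hypothetical example: a table backed by Parquet files in S3.
-- The bucket URL is a placeholder; credentials can also be passed as
-- additional arguments before the format name.
CREATE TABLE s3_events
(
    event_date Date,
    user_id    UInt64,
    payload    String
)
ENGINE = S3('https://my-bucket.s3.amazonaws.com/events/*.parquet', 'Parquet');

-- Queries read directly from the matching objects in the bucket.
SELECT count()
FROM s3_events
WHERE event_date = today();
```

The same `CREATE TABLE ... ENGINE = ...` shape applies to the other integrations listed in the table; only the engine name and its arguments change.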