
Commit ebb904b

chore: spelling
1 parent 10205e9 commit ebb904b

File tree

5 files changed: +16 −10 lines


docs/cloud/features/02_integrations.md

Lines changed: 4 additions & 4 deletions

@@ -11,7 +11,7 @@ import Msksvg from '@site/static/images/integrations/logos/msk.svg';
 import Azureeventhubssvg from '@site/static/images/integrations/logos/azure_event_hubs.svg';
 import Warpstreamsvg from '@site/static/images/integrations/logos/warpstream.svg';
 import S3svg from '@site/static/images/integrations/logos/amazon_s3_logo.svg';
-import Amazonkinesis from '@site/static/images/integrations/logos/amazon_kinesis_logo.svg';
+import AmazonKinesis from '@site/static/images/integrations/logos/amazon_kinesis_logo.svg';
 import Gcssvg from '@site/static/images/integrations/logos/gcs.svg';
 import DOsvg from '@site/static/images/integrations/logos/digitalocean.svg';
 import ABSsvg from '@site/static/images/integrations/logos/azureblobstorage.svg';
@@ -41,9 +41,9 @@ ClickPipes can be used for long-term streaming needs or one-time data loading jo
 | WarpStream | <Warpstreamsvg class="image" alt="WarpStream logo" style={{width: '3rem'}}/> |Streaming| Stable | Configure ClickPipes and start ingesting streaming data from WarpStream into ClickHouse Cloud. |
 | Amazon S3 | <S3svg class="image" alt="Amazon S3 logo" style={{width: '3rem', height: 'auto'}}/> |Object Storage| Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
 | Google Cloud Storage | <Gcssvg class="image" alt="Google Cloud Storage logo" style={{width: '3rem', height: 'auto'}}/> |Object Storage| Stable | Configure ClickPipes to ingest large volumes of data from object storage. |
-| DigitalOcean Spaces | <DOsvg class="image" alt="Digital Ocean logo" style={{width: '3rem', height: 'auto'}}/> | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage.
-| Azure Blob Storage | <ABSsvg class="image" alt="Azure Blob Storage logo" style={{width: '3rem', height: 'auto'}}/> | Object Storage | Private Beta | Configure ClickPipes to ingest large volumes of data from object storage.
-| [Amazon Kinesis](/integrations/clickpipes/kinesis) | <Amazonkinesis class="image" alt="Amazon Kenesis logo" style={{width: '3rem', height: 'auto'}}/> |Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Amazon Kinesis into ClickHouse cloud. |
+| DigitalOcean Spaces | <DOsvg class="image" alt="Digital Ocean logo" style={{width: '3rem', height: 'auto'}}/> | Object Storage | Stable | Configure ClickPipes to ingest large volumes of data from object storage.
+| Azure Blob Storage | <ABSsvg class="image" alt="Azure Blob Storage logo" style={{width: '3rem', height: 'auto'}}/> | Object Storage | Private Beta | Configure ClickPipes to ingest large volumes of data from object storage.
+| [Amazon Kinesis](/integrations/clickpipes/kinesis) | <AmazonKinesis class="image" alt="Amazon Kinesis logo" style={{width: '3rem', height: 'auto'}}/> |Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Amazon Kinesis into ClickHouse cloud. |
 | [Postgres](/integrations/clickpipes/postgres) | <Postgressvg class="image" alt="Postgres logo" style={{width: '3rem', height: 'auto'}}/> |DBMS| Stable | Configure ClickPipes and start ingesting data from Postgres into ClickHouse Cloud. |
 | [MySQL](/integrations/clickpipes/mysql) | <Mysqlsvg class="image" alt="MySQL logo" style={{width: '3rem', height: 'auto'}}/> |DBMS| Private Beta | Configure ClickPipes and start ingesting data from MySQL into ClickHouse Cloud. |
 | [MongoDB](/integrations/clickpipes/mongodb) | <Mongodbsvg class="image" alt="MongoDB logo" style={{width: '3rem', height: 'auto'}}/> |DBMS| Private Preview | Configure ClickPipes and start ingesting data from MongoDB into ClickHouse Cloud. |

docs/cloud/onboard/01_discover/02_use_cases/00_overview.md

Lines changed: 1 addition & 1 deletion

@@ -17,5 +17,5 @@ Broadly, the most common use cases for ClickHouse Cloud are:
 |------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | [Real-Time analytics](/cloud/get-started/cloud/use-cases/real-time-analytics) | ClickHouse Cloud excels at real-time analytics by delivering sub-second query responses on billions of rows through its columnar storage architecture and vectorized execution engine. The platform handles high-throughput data ingestion of millions of events per second while enabling direct queries on raw data without requiring pre-aggregation. Materialized Views provide real-time aggregations and pre-computed results, while approximate functions for quantiles and counts deliver instant insights perfect for interactive dashboards and real-time decision making. |
 | [Observability](/cloud/get-started/cloud/use-cases/observability) | ClickHouse Cloud is well suited for observability workloads, featuring specialized engines and functions optimized for time-series data that can ingest and query terabytes of logs, metrics, and traces with ease. Through ClickStack, ClickHouse's comprehensive observability solution, organizations can break down the traditional three silos of logs, metrics, and traces by unifying all observability data in a single platform, enabling correlated analysis and eliminating the complexity of managing separate systems. This unified approach makes it ideal for application performance monitoring, infrastructure monitoring, and security event analysis at enterprise scale, with ClickStack providing the tools and integrations needed for complete observability workflows without data silos. |
-| [Data warehousing](/cloud/get-started/cloud/use-cases/data_lake_and_warehouse) | ClickHouse's data warehousing ecosystem connectivity allows users to get set up with a few clicks, and easily get their data into ClickHouse. With excellent support for hsitorical data analysis, data lakes, query federation and JSON as a native data type it enables users to store their data with cost efficiency at scale. |
+| [Data warehousing](/cloud/get-started/cloud/use-cases/data_lake_and_warehouse) | ClickHouse's data warehousing ecosystem connectivity allows users to get set up with a few clicks, and easily get their data into ClickHouse. With excellent support for historical data analysis, data lakes, query federation and JSON as a native data type it enables users to store their data with cost efficiency at scale. |
 | [Machine Learning and Artificial Intelligence](/cloud/get-started/cloud/use-cases/AI_ML) | ClickHouse Cloud can be used across the ML value chain, from exploration and preparation through to training, testing and inference. Tools like Clickhouse-local, Clickhouse-server and chdb can be used for data exploration, discovery and transformation, while ClickHouse can be used as a feature store, vector store or MLOps observability store. Furthermore, it enables agentic analytics through built-in tools like fully managed remote MCP server, inline text completion for queries, AI-powered chart configuration and Ask AI in product. |

docs/cloud/onboard/01_discover/02_use_cases/04_machine_learning_and_genAI/01_machine_learning.md

Lines changed: 4 additions & 4 deletions

@@ -17,7 +17,7 @@ Regardless of whether this myth holds true or not, what does remain true is that
 Whether you’re building RAG pipelines, fine-tuning, training your own model, or evaluating model performance, data is the root of each problem.

 Managing data can be tricky, and as a byproduct, the space has experienced a proliferation of tools that are designed to boost productivity by solving a specific slice of a machine learning data problem.
-Oftentimes, this takes shape as a layer of abstraction around a more general-purpose solution with an opinionated interface that, on the surface, makes it easier to apply to the specific subproblem at hand.
+Oftentimes, this takes shape as a layer of abstraction around a more general-purpose solution with an opinionated interface that, on the surface, makes it easier to apply to the specific sub problem at hand.
 In effect, this reduces the flexibility that exists with a general-purpose solution in favor of ease-of-use and simplicity of a specific task.

 <Image img={machine_learning_data_layer} size="sm"/>
@@ -52,7 +52,7 @@ This process of evaluation and understanding is an iterative one, often resultin
 As companies store increasing amounts of data to leverage for machine learning purposes, the problem of examining the data you have becomes harder.

 This is because analytics and evaluation queries often become tediously or prohibitively slow at scale with traditional data systems.
-Some of the big players impose significantly increased costs to bring down query times, and disincentivize ad-hoc evaluation by way of charging per query or by number of bytes scanned.
+Some of the big players impose significantly increased costs to bring down query times, and discourage ad-hoc evaluation by way of charging per query or by number of bytes scanned.
 Engineers may resort to pulling subsets of data down to their local machines as a compromise for these limitations.

 ClickHouse, on the other hand, is a real-time data warehouse, so users benefit from industry-leading query speeds for analytical computations.
@@ -76,7 +76,7 @@ However, because they’re separate tools from the database they’re operating

 In contrast, data transformations are easily accomplished directly in ClickHouse through [materialized views](/materialized-views).
 These are automatically triggered when new data is inserted into ClickHouse source tables and are used to easily extract, transform, and modify data as it arrives - eliminating the need to build and monitor bespoke pipelines yourself.
-When these transformations require aggregations over a complete dataset that may not fit into memory, leveraging ClickHouse ensures you don’t have to try and retrofit this step to work with dataframes on your local machine.
+When these transformations require aggregations over a complete dataset that may not fit into memory, leveraging ClickHouse ensures you don’t have to try and retrofit this step to work with data frames on your local machine.
 For those datasets that are more convenient to evaluate locally, [ClickHouse local](/operations/utilities/clickhouse-local) is a great alternative, along with [chDB](/chdb), that allow users to leverage ClickHouse with standard Python data libraries like Pandas.

 ### Training and evaluation {#training-and-evaluation}
@@ -98,7 +98,7 @@ Users can easily combine ClickHouse with data lakes, with built-in functions to
 **Transformation engine** - SQL provides a natural means of declaring data transformations.
 When extended with ClickHouse’s analytical and statistical functions, these transformations become succinct and optimized.
 As well as applying to either ClickHouse tables, in cases where ClickHouse is used as a data store, table functions allow SQL queries to be written against data stored in formats such as Parquet, on-disk or object storage, or even other data stores such as Postgres and MySQL.
-A completely parallelization query execution engine, combined with a column-oriented storage format, allows ClickHouse to perform aggregations over PBs of data in seconds - unlike transformations on in memory dataframes, users are not memory-bound.
+A completely parallelization query execution engine, combined with a column-oriented storage format, allows ClickHouse to perform aggregations over PBs of data in seconds - unlike transformations on in memory data frames, users are not memory-bound.
 Furthermore, materialized views allow data to be transformed at insert time, thus overloading compute to data load time from query time.
 These views can exploit the same range of analytical and statistical functions ideal for data analysis and summarization.
 Should any of ClickHouse’s existing analytical functions be insufficient or custom libraries need to be integrated, users can also utilize User Defined Functions (UDFs).

scripts/aspell-dict-file.txt

Lines changed: 1 addition & 1 deletion

@@ -1091,7 +1091,7 @@ MCP's
 daemonset
 --docs/use-cases/observability/clickstack/ingesting-data/kubernetes.md--
 daemonset
---docs/cloud/onboard/01_discover/02_use_cases/04_machine_learning_and_genAI/03_agent_facing_analytics.md--
+--docs/cloud/onboard/01_discover/02_use_cases/04_machine_learning_and_genAI/02_agent_facing_analytics.md--
 AgentForce
 DeepSeek
 OpenAI's

scripts/aspell-ignore/en/aspell-dict.txt

Lines changed: 6 additions & 0 deletions

@@ -5,6 +5,7 @@ Accepter
 AICPA
 ALTERs
 AMPLab
+AmazonKinesis
 AMQP
 ANNIndex
 ANNIndexes
@@ -326,6 +327,7 @@ DestroyAggregatesThreads
 DestroyAggregatesThreadsActive
 DictCacheRequests
 DigiCert
+DigitalOcean
 DiskAvailable
 DiskObjectStorage
 DiskObjectStorageAsyncThreads
@@ -1078,6 +1080,7 @@ Refactorings
 ReferenceKeyed
 Refreshable
 RegexpTree
+RelationMessage
 RemoteRead
 ReplacingMergeTree
 ReplacingReplicatedMergeTree
@@ -2269,8 +2272,10 @@ hiveHash
 hnsw
 holistics
 homebrew
+homebrew
 hopEnd
 hopStart
+Hopsworks
 horgh
 hostName
 hostname
@@ -3017,6 +3022,7 @@ resending
 resharding
 reshards
 resolvers
+resourceGUID
 restartable
 resultset
 resync
