diff --git a/docs/deploy/secrets-management.md b/docs/deploy/secrets-management.md index 9116a9f0..56ca8098 100644 --- a/docs/deploy/secrets-management.md +++ b/docs/deploy/secrets-management.md @@ -1,6 +1,6 @@ # Secrets management -Sometimes you connect the [Quix Connectors](../connectors/index.md), or services you have created, to other services, such as AWS, Vonage, Twilio, Azure and so on. You usually need to provide credentials to access these third-party APIs and services, using environment variables. +Sometimes you connect the [Quix Connectors](../quix-connectors/templates/index.md), or services you have created, to other services, such as AWS, Vonage, Twilio, or Azure. You need to provide credentials to access these third-party APIs and services, typically using environment variables. You do not want to expose these credentials through the use of environment variables in your YAML code, service code, Git repository, or even the UI, which may have shared access. Quix provides a feature to enable your credentials to be stored securely - secrets management. diff --git a/docs/develop/integrate-data/overview.md b/docs/develop/integrate-data/overview.md index a82c1a1c..7a22288c 100644 --- a/docs/develop/integrate-data/overview.md +++ b/docs/develop/integrate-data/overview.md @@ -24,7 +24,7 @@ To publish data: The particular method you use depends on the nature of the service you're trying to interface with Quix. Each of these methods is described in the following sections. -There are various ways to connect to Quix, and how you do so depends on the nature of the service and data you are connecting. In many cases Quix has a [suitable connector](../../connectors/index.md) you can use with minor configuration. +There are various ways to connect to Quix, and how you do so depends on the nature of the service and data you are connecting. In many cases, Quix has a [suitable connector](../../quix-connectors/templates/index.md) you can use with minor configuration. If you want some example code you can use as a starting point for connecting your own data, you can use the `External source` and `External destination` samples. Or use one of the existing connectors as a starting point, such as the `Starter Source`, or `Starter Destination`. diff --git a/docs/develop/integrate-data/prebuilt-connector-destination.md b/docs/develop/integrate-data/prebuilt-connector-destination.md index b85d96be..e7cc888b 100644 --- a/docs/develop/integrate-data/prebuilt-connector-destination.md +++ b/docs/develop/integrate-data/prebuilt-connector-destination.md @@ -2,7 +2,7 @@ This is the easiest method, as no code needs to be written, and there is usually only minor configuration required to get a Quix connector up and running. -You can review the list of connectors in the [connector documentation](../../connectors/index.md). The code for our connectors can be found in the [Quix Code Samples GitHub repository](https://github.com/quixio/quix-samples){target=_blank}. +You can review the list of connectors in the [connector documentation](../../quix-connectors/templates/index.md). The code for our connectors can be found in the [Quix Code Samples GitHub repository](https://github.com/quixio/quix-samples){target=_blank}. 
Note there are two main types of connector: diff --git a/docs/develop/integrate-data/prebuilt-connector.md b/docs/develop/integrate-data/prebuilt-connector.md index 65337c9e..a122fb78 100644 --- a/docs/develop/integrate-data/prebuilt-connector.md +++ b/docs/develop/integrate-data/prebuilt-connector.md @@ -2,7 +2,7 @@ This is the easiest method, as no code needs to be written, and there is usually only minor configuration required to get a Quix connector up and running. -You can review the list of connectors in the [connector documentation](../../connectors/index.md). The code for our connectors can be found in the [Quix Code Samples GitHub repository](https://github.com/quixio/quix-samples){target=_blank}. +You can review the list of connectors in the [connector documentation](../../quix-connectors/templates/index.md). The code for our connectors can be found in the [Quix Code Samples GitHub repository](https://github.com/quixio/quix-samples){target=_blank}. Note there are two main types of connector: diff --git a/docs/develop/overview.md b/docs/develop/overview.md index 71b9d685..8907aa85 100644 --- a/docs/develop/overview.md +++ b/docs/develop/overview.md @@ -7,7 +7,7 @@ description: Applications are developed using Python and deployed as services. This section of the documentation covers **developing your application**. -Your data processing pipeline typically consistes of multiple applications working together. Each application represents the implementation of a source, transform, or destination. You develop your application in an environment (a branch in your project), but you can later merge these changes with other branches as required. +Your data processing pipeline typically consists of multiple applications working together. Each application represents the implementation of a source, transform, or destination. You develop your application in an environment (a branch in your project), but you can later merge these changes with other branches as required. For example, you might create a new source component to retrieve data from an external service, a transform to process this data, and then perhaps a destination, which could store data in a Postgres database. You might have another destination to display the data on a Streamlit dashboard. diff --git a/docs/get-started/stream-processing-pipelines.md b/docs/get-started/stream-processing-pipelines.md index ac6f1435..113a9b81 100644 --- a/docs/get-started/stream-processing-pipelines.md +++ b/docs/get-started/stream-processing-pipelines.md @@ -13,4 +13,4 @@ The applications (services) are connected in the pipeline by topics. The service ![Example pipeline](../images/example-pipeline.png) -[Read more about connectors](../connectors/index.md). +[Read more about connectors](../quix-connectors/templates/index.md). diff --git a/docs/integrations/databases/influxdb/migrating-v2-v3.md b/docs/integrations/databases/influxdb/migrating-v2-v3.md index fe06a21b..265cfc1a 100644 --- a/docs/integrations/databases/influxdb/migrating-v2-v3.md +++ b/docs/integrations/databases/influxdb/migrating-v2-v3.md @@ -2,7 +2,7 @@ If you have data in a v2 InfluxDB database, and you want to migrate it to InfluxDB v3, then Quix can help. 
-Quix provides the following InfluxDB [connectors](../../../connectors/index.md): +Quix provides the following InfluxDB [connectors](../../../quix-connectors/templates/index.md): * InfluxDB v2 source * InfluxDB v3 source diff --git a/docs/integrations/databases/influxdb/quickstart.md b/docs/integrations/databases/influxdb/quickstart.md index 198cb41d..12d858f4 100644 --- a/docs/integrations/databases/influxdb/quickstart.md +++ b/docs/integrations/databases/influxdb/quickstart.md @@ -7,7 +7,7 @@ search: # Quickstart -This quickstart shows you how to integrate Quix with InfluxDB using our standard [connectors](../../../connectors/index.md). +This quickstart shows you how to integrate Quix with InfluxDB using our standard [connectors](../../../quix-connectors/templates/index.md). In the first part of this quickstart, you'll read F1 car telemetry data, transform it, and then publish it to InfluxDB. diff --git a/docs/integrations/databases/influxdb/replacing-kapacitor.md b/docs/integrations/databases/influxdb/replacing-kapacitor.md index 352ca767..404094cf 100644 --- a/docs/integrations/databases/influxdb/replacing-kapacitor.md +++ b/docs/integrations/databases/influxdb/replacing-kapacitor.md @@ -29,7 +29,7 @@ The following illustrates a typical processing pipeline running in Quix. | Scalability | Kapacitor is designed to be horizontally scalable, enabling it to handle large volumes of data and scale alongside the rest of the TICK Stack components. | Quix was designed to be both vertically and horizontally scalable. It is based on [Kafka](../../../kb/what-is-kafka.md), using either a Quix-hosted broker, or an externally hosted broker. This means all the horizontal scaling features of Kafka, such as consumer groups, is built into Quix. Quix also enables you to configure the number of replicas, RAM, and CPU resources allocated on a per-service (deployment) basis, for accurate vertical scaling. | | High availability | Kapacitor supports high availability setups to ensure uninterrupted data processing and alerting even in the case of node failures. | As Quix uses a Kafka broker (including Kafka-compatible brokers such as Redpanda), it has all the high availability features inherent in a Kafka-based solution. In addition, Quix uses a Kubernetes cluster to seamlessly distribute and manage containers that execute your service's Python code. | | Replay and backfilling | Kapacitor enables users to replay historical data or backfill missing data, enabling them to analyze past events or ensure data consistency. | Quix leverages Kafka's retention capabilities for data backfilling and analysis. You can process historical data stored in Kafka topics using standard Kafka consumer patterns. This is useful for testing and evaluating processing pipelines, and examining historical data. This is also enhanced by the ability to connect to external tools using Quix connectors. | -| Extensibility | Kapacitor provides an extensible architecture, enabling users to develop and integrate custom functions, connectors, and integrations as per their specific requirements. | Quix is fully extensible using Python. Complex stream processing pipelines can be built out one service at a time, and then deployed with a single click. It is also possible to use a wide range of standard [connectors](../../../connectors/index.md) to connect to a range of third-party services. Powerful [integrations](../../overview.md) extend these capabilities. 
In addition, [REST and real-time APIs](../../../develop/apis-overview.md) are available for use with any language that supports REST or WebSockets. As Quix is designed around standard Git development workflows, it enables developers to collaborate on projects. | +| Extensibility | Kapacitor provides an extensible architecture, enabling users to develop and integrate custom functions, connectors, and integrations as per their specific requirements. | Quix is fully extensible using Python. Complex stream processing pipelines can be built out one service at a time, and then deployed with a single click. It is also possible to use a wide range of standard [connectors](../../../quix-connectors/templates/index.md) to connect to many third-party services. Powerful [integrations](../../overview.md) extend these capabilities. In addition, [REST and real-time APIs](../../../develop/apis-overview.md) are available for use with any language that supports REST or WebSockets. As Quix is designed around standard Git development workflows, it enables developers to collaborate on projects. | ## See also diff --git a/docs/integrations/overview.md b/docs/integrations/overview.md index a301c21e..b1c5e05b 100644 --- a/docs/integrations/overview.md +++ b/docs/integrations/overview.md @@ -15,4 +15,4 @@ This section of the documentation provides more detailed information on integrat | Upstash | Kafka broker | [Guide](./brokers/upstash.md) | | InfluxDB | Time series database | [Overview](./databases/influxdb/overview.md) | -See also the [Quix connectors](../connectors/index.md). +See also the [Quix connectors](../quix-connectors/templates/index.md). diff --git a/docs/kb/glossary.md b/docs/kb/glossary.md index 8e98b83e..c8ea6763 100644 --- a/docs/kb/glossary.md +++ b/docs/kb/glossary.md @@ -22,7 +22,7 @@ Quix contains a large number of [open source](https://github.com/quixio/quix-sam ## Connectors -There are [many ways](../develop/integrate-data/overview.md) to get data into Quix. One option is to use the many connectors already provided by Quix. These can be viewed in Quix by clicking Code Samples and then selecting Source and Destination filters. Alternatively, you can see a useful page in our documentation, that lists the [available connectors](../connectors/index.md). +There are [many ways](../develop/integrate-data/overview.md) to get data into Quix. One option is to use the many connectors already provided by Quix. These can be viewed in Quix by clicking Code Samples and then selecting the Source and Destination filters. Alternatively, you can see the page in our documentation that lists the [available connectors](../quix-connectors/templates/index.md). ## Consumer @@ -69,7 +69,7 @@ The number of instances of the deployment (service). If the replicas are part of ## Destination -A type of [connector](../connectors/index.md) where data is consumed from a Quix topic by an output (destination) such as a database or dashboard. +A type of [connector](../quix-connectors/templates/index.md#destinations) where data is consumed from a Quix topic by an output (destination), such as a database or dashboard. ## Environment @@ -178,7 +178,7 @@ Any application code that runs continuously in the serverless environment. For e ## Source -A type of [connector](../connectors/index.md) where data is published to a Quix topic from an input (source), such as a web service or command line program. 
+A type of [connector](../quix-connectors/templates/index.md#sources) where data is published to a Quix topic from an input (source), such as a web service or command line program. ## Stream diff --git a/docs/kb/what-is-kafka.md b/docs/kb/what-is-kafka.md index 14db9d3c..a23d4023 100644 --- a/docs/kb/what-is-kafka.md +++ b/docs/kb/what-is-kafka.md @@ -41,6 +41,6 @@ Kafka is extensively used in stream processing due to its ability to handle real * **Scalability**: Stream processing applications built with Kafka can scale horizontally by adding more instances of processing nodes. Kafka handles the distribution of data and load balancing across these instances, ensuring scalability without downtime. -* **Integration with external systems**: Kafka integrates seamlessly with external systems, enabling stream processing applications to interact with various data sinks and sources. For example, processed data can be stored in databases, sent to analytics platforms, or used to trigger downstream actions. Quix features a wide variety of [connectors](../connectors/index.md) and [integrations](../integrations/overview.md) to enable this. +* **Integration with external systems**: Kafka integrates seamlessly with external systems, enabling stream processing applications to interact with various data sinks and sources. For example, processed data can be stored in databases, sent to analytics platforms, or used to trigger downstream actions. Quix features a wide variety of [connectors](../quix-connectors/templates/index.md) and [integrations](../integrations/overview.md) to enable this. Overall, Quix's integration with the Kafka provides a powerful framework for building scalable, fault-tolerant, and real-time stream processing applications, making it a popular choice in the streaming data ecosystem. diff --git a/docs/kb/what-is-quix.md b/docs/kb/what-is-quix.md index d6be9bc3..9ae4d233 100644 --- a/docs/kb/what-is-quix.md +++ b/docs/kb/what-is-quix.md @@ -79,7 +79,7 @@ To achieve these goals, the Quix UI includes the following features: * **Online IDE**: Develop and run your streaming applications directly in the browser without setting up a local environment. -* **Code Samples**: Choose from the [prebuilt Code Samples](../connectors/index.md) ready to run and deploy from the IDE. +* **Code Samples**: Choose from the [prebuilt Code Samples](../quix-connectors/templates/index.md) ready to run and deploy from the IDE. * **Project templates**: Open source application templates that demonstrate what’s possible with Quix. You can fork them and use them as a starting point to build your own Python stream processing pipelines. @@ -129,13 +129,13 @@ When you develop your Python stream processing applications, you build a pipelin ## Integrating your data with Quix -There are [various ways](../develop/integrate-data/overview.md) to connect your data to Quix. Quix provides a number of [connectors](../connectors/index.md) that you can use with only some simple configuration. In addition, there are a range of [APIs](#apis), both REST and WebSockets that are available. There is also the [Quix Streams](#quix-streams) client library, that can be used to get data quickly and easily into Quix. +There are [various ways](../develop/integrate-data/overview.md) to connect your data to Quix. Quix provides a number of [connectors](../quix-connectors/templates/index.md) that you can use with minimal configuration. In addition, a range of [APIs](#apis) is available, both REST and WebSockets. 
There is also the [Quix Streams](#quix-streams) client library, which you can use to get data into Quix quickly and easily. For a simple example of getting data from your laptop into Quix, see the [Quickstart](../quix-cloud/quickstart.md). ### Connectors -Quix provides numerous standard [connectors](../connectors/index.md) for both source, and destination functions. These enable you to easily stream data in and out of Quix. In addition, a number of prebuilt data transforms to perform processing on your streaming data are also available. +Quix provides numerous standard [connectors](../quix-connectors/templates/index.md) for both source and destination functions. These enable you to easily stream data in and out of Quix. A number of prebuilt data transforms for processing your streaming data are also available. !!! tip diff --git a/docs/kb/why-stream-processing.md b/docs/kb/why-stream-processing.md index bcf4725f..972bbbb2 100644 --- a/docs/kb/why-stream-processing.md +++ b/docs/kb/why-stream-processing.md @@ -19,7 +19,7 @@ There are several reasons why organizations choose stream processing: * **Complex event processing**: Stream processing frameworks, such as Quix, often include capabilities for complex event processing (CEP), enabling organizations to detect patterns, correlations, and anomalies in real-time data streams. This is valuable for use cases such as monitoring, anomaly detection, and predictive maintenance. -* **Integration with modern data architectures**: Stream processing complements other components of modern data architectures such as data lakes, data warehouses, and real-time databases. By integrating stream processing into these architectures, organizations can build end-to-end data pipelines that support both real-time and batch processing needs. Read about Quix [connectors](../connectors/index.md) and [integrations](../integrations/overview.md). +* **Integration with modern data architectures**: Stream processing complements other components of modern data architectures such as data lakes, data warehouses, and real-time databases. By integrating stream processing into these architectures, organizations can build end-to-end data pipelines that support both real-time and batch processing needs. Read about Quix [connectors](../quix-connectors/templates/index.md) and [integrations](../integrations/overview.md). * **Continuous computation**: Stream processing enables continuous computation, where computations are ongoing and incremental rather than being triggered by **batch jobs** at fixed intervals. This facilitates more responsive and agile applications that can adapt to changing conditions in real time. @@ -27,7 +27,7 @@ Overall, stream processing provides organizations with the ability to harness th With Quix, you can perform stream processing much more easily, as all necessary infrastructure, such as Kafka, Docker, and Kubernetes can be provisioned for you, and you can develop your stream processing logic using Python and the [Quix Streams client library](https://quix.io/docs/quix-streams/introduction.html). -As an alternative to having Quix host your stream processing infrastructure, you can easily [connect](../connectors/index.md) with third-party providers, or [integrate](../integrations/overview.md) Quix with your existing stream processing infrastructure. 
+As an alternative to having Quix host your stream processing infrastructure, you can easily [connect](../quix-connectors/templates/index.md) with third-party providers, or [integrate](../integrations/overview.md) Quix with your existing stream processing infrastructure. The following sections review some common stream processing use cases. See also the [templates gallery](https://quix.io/templates){target=_blank} for more examples. diff --git a/docs/manage/MLOps.md b/docs/manage/MLOps.md index 0600d4b5..a3f0f7cb 100644 --- a/docs/manage/MLOps.md +++ b/docs/manage/MLOps.md @@ -35,7 +35,7 @@ In Quix you can: * Connect validated models to live output topics. * Connect models using the UI to form a pipeline. Pipelines consist of transforms connected together using input and output topics. -* Work seamlessly with engineers to connect software services. You can leverage a number of prebuilt [connectors](../connectors/index.md) to connect to common services. +* Work seamlessly with engineers to connect software services. You can leverage a number of prebuilt [connectors](../quix-connectors/templates/index.md) to connect to common services. ## Deploy production models diff --git a/docs/quix-cloud/overview.md b/docs/quix-cloud/overview.md index 3f6954c7..a5f79183 100644 --- a/docs/quix-cloud/overview.md +++ b/docs/quix-cloud/overview.md @@ -98,8 +98,8 @@ Use the following tiles to easily jump to the relevant section of this documenta - __4. Manage your pipeline__ --- - - Once all the services in a pipeline are deployed, your stream proecessing solution is fully operational + + Once all the services in a pipeline are deployed, your stream processing solution is fully operational. [Manage your pipeline :octicons-arrow-right-24:](../manage/overview.md) diff --git a/docs/quix-cloud/quickstart.md b/docs/quix-cloud/quickstart.md index 664fd62f..50c2e8d0 100644 --- a/docs/quix-cloud/quickstart.md +++ b/docs/quix-cloud/quickstart.md @@ -104,10 +104,9 @@ Feel free to explore further. ## Next steps -
-- __1. Quix Cloud tour__ +- __Quix Cloud tour__ --- @@ -115,12 +114,7 @@ Feel free to explore further. [Quix Cloud Tour :octicons-arrow-right-24:](../create/overview.md) -
- - -
- -- __1. Deploy a Connector__ +- __Deploy a Connector__ --- diff --git a/docs/tutorials/predictive-maintenance/influxdb-alerts.md b/docs/tutorials/predictive-maintenance/influxdb-alerts.md index e33bef6f..d14b72c3 100644 --- a/docs/tutorials/predictive-maintenance/influxdb-alerts.md +++ b/docs/tutorials/predictive-maintenance/influxdb-alerts.md @@ -1,6 +1,6 @@ # InfluxDB - alerts -This service uses the standard Quix InfluxDB 3.0 [connector](../../connectors/index.md). This connector enables the service to subscribe to messages on a Quix topic to be stored in InfluxDB. +This service uses the standard Quix InfluxDB 3.0 [connector](../../quix-connectors/templates/index.md). This connector enables the service to subscribe to messages on a Quix topic and store them in InfluxDB. ![InfluxDB raw data pipeline segment](./images/influxdb-alerts-pipeline-segment.png) diff --git a/docs/tutorials/predictive-maintenance/influxdb-raw-data.md b/docs/tutorials/predictive-maintenance/influxdb-raw-data.md index 8909028e..b687b892 100644 --- a/docs/tutorials/predictive-maintenance/influxdb-raw-data.md +++ b/docs/tutorials/predictive-maintenance/influxdb-raw-data.md @@ -1,6 +1,6 @@ # InfluxDB - raw data -This service uses the standard Quix InfluxDB 3.0 [connector](../../connectors/index.md). This connector enables the service to subscribe to messages on a Quix topic to be stored in InfluxDB. +This service uses the standard Quix InfluxDB 3.0 [connector](../../quix-connectors/templates/index.md). This connector enables the service to subscribe to messages on a Quix topic and store them in InfluxDB. ![InfluxDB raw data pipeline segment](./images/influxdb-raw-data-pipeline-segment.png)