diff --git a/.github/workflows/test.yaml b/.github/workflows/test.yaml
index 721a56669..55e929bff 100644
--- a/.github/workflows/test.yaml
+++ b/.github/workflows/test.yaml
@@ -136,8 +136,8 @@ jobs:
done
- lint_code:
- name: Lint app code
+ lint_code_js:
+ name: Lint JavaScript code
runs-on: ubuntu-latest
steps:
- name: Checkout Source code
@@ -159,4 +159,35 @@ jobs:
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
- - run: npm run lint:code
+ - run: npm run lint:code:js
+
+
+ lint_code_py:
+ name: Lint Python code
+ runs-on: ubuntu-latest
+ steps:
+ - name: Checkout source code
+ uses: actions/checkout@v5
+
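+      # Collect the Markdown/MDX files changed in this PR; their embedded Python code blocks are what gets linted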
+ - name: Get changed files
+ id: changed-files
+ uses: tj-actions/changed-files@v47
+ with:
+ files: '**/*.{md,mdx}'
+          files_ignore: 'sources/api/*.{md,mdx}'
+ separator: ","
+
+ - name: Get uv
+ uses: astral-sh/setup-uv@v7
+
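+      # doccmd extracts the Python code blocks from each changed file and runs "ruff check" on them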
+ - name: List and lint changed files
+ env:
+ ALL_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }}
+ run: |
+ IFS=',' read -ra FILE_ARRAY <<< "$ALL_CHANGED_FILES"
+ for file in "${FILE_ARRAY[@]}"; do
+ uv run --with doccmd --with ruff doccmd --language=py --language=python --command="ruff check --quiet" "$file"
+ done
+
+ # TODO: once we fix existing issues, switch to checking the whole sources/ directory
+ # - run: npm run lint:code:py
diff --git a/package.json b/package.json
index 372aa8141..d9fa91842 100644
--- a/package.json
+++ b/package.json
@@ -38,8 +38,11 @@
"lint:fix": "npm run lint:md:fix && npm run lint:code:fix",
"lint:md": "markdownlint '**/*.md'",
"lint:md:fix": "markdownlint '**/*.md' --fix",
- "lint:code": "eslint .",
- "lint:code:fix": "eslint . --fix",
+ "lint:code": "npm run lint:code:js && npm run lint:code:py",
+ "lint:code:fix": "npm run lint:code:js:fix",
+ "lint:code:js": "eslint .",
+ "lint:code:js:fix": "eslint . --fix",
+ "lint:code:py": "uv run --with doccmd --with ruff doccmd --language=py --command=\"ruff check --quiet\" ./sources",
"postinstall": "patch-package",
"postbuild": "node ./scripts/joinLlmsFiles.mjs && node ./scripts/indentLlmsFile.mjs"
},
diff --git a/sources/academy/platform/getting_started/apify_client.md b/sources/academy/platform/getting_started/apify_client.md
index a5ba1951a..9622ee324 100644
--- a/sources/academy/platform/getting_started/apify_client.md
+++ b/sources/academy/platform/getting_started/apify_client.md
@@ -40,21 +40,20 @@ pip install apify-client
After installing the package, let's make a file named **client** and import the Apify client like so:
+
+
-```js
-// client.js
+```js title="client.js"
import { ApifyClient } from 'apify-client';
```
-```py
-# client.py
+```py title="client.py"
from apify_client import ApifyClient
-
```
@@ -70,9 +69,7 @@ Before we can use the client though, we must create a new instance of the `Apify
```js
-const client = new ApifyClient({
- token: 'YOUR_TOKEN',
-});
+const client = new ApifyClient({ token: 'YOUR_TOKEN' });
```
@@ -80,7 +77,6 @@ const client = new ApifyClient({
```py
client = ApifyClient(token='YOUR_TOKEN')
-
```
@@ -108,7 +104,6 @@ run = client.actor('YOUR_USERNAME/adding-actor').call(run_input={
'num1': 4,
'num2': 2
})
-
```
@@ -136,7 +131,6 @@ const dataset = client.dataset(run.defaultDatasetId);
```py
dataset = client.dataset(run['defaultDatasetId'])
-
```
@@ -149,7 +143,6 @@ Finally, we can download the items in the dataset by using the **list items** fu
```js
const { items } = await dataset.listItems();
-
console.log(items);
```
@@ -158,21 +151,20 @@ console.log(items);
```py
items = dataset.list_items().items
-
print(items)
-
```
+
+
The final code for running the Actor and fetching its dataset items looks like this:
-```js
-// client.js
+```js title="client.js"
import { ApifyClient } from 'apify-client';
const client = new ApifyClient({
@@ -185,17 +177,14 @@ const run = await client.actor('YOUR_USERNAME/adding-actor').call({
});
const dataset = client.dataset(run.defaultDatasetId);
-
const { items } = await dataset.listItems();
-
console.log(items);
```
-```py
-# client.py
+```py title="client.py"
from apify_client import ApifyClient
client = ApifyClient(token='YOUR_TOKEN')
@@ -206,11 +195,8 @@ actor = client.actor('YOUR_USERNAME/adding-actor').call(run_input={
})
dataset = client.dataset(run['defaultDatasetId'])
-
items = dataset.list_items().items
-
print(items)
-
```
@@ -224,10 +210,15 @@ Let's change these two Actor settings via the Apify client using the [`actor.upd
First, we'll create a pointer to our Actor, similar to before (except this time, we won't be using `.call()` at the end):
+
+
```js
+import { ApifyClient } from 'apify-client';
+const client = new ApifyClient({ token: 'YOUR_TOKEN' });
+
const actor = client.actor('YOUR_USERNAME/adding-actor');
```
@@ -235,8 +226,10 @@ const actor = client.actor('YOUR_USERNAME/adding-actor');
```py
-actor = client.actor('YOUR_USERNAME/adding-actor')
+from apify_client import ApifyClient
+client = ApifyClient(token='YOUR_TOKEN')
+actor = client.actor('YOUR_USERNAME/adding-actor')
```
@@ -261,13 +254,18 @@ await actor.update({
```py
-actor.update(default_run_build='latest', default_run_memory_mbytes=256, default_run_timeout_secs=20)
-
+actor.update(
+ default_run_build='latest',
+ default_run_memory_mbytes=256,
+ default_run_timeout_secs=20,
+)
```
+
+
After running the code, go back to the **Settings** page of **adding-actor**. If your default options now look like this, then it worked!:

diff --git a/sources/platform/integrations/ai/agno.md b/sources/platform/integrations/ai/agno.md
index 70722f095..0a1385848 100644
--- a/sources/platform/integrations/ai/agno.md
+++ b/sources/platform/integrations/ai/agno.md
@@ -38,7 +38,7 @@ While our examples use OpenAI, Agno supports other LLM providers as well. You'll
- _Python environment_: Ensure Python is installed (version 3.8+ recommended).
- _Required packages_: Install the following dependencies in your terminal:
-```bash
+```shell
pip install agno apify-client
```
diff --git a/sources/platform/integrations/ai/crewai.md b/sources/platform/integrations/ai/crewai.md
index bee4e93ea..ae5713c88 100644
--- a/sources/platform/integrations/ai/crewai.md
+++ b/sources/platform/integrations/ai/crewai.md
@@ -30,7 +30,7 @@ This guide demonstrates how to integrate Apify Actors with CrewAI by building a
- **OpenAI API key**: To power the agents in CrewAI, you need an OpenAI API key. Get one from the [OpenAI platform](https://platform.openai.com/account/api-keys).
- **Python packages**: Install the following Python packages:
- ```bash
+ ```shell
pip install 'crewai[tools]' langchain-apify langchain-openai
```
diff --git a/sources/platform/integrations/ai/haystack.md b/sources/platform/integrations/ai/haystack.md
index 32445ada6..969a2f5fd 100644
--- a/sources/platform/integrations/ai/haystack.md
+++ b/sources/platform/integrations/ai/haystack.md
@@ -19,7 +19,7 @@ The last step will be to retrieve the most similar documents.
This example uses the Apify-Haystack Python integration published on [PyPi](https://pypi.org/project/apify-haystack/).
Before we start with the integration, we need to install all dependencies:
-```bash
+```shell
pip install apify-haystack haystack-ai
```
diff --git a/sources/platform/integrations/ai/langchain.md b/sources/platform/integrations/ai/langchain.md
index 2f206369a..d4a05f89f 100644
--- a/sources/platform/integrations/ai/langchain.md
+++ b/sources/platform/integrations/ai/langchain.md
@@ -20,7 +20,9 @@ If you prefer to use JavaScript, you can follow the [JavaScript LangChain docum
Before we start with the integration, we need to install all dependencies:
-`pip install langchain langchain-openai langchain-apify`
+```shell
+pip install langchain langchain-openai langchain-apify
+```
After successful installation of all dependencies, we can start writing code.
diff --git a/sources/platform/integrations/ai/langflow.md b/sources/platform/integrations/ai/langflow.md
index 331b61a05..4cb8e3243 100644
--- a/sources/platform/integrations/ai/langflow.md
+++ b/sources/platform/integrations/ai/langflow.md
@@ -43,13 +43,13 @@ Langflow can either be installed locally or used in the cloud. The cloud version
First, install the Langflow platform using Python package and project manager [uv](https://docs.astral.sh/uv/):
-```bash
+```shell
uv pip install langflow
```
After installing Langflow, you can start the platform:
-```bash
+```shell
uv run langflow run
```
diff --git a/sources/platform/integrations/ai/langgraph.md b/sources/platform/integrations/ai/langgraph.md
index 8690ed08f..f39903ac7 100644
--- a/sources/platform/integrations/ai/langgraph.md
+++ b/sources/platform/integrations/ai/langgraph.md
@@ -32,7 +32,7 @@ This guide will demonstrate how to use Apify Actors with LangGraph by building a
- **Python packages**: You need to install the following Python packages:
- ```bash
+ ```shell
pip install langgraph langchain-apify langchain-openai
```
diff --git a/sources/platform/integrations/ai/llama.md b/sources/platform/integrations/ai/llama.md
index 3a7a642a0..4ef56654f 100644
--- a/sources/platform/integrations/ai/llama.md
+++ b/sources/platform/integrations/ai/llama.md
@@ -22,7 +22,9 @@ You can integrate Apify dataset or Apify Actor with LlamaIndex.
Before we start with the integration, we need to install all dependencies:
-`pip install apify-client llama-index-core llama-index-readers-apify`
+```shell
+pip install apify-client llama-index-core llama-index-readers-apify
+```
After successfully installing all dependencies, we can start writing Python code.
diff --git a/sources/platform/integrations/ai/milvus.md b/sources/platform/integrations/ai/milvus.md
index 0ed1e293b..6e838e758 100644
--- a/sources/platform/integrations/ai/milvus.md
+++ b/sources/platform/integrations/ai/milvus.md
@@ -66,7 +66,7 @@ Another way to interact with Milvus is through the [Apify Python SDK](https://do
1. Install the Apify Python SDK by running the following command:
- ```py
+ ```shell
pip install apify-client
```
diff --git a/sources/platform/integrations/ai/openai_assistants.md b/sources/platform/integrations/ai/openai_assistants.md
index 4074f77c5..16fa1ca18 100644
--- a/sources/platform/integrations/ai/openai_assistants.md
+++ b/sources/platform/integrations/ai/openai_assistants.md
@@ -29,7 +29,7 @@ The image below provides an overview of the entire process:
Before we start creating the assistant, we need to install all dependencies:
-```bash
+```shell
pip install apify-client openai
```
@@ -260,7 +260,7 @@ For more information on automating this process, check out the blog post [How we
Before we start, we need to install all dependencies:
-```bash
+```shell
pip install apify-client openai
```
diff --git a/sources/platform/integrations/ai/pinecone.md b/sources/platform/integrations/ai/pinecone.md
index e87b447cd..d3d549bb9 100644
--- a/sources/platform/integrations/ai/pinecone.md
+++ b/sources/platform/integrations/ai/pinecone.md
@@ -74,7 +74,9 @@ Another way to interact with Pinecone is through the [Apify Python SDK](https://
1. Install the Apify Python SDK by running the following command:
- `pip install apify-client`
+ ```shell
+ pip install apify-client
+ ```
1. Create a Python script and import all the necessary modules:
diff --git a/sources/platform/integrations/ai/qdrant.md b/sources/platform/integrations/ai/qdrant.md
index 66f4e74c0..ace973281 100644
--- a/sources/platform/integrations/ai/qdrant.md
+++ b/sources/platform/integrations/ai/qdrant.md
@@ -68,7 +68,7 @@ Another way to interact with Qdrant is through the [Apify Python SDK](https://do
1. Install the Apify Python SDK by running the following command:
- ```py
+ ```shell
pip install apify-client
```