
Commit ce75785

feat: unlist the legacy JS course

1 parent d468b8e

26 files changed: 27 additions, 1 deletion

sources/academy/webscraping/scraping_basics_legacy_javascript/best_practices.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Best practices
 description: Understand the standards and best practices that we here at Apify abide by to write readable, scalable, and maintainable code.
 sidebar_position: 1.5
 slug: /scraping-basics-javascript/legacy/best-practices
+unlisted: true
 ---
 
 # Best practices when writing scrapers {#best-practices}
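The same one-line change repeats across every page of the legacy course. For context, a sketch of the resulting front matter for this first file (field values taken from the diff above; the behavior note reflects Docusaurus' documented `unlisted` flag, which keeps a page reachable by direct URL but hides it from listings, search, and the sitemap in production builds):

```yaml
# Front matter of best_practices.md after this commit (sketch).
# `unlisted: true` is the Docusaurus flag that hides the page from
# the sidebar, search, and sitemap while keeping its URL accessible.
title: Best practices
description: Understand the standards and best practices that we here at Apify abide by to write readable, scalable, and maintainable code.
sidebar_position: 1.5
slug: /scraping-basics-javascript/legacy/best-practices
unlisted: true
```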

sources/academy/webscraping/scraping_basics_legacy_javascript/challenge/index.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Challenge
 description: Test your knowledge acquired in the previous sections of this course by building an Amazon scraper using Crawlee's CheerioCrawler!
 sidebar_position: 1.4
 slug: /scraping-basics-javascript/legacy/challenge
+unlisted: true
 ---
 
 # Challenge

sources/academy/webscraping/scraping_basics_legacy_javascript/challenge/initializing_and_setting_up.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Initializing & setting up
 description: When you extract links from a web page, you often end up with a lot of irrelevant URLs. Learn how to filter the links to only keep the ones you need.
 sidebar_position: 1
 slug: /scraping-basics-javascript/legacy/challenge/initializing-and-setting-up
+unlisted: true
 ---
 
 # Initialization & setting up

sources/academy/webscraping/scraping_basics_legacy_javascript/challenge/modularity.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Modularity
 description: Before you build your first web scraper with Crawlee, it is important to understand the concept of modularity in programming.
 sidebar_position: 2
 slug: /scraping-basics-javascript/legacy/challenge/modularity
+unlisted: true
 ---
 
 # Modularity

sources/academy/webscraping/scraping_basics_legacy_javascript/challenge/scraping_amazon.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Scraping Amazon
 description: Before you build your first web scraper with Crawlee, it is important to understand the concept of modularity in programming.
 sidebar_position: 4
 slug: /scraping-basics-javascript/legacy/challenge/scraping-amazon
+unlisted: true
 ---
 
 # Scraping Amazon

sources/academy/webscraping/scraping_basics_legacy_javascript/crawling/exporting_data.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Exporting data
 description: Learn how to export the data you scraped using Crawlee to CSV or JSON.
 sidebar_position: 9
 slug: /scraping-basics-javascript/legacy/crawling/exporting-data
+unlisted: true
 ---
 
 # Exporting data {#exporting-data}

sources/academy/webscraping/scraping_basics_legacy_javascript/crawling/filtering_links.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Filtering links
 description: When you extract links from a web page, you often end up with a lot of irrelevant URLs. Learn how to filter the links to only keep the ones you need.
 sidebar_position: 3
 slug: /scraping-basics-javascript/legacy/crawling/filtering-links
+unlisted: true
 ---
 
 import Tabs from '@theme/Tabs';

sources/academy/webscraping/scraping_basics_legacy_javascript/crawling/finding_links.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Finding links
 description: Learn what a link looks like in HTML and how to find and extract their URLs when web scraping. Using both DevTools and Node.js.
 sidebar_position: 2
 slug: /scraping-basics-javascript/legacy/crawling/finding-links
+unlisted: true
 ---
 
 import Example from '!!raw-loader!roa-loader!./finding_links.js';

sources/academy/webscraping/scraping_basics_legacy_javascript/crawling/first_crawl.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Your first crawl
 description: Learn how to crawl the web using Node.js, Cheerio and an HTTP client. Extract URLs from pages and use them to visit more websites.
 sidebar_position: 5
 slug: /scraping-basics-javascript/legacy/crawling/first-crawl
+unlisted: true
 ---
 
 # Your first crawl {#your-first-crawl}

sources/academy/webscraping/scraping_basics_legacy_javascript/crawling/headless_browser.md

Lines changed: 1 addition & 0 deletions

@@ -3,6 +3,7 @@ title: Headless browsers
 description: Learn how to scrape the web with a headless browser using only a few lines of code. Chrome, Firefox, Safari, Edge - all are supported.
 sidebar_position: 8
 slug: /scraping-basics-javascript/legacy/crawling/headless-browser
+unlisted: true
 ---
 
 import Tabs from '@theme/Tabs';
