diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/_index.md
new file mode 100644
index 0000000..81880f0
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/_index.md
@@ -0,0 +1,9 @@
+---
+type: "learning-path"
+title: "DigitalOcean Kubernetes (DOKS)"
+description: "Thousands of ISVs, startups, and digital businesses run on DigitalOcean today, achieving top performance and unmatched scalability at significant cost savings. With DigitalOcean Kubernetes, you can easily spin up GPU-powered environments, scale workloads, optimize performance with a developer-friendly approach, and automate infrastructure and software delivery."
+id: "ba2e362b-9f92-4a24-9039-0e886e710de4"
+banner: "digitalocean.svg"
+weight: 2
+level: "beginner"
+---
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/digitalocean.svg b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/digitalocean.svg
new file mode 100644
index 0000000..80262cb
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/digitalocean.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/_index.md
new file mode 100644
index 0000000..407c6d6
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/_index.md
@@ -0,0 +1,12 @@
+---
+type: "course"
+id: "launch-reliably"
+title: "How to Enable High Availability"
+description: "Increase the reliability of your clusters and prevent scaling issues from fault tolerance, load balancing, and traffic management."
+weight: 2
+banner: "digitalocean.svg"
+tags: ["Kubernetes"]
+categories: "Digital-Ocean"
+level: "intermediate"
+---
+
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation/_index.md
new file mode 100644
index 0000000..9cde415
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation/_index.md
@@ -0,0 +1,126 @@
+---
+type: "page"
+id: "enable-high-availability-using-automation"
+description: ""
+title: "Enable High Availability Using Automation "
+weight: 1
+---
+
+You can enable high availability by setting the `ha` value to `true`, either with the `doctl kubernetes cluster update` command or through the DigitalOcean API endpoint.
+
+
+## How to Update a Kubernetes Cluster Using the DigitalOcean CLI
+
+1. Install [`doctl`](https://docs.digitalocean.com/reference/doctl/), the official DigitalOcean CLI.
+2. [Create a personal access token](https://docs.digitalocean.com/reference/api/create-personal-access-token/) and save it for use with `doctl`.
+3. Use the token to grant `doctl` access to your DigitalOcean account:
+
+   ```bash
+   doctl auth init
+   ```
+
+4. Run `doctl kubernetes cluster update`. Basic usage looks like this, but you can read the usage docs for more details:
+
+   ```bash
+   doctl kubernetes cluster update [flags]
+   ```
+
+   The following example updates a cluster named `example-cluster` to enable automatic upgrades and sets the maintenance window to `saturday=02:00`:
+
+   ```bash
+   doctl kubernetes cluster update example-cluster --auto-upgrade --maintenance-window saturday=02:00
+   ```
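+
+To enable high availability itself, set the `ha` value from the command line. A minimal sketch, assuming your `doctl` version supports the `--ha` flag on `cluster update` (`example-cluster` is a placeholder name):
+
+```bash
+doctl kubernetes cluster update example-cluster --ha=true
+```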
+
+## How to Update a Kubernetes Cluster Using the DigitalOcean API
+
+1. [Create a personal access token](https://docs.digitalocean.com/reference/api/create-personal-access-token/) and save it for use with the API.
+2. Send a PUT request to [`https://api.digitalocean.com/v2/kubernetes/clusters/{cluster_id}`](https://docs.digitalocean.com/reference/api/digitalocean/#operation/kubernetes_update_cluster).
+
+### cURL
+Using cURL:
+```bash
+curl -X PUT \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
+ -d '{"name": "stage-cluster-01", "tags":["staging", "web-team"]}' \
+ "https://api.digitalocean.com/v2/kubernetes/clusters/bd5f5959-5e1e-4205-a714-a914373942af"
+```
+
+### Go
+
+Using [Godo](https://github.com/digitalocean/godo), the official DigitalOcean API client for Go:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"os"
+
+	"github.com/digitalocean/godo"
+)
+
+func main() {
+	token := os.Getenv("DIGITALOCEAN_TOKEN")
+
+	client := godo.NewFromToken(token)
+	ctx := context.TODO()
+
+	// Fields to change on the existing cluster.
+	updateRequest := &godo.KubernetesClusterUpdateRequest{
+		Name: "stage-cluster-01",
+		Tags: []string{"staging", "web-team"},
+	}
+
+	cluster, _, err := client.Kubernetes.Update(ctx, "bd5f5959-5e1e-4205-a714-a914373942af", updateRequest)
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+	fmt.Println(cluster.Name)
+}
+```
+
+### Ruby
+
+Using [DropletKit](https://github.com/digitalocean/droplet_kit), the official DigitalOcean API client for Ruby:
+
+```ruby
+require 'droplet_kit'
+token = ENV['DIGITALOCEAN_TOKEN']
+client = DropletKit::Client.new(access_token: token)
+
+cluster = DropletKit::KubernetesCluster.new(
+ name: 'foo',
+ tags: ['staging', 'web-team']
+)
+
+client.kubernetes_clusters.update(cluster, id: 'bd5f5959-5e1e-4205-a714-a914373942af')
+```
+
+### Python
+Using [PyDo](https://github.com/digitalocean/pydo), the official DigitalOcean API client for Python:
+
+```python
+import os
+from pydo import Client
+
+client = Client(token=os.environ.get("DIGITALOCEAN_TOKEN"))
+
+req = {
+ "name": "prod-cluster-01",
+ "tags": [
+ "k8s",
+ "k8s:bd5f5959-5e1e-4205-a714-a914373942af",
+ "production",
+ "web-team"
+ ],
+ "maintenance_policy": {
+ "start_time": "12:00",
+ "day": "any"
+ },
+ "auto_upgrade": True,
+ "surge_upgrade": True,
+ "ha": True
+}
+
+resp = client.kubernetes.update_cluster(cluster_id="1fd32a", body=req)
+```
+
+## Enable High Availability Using the Control Panel
+
+To enable high availability on an existing cluster, go to the [control panel](https://cloud.digitalocean.com/kubernetes/clusters) and click the cluster you want to enable high availability on. Then, in the Overview tab, scroll down and find the following card.
+
+![Add high availability card](add-high-availability.png)
+
+### I can't find this card
+
+DigitalOcean Kubernetes clusters originally created with version 1.20 or older have a version of the control plane which does not allow you to enable [high availability](https://docs.digitalocean.com/products/kubernetes/details/managed/#new-control-plane). If you cannot find this card, upgrade your control plane.
+
+To check whether you can upgrade your cluster to the new control plane, see [Upgrading to New Control Plane](https://docs.digitalocean.com/products/kubernetes/how-to/upgrade-cluster/#new-control-plane).
+
+In the card, click Add high availability. This opens a pop-up window where you can confirm your change. Once enabled, you cannot disable high availability in the future.
+
+
+
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation /add-high-availability.png b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation /add-high-availability.png
new file mode 100644
index 0000000..0b81a27
Binary files /dev/null and b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation /add-high-availability.png differ
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation /digitalocean.svg b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation /digitalocean.svg
new file mode 100644
index 0000000..80262cb
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/launch-reliably/enable-high-availability-using-automation /digitalocean.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/_index.md
new file mode 100644
index 0000000..48f9866
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/_index.md
@@ -0,0 +1,12 @@
+---
+type: "course"
+id: "scale-automatically"
+title: "How to Enable Cluster Autoscaler for a DigitalOcean Kubernetes Cluster"
+description: "Automatically scale node pools to zero when idle to save on compute costs with Nodepool Scale-to-Zero. Seamlessly scale clusters up to 1,000 nodes with Cluster Autoscaler."
+weight: 1
+banner: "digitalocean.svg"
+tags: ["Kubernetes"]
+categories: "Digital-Ocean"
+level: "intermediate"
+---
+
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/autoscaling-in-response-to-heavy-resource-use/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/autoscaling-in-response-to-heavy-resource-use/_index.md
new file mode 100644
index 0000000..141199a
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/autoscaling-in-response-to-heavy-resource-use/_index.md
@@ -0,0 +1,12 @@
+---
+type: "page"
+id: "autoscaling-in-response-to-heavy-resource-use"
+description: ""
+title: "Autoscaling in Response to Heavy Resource Use "
+weight: 4
+---
+
+Pod creation and destruction can be automated by a Horizontal Pod Autoscaler (HPA), which monitors the resource use of nodes and creates pods when certain events occur, such as sustained CPU spikes or memory use surpassing a certain threshold. Combined with a Cluster Autoscaler (CA), this gives you powerful tools for configuring your cluster’s responsiveness to resource demands: the HPA keeps the number of pods in sync with resource use, and the CA keeps the cluster’s size in sync with the number of pods.
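+
+As a quick illustration, a minimal sketch of creating an HPA with `kubectl` (the Deployment name `web` is a hypothetical placeholder):
+
+```bash
+# Target 50% average CPU use, scaling between 1 and 10 replicas
+kubectl autoscale deployment web --cpu-percent=50 --min=1 --max=10
+```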
+
+For a walk-through that builds an autoscaling cluster and demonstrates the interplay between an HPA and a CA, see [Example of Kubernetes Cluster Autoscaling Working With Horizontal Pod Autoscaling](https://docs.digitalocean.com/products/kubernetes/how-to/set-up-autoscaling/).
+
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/digitalocean.svg b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/digitalocean.svg
new file mode 100644
index 0000000..80262cb
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/digitalocean.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/_index.md
new file mode 100644
index 0000000..0ab6d6e
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/_index.md
@@ -0,0 +1,25 @@
+---
+type: "page"
+id: "disabling-autoscaling"
+description: ""
+title: "Disabling Autoscaling "
+weight: 2
+---
+
+# Using the DigitalOcean Control Panel
+
+To disable autoscaling on an existing node pool, navigate to your cluster in [the Kubernetes section of the control panel](https://cloud.digitalocean.com/kubernetes/clusters), then click the Resources tab. Click the three dots to reveal the option to resize the node pool manually or enable autoscaling.
+
+![Node pool screen](doks-node-pool-screen.png)
+
+Select Resize or Autoscale, and a window opens prompting for configuration details. Select Fixed size and configure the number of nodes you want to assign to the pool.
+
+![Node pool configuration window](doks-node-pool-configuration-window.png)
+
+# Using doctl
+To disable autoscaling, run an update command that specifies the node pool and cluster:
+
+```bash
+doctl kubernetes cluster node-pool update mycluster mypool --auto-scale=false
+```
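+
+To confirm that autoscaling is off, a minimal sketch using `doctl` (cluster and pool names are placeholders):
+
+```bash
+doctl kubernetes cluster node-pool get mycluster mypool
+```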
\ No newline at end of file
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/doks-node-pool-configuration-window.png b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/doks-node-pool-configuration-window.png
new file mode 100644
index 0000000..6df15a2
Binary files /dev/null and b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/doks-node-pool-configuration-window.png differ
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/doks-node-pool-screen.png b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/doks-node-pool-screen.png
new file mode 100644
index 0000000..c3b156a
Binary files /dev/null and b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/disabling-autoscaling/doks-node-pool-screen.png differ
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/_index.md
new file mode 100644
index 0000000..f390db2
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/_index.md
@@ -0,0 +1,54 @@
+---
+type: "page"
+id: "enable-autoscaling "
+description: ""
+title: "Enable Autoscaling"
+weight: 1
+---
+
+# Using the DigitalOcean Control Panel
+
+To enable autoscaling on an existing node pool, navigate to your cluster in [the Kubernetes section of the control panel](https://cloud.digitalocean.com/kubernetes/clusters), then click the Resources tab. Click the three dots to reveal the option to resize the node pool manually or enable autoscaling.
+
+![Node pool screen](autoscaling-node-pool.png)
+
+Select Resize or Autoscale, and a window opens prompting for configuration details. After selecting Autoscale, you can set the following options for the node pool:
+
+- Minimum Nodes: Determines the smallest size the cluster is allowed to “scale down” to; must be greater than or equal to 0 and no greater than Maximum Nodes. See [Scaling to Zero](https://docs.digitalocean.com/products/kubernetes/how-to/autoscale/#scaling-to-zero) for recommendations to follow for scaling down to zero.
+- Maximum Nodes: Determines the largest size the cluster is allowed to “scale up” to. The upper limit is constrained by the Droplet limit on your account, which is 25 by default, and the number of Droplets already running, which subtracts from that limit. [You can request to have your Droplet limit increased.](https://cloud.digitalocean.com/account/profile/droplet_limit_increase)
+
+![Node pool resizing window](doks-node-pool-resizing-window.png)
+
+# Using doctl
+You can use [`doctl`](https://docs.digitalocean.com/reference/doctl/) to enable cluster autoscaling on any node pool. You need to provide three specific configuration values:
+
+- `auto-scale`: Specifies that autoscaling should be enabled
+- `min-nodes`: Determines the smallest size the cluster is allowed to “scale down” to; must be greater than or equal to 0 and no greater than max-nodes. See [Scaling to Zero](https://docs.digitalocean.com/products/kubernetes/how-to/autoscale/#scaling-to-zero) for recommendations to follow for scaling down to zero.
+- `max-nodes`: Determines the largest size the cluster is allowed to “scale up” to. The upper limit is constrained by the Droplet limit on your account, which is 25 by default, and the number of Droplets already running, which subtracts from that limit. [You can request to have your Droplet limit increased.](https://cloud.digitalocean.com/account/profile/droplet_limit_increase)
+
+You can apply autoscaling to a node pool at cluster creation time by passing a semicolon-delimited string of node pool options:
+
+```bash
+doctl kubernetes cluster create mycluster --node-pool "name=mypool;auto-scale=true;min-nodes=1;max-nodes=10"
+```
+
+You can also configure new node pools to have autoscaling enabled at creation time:
+```bash
+doctl kubernetes cluster node-pool create mycluster mypool --auto-scale --min-nodes 1 --max-nodes 10
+```
+
+If your cluster is already running, you can enable autoscaling on any existing node pool:
+
+```bash
+doctl kubernetes cluster node-pool update mycluster mypool --auto-scale --min-nodes 1 --max-nodes 10
+```
+
+# Scaling to Zero
+The Cluster Autoscaler supports scaling a node pool down to zero. This allows the autoscaler to run simulations and, where possible, completely scale down an underutilized node pool. You can enable autoscaling using the [control panel or the CLI](https://docs.digitalocean.com/products/kubernetes/how-to/autoscale/#enable-autoscaling). When planning to scale a node pool down to zero, DigitalOcean recommends following these guidelines:
+
+- Maintain at least one fixed node pool of the smallest size with one node. This allows the DOKS [managed components](https://docs.digitalocean.com/products/kubernetes/details/managed/) to always be available and also provides headroom for the cluster autoscaler to scale down larger node sizes as needed. For node pools of larger size, enable autoscaling and set the minimum number of nodes to zero.
+
+- If the unavailability of the managed components is not a consideration, then you can completely scale down all node pools of your cluster to zero nodes. To do this, set both the minimum and maximum nodes for each pool to zero.
+
+This leaves all workloads in a pending state because there are no nodes present in the cluster. The workloads don’t run until you scale a node pool up again on demand.
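+
+For instance, a minimal sketch of allowing an existing pool to scale down to zero with `doctl` (`mycluster` and `mypool` are placeholder names):
+
+```bash
+doctl kubernetes cluster node-pool update mycluster mypool --auto-scale --min-nodes 0 --max-nodes 10
+```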
\ No newline at end of file
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/autoscaling-node-pool.png b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/autoscaling-node-pool.png
new file mode 100644
index 0000000..c3b156a
Binary files /dev/null and b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/autoscaling-node-pool.png differ
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/doks-node-pool-resizing-window.png b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/doks-node-pool-resizing-window.png
new file mode 100644
index 0000000..724b8f9
Binary files /dev/null and b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/enable-autoscaling/doks-node-pool-resizing-window.png differ
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/flexible-node-pool-selection/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/flexible-node-pool-selection/_index.md
new file mode 100644
index 0000000..fb0f84d
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/flexible-node-pool-selection/_index.md
@@ -0,0 +1,123 @@
+---
+type: "page"
+id: "flexible-node-pool-selection"
+description: ""
+title: "Flexible Node Pool Selection"
+weight: 6
+---
+
+In clusters with multiple node pools, you can specify how the autoscaler chooses which pool to scale up when an additional node is required. By default, the autoscaler chooses a node pool at random, which is not always optimal. CA [expanders](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) are strategies for selecting the best node pool to scale up.
+
+You can customize the expanders in your clusters using one of the following options:
+
+- Random: Selects a node pool to scale at random. This is the default expander.
+- Priority: Selects the node pool with the highest priority according to the [customer-provided configuration](https://docs.digitalocean.com/products/kubernetes/how-to/autoscale/#configuring-priority-expander). This expander is useful when capacity constraints limit the ability to scale a specific node pool.
+- Least-waste: Selects the node pool which minimizes the amount of idle resources.
+
+# Configuring Custom Expanders
+
+You can specify expanders using [`doctl`](https://docs.digitalocean.com/reference/doctl/) version `v1.128.0` or higher with the `--expanders` flag. The flag expects a comma-separated list with the following values: `random`, `priority`, or `least-waste`.
+
+The following example uses the priority and random expanders. The autoscaler applies each expander in the list to narrow down the selection of node pools to scale up until a single best node pool remains. If applying the custom expanders still leaves multiple node pools to choose from, the autoscaler selects among the remaining pools randomly.
+
+```bash
+doctl kubernetes cluster create cluster-with-custom-expanders --region nyc1 --version latest --node-pool "name=pool1;size=s-2vcpu-2gb;count=3" --expanders priority,random
+```
+
+You can also update an existing cluster to use flexible node pool selection for autoscaling:
+
+```bash
+doctl kubernetes cluster update cluster-with-custom-expanders --expanders priority,random
+```
+
+To remove any expander customizations and reset to the default random selection, pass an empty list of expanders:
+
+```bash
+doctl kubernetes cluster update cluster-with-custom-expanders --expanders ""
+```
+
+# Configuring Priority Expander
+
+Once you [enable the priority expander](https://docs.digitalocean.com/products/kubernetes/how-to/autoscale/#configuring-custom-expanders), DOKS creates a starter ConfigMap named `cluster-autoscaler-priority-expander` in the `kube-system` namespace with the following content:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ labels:
+ c3.doks.digitalocean.com/component: cluster-autoscaler
+ c3.doks.digitalocean.com/plane: data
+ doks.digitalocean.com/managed: "true"
+ name: cluster-autoscaler-priority-expander
+ namespace: kube-system
+data:
+ priorities: |2
+
+ 1:
+ - .*
+```
+
+You need to provide a priority list of node pools in this ConfigMap. To do this, update the `.data.priorities` field to reflect the priorities of the node pools in your cluster. The priorities configuration is a YAML object with keys and values:
+
+- Keys are the integer priority numbers.
+
+- Values are the lists of node pools assigned this priority. You can provide the node pools using their IDs and can also specify regular expressions.
+
+For example, the configuration below selects pool ID `11aa5b5c-817e-4213-a303-b65b4d47ad84` as the best option, pool ID `72da2c27-14a3-434e-9db1-2d405cbc24d5` as the next best option, and uses the regex `.*`, which matches any string, to make all remaining pools the lowest-priority fallback option:
+
+```yaml
+100:
+ - 11aa5b5c-817e-4213-a303-b65b4d47ad84
+90:
+ - 72da2c27-14a3-434e-9db1-2d405cbc24d5
+1:
+ - .*
+```
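+
+One way to apply the change is to edit the ConfigMap in place; a minimal sketch with `kubectl`:
+
+```bash
+kubectl edit configmap cluster-autoscaler-priority-expander -n kube-system
+```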
+
+To find the IDs of your node pools, use `doctl`:
+
+```bash
+doctl kubernetes clusters node-pool list cluster-with-custom-expanders
+```
+
+The command returns the following output:
+
+```bash
+ID Name Size Count Tags Labels Taints Nodes
+11aa5b5c-817e-4213-a303-b65b4d47ad84 s-2vcpu-2gb-amd s-2vcpu-2gb-amd 1 k8s,k8s:08011cad-c5c1-430e-8082-5392b02149a4,k8s:worker map[] [] [s-2vcpu-2gb-amd-f9un]
+72da2c27-14a3-434e-9db1-2d405cbc24d5 s-2vcpu-2gb s-2vcpu-2gb 0 k8s,k8s:08011cad-c5c1-430e-8082-5392b02149a4,k8s:worker map[] [] []
+```
+
+# Priority Expander Example
+
+One of the biggest use cases for priority expansion is preparing a cluster for possible capacity constraints, which is especially relevant for very large clusters (100 nodes and more) with large nodes. Suppose your preferred node type is `c-8`, a CPU-optimized Droplet with 8 vCPUs. You can find similar Droplet sizes in the output of `doctl compute size list` and create additional, fallback node pools. Suitable alternatives for `c-8` might be, for example, `s-8vcpu-16gb-amd` and `s-8vcpu-16gb-intel`.
+
+**Note**: Choose fallback node pools with Droplets that are not on the same fleet as your preferred size. For example, `c-16` is not a good fallback for `c-8`: both `c-8` and `c-16` Droplets belong to the same fleet, which means they reside on the same hypervisors and the available amounts of `c-8` and `c-16` Droplets change in tandem.
+
+You can create a cluster with three node pools: one preferred size and two fallback sizes. You can scale the fallback pools to zero nodes until they are needed.
+
+```bash
+doctl kubernetes clusters create cluster-with-priority-expander --version latest --node-pool "name=primary;size=c-8;auto-scale=true;count=3;min-nodes=1;max-nodes=10;" --node-pool "name=fallback1;size=s-8vcpu-16gb-amd;auto-scale=true;count=0;min-nodes=0;max-nodes=10;" --node-pool "name=fallback2;size=s-8vcpu-16gb-intel;auto-scale=true;count=0;min-nodes=0;max-nodes=10" --region nyc1 --expanders priority
+```
+
+Next, to see the list of node pools, use the following command:
+
+```bash
+doctl kubernetes clusters node-pool list cluster-with-priority-expander
+```
+
+The output looks similar to the following:
+
+```bash
+ID Name Size Count Tags Labels Taints Nodes
+5421e5fb-7fb1-4893-b65f-1887ab6c3ea6 primary c-8 3 k8s,k8s:2faf374d-5040-4c05-a285-f18d92a4e90c,k8s:worker map[] [] [primary-t0p2t primary-t0p2l primary-t0p22]
+0255d0cc-a010-4eef-a3bc-38dc784b5888 fallback1 s-8vcpu-16gb-amd 0 k8s,k8s:2faf374d-5040-4c05-a285-f18d92a4e90c,k8s:worker map[] [] []
+635eb7c0-c3db-4d22-b883-b771f07c239b fallback2 s-8vcpu-16gb-intel 0 k8s,k8s:2faf374d-5040-4c05-a285-f18d92a4e90c,k8s:worker map[] [] []
+```
+
+Next, edit the `cluster-autoscaler-priority-expander` ConfigMap so that the priority list for this cluster looks similar to the following:
+
+```yaml
+100:
+ - 5421e5fb-7fb1-4893-b65f-1887ab6c3ea6
+50:
+ - 0255d0cc-a010-4eef-a3bc-38dc784b5888
+ - 635eb7c0-c3db-4d22-b883-b771f07c239b
+```
+
+Upon a scale-up event, the CA first attempts to create a node in the primary pool. If it encounters an error, such as an insufficient-capacity error, it moves on to the next-priority node pools, `fallback1` and `fallback2`, choosing randomly between the two.
+
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/pod-disruption-budget-support/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/pod-disruption-budget-support/_index.md
new file mode 100644
index 0000000..59860d4
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/pod-disruption-budget-support/_index.md
@@ -0,0 +1,13 @@
+---
+type: "page"
+id: "pod-disruption-budget-support"
+description: ""
+title: "PodDisruptionBudget Support "
+weight: 5
+---
+
+A `PodDisruptionBudget` (PDB) specifies the minimum number of replicas that an application can tolerate having during a voluntary disruption, relative to how many it is intended to have. For example, if you set the `replicas` value for a pod to `5`, and set the PDB to `1`, potentially disruptive actions like cluster upgrades and resizes occur with no fewer than four pods running.
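+
+As an illustration, a minimal sketch of a PDB matching that example (the name `web-pdb` and label `app: web` are hypothetical placeholders):
+
+```yaml
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: web-pdb
+spec:
+  maxUnavailable: 1   # at most one of the five replicas may be disrupted voluntarily
+  selector:
+    matchLabels:
+      app: web
+```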
+
+When scaling down a cluster, the DOKS autoscaler respects this setting, and follows [the documented Kubernetes procedure for draining and deleting nodes when a PDB has been specified](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#does-ca-work-with-poddisruptionbudget-in-scale-down).
+
+We recommend you set a PDB for your workloads to ensure graceful scale down. For more information, see [Specifying a Disruption Budget for your Application](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) in the Kubernetes documentation.
diff --git a/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/viewing-cluster-autoscaler-status/_index.md b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/viewing-cluster-autoscaler-status/_index.md
new file mode 100644
index 0000000..a0556cd
--- /dev/null
+++ b/content/learning-paths/3e2f9c82-1a4c-4781-adf9-99ec22cd994e/digital-ocean-kubernetes/scale-automatically/viewing-cluster-autoscaler-status/_index.md
@@ -0,0 +1,38 @@
+---
+type: "page"
+id: "viewing-cluster-autoscaler-status"
+description: ""
+title: "Viewing Cluster Autoscaler Status "
+weight: 3
+---
+
+You can check the status of the Cluster Autoscaler to view recent events or for debugging purposes.
+
+Check the `cluster-autoscaler-status` ConfigMap in the `kube-system` namespace by running the following command:
+```bash
+kubectl get configmap cluster-autoscaler-status -o yaml -n kube-system
+```
+
+The command returns results such as this:
+```yaml
+apiVersion: v1
+data:
+ status: |+
+ Cluster-autoscaler status at 2021-01-27 21:57:30.462764772 +0000 UTC:
+ Cluster-wide:
+ Health: Healthy (ready=5 unready=0 notStarted=0 longNotStarted=0 registered=5 longUnregistered=0)
+ LastProbeTime: 2021-01-27 21:57:30.27867919 +0000 UTC m=+499650.735961122
+ LastTransitionTime: 2021-01-22 03:11:00.371995979 +0000 UTC m=+60.829277965
+ ScaleUp: NoActivity (ready=5 registered=5)
+ LastProbeTime: 2021-01-27 21:57:30.27867919 +0000 UTC m=+499650.735961122
+ LastTransitionTime: 2021-01-22 19:09:20.360421664 +0000 UTC m=+57560.817703589
+ ScaleDown: NoCandidates (candidates=0)
+ LastProbeTime: 2021-01-27 21:57:30.27867919 +0000 UTC m=+499650.735961122
+ LastTransitionTime: 2021-01-22 19:09:20.360421664 +0000 UTC m=+57560.817703589
+...
+```
+
+To learn more about what is published in the ConfigMap, see [What events are emitted by CA?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-events-are-emitted-by-ca).
+
+If an error occurs, you can troubleshoot by running `kubectl get events -A` or `kubectl describe` on the relevant Kubernetes resources to check for events.
+