This repository was archived by the owner on Dec 5, 2023. It is now read-only.

Commit baa76ef

Merge pull request #48 from microservices-demo/k8s-autoscaling-instructions
instructions for deploying the k8s horizontal autoscaler manifests
2 parents b04bd6b + 7c268b9 commit baa76ef

File tree: 1 file changed, +12 −0 lines changed


deployment/kubernetes.md

Lines changed: 12 additions & 0 deletions
@@ -130,6 +130,18 @@ There are two options for running Weave Scope, either you can run the UI locally

<!-- deploy-doc-end -->

### Service autoscaling (optional)
If you want all stateless services to scale automatically based on CPU utilization, you can deploy all the manifests in the `deploy/kubernetes/autoscaling` directory.
The autoscaling directory contains Kubernetes horizontal pod autoscalers for all the stateless services, and the Heapster monitoring application with its dependencies.
```
master_ip=$(terraform output -json | jq -r '.master_address.value')
scp -i ~/.ssh/deploy-docs-k8s.pem -o StrictHostKeyChecking=no -rp deploy/kubernetes/autoscaling ubuntu@$master_ip:/tmp/
ssh -i ~/.ssh/deploy-docs-k8s.pem ubuntu@$master_ip kubectl apply -f /tmp/autoscaling
```
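The `master_ip` extraction above assumes the JSON shape that `terraform output -json` produces for this deployment. A quick standalone check of the jq filter (the sample JSON and IP below are hypothetical stand-ins, not real Terraform output):

```shell
# Hypothetical sample of the JSON emitted by `terraform output -json`;
# in practice the real value comes from your Terraform state.
sample='{"master_address":{"sensitive":false,"type":"string","value":"203.0.113.10"}}'

# Same filter as the deploy step: -r prints the raw string value.
master_ip=$(echo "$sample" | jq -r '.master_address.value')
echo "$master_ip"   # prints 203.0.113.10
```

If the filter prints `null` instead of an address, the output name in your Terraform configuration differs from `master_address`.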
If you generate enough load on the application, you should see the various services scale up in number.
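One simple way to generate that load is a burst of concurrent requests against the front end (a sketch; `frontend_url` is a placeholder for the load balancer URL reported by `terraform output`):

```shell
# Placeholder: substitute the load balancer URL from `terraform output`.
frontend_url="http://front-end.example.com"

# Fire 200 concurrent requests at the front end; repeated bursts like
# this push CPU utilization past the autoscaler target.
for i in $(seq 1 200); do
  curl -s -o /dev/null "$frontend_url/" &
done
wait
```

You can then watch the replica counts rise with `kubectl get hpa` on the master node.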
### View the results
Run the `terraform output` command to see the load balancer and node URLs.
