Commit 76fe4c2

reviewed ingress demos

1 parent d942a00 commit 76fe4c2

File tree: 5 files changed (+101 −75 lines)

04-cloud/01-eks/06-autoscalling-our-applications/00-install-kube-ops-view.md

Lines changed: 18 additions & 3 deletions
@@ -13,7 +13,6 @@ helm install kube-ops-view \
   stable/kube-ops-view \
   --set service.type=LoadBalancer \
   --set rbac.create=True
-
 ```

 The execution above installs kube-ops-view, exposing it through a Service of type LoadBalancer. A successful run of the command displays the set of resources created and prints some advice suggesting `kubectl proxy` and a local URL for the service. Since we are using the LoadBalancer type for our service, we can disregard this; instead we will point our browser to the external load balancer.
@@ -24,6 +23,9 @@ To check the chart was installed successfully:

 ```bash
 helm list
+```
+
+```
 WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/lemoncode/.kube/config
 WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/lemoncode/.kube/config
 NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
@@ -34,9 +36,22 @@ With this we can explore kube-ops-view output by checking the details about the

 ```bash
 kubectl get svc kube-ops-view | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }'
-
 ```

 This will display a line similar to `Kube-ops-view URL = http://<URL_PREFIX_ELB>.amazonaws.com`. Opening the URL in your browser shows the current state of our cluster.

-> Reference: https://kubernetes-operational-view.readthedocs.io/en/latest/
+> Reference: https://kubernetes-operational-view.readthedocs.io/en/latest/
+
+## Alternative installation
+
+```bash
+helm repo add christianknell https://christianknell.github.io/helm-charts
+helm repo update
+```
+
+```bash
+helm install kube-ops-view \
+  christianknell/kube-ops-view \
+  --set service.type=LoadBalancer
+```
+

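The awk pipeline in the diff above extracts the fourth column of the Service listing to build the URL. A quick illustration of what it does, using a made-up sample line (the ELB hostname is hypothetical):

```bash
# Hypothetical last line of `kubectl get svc kube-ops-view` output;
# column 4 is the external LoadBalancer hostname.
sample='kube-ops-view   LoadBalancer   10.100.12.34   abc123.elb.eu-west-3.amazonaws.com   80:30080/TCP   5m'
# Same awk pipeline as in the lesson: print column 4 as a URL.
printf '%s\n' "$sample" | awk '{ print "Kube-ops-view URL = http://"$4 }'
# → Kube-ops-view URL = http://abc123.elb.eu-west-3.amazonaws.com
```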
04-cloud/01-eks/06-autoscalling-our-applications/01-scale-an-application-with-HPA.md

Lines changed: 13 additions & 5 deletions
@@ -11,15 +11,18 @@ These metrics will drive the scaling behavior of the *deployments*.
 We will deploy the metrics server using [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server).

 ```bash
-$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml
+kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml
 ```

 Let's verify the status of the metrics-server APIService (it could take a few minutes).

 ```bash
-$ kubectl get apiservice v1beta1.metrics.k8s.io -o json | jq '.status'
+kubectl get apiservice v1beta1.metrics.k8s.io -o json | jq '.status'
+```

+We expect an output similar to this:

+```json
 {
   "conditions": [
     {
@@ -35,7 +38,12 @@ $ kubectl get apiservice v1beta1.metrics.k8s.io -o json | jq '.status'

 **We are now ready to scale a deployed application**

-A new `addon` is set in our system we can check
+A new `addon` is now set in our system, which we can check by running:
+
+```bash
+kubectl get pods -n kube-system
+```
+

 ## Deploy a Sample App

@@ -67,7 +75,7 @@ kubectl autoscale deployment php-apache `#The target average CPU utilization` \
 View the HPA using kubectl. You probably will see <unknown>/50% for 1-2 minutes and then you should be able to see 0%/50%

 ```bash
-$ kubectl get hpa
+kubectl get hpa
 ```

 ## Generate load to trigger scaling
@@ -93,7 +101,7 @@ while true; do wget -q -O - http://php-apache; done
 In the previous tab, watch the HPA with the following command

 ```bash
-$ kubectl get hpa -w
+kubectl get hpa -w

 ```
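
The HPA behavior exercised in this file follows the standard Kubernetes scaling rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch of that arithmetic, with made-up values rather than output from our cluster:

```bash
# Hedged sketch of the HPA scaling formula (values below are illustrative).
hpa_desired() {
  # $1 = current replicas, $2 = current CPU %, $3 = target CPU %
  awk -v r="$1" -v c="$2" -v t="$3" \
    'BEGIN { d = r * c / t; if (d > int(d)) d = int(d) + 1; print d }'
}
hpa_desired 1 250 50   # → 5  (one pod at 250% of a 50% target scales to 5)
hpa_desired 5 20 50    # → 2  (load drops, HPA scales back down toward 2)
```

This is why the load generator below pushes the deployment well past the 50% CPU target before replicas climb.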

04-cloud/01-eks/06-autoscalling-our-applications/02-cluster-auto-scaler/00-configure-cluster-autoscaler.md

Lines changed: 12 additions & 2 deletions
@@ -16,10 +16,15 @@ Cluster Autoscaler will attempt to determine the CPU, memory, and GPU resources
 You configure the size of your Auto Scaling group by setting the minimum, maximum, and desired capacity. When we created the cluster we set these settings to 3.

 ```bash
-$ aws autoscaling \
+aws autoscaling \
   describe-auto-scaling-groups \
   --query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='lc-cluster']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]" \
   --output table
+```
+
+We get the following output:
+
+```
 -------------------------------------------------------------
 |                 DescribeAutoScalingGroups                 |
 +-------------------------------------------+----+----+-----+
@@ -46,10 +51,15 @@ aws autoscaling \

 ```bash
 # Check new values
-$ aws autoscaling \
+aws autoscaling \
   describe-auto-scaling-groups \
   --query "AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') && Value=='lc-cluster']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]" \
   --output table
+```
+
+We should see the updated values, similar to this:
+
+```
 -------------------------------------------------------------
 |                 DescribeAutoScalingGroups                 |
 +-------------------------------------------+----+----+-----+
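
Between the two checks in this file, the Auto Scaling group limits get updated. The new sizes must satisfy min ≤ desired ≤ max; a small shell guard for that invariant, with the eventual `aws autoscaling update-auto-scaling-group` call left as a comment (the group name is a placeholder):

```bash
# Hypothetical guard before resizing an ASG: valid when min <= desired <= max.
asg_sizes_valid() {
  # $1 = min, $2 = max, $3 = desired
  [ "$1" -le "$3" ] && [ "$3" -le "$2" ]
}
if asg_sizes_valid 3 5 3; then
  echo "ok: 3 <= 3 <= 5"
fi
# The actual resize would then be something like (placeholder group name):
#   aws autoscaling update-auto-scaling-group \
#     --auto-scaling-group-name <ASG_NAME> \
#     --min-size 3 --max-size 5 --desired-capacity 3
```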

04-cloud/01-eks/07-exposing-service/fruits.ingress.yml

Lines changed: 19 additions & 43 deletions
@@ -2,48 +2,24 @@ apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: example-ingress
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
 spec:
   rules:
-  - host: jaimesalas.com
-    http:
-      paths:
-      - backend:
-          service:
-            name: apple-service
-            port:
-              number: 5678
-        path: /apple
-        pathType: Prefix
-      - backend:
-          service:
-            name: banana-service
-            port:
-              number: 5678
-        path: /banana
-        pathType: Prefix
-# apiVersion: extensions/v1beta1
-# kind: Ingress
-# metadata:
-#   name: example-ingress
-#   # annotations:
-#   #   ingress.kubernetes.io/rewrite-target: /
-#   #   nginx.ingress.kubernetes.io/ssl-redirect: "false"
-#   #   nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
-#   #   nginx.ingress.kubernetes.io/rewrite-target: /
-# spec:
-#   # tls:
-#   #   - hosts:
-#   #       - anthonycornell.com
-#   #     secretName: tls-secret
-#   rules:
-#     - host: jaimesalas.com
-#       http:
-#         paths:
-#           - path: /apple
-#             backend:
-#               serviceName: apple-service
-#               servicePort: 5678
-#           - path: /banana
-#             backend:
-#               serviceName: banana-service
-#               servicePort: 5678
+  - host: "jaimesalas.com"
+    http:
+      paths:
+      - pathType: Prefix
+        path: "/apple"
+        backend:
+          service:
+            name: apple-service
+            port:
+              number: 5678
+      - pathType: Prefix
+        path: "/banana"
+        backend:
+          service:
+            name: banana-service
+            port:
+              number: 5678
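
The two `Prefix` rules in the new manifest route requests by path. A plain-shell sketch of that routing behavior, as an illustration only (this is not how the NGINX controller actually matches):

```bash
# Hypothetical stand-in for the two Prefix rules: map a request path to a backend.
route() {
  case "$1" in
    /apple|/apple/*)   echo "apple-service:5678"  ;;
    /banana|/banana/*) echo "banana-service:5678" ;;
    *)                 echo "404"                 ;;
  esac
}
route /apple/juice   # → apple-service:5678
route /banana        # → banana-service:5678
route /cherry        # → 404
```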

04-cloud/01-eks/07-exposing-service/readme.md

Lines changed: 39 additions & 22 deletions
@@ -101,45 +101,62 @@ spec:
 Create some services

 ```bash
-$ kubectl apply -f ./apple.deploy.yaml
-$ kubectl apply -f ./banana.deploy.yaml
+kubectl apply -f ./apple.deploy.yaml
+```
+
+```bash
+kubectl apply -f ./banana.deploy.yaml
 ```

 ## Step 2. Defining the Ingress resource to route traffic to the services created above

 Now declare an Ingress to route requests to `/apple` to the first service, and requests to `/banana` to the second service. Create `fruits.ingress.yml`

 ```yml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: example-ingress
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
 spec:
   rules:
-  - host: jaimesalas.com
-    http:
-      paths:
-      - path: /apple
-        backend:
-          serviceName: apple-service
-          servicePort: 5678
-      - path: /banana
-        backend:
-          serviceName: banana-service
-          servicePort: 5678
-
+  - host: "jaimesalas.com"
+    http:
+      paths:
+      - pathType: Prefix
+        path: "/apple"
+        backend:
+          service:
+            name: apple-service
+            port:
+              number: 5678
+      - pathType: Prefix
+        path: "/banana"
+        backend:
+          service:
+            name: banana-service
+            port:
+              number: 5678
 ```

+Note that we're using an annotation for the `ingress.class`: since nginx is not set as the default ingress class, we have to select it explicitly with the annotation. You can check this [link](https://stackoverflow.com/questions/65289827/nginx-ingress-controller-not-working-on-amazon-eks) on Stack Overflow.
+
 And apply to our cluster

 ```bash
-$ kubectl apply -f fruits.ingress.yml
+kubectl apply -f fruits.ingress.yml
 ```

 Now, to check that our ingress is working, we need the `dns` of the NLB that we have created; the easiest way to get it is to run:

 ```bash
-$ kubectl get ingress
+kubectl get ingress
+```
+
+We get something similar to this:
+
+```
 NAME              CLASS    HOSTS            ADDRESS                                                                         PORTS   AGE
 example-ingress   <none>   jaimesalas.com   a2e47070555144b06a0cd99a242d6753-ef17945a5e983c95.elb.eu-west-3.amazonaws.com   80      39m
 ```
@@ -148,7 +165,7 @@ The above address is the `NLB` resource that's forwarding traffic to the ingress
 **-I** the response only contains the headers

 ```bash
-$ curl -I http://a2e47070555144b06a0cd99a242d6753-ef17945a5e983c95.elb.eu-west-3.amazonaws.com
+curl -I http://a2e47070555144b06a0cd99a242d6753-ef17945a5e983c95.elb.eu-west-3.amazonaws.com
 ```

 We get the following response
@@ -167,7 +184,7 @@ Connection: keep-alive
 Because the **host** field is configured for the Ingress object, you must supply the **Host** header of the request with the same `hostname`

 ```bash
-$ curl -I -H "Host: jaimesalas.com" http://a2e47070555144b06a0cd99a242d6753-ef17945a5e983c95.elb.eu-west-3.amazonaws.com/apple/
+curl -I -H "Host: jaimesalas.com" http://a2e47070555144b06a0cd99a242d6753-ef17945a5e983c95.elb.eu-west-3.amazonaws.com/apple/
 ```

 And now we get the following result
@@ -188,14 +205,14 @@ X-App-Version: 0.2.3
 Delete the Ingress resource

 ```bash
-$ kubectl delete -f fruits.ingress.yml
+kubectl delete -f fruits.ingress.yml
 ```

 Delete services

 ```bash
-$ kubectl delete -f ./apple.deploy.yaml
-$ kubectl delete -f ./banana.deploy.yaml
+kubectl delete -f ./apple.deploy.yaml
+kubectl delete -f ./banana.deploy.yaml
 ```

 Delete NGINX Ingress Controller and NLB
