
Commit e528044 (parent 6d0c713)
lab 2 updates from review
workshop/Lab2/README.md

Lines changed: 95 additions & 25 deletions
@@ -1,6 +1,6 @@
# Lab 2: File storage with Kubernetes

-This lab demonstrates the use of cloud based file storage with Kubernetes. It uses the IBM Cloud File Storage which is persistent, fast, and flexible network-attached, NFS-based File Storage capacit ranging from 25 GB to 12,000 GB capacity with up to 48,000 IOPS.
+This lab demonstrates the use of cloud-based file storage with Kubernetes. It uses IBM Cloud File Storage, which is persistent, fast, and flexible network-attached, NFS-based file storage with capacity ranging from 25 GB to 12,000 GB and up to 48,000 IOPS. IBM Cloud File Storage makes data available across all worker nodes within a single availability zone.

The following topics are covered in this exercise:
- Claim a classic file storage volume.
@@ -9,6 +9,9 @@ Following topics are covered in this exercise:
- Use the `Guestbook` application to view the images.
- Claim back the storage resources and clean up.

+## Prereqs
+
+Follow the [prereqs](../Lab0/README.md) if you haven't already.

## Claim file storage volume

@@ -38,6 +41,10 @@ ibmc-file-silver-gid ibm.io/ibmc-file Delete Immediate

IKS comes with storage class definitions for file storage. This lab uses the storage class `ibmc-file-silver`. Note that the default class `ibmc-file-gold` is allocated if a storage class is not explicitly defined.

+```
+kubectl describe storageclass ibmc-file-silver
+```
+Expected output:
```
$ kubectl describe storageclass ibmc-file-silver
@@ -58,7 +65,25 @@ File sliver has an IOPS of 4GB and a max capacity of 12TB.

## Claim a file storage volume

-Review the yaml for the file storage `PersistentVolumeClaim`
+IBM Cloud File Storage provides fast access to your data for a cluster running in a single availability zone. For higher availability, use a storage option that is designed for [geographically distributed data](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#persistent_storage_overview).
+
+To create a `PersistentVolumeClaim`, we must first get the availability zone of our cluster. Run the following commands to find out which region and availability zone the worker nodes of your cluster are running in:
+
+```
+ibmcloud ks cluster get --cluster [CLUSTER NAME]
+ibmcloud ks workers -c [CLUSTER NAME]
+```
+
+Example output:
+```
+$ ibmcloud ks workers -c zaccone-guestbook2
+
+OK
+ID Public IP Private IP Flavor State Status Zone Version
+kube-bun9o7vw0klq4okmgou0-zacconegues-default-0000010a 169.55.112.195 10.148.14.60 b3c.4x16.encrypted normal Ready wdc04 1.18.10_1532
+```
+
+Change to the lab directory and review the yaml for the file storage `PersistentVolumeClaim`:

```
cd guestbook-config/storage/lab2
@@ -70,8 +95,6 @@ metadata:
  name: guestbook-pvc
  labels:
    billingType: hourly
-    region: us-south
-    zone: dal10
spec:
  accessModes:
  - ReadWriteMany
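For reference, a complete claim manifest consistent with the `kubectl create` and `kubectl get` output shown later in this lab would look roughly like the following. The full contents of `pvc-file-silver.yaml` are not part of this diff, and the excerpt above shows `name: guestbook-pvc` while the command output uses `guestbook-filesilver-pvc`, so treat the field values here as a sketch inferred from that output rather than the file itself:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: guestbook-filesilver-pvc   # name taken from the kubectl output later in this lab
  labels:
    billingType: hourly
spec:
  accessModes:
  - ReadWriteMany                  # shared read/write access across the guestbook pods
  resources:
    requests:
      storage: 20Gi                # capacity shown in the expected output
  storageClassName: ibmc-file-silver
```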
@@ -91,7 +114,7 @@ $ kubectl create -f pvc-file-silver.yaml
persistentvolumeclaim/guestbook-filesilver-pvc created
```

-Verify the PVC claim is created with the status `Bound`.
+Verify the PVC is created with the status `Bound`. This may take a minute or two.
```bash
kubectl get pvc guestbook-filesilver-pvc
```
@@ -103,13 +126,13 @@ NAME STATUS VOLUME C
guestbook-filesilver-pvc Bound pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9 20Gi RWX ibmc-file-silver 2m
```

-Details associated with the `pv`
+Review the details associated with the `pv`. Use the `pv` name from the previous command's output.
```bash
-kubectl get pv pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9
+kubectl get pv [pv name]
```
Expected output:
```
-$ kubectl get pv pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9
+$ kubectl get pv pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9 20Gi RWX Delete Bound default/guestbook-filesilver-pvc ibmc-file-silver 90s
```
@@ -121,7 +144,7 @@ Change to the guestbook application source directory and review the html files `
```
cd $HOME/guestbook-nodejs/src
cat client/images.html
-cat client/inex.html
+cat client/index.html
```

Run the commands listed below to build the guestbook image and push it to the Docker Hub registry:
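The build and push commands themselves sit outside this hunk. A typical sequence, assuming you are already logged in to Docker Hub and `<dockerhub-username>` is your account (both assumptions for illustration, not taken from the lab files), looks like:

```
# Build the guestbook image from the source directory entered above
docker build -t <dockerhub-username>/guestbook-nodejs:v1 .

# Push the image to the Docker Hub registry
docker push <dockerhub-username>/guestbook-nodejs:v1
```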
@@ -175,6 +198,10 @@ kubectl create -f guestbook-service.yaml
```
Verify the Guestbook application is running.
```
+kubectl get all
+```
+Expected output:
+```
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/guestbook-v1-5bd76b568f-cdhr5 1/1 Running 0 13s
@@ -199,17 +226,21 @@ NAME READY STATUS RESTARTS AGE
guestbook-v1-5bd76b568f-cdhr5 1/1 Running 0 78s
guestbook-v1-5bd76b568f-w6h6h 1/1 Running 0 78s
```
-Log into any one of the pod.
+Set these variables for each of your pod names:
+```
+export POD1=[FIRST POD NAME]
+export POD2=[SECOND POD NAME]
+```
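If you prefer not to copy the names by hand, a sketch that fills the variables automatically is shown below. It assumes the guestbook pods carry the label `app=guestbook`, which is not shown in this diff, so adjust the selector to match your deployment:

```
# Grab the first and second guestbook pod names via a label selector (assumed label)
export POD1=$(kubectl get pods -l app=guestbook -o jsonpath='{.items[0].metadata.name}')
export POD2=$(kubectl get pods -l app=guestbook -o jsonpath='{.items[1].metadata.name}')
```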
+Log in to any one of the pods. Use one of the pod names from the previous command output.

```bash
-kubectl exec -it guestbook-v1-7fc4684cdb-t8l6w bash
+kubectl exec -it $POD1 -- bash
```

-Run the commands `ls -al; ls -al images; df -ah` to view the volume and files. Review the mount for the new volume. Note that the images folder is empty.
+Run the commands `ls -al; ls -al images; df -ah` to view the volume and files. Review the mount for the new volume. Note that the images folder is empty.

-```
-$ kubectl exec -it guestbook-v1-5bd76b568f-cdhr5 bash
-kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
+```bash
+$ kubectl exec -it $POD1 -- bash
root@guestbook-v1-5bd76b568f-cdhr5:/home/node/app# ls -alt
total 252
drwxr-xr-x 1 root root 4096 Nov 13 03:17 client
@@ -245,9 +276,11 @@ fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01 20G 0
tmpfs 7.9G 0 7.9G 0% /proc/acpi
tmpfs 7.9G 0 7.9G 0% /proc/scsi
tmpfs 7.9G 0 7.9G 0% /sys/firmware
+
+root@guestbook-v1-5bd76b568f-cdhr5:/home/node/app# exit
```

-Note the Filesystem `fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01` is mounted on path `/home/node/app/client/images`.
+Note that the filesystem `fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01` is mounted at the path `/home/node/app/client/images`.

Find the URL for the guestbook application by joining the worker node external IP and the service node port.

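The exact lookup commands fall outside this hunk; a minimal sketch, assuming the service created above is named `guestbook` in the `default` namespace (an assumption, not confirmed by this diff), is:

```
# Public IPs of the worker nodes (from the IBM Cloud CLI)
ibmcloud ks workers -c [CLUSTER NAME]

# NodePort assigned to the guestbook service (shown in the PORT(S) column)
kubectl get service guestbook
```

The application URL is then `http://[worker public IP]:[node port]`.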
@@ -266,7 +299,7 @@ Verify that the images are missing by viewing the data from the Guestbook applic
Run the `kubectl cp` command to move the images into the mounted volume.
```bash
cd $HOME/guestbook-config/storage/lab2
-kubectl cp images guestbook-v1-5bd76b568f-cdhr5:/home/node/app/client/
+kubectl cp images $POD1:/home/node/app/client/
```

Refresh the `images.html` page in the guestbook application to view the uploaded images.
@@ -275,12 +308,10 @@ Refresh the page `images.html` page in the guestbook application to view the upl

## Shared storage across pods

-Login into the other pod `guestbook-v1-5bd76b568f-w6h6h` to verify the volume mount.
+Log in to the other pod, `$POD2`, to verify the volume mount.

```
-kubectl exec -it guestbook-v1-5bd76b568f-w6h6h bash
-kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
-
+kubectl exec -it $POD2 -- bash
root@guestbook-v1-5bd76b568f-w6h6h:/home/node/app# ls -alt /home/node/app/client/images
total 160
-rw-r--r-- 1 501 staff 56191 Nov 13 03:44 gb3.jpg
@@ -301,18 +332,57 @@ fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01 20G 128K
tmpfs 7.9G 0 7.9G 0% /proc/acpi
tmpfs 7.9G 0 7.9G 0% /proc/scsi
tmpfs 7.9G 0 7.9G 0% /sys/firmware
+
+root@guestbook-v1-5bd76b568f-w6h6h:/home/node/app# exit
```

Note that the volume and the data are available on all the pods running the Guestbook application.

+IBM Cloud File Storage is NFS-based file storage that is available across all worker nodes within a single availability zone. If you are running a cluster with multiple nodes (within a single AZ), you can run the following commands to prove that your data is available across different nodes:
+
+```
+kubectl get pods -o wide
+kubectl get nodes
+```
+Expected output:
+```
+$ kubectl get pods -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+guestbook-v1-6fb8b86876-n9jtz 1/1 Running 0 39h 172.30.224.70 10.38.216.205 <none> <none>
+guestbook-v1-6fb8b86876-njwcz 1/1 Running 0 39h 172.30.169.144 10.38.216.238 <none> <none>
+
+$ kubectl get nodes
+NAME STATUS ROLES AGE VERSION
+10.38.216.205 Ready <none> 4d5h v1.18.10+IKS
+10.38.216.238 Ready <none> 4d5h v1.18.10+IKS
+```
+
+To extend our table from Lab 1, we now have:
+
+| Storage Type | Persisted at which level | Example Uses |
+| - | - | - |
+| Container local storage | Container | Ephemeral state |
+| Secondary Storage ([EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)) | Pod | Checkpoint a long computation process |
+| Primary Storage ([HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) | Node | Running cAdvisor in a container |
+| IBM Cloud File Storage (NFS) | Availability Zone | Applications running in a single availability zone |
+
+<br>
+Data is available to all nodes within the availability zone where the file storage exists, but the `accessModes` parameter on the `PersistentVolumeClaim` determines whether multiple pods are able to mount a volume specified by a PVC. The possible values for this parameter are:
+
+- **ReadWriteMany**: The PVC can be mounted by multiple pods. All pods can read from and write to the volume.
+- **ReadOnlyMany**: The PVC can be mounted by multiple pods. All pods have read-only access.
+- **ReadWriteOnce**: The PVC can be mounted by one pod only. This pod can read from and write to the volume.
+
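As a reminder of where this parameter lives, the relevant fragment of a claim manifest (a sketch mirroring the `pvc-file-silver.yaml` excerpt earlier in this lab, not a new file) is:

```
spec:
  accessModes:
  - ReadWriteMany   # lets every guestbook pod read from and write to the shared volume
```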


## [Optional exercises]

-Back up data.
-Delete pods to confirm that it does not impact the data used by the application.
-Delete the Kubernetes cluster.
-Create a new cluster and reuse the volume.
+As another way to see that the data is persisted at the availability zone level, you can:
+
+- Back up data.
+- Delete pods to confirm that this does not impact the data used by the application.
+- Delete the Kubernetes cluster.
+- Create a new cluster and reuse the volume (see the note on the reclaim policy after this list).
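Reusing the volume after its cluster is deleted generally requires that the underlying `PersistentVolume` is not deleted along with the claim. The `kubectl get pv` output earlier in this lab shows a reclaim policy of `Delete`, so one way to protect the volume before trying this exercise (a sketch, not part of the lab's files) is to switch the policy to `Retain`:

```
# Keep the backing file storage when the claim (or the cluster) is deleted.
kubectl patch pv [pv name] -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```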

## Clean up

List all the PVCs and PVs:
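The listing commands themselves fall outside this hunk; a minimal version is:

```
kubectl get pvc
kubectl get pv
```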
