
Commit a4a1074
Merge pull request #6 from rojanjose/master
Lab 2 updates
2 parents dca50d3 + 1e82da8

1 file changed: workshop/Lab2/README.md (82 additions, 29 deletions)
# Lab 2: File storage with Kubernetes

This lab demonstrates the use of cloud-based file storage with Kubernetes. It uses IBM Cloud File Storage, which is persistent, fast, and flexible network-attached, NFS-based file storage with capacity ranging from 25 GB to 12,000 GB and up to 48,000 IOPS. IBM Cloud File Storage makes data available across all worker nodes within a single availability zone.

The following topics are covered in this exercise:
- Claim a classic file storage volume.
- Use the `Guestbook` application to view the images.
- Claim back the storage resources and clean up.

## Prereqs

Follow the [prereqs](../Lab0/README.md) if you haven't already.

## Claim file storage volume

Review the [storage classes](https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_storageclass_reference) for file storage. In addition to the standard set of storage classes, [custom storage classes](https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_custom_storageclass) can be defined to meet the storage requirements.
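As an illustration, here is a minimal sketch of what a custom file storage class could look like. The `provisioner` matches the classes listed below; the parameter names (`type`, `iopsPerGB`, `sizeRange`, `billingType`) follow the custom storage class documentation linked above, and the class name and values are placeholders rather than anything used in this lab.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-file-custom-example   # hypothetical name
provisioner: ibm.io/ibmc-file      # same provisioner as the classes listed below
parameters:
  type: "Endurance"                # Endurance: IOPS scale with the volume size
  iopsPerGB: "4"                   # placeholder tier (matches silver)
  sizeRange: "[20-12000]Gi"        # allowed claim sizes
  billingType: "hourly"
reclaimPolicy: Delete
```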
```bash
kubectl get storageclasses
```

Expected output:
```
$ kubectl get storageclasses

default                      ibm.io/ibmc-file   Delete   Immediate   false   27m
ibmc-file-bronze             ibm.io/ibmc-file   Delete   Immediate   false   27m
...
ibmc-file-silver             ibm.io/ibmc-file   Delete   Immediate   false   27m
ibmc-file-silver-gid         ibm.io/ibmc-file   Delete   Immediate   false   27m
```
IKS comes with storage class definitions for file storage. This lab uses the storage class `ibmc-file-silver`. Note that the default class, `ibmc-file-gold`, is allocated if a storage class is not explicitly defined.

```
kubectl describe storageclass ibmc-file-silver
```
Expected output:
```
$ kubectl describe storageclass ibmc-file-silver
...
```

File silver has 4 IOPS per GB and a max capacity of 12 TB.
## Claim a file storage volume

IBM Cloud File Storage provides fast access to your data for a cluster running in a single availability zone. For higher availability, use a storage option that is designed for [geographically distributed data](https://cloud.ibm.com/docs/containers?topic=containers-storage_planning#persistent_storage_overview).

Review the yaml for the file storage `PersistentVolumeClaim`. When you create this `PersistentVolumeClaim`, the volume is automatically created within an availability zone where the worker nodes are located.

```
cd guestbook-config/storage/lab2
...
metadata:
  name: guestbook-pvc
  labels:
    billingType: hourly
spec:
  accessModes:
    - ReadWriteMany
  ...
```
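Putting the fragment together, a complete claim might look like the sketch below. The 20Gi size and the `ibmc-file-silver` class are taken from the command outputs that follow; the name comes from the create output below (the fragment above shows `guestbook-pvc`, while the created claim is reported as `guestbook-filesilver-pvc`), and the actual `pvc-file-silver.yaml` in the repo may differ in detail.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: guestbook-filesilver-pvc   # name as reported by the create output below
  labels:
    billingType: hourly            # hourly billing for the file share
spec:
  accessModes:
    - ReadWriteMany                # RWX: every guestbook pod can read and write
  resources:
    requests:
      storage: 20Gi                # capacity shown in the get pvc output below
  storageClassName: ibmc-file-silver
```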
Create the PVC:
```
$ kubectl create -f pvc-file-silver.yaml
persistentvolumeclaim/guestbook-filesilver-pvc created
```

Verify the PVC claim is created with the status `Bound`. This may take a minute or two.
```bash
kubectl get pvc guestbook-filesilver-pvc
```
Expected output:
```
$ kubectl get pvc guestbook-filesilver-pvc
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
guestbook-filesilver-pvc   Bound    pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9   20Gi       RWX            ibmc-file-silver   2m
```

View the details of the associated `pv`. Use the `pv` name from the previous command output.
```bash
kubectl get pv [pv name]
```
Expected output:
```
$ kubectl get pv pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS       REASON   AGE
pvc-a7cb12ed-b52b-4342-966a-eceaf24e42a9   20Gi       RWX            Delete           Bound    default/guestbook-filesilver-pvc   ibmc-file-silver            90s
```
Change to the guestbook application source directory and review the html files `images.html` and `index.html`.
```
cd $HOME/guestbook-nodejs/src
cat client/images.html
cat client/index.html
```

Run the commands listed below to build the guestbook image and copy it into the Docker Hub registry:
```
...
kubectl create -f guestbook-service.yaml
```
Verify the Guestbook application is running.
```
kubectl get all
```
Expected output:
```
$ kubectl get all
NAME                                READY   STATUS    RESTARTS   AGE
pod/guestbook-v1-5bd76b568f-cdhr5   1/1     Running   0          13s
...
```
```
...
NAME                            READY   STATUS    RESTARTS   AGE
guestbook-v1-5bd76b568f-cdhr5   1/1     Running   0          78s
guestbook-v1-5bd76b568f-w6h6h   1/1     Running   0          78s
```

Set these variables for each of your pod names:
```
export POD1=[FIRST POD NAME]
export POD2=[SECOND POD NAME]
```
Log into one of the pods, using the `$POD1` variable you just set:

```bash
kubectl exec -it $POD1 -- bash
```

Run the commands `ls -al; ls -al images; df -ah` to view the volume and files. Review the mount for the new volume. Note that the images folder is empty.

```bash
$ kubectl exec -it $POD1 -- bash
root@guestbook-v1-5bd76b568f-cdhr5:/home/node/app# ls -alt
total 252
drwxr-xr-x 1 root root 4096 Nov 13 03:17 client
...
fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01   20G     0   20G   0% /home/node/app/client/images
tmpfs                                                               7.9G    0  7.9G   0% /proc/acpi
tmpfs                                                               7.9G    0  7.9G   0% /proc/scsi
tmpfs                                                               7.9G    0  7.9G   0% /sys/firmware

root@guestbook-v1-5bd76b568f-cdhr5:/home/node/app# exit
```

Note the filesystem `fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01` is mounted on path `/home/node/app/client/images`.
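That mount comes from the deployment spec, which references the PVC as a volume. A minimal sketch of the relevant manifest is below; the volume name, labels, and image are hypothetical, and only the `claimName` and `mountPath` are taken from this lab.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: guestbook          # hypothetical label
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
        - name: guestbook
          image: guestbook-nodejs   # placeholder; use your Docker Hub image
          volumeMounts:
            - name: guestbook-images                    # hypothetical volume name
              mountPath: /home/node/app/client/images   # path seen in the df output
      volumes:
        - name: guestbook-images
          persistentVolumeClaim:
            claimName: guestbook-filesilver-pvc   # the PVC created earlier
```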
Find the URL for the guestbook application by joining the worker node external IP and the service node port.

```
HOSTNAME=`kubectl get nodes -ojsonpath='{.items[0].metadata.labels.ibm-cloud\.kubernetes\.io\/external-ip}'`
SERVICEPORT=`kubectl get svc guestbook -o=jsonpath='{.spec.ports[0].nodePort}'`
echo "http://$HOSTNAME:$SERVICEPORT"
```
Verify that the images are missing by viewing the data from the Guestbook application.

Run the `kubectl cp` command to move the images into the mounted volume.
```bash
cd $HOME/guestbook-config/storage/lab2
kubectl cp images $POD1:/home/node/app/client/
```

Refresh the `images.html` page in the guestbook application to view the uploaded images.
## Shared storage across pods

Log into the other pod, `$POD2`, to verify the volume mount.

```
kubectl exec -it $POD2 -- bash
root@guestbook-v1-5bd76b568f-w6h6h:/home/node/app# ls -alt /home/node/app/client/images
total 160
-rw-r--r-- 1 501 staff 56191 Nov 13 03:44 gb3.jpg
...
fsf-dal1003d-fz.adn.networklayer.com:/IBM02SEV2058850_2177/data01   20G  128K   20G   1% /home/node/app/client/images
tmpfs                                                               7.9G    0  7.9G   0% /proc/acpi
tmpfs                                                               7.9G    0  7.9G   0% /proc/scsi
tmpfs                                                               7.9G    0  7.9G   0% /sys/firmware

root@guestbook-v1-5bd76b568f-w6h6h:/home/node/app# exit
```

Note that the volume and the data are available on all the pods running the Guestbook application.
IBM Cloud File Storage is an NFS-based file storage that is available across all worker nodes within a single availability zone. If you are running a cluster with multiple nodes (within a single AZ), you can run the following commands to prove that your data is available across different nodes:

```
kubectl get pods -o wide
kubectl get nodes
```
Expected output:
```
$ kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE            NOMINATED NODE   READINESS GATES
guestbook-v1-6fb8b86876-n9jtz   1/1     Running   0          39h   172.30.224.70    10.38.216.205   <none>           <none>
guestbook-v1-6fb8b86876-njwcz   1/1     Running   0          39h   172.30.169.144   10.38.216.238   <none>           <none>

$ kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
10.38.216.205   Ready    <none>   4d5h   v1.18.10+IKS
10.38.216.238   Ready    <none>   4d5h   v1.18.10+IKS
```
To extend our table from Lab 1, we now have:

| Storage Type | Persisted at which level | Example Uses |
| - | - | - |
| Container local storage | Container | Ephemeral state |
| Secondary Storage ([EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)) | Pod | Checkpoint a long computation process |
| Primary Storage ([HostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath)) | Node | Running cAdvisor in a container |
| IBM Cloud File Storage (NFS) | Availability Zone | Applications running in a single availability zone |

Data is available to all nodes within the availability zone where the file storage exists, but the `accessMode` parameter on the `PersistentVolumeClaim` determines whether multiple pods are able to mount a volume specified by a PVC. The possible values for this parameter are (see the sketch after this list):

- **ReadWriteMany**: The PVC can be mounted by multiple pods. All pods can read from and write to the volume.
- **ReadOnlyMany**: The PVC can be mounted by multiple pods. All pods have read-only access.
- **ReadWriteOnce**: The PVC can be mounted by one pod only. This pod can read from and write to the volume.
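As a hypothetical contrast to the `ReadWriteMany` claim used in this lab, a claim for shared read-only access would differ only in its access mode. The name below is a placeholder and this claim is not part of the lab:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: guestbook-readonly-pvc   # hypothetical name, not used in this lab
spec:
  accessModes:
    - ReadOnlyMany               # many pods may mount it, none may write
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-file-silver
```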
## [Optional exercises]

As another way to see that the data is persisted at the availability zone level, you can:

- Back up data.
- Delete pods to confirm that it does not impact the data used by the application.
- Delete the Kubernetes cluster.
- Create a new cluster and reuse the volume (see the sketch after this list).
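One way to reuse an existing NFS file share from a new cluster is a statically created `PersistentVolume` that points at the retained share. A minimal sketch, assuming the share survived cluster deletion; the name is hypothetical, and the server and path values mirror the `df` output above and will differ in your account:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: guestbook-existing-file   # hypothetical name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keep the share when the claim goes away
  nfs:
    server: fsf-dal1003d-fz.adn.networklayer.com   # from the df output above
    path: "/IBM02SEV2058850_2177/data01"
```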
## Clean up