# Connecting to Cloud SQL from an application in Kubernetes Engine

## Table of Contents

<!-- TOC -->
* [Introduction](#introduction)
  * [Unprivileged service accounts](#unprivileged-service-accounts)
  * [Privileged service accounts in containers](#privileged-service-accounts-in-containers)
  * [Cloud SQL Proxy](#cloud-sql-proxy)
* [Architecture](#architecture)
* [Initial Setup](#initial-setup)
  * [Configure gcloud](#configure-gcloud)
  * [Supported Operating Systems](#supported-operating-systems)
  * [Tools](#tools)
    * [Install Cloud SDK](#install-cloud-sdk)
    * [Install kubectl CLI](#install-kubectl-cli)
  * [Authenticate gcloud](#authenticate-gcloud)
* [Deployment](#deployment)
* [Validation](#validation)
* [Teardown](#teardown)
* [Troubleshooting](#troubleshooting)
  * [Issue](#issue)
  * [Resolution](#resolution)
<!-- TOC -->

## Introduction

This demo shows how easy it is to connect an application in Kubernetes Engine to
a Cloud SQL instance using the Cloud SQL Proxy container as a sidecar container.
You will deploy a [Kubernetes Engine](https://cloud.google.com/kubernetes-engine/)
cluster and a [Cloud SQL](https://cloud.google.com/sql/docs/) Postgres instance
and use the [Cloud SQL Proxy container](https://gcr.io/cloudsql-docker/gce-proxy:1.11)
to allow communication between them.

While this demo focuses on connecting to a Cloud SQL instance with a Cloud SQL
Proxy container, the concepts are the same for any GCP managed service that
requires API access.

The key takeaways for this demo are:

* How to protect your database from unauthorized access by using an
  unprivileged service account on your Kubernetes Engine nodes
* How to put privileged service account credentials into a container running on
  Kubernetes Engine
* How to use the Cloud SQL Proxy to offload the work of connecting to your
  Cloud SQL instance and reduce your application's knowledge of your infrastructure

### Unprivileged service accounts

By default, all Kubernetes Engine nodes are assigned the default Compute Engine
service account. This service account is fairly highly privileged and has access
to many GCP services. Because of the way the Google Cloud SDK is set up, software
that you write will use the credentials assigned to the Compute Engine instance
on which it is running. Since we don't want all of our containers to have the
privileges of the default Compute Engine service account, we need to create a
least-privilege service account for our Kubernetes Engine nodes and then
create more specific (but still least-privilege) service accounts for our
containers.

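For illustration only, the following sketch shows the shape of the commands for
creating a least-privilege node service account and attaching it to a new
cluster. The account name, role, and cluster name are placeholders; the demo's
own scripts handle the real setup for you.

```console
# Hypothetical names; the demo's scripts perform the equivalent steps.
gcloud iam service-accounts create gke-node-minimal \
    --display-name "Least-privilege GKE node service account"

# Grant only the narrow roles that nodes actually need, e.g. writing logs.
gcloud projects add-iam-policy-binding <your_project> \
    --member "serviceAccount:gke-node-minimal@<your_project>.iam.gserviceaccount.com" \
    --role roles/logging.logWriter

# Create the cluster with that service account instead of the Compute Engine default.
gcloud container clusters create my-cluster \
    --service-account "gke-node-minimal@<your_project>.iam.gserviceaccount.com"
```
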
### Privileged service accounts in containers

There are only two ways to get service account credentials: (1) through your host
instance, which, as discussed above, we don't want, or (2) through a credentials
file. This demo will show you how to get such a credentials file into your
container running in Kubernetes Engine so your application has the privileges
it needs.

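As a rough sketch (not the demo's exact commands), creating a key for a
privileged service account and handing it to Kubernetes as a secret could look
like this; the account and secret names are placeholders:

```console
# Create a JSON key for the privileged service account used by the proxy.
gcloud iam service-accounts keys create credentials.json \
    --iam-account "sql-client@<your_project>.iam.gserviceaccount.com"

# Store the key in the cluster as a secret so it can be mounted into the pod.
kubectl create secret generic cloudsql-instance-credentials \
    --from-file=credentials.json=credentials.json
```

In this demo, service_account.sh and configs_and_secrets.sh perform these steps
with the real names.
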
### Cloud SQL Proxy

The Cloud SQL Proxy allows you to offload the burden of creating and
maintaining a connection to your Cloud SQL instance to the Cloud SQL Proxy
process. Doing this allows your application to be unaware of the connection
details and simplifies your secret management. The Cloud SQL Proxy comes
pre-packaged by Google as a Docker container that you can run alongside your
application container in the same Kubernetes Engine pod.

## Architecture

The application and its sidecar container are deployed in a single Kubernetes
(k8s) pod running on the only node in the Kubernetes Engine cluster. The
application communicates with the Cloud SQL instance via the Cloud SQL Proxy
process listening on localhost.

The k8s manifest builds a single-replica Deployment object with two containers,
pgAdmin and Cloud SQL Proxy. There are two secrets installed into the Kubernetes
Engine cluster: the Cloud SQL instance connection information and a service
account key credentials file, both used by the Cloud SQL Proxy container's Cloud
SQL API calls.

The application doesn't have to know anything about how to connect to Cloud
SQL, nor does it have to have any exposure to its API. The Cloud SQL Proxy
process takes care of that for the application. It's important to note that the
Cloud SQL Proxy container is running as a 'sidecar' container in the pod.

![Application in Kubernetes Engine using a Cloud SQL Proxy sidecar container to communicate
with a Cloud SQL instance](docs/architecture-diagram.png)

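For illustration, a heavily trimmed sketch of such a two-container Deployment is
shown below. It is not the manifest used by this demo's scripts; the image
names, secret name, and instance connection name are placeholders.

```console
# Sketch only -- placeholders throughout, not the demo's actual manifest.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
      - name: pgadmin                # the application container
        image: dpage/pgadmin4
      - name: cloudsql-proxy         # the sidecar container
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
EOF
```

Because the proxy listens on localhost inside the pod, the application container
simply connects to 127.0.0.1:5432 as if the database were local.
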
## Initial Setup

A Google Cloud account and project are required for this demo, along with access
to an existing Google Cloud project with the Kubernetes Engine service enabled.
If you do not have a Google Cloud account, please sign up for a free trial
[here](https://cloud.google.com).

### Configure gcloud

When using Cloud Shell, execute the following command to set up the gcloud CLI.
When executing this command, please set your region and zone.

```console
gcloud init
```

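If you prefer not to run the interactive prompt, you can set the same values
directly; the project, region, and zone below are examples only:

```console
gcloud config set project <your_project>
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
```
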
### Supported Operating Systems

This project will run on macOS, Linux, or in a [Google Cloud Shell](https://cloud.google.com/shell/docs/).

### Tools

When not using Cloud Shell, the following tools are required.

1. [Google Cloud SDK (200.0.0 or later)](https://cloud.google.com/sdk/downloads)
2. [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) >= 1.8.6
3. bash or bash-compatible shell
4. A Google Cloud Platform project you have access to with the default network
   still intact
5. Create a project and set core/project with `gcloud config set project <your_project>`

#### Install Cloud SDK

The Google Cloud SDK is used to interact with your GCP resources.
[Installation instructions](https://cloud.google.com/sdk/downloads) for multiple platforms are available online.

#### Install kubectl CLI

The kubectl CLI is used to interact with both Kubernetes Engine and Kubernetes in general.
[Installation instructions](https://cloud.google.com/kubernetes-engine/docs/quickstart)
for multiple platforms are available online.

### Authenticate gcloud

Prior to running this demo, ensure you have authenticated your gcloud client by running the following command:

```console
gcloud auth application-default login
```

If you don't have a Google Cloud account you can sign up for a [free account](https://cloud.google.com/).

## Deployment

Deployment is fully automated. To deploy, run **create.sh**. The script
takes the following parameters, in order:

* A username for your Cloud SQL instance
* A username for the pgAdmin console

The script requires the following environment variables to be defined:

* USER_PASSWORD - the password used to log in to the Postgres instance
* PG_ADMIN_CONSOLE_PASSWORD - the password used to log in to the pgAdmin UI

Here is what it looks like to run **create.sh**:

```console
USER_PASSWORD=<password> PG_ADMIN_CONSOLE_PASSWORD=<password> ./create.sh <DATABASE_USER_NAME> <PGADMIN_USERNAME>
```

**create.sh** will run the following scripts:

1. enable_apis.sh - enables the Kubernetes Engine API and Cloud SQL Admin API
2. postgres_instance.sh - creates the Cloud SQL instance and an additional
   Postgres user. Note that gcloud will time out while waiting for the creation
   of a Cloud SQL instance, so the script manually polls for its completion
   instead (see the sketch after this list)
3. service_account.sh - creates the service account for the Cloud SQL Proxy
   container and creates the credentials file
4. cluster.sh - creates the Kubernetes Engine cluster
5. configs_and_secrets.sh - creates the Kubernetes Engine secrets and ConfigMap
   containing the credentials and connection string for the Cloud SQL instance
6. pgadmin_deployment.sh - creates the pgAdmin4 pod

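The polling mentioned in step 2 could look roughly like the sketch below. The
instance name and database version are placeholders, and the real logic lives
in postgres_instance.sh:

```console
# Start instance creation without waiting on gcloud's own timeout.
gcloud sql instances create <INSTANCE_NAME> --database-version=POSTGRES_9_6 --async

# Poll until the instance reports a RUNNABLE state.
until [ "$(gcloud sql instances describe <INSTANCE_NAME> --format='value(state)')" = "RUNNABLE" ]; do
  echo "Waiting for Cloud SQL instance to become RUNNABLE..."
  sleep 10
done
```
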
Once **create.sh** is complete, you need to run `make expose` to connect to
the running pgAdmin pod. `make expose` will port-forward to the running pod.
You can [connect to the port-forwarded pgAdmin in your
browser](http://127.0.0.1:8080/login). Use the `<PGADMIN_USERNAME>` in the "Email
Address" field and the `<PG_ADMIN_CONSOLE_PASSWORD>` you defined earlier to log in to the console.
From there you can click "Add New Server" and use the `<DATABASE_USER_NAME>` and
`<USER_PASSWORD>` you created earlier to connect to 127.0.0.1:5432.

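Under the hood, `make expose` is assumed to wrap a standard port-forward; a
hand-run equivalent would look something like the following, where the
Deployment name is a placeholder:

```console
# Forward local port 8080 to the pgAdmin container's port inside the pod.
kubectl port-forward deployment/pgadmin 8080:80
```
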
## Validation

Validation is fully automated. The validation script checks for the existence
of the Cloud SQL instance, the Kubernetes Engine cluster, and the running pod.
All of these resources should exist after the deployment script completes. Now
that the application is deployed, we can validate these three resources by
executing:

```console
make validate
```

The script takes the following parameters, in order:

* INSTANCE_NAME - the name of the existing Cloud SQL instance

A successful output will look like this:

```console
Cloud SQL instance exists
GKE cluster exists
pgAdmin4 Deployment object exists
```

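If you want to spot-check the same resources by hand, commands along these
lines should confirm each one exists (the instance name is a placeholder for
whatever your deployment created):

```console
gcloud sql instances describe <INSTANCE_NAME> --format='value(state)'
gcloud container clusters list
kubectl get deployments
```
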
## Teardown

Teardown is fully automated. The teardown script deletes every resource created
by the deployment script. To tear down, run:

```console
make teardown
```

This runs **teardown.sh**, which destroys all of the resources created for this demonstration.

The script takes the following parameters, in order:

* INSTANCE_NAME - the name of the existing Cloud SQL instance

**teardown.sh** will run the following scripts:

1. delete_resources.sh - deletes everything but the Cloud SQL instance
2. delete_instance.sh - deletes the Cloud SQL instance (a rough sketch of the
   equivalent manual commands follows this list)

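For reference only, manually deleting the main resources would look roughly
like this; the cluster, zone, instance, and service account names are
placeholders, and the demo's own scripts remain the supported path:

```console
# Delete the Kubernetes Engine cluster and its workloads.
gcloud container clusters delete <CLUSTER_NAME> --zone <ZONE> --quiet

# Delete the Cloud SQL instance (its name cannot be reused for up to a week).
gcloud sql instances delete <INSTANCE_NAME> --quiet

# Delete the proxy's service account.
gcloud iam service-accounts delete <SA_EMAIL> --quiet
```
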
## Troubleshooting

### Issue

When creating a Cloud SQL instance you get the error:

```console
ERROR: (gcloud.sql.instances.create) Resource in project [...]
is the subject of a conflict: The instance or operation is not
in an appropriate state to handle the request.
```

### Resolution

You cannot reuse an instance name for up to a week after you have deleted an
instance.

https://cloud.google.com/sql/docs/mysql/delete-instance

**This is not an officially supported Google product**
