You’ll build a simple Node.js + Express + MongoDB app and deploy it to:
- Local Kubernetes (Minikube) using:
  - Helm
  - Kustomize
- AWS EKS using:
  - Helm
  - Kustomize (EKS will be provisioned using cost-effective Terraform modules.)
- GitOps deployment using:
  - ArgoCD

Everything will follow DevOps best practices, using:
- Modular folder structures
- Production-ready YAML/Helm charts
- Infrastructure as Code (Terraform)
- GitOps workflows
node-k8s-app/
├── backend/
│   ├── src/
│   │   ├── controllers/
│   │   ├── models/
│   │   ├── routes/
│   │   └── index.js
│   ├── Dockerfile
│   ├── .env
│   └── package.json
├── kubernetes/
│   ├── helm/
│   └── kustomize/
├── infra/
│   └── terraform/
│       └── eks/
├── gitops/
│   └── argocd/
└── README.md
backend/package.json
{
  "name": "node-k8s-app",
  "version": "1.0.0",
  "main": "src/index.js",
  "scripts": {
    "start": "node src/index.js"
  },
  "dependencies": {
    "express": "^4.18.2",
    "mongoose": "^7.2.2",
    "dotenv": "^16.3.1"
  }
}
backend/src/index.js — the application code (kept in the project folder).
backend/.env — local environment variables (kept in the .env file).
From here, the roadmap is:
- Containerize the app with Docker
- Set up Minikube
- Deploy with Helm
- Deploy with Kustomize
- Provision AWS EKS with Terraform
- Deploy to EKS with Helm/Kustomize
- Apply GitOps using ArgoCD
Step 1: Create the Dockerfile
Go to the backend/ folder and create a file named Dockerfile.
Here’s the complete content and detailed explanation:
# Stage 1: Build stage
FROM node:18-alpine AS builder
# Install build tools
RUN apk --no-cache --update upgrade && \
apk add --no-cache python3 make g++
WORKDIR /app
# Copy package manifests (package*.json matches both package.json and package-lock.json)
COPY package*.json ./
# Install dependencies
RUN npm ci --ignore-scripts
# Copy all source files
COPY . .
# Stage 2: Production stage
FROM node:18-alpine
# Security updates
RUN apk --no-cache --update upgrade
WORKDIR /app
# Create non-root user
RUN addgroup -S appgroup && \
adduser -S appuser -G appgroup && \
chown -R appuser:appgroup /app
# Copy package manifests from the builder
COPY --from=builder --chown=appuser:appgroup /app/package*.json ./
# Install production dependencies only (--omit=dev replaces the deprecated --production flag)
RUN npm ci --omit=dev --ignore-scripts && \
    npm cache clean --force
# Copy only the application source from the builder; node_modules is NOT copied,
# otherwise the builder's dev dependencies would overwrite the production install above
COPY --from=builder --chown=appuser:appgroup /app/src ./src
# Environment
ENV NODE_ENV=production PORT=3000
# Healthcheck
HEALTHCHECK --interval=30s --timeout=5s \
CMD node -e "require('http').request({host:'localhost',port:3000},(r)=>{process.exit(r.statusCode===200?0:1)}).end()"
# Runtime
USER appuser
EXPOSE 3000
# Entry point (must match "main"/"start" in package.json)
CMD ["node", "src/index.js"]
Still in the backend/ folder, create a .dockerignore file to avoid copying unnecessary files into the container.
node_modules
.env
This:
- Keeps node_modules out of the build context (dependencies are reinstalled inside the image)
- Skips the .env file (we’ll inject env vars securely using Kubernetes later)
If you want to test the image locally (before deploying to Kubernetes):
Open a terminal in the backend/ folder.
Run:
# Build the image
docker build -t node-k8s-app .
# Run the container
docker run -p 3000:3000 --env MONGO_URI="your-local-mongo-uri" node-k8s-app
Then visit http://localhost:3000 in your browser — it should say: Node K8s App running!
Minikube is a lightweight Kubernetes implementation that runs a single-node cluster inside a virtual machine (or container) on your local machine. It lets you test and deploy Kubernetes apps locally.
System Requirements
- OS: Linux, macOS, or Windows
- At least 2 CPUs, 2 GB RAM, and 20 GB of free disk space
- Virtualization support (e.g., VT-x/AMD-V enabled in BIOS)

Install Required Tools

a. Install Docker
Docker is required to run containers, which Minikube will use.

Ubuntu/Debian:
sudo apt update
sudo apt install -y docker.io
sudo usermod -aG docker $USER
newgrp docker

Then verify:
docker --version
b. Install kubectl
kubectl is the CLI tool used to interact with Kubernetes.
Linux:
curl -LO "https://dl.k8s.io/release/$(curl -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
c. Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
Start Minikube
minikube start --driver=docker

Explanation:
- --driver=docker tells Minikube to use Docker as the VM/container runtime.
- If Docker isn’t available, check the Minikube drivers documentation for supported alternatives (e.g., VirtualBox).
Confirm It Works
kubectl get nodes

You should see one node in the Ready state, like:
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   1m    v1.29.0
Enable the Ingress Controller (Important for Web Access)
Kubernetes needs an Ingress controller to expose services to your browser.
minikube addons enable ingress

Check its status:
kubectl get pods -n ingress-nginx

Use Minikube’s Docker Daemon (Optional but Helpful)
So you don’t need to push images to Docker Hub:
eval $(minikube docker-env)
Now, when you run:
docker build -t node-k8s-app .
The image is built inside Minikube’s Docker environment, and Kubernetes can access it directly.
| Command | Purpose |
|---|---|
| minikube status | Check status of the cluster |
| minikube dashboard | Launches a visual dashboard (try it!) |
| minikube stop | Stop the cluster |
| minikube delete | Delete the cluster |
| minikube tunnel | Exposes LoadBalancer services to localhost |
Troubleshooting:
- Permission denied on Docker: make sure you ran sudo usermod -aG docker $USER followed by newgrp docker.
- Ingress pods stuck: run kubectl describe pod <pod-name> -n ingress-nginx to check for errors; a restart usually helps.
- Driver issues: run minikube start --driver=none (Linux only) or --driver=virtualbox if Docker fails.
✅ At This Point You Should Have:
- A working Minikube Kubernetes cluster
- Docker and kubectl installed
- The ability to build Docker images inside Minikube
Helm is the package manager for Kubernetes. Think of it like apt for Ubuntu or npm for Node.js—but for Kubernetes apps. It lets you template Kubernetes YAML files, reuse configurations, and manage deployments easily.
We'll create this inside the project root:
project-root/
├── backend/
├── helm/
│ └── node-app/
│ ├── templates/
│ └── values.yaml
Create the Helm Chart
In your project root:
mkdir -p helm
cd helm
helm create node-app

This creates a sample chart with default templates.
Understand the Helm Chart Layout
node-app/
├── Chart.yaml       # Info about the chart (name, version)
├── values.yaml      # Configurable values
└── templates/       # All Kubernetes YAML templates
    ├── deployment.yaml
    ├── service.yaml
    └── ...

Clean Up Unnecessary Files
Still inside the helm/ folder, you can remove or empty files like:
rm node-app/templates/tests/*
rm node-app/templates/ingress.yaml node-app/templates/hpa.yaml node-app/templates/serviceaccount.yaml
Then clean values.yaml to something simpler:
# helm/node-app/values.yaml
replicaCount: 1

image:
  repository: node-k8s-app
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

mongodb:
  uri: "mongodb://your-mongodb-url"

resources: {}
Edit the deployment.yaml Template
Modify templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: 3000
          env:
            - name: MONGO_URI
              value: {{ .Values.mongodb.uri | quote }}
Edit the service.yaml Template
Modify templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}
spec:
  selector:
    app: {{ .Chart.Name }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 3000
  type: {{ .Values.service.type }}
Install the App with Helm
If you don’t plan to use an image from a remote container registry, make sure it is built inside Minikube’s Docker:
eval $(minikube docker-env)
docker build -t node-k8s-app ./backend

Now install with Helm:
cd helm
helm install node-app ./node-app

Check status:
kubectl get all

Access the App
Expose it temporarily via port-forward:
kubectl port-forward service/node-app 3000:3000
Why Helm?
- Reuse templates with variables (values.yaml)
- Package your entire app for sharing or CI/CD
- Great for complex or production-ready deployments
Kustomize lets you customize raw Kubernetes YAML files using overlays and patches — no templating language or values files like Helm. You work with plain YAML and layer environments (dev, prod, etc.) cleanly.
We’ll set it up like this:
kustomize/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    └── dev/
        ├── kustomization.yaml
        └── patch-env.yaml
Create the Directory Structure
mkdir -p kustomize/base
mkdir -p kustomize/overlays/dev
Write the Base Deployment
kustomize/base/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-k8s-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-k8s-app
  template:
    metadata:
      labels:
        app: node-k8s-app
    spec:
      containers:
        - name: node-k8s-app
          image: node-k8s-app:latest
          ports:
            - containerPort: 3000
          env:
            - name: MONGO_URI
              value: "mongodb://your-mongodb-url"
Write the Base Service
kustomize/base/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: node-k8s-app
spec:
  selector:
    app: node-k8s-app
  ports:
    - port: 3000
      targetPort: 3000
  type: ClusterIP
Create kustomization.yaml for the Base
kustomize/base/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
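The directory structure above also lists two files under overlays/dev/ that aren’t shown in this step. Here’s a minimal sketch of what they could contain; the patched MONGO_URI value is a placeholder assumption for the dev environment:

```yaml
# kustomize/overlays/dev/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # pull in the base deployment and service
patches:
  - path: patch-env.yaml
```

```yaml
# kustomize/overlays/dev/patch-env.yaml (sketch)
# Strategic-merge patch: matched to the base Deployment by name,
# and to the container by its name, so only MONGO_URI is overridden.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-k8s-app
spec:
  template:
    spec:
      containers:
        - name: node-k8s-app
          env:
            - name: MONGO_URI
              value: "mongodb://your-dev-mongodb-url"
```

With these in place, Kustomize renders the base manifests and layers the dev patch on top, which is what the apply step below consumes.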
Build the Docker Image
If you’re not pulling the image from a remote container registry, make sure you’re using Minikube’s Docker environment:
eval $(minikube docker-env)
docker build -t node-k8s-app ./backend
Apply with Kustomize
From the project root, apply the dev overlay:
kubectl apply -k kustomize/overlays/dev