Deploying a k3s Cluster with Nginx Load Balancer using Terraform Blueprint Approach

Md. Ashraf Bhuiya
Oct 26, 2024

In this blog, we’ll go through the steps to:

  1. Create a private k3s cluster in AWS.
  2. Set up an Nginx load balancer in a public subnet that routes traffic to the React and Flask applications.
  3. Dockerize and deploy both frontend and backend applications on the k3s cluster.

Prerequisites

  • AWS and Docker Hub accounts.
  • Installed tools: Terraform, AWS CLI, Docker, and SSH.

Infrastructure Overview

The architecture will have:

  • A k3s master node and two worker nodes in a private subnet.
  • An Nginx instance in a public subnet, configured as a load balancer.

Configure AWS CLI

Before starting with Terraform, ensure your AWS CLI is configured. This allows Terraform to interact with your AWS account.

aws configure

Provide your AWS Access Key ID, Secret Access Key, default region (e.g., ap-southeast-1), and output format (e.g., json).

Step 1: Setting up Terraform Configuration

Let’s start by defining the AWS provider, VPC, and other network components in main.tf:

# Provider configuration for AWS
provider "aws" {
  region = var.aws_region
}

# Generate a new RSA key pair for SSH access to the k3s cluster
resource "tls_private_key" "k3s_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Create an AWS key pair for SSH access
resource "aws_key_pair" "k3s_key_pair" {
  key_name   = "k3s-key-pair"
  public_key = tls_private_key.k3s_key.public_key_openssh
}

# Store the private key locally
resource "local_file" "private_key" {
  content         = tls_private_key.k3s_key.private_key_pem
  filename        = "${path.module}/k3s-key-pair.pem"
  file_permission = "0600"
}

# Create a VPC for the k3s environment
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  tags                 = { Name = "k3s-vpc" }
}

# Define a public subnet for the Nginx load balancer
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidr
  map_public_ip_on_launch = true
  tags                    = { Name = "k3s-public-subnet" }
}

# Define a private subnet for the k3s cluster
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.private_subnet_cidr
  tags       = { Name = "k3s-private-subnet" }
}

# Configure an internet gateway for public internet access
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "k3s-igw" }
}

# Public route table to direct traffic to the internet gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
  tags = { Name = "k3s-public-rt" }
}

# Define a route table for the private subnet to use a NAT gateway
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }
  tags = { Name = "k3s-private-rt" }
}

# Associate route tables with respective subnets
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}

# Create an Elastic IP for the NAT gateway
resource "aws_eip" "nat" {
  domain = "vpc"
}

# NAT Gateway to enable internet access for the private subnet
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
  tags          = { Name = "k3s-nat-gw" }
}

# Define security group for k3s cluster
resource "aws_security_group" "k3s_cluster" {
  name        = "k3s-cluster-sg"
  description = "Security group for k3s cluster"
  vpc_id      = aws_vpc.main.id
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = [aws_vpc.main.cidr_block]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = { Name = "k3s-cluster-sg" }
}

# Security group for the Nginx load balancer
resource "aws_security_group" "nginx" {
  name        = "nginx-sg"
  description = "Security group for NGINX load balancer and SSH"
  vpc_id      = aws_vpc.main.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.main.cidr_block]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = { Name = "nginx-sg" }
}

# Generate a random token for k3s nodes to join the cluster
resource "random_password" "k3s_token" {
  length  = 16
  special = false
}

variables.tf

Defines key configurations for the nodes, such as the AWS region, VPC CIDR, subnets, instance types, and AMIs.

variable "aws_region" { default = "ap-southeast-1" }
variable "vpc_cidr" { default = "10.0.0.0/16" }
variable "public_subnet_cidr" { default = "10.0.1.0/24" }
variable "private_subnet_cidr" { default = "10.0.2.0/24" }
variable "instance_type" { default = "t3.small" }
variable "ubuntu_ami" { default = "ami-047126e50991d067b" }
variable "key_name" { default = "my-keypair" }

ec2-instances.tf

This file provisions:

  1. The k3s master node and two worker nodes in a private subnet, each with scripts to install k3s.
  2. An Nginx load balancer instance in the public subnet.

Here’s the Terraform code for the master node setup:

resource "aws_instance" "master" {
ami = var.ubuntu_ami
instance_type = var.instance_type
subnet_id = aws_subnet.private.id
key_name = var.key_name
user_data = <<-EOF
#!/bin/bash
apt-get update
apt-get install -y curl
curl -sfL https://get.k3s.io | sh -s - server
EOF
tags = { Name = "k3s-master" }
}
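The two worker nodes and the Nginx instance (referenced by outputs.tf below) live in the same file but aren't reproduced in full here. The following is a minimal sketch, assuming the workers join the master over its private IP using the generated token, and the Nginx instance sits in the public subnet with Nginx installed via user data:

# Two k3s worker nodes in the private subnet (sketch)
resource "aws_instance" "worker" {
  count                  = 2
  ami                    = var.ubuntu_ami
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.private.id
  key_name               = aws_key_pair.k3s_key_pair.key_name
  vpc_security_group_ids = [aws_security_group.k3s_cluster.id]

  # Join the cluster as an agent, pointing at the master's private IP
  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y curl
    curl -sfL https://get.k3s.io | K3S_URL=https://${aws_instance.master.private_ip}:6443 K3S_TOKEN=${random_password.k3s_token.result} sh -s - agent
  EOF

  tags = { Name = "k3s-worker-${count.index + 1}" }
}

# Nginx load balancer instance in the public subnet (sketch)
resource "aws_instance" "nginx" {
  ami                    = var.ubuntu_ami
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public.id
  key_name               = aws_key_pair.k3s_key_pair.key_name
  vpc_security_group_ids = [aws_security_group.nginx.id]

  # Install Nginx so the instance is ready to act as the load balancer
  user_data = <<-EOF
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
  EOF

  tags = { Name = "k3s-nginx-lb" }
}

With count, the worker private IPs can also be exposed as an output (e.g., aws_instance.worker[*].private_ip), which comes in handy for the Nginx upstream configuration in Step 3.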

outputs.tf

Defines output values to get the private IPs of nodes and the public IP of the Nginx load balancer.

output "master_node_private_ip" {
value = aws_instance.master.private_ip
}

output "nginx_public_ip" {
value = aws_instance.nginx.public_ip
}
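With all three files in place, provision the infrastructure. Terraform prints the output values once the apply completes:

terraform init
terraform plan
terraform apply

Note the master node's private IP and the Nginx public IP from the outputs; they are used in the following steps.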

Step 2: Dockerize Applications and Push to Docker Hub

Dockerfile for React Frontend

# Dockerfile for React frontend
FROM node:16-alpine
WORKDIR /app
COPY . .
RUN npm install && npm run build
EXPOSE 3000
CMD ["npm", "start"]

Dockerfile for Flask Backend

# Dockerfile for Flask backend
FROM python:3.9-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]

Push the Docker images:

# Build and push React frontend
docker build -t your_dockerhub_username/react-frontend .
docker push your_dockerhub_username/react-frontend

# Build and push Flask backend
docker build -t your_dockerhub_username/flask-backend .
docker push your_dockerhub_username/flask-backend
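If the push is rejected with an authentication error, log in to Docker Hub first and retry:

docker login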

Step 3: SSH into Nginx Instance and Configure Nginx

  1. SSH into the Nginx instance:
ssh -i k3s-key-pair.pem ubuntu@<NGINX_PUBLIC_IP>
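
If Nginx is not already installed on the instance (for example via user data, as in the sketch above), install it before editing the configuration:

sudo apt-get update
sudo apt-get install -y nginx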

2. Create the Nginx configuration file (e.g., replace /etc/nginx/nginx.conf with the following):

events {}

http {
    upstream react_app {
        server <MASTER_NODE_PRIVATE_IP>:30002;
        server <WORKER_NODE_1_PRIVATE_IP>:30002;
        server <WORKER_NODE_2_PRIVATE_IP>:30002;
    }

    upstream flask_api {
        server <MASTER_NODE_PRIVATE_IP>:30001;
        server <WORKER_NODE_1_PRIVATE_IP>:30001;
        server <WORKER_NODE_2_PRIVATE_IP>:30001;
    }

    server {
        listen 80;

        # Route traffic for React app
        location / {
            proxy_pass http://react_app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Route traffic for Flask API
        location /api/ {
            proxy_pass http://flask_api;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

Check the configuration with sudo nginx -t, then restart Nginx to apply the changes:

sudo systemctl restart nginx

Step 4: SSH into the Master Node to Deploy Applications
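
The master node sits in the private subnet, so it is reachable only from inside the VPC; here we hop through the Nginx instance. Copy the generated private key onto it first; a sketch, run from your local machine:

scp -i k3s-key-pair.pem k3s-key-pair.pem ubuntu@<NGINX_PUBLIC_IP>:~/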

  1. SSH into the master node from the Nginx instance:
ssh -i k3s-key-pair.pem ubuntu@<MASTER_NODE_PRIVATE_IP>

2. Create Kubernetes deployment files for both applications.

react-frontend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      containers:
        - name: react-container
          image: your_dockerhub_username/react-frontend
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: react-service
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30002
  selector:
    app: react-app

flask-backend.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-container
          image: your_dockerhub_username/flask-backend
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30001
  selector:
    app: flask-api

3. Apply the deployment files (on k3s, the kubeconfig at /etc/rancher/k3s/k3s.yaml is root-owned by default, so run kubectl with sudo):

sudo kubectl apply -f react-frontend.yaml
sudo kubectl apply -f flask-backend.yaml
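
Before moving on, confirm that the pods are running and the NodePort services are exposed:

sudo kubectl get pods -o wide
sudo kubectl get svc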

Step 5: Access the Application

With everything set up, access the applications using the Nginx load balancer’s public IP. The React app will be accessible at http://<NGINX_PUBLIC_IP>/, and the Flask API at http://<NGINX_PUBLIC_IP>/api/.
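
A quick smoke test from a local terminal (the /api/health route is the placeholder from the Flask sketch earlier; substitute whatever routes your API actually serves):

curl http://<NGINX_PUBLIC_IP>/
curl http://<NGINX_PUBLIC_IP>/api/health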
