Kubernetes on AWS EKS — Complete Setup
Deploy and manage production Kubernetes clusters on Amazon EKS with best practices.
Introduction
Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that runs and patches the control plane for you while leaving node, networking, and workload configuration under your control.
- Kubernetes on AWS EKS — Complete Setup
- Prerequisites
- Creating EKS Cluster with eksctl
- Simple Cluster
- Production-Grade Cluster
- Managing EKS Clusters
- Scaling and Node Groups
- Networking
- VPC and Subnets
- Pod-to-Pod Communication
- RBAC and IAM Integration
- OIDC Provider for IRSA
- Use IAM role in Pod
- Storage
- EBS CSI Driver
- EFS CSI Driver
- Monitoring and Logging
- CloudWatch Container Insights
- VPC Flow Logs
- Load Balancing
- AWS Load Balancer Controller
- Cost Optimization
- Spot Instances
- Auto Scaling Groups
- Production Checklist
- FAQ
Prerequisites
# Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
# Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Configure AWS credentials
aws configure
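Before creating a cluster, confirm the configured credentials resolve to the expected account:
# Verify the active AWS identity
aws sts get-caller-identity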
Creating EKS Cluster with eksctl
Simple Cluster
# Create cluster
eksctl create cluster \
  --name my-cluster \
  --region us-east-1 \
  --nodegroup-name my-nodes \
  --node-type t3.medium \
  --nodes 3

# This creates:
# - VPC with public/private subnets
# - EKS control plane
# - Node group with 3 nodes
# - Security groups
# - IAM roles
Production-Grade Cluster
# Create cluster with advanced configuration
eksctl create cluster --config-file=cluster-config.yaml
Create cluster-config.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: prod-cluster
  region: us-east-1

vpc:
  cidr: 10.0.0.0/16
  nat:
    gateway: Single

# spot: true and multiple instanceTypes require managed node groups
managedNodeGroups:
  - name: on-demand
    minSize: 2
    maxSize: 10
    desiredCapacity: 3
    instanceTypes:
      - t3.large
    labels:
      node-type: on-demand
    tags:
      Environment: production
  - name: spot
    minSize: 1
    maxSize: 5
    desiredCapacity: 2
    spot: true
    instanceTypes:
      - t3.large
      - t3a.large
    labels:
      node-type: spot
    tags:
      Environment: production

iam:
  withOIDC: true

addons:
  - name: vpc-cni
  - name: kube-proxy
  - name: coredns
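Before creating anything, eksctl can validate the file and print the fully resolved configuration:
# Validate the config without creating resources
eksctl create cluster --config-file=cluster-config.yaml --dry-run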
Managing EKS Clusters
# List clusters
eksctl get clusters
# Get cluster info
eksctl get cluster --name my-cluster -o json
# Update kubeconfig
aws eks update-kubeconfig --name my-cluster --region us-east-1
# Verify connection
kubectl cluster-info
kubectl get nodes
# Delete cluster
eksctl delete cluster --name my-cluster
Scaling and Node Groups
# List node groups
eksctl get nodegroup --cluster my-cluster
# Create new node group
eksctl create nodegroup \
  --cluster my-cluster \
  --name new-nodes \
  --node-type t3.large \
  --nodes 3

# Scale node group
eksctl scale nodegroup \
  --cluster my-cluster \
  --name my-nodes \
  --nodes 5

# Delete node group (nodes are drained before removal)
eksctl delete nodegroup \
  --cluster my-cluster \
  --name my-nodes
Networking
VPC and Subnets
# Get VPC info (eksctl tags the VPC it creates with the cluster name)
aws ec2 describe-vpcs --filters "Name=tag:alpha.eksctl.io/cluster-name,Values=my-cluster"
# Get subnets
aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-xxx"
# Security groups
aws ec2 describe-security-groups --filters "Name=group-name,Values=eks-cluster-sg*"
Pod-to-Pod Communication
Pod networking is handled by the AWS VPC CNI plugin, which assigns each pod an IP address directly from the VPC's subnets:
# Verify CNI
kubectl get daemonset -n kube-system aws-node
# CNI configuration
kubectl describe daemonset -n kube-system aws-node
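Because pods receive VPC-routable addresses, you can compare the IP column in a pod listing against your subnet CIDRs:
# Pod IPs come from the same VPC subnets as the nodes
kubectl get pods -A -o wide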
RBAC and IAM Integration
OIDC Provider for IRSA
# Enable OIDC (--approve applies the change instead of just planning it)
eksctl utils associate-iam-oidc-provider \
  --cluster my-cluster \
  --region us-east-1 \
  --approve

# Create IAM role for service account
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve
Use IAM role in Pod
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/my-app-role
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app
      containers:
        - name: app
          image: my-app:1.0
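Once the pod is running, the IRSA webhook injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into the container; a quick check (assuming the image includes env):
# Confirm the web identity credentials were injected
kubectl exec deploy/my-app -- env | grep AWS_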
Storage
EBS CSI Driver
# Install EBS CSI driver
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  -n kube-system

# Create storage class
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
EOF
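A PersistentVolumeClaim referencing the class (the claim name is illustrative) triggers dynamic provisioning of a gp3 volume:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume     # illustrative name
spec:
  accessModes:
    - ReadWriteOnce     # EBS volumes attach to a single node
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi
EOF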
EFS CSI Driver
# Install EFS CSI driver
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver
helm install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  -n kube-system
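Unlike EBS, the EFS driver points at an existing file system; a StorageClass sketch using the driver's dynamic access-point provisioning, where the fileSystemId is a placeholder for your own:
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap            # one EFS access point per volume
  fileSystemId: fs-0123456789abcdef0  # placeholder: your EFS file system ID
  directoryPerms: "700"
EOF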
Monitoring and Logging
CloudWatch Container Insights
# Enable control plane logging to CloudWatch
eksctl utils update-cluster-logging \
  --enable-types all \
  --cluster my-cluster \
  --approve

# Install a Prometheus-based monitoring stack (for Container Insights itself,
# use the amazon-cloudwatch-observability EKS add-on)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack
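To confirm which control plane log types ended up enabled:
# Inspect the cluster's control plane logging configuration
aws eks describe-cluster --name my-cluster --query 'cluster.logging'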
VPC Flow Logs
# Requires an IAM role that vpc-flow-logs.amazonaws.com can assume
# to write to CloudWatch Logs
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-xxx \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /aws/vpc/flowlogs \
  --deliver-logs-permission-arn arn:aws:iam::ACCOUNT_ID:role/flow-logs-role
Load Balancing
AWS Load Balancer Controller
# Install AWS Load Balancer Controller (clusterName is required; the controller
# also needs IAM permissions, typically granted via an IRSA service account)
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster
# Create an Ingress (the controller provisions an ALB for Ingress resources;
# NLBs are created from Service objects, shown below)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
EOF
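For an actual Network Load Balancer, the controller acts on Service objects rather than Ingress; a sketch using the controller's Service annotations (selector and ports are illustrative):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-nlb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: my-app         # illustrative: must match your pods' labels
  ports:
    - port: 80
      targetPort: 8080  # illustrative container port
EOF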
Cost Optimization
Spot Instances
# Spot capacity is already configured in cluster-config.yaml above

# Verify which nodes are Spot (managed node groups set this label automatically)
kubectl get nodes -L eks.amazonaws.com/capacityType
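To steer interruption-tolerant workloads onto the Spot group, a nodeSelector on the node-type label from cluster-config.yaml is enough (the workload name is illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker        # illustrative interruption-tolerant workload
spec:
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        node-type: spot     # label set on the spot node group
      containers:
        - name: worker
          image: my-app:1.0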
Auto Scaling Groups
# Configure Cluster Autoscaler
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  -n kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set awsRegion=us-east-1
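The autoscaler also needs IAM permissions to inspect and resize the node groups' Auto Scaling groups, usually granted through IRSA; a sketch assuming a customer-managed policy you have created beforehand (ClusterAutoscalerPolicy is a hypothetical name):
# ClusterAutoscalerPolicy (hypothetical) must allow autoscaling Describe*,
# SetDesiredCapacity, and TerminateInstanceInAutoScalingGroup
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace kube-system \
  --name cluster-autoscaler \
  --attach-policy-arn arn:aws:iam::ACCOUNT_ID:policy/ClusterAutoscalerPolicy \
  --approve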
Production Checklist
- Enable VPC Flow Logs
- Configure CloudWatch logging
- Set up IAM OIDC provider
- Use managed add-ons (VPC CNI, CoreDNS, kube-proxy)
- Configure cluster autoscaling
- Use Spot instances where appropriate
- Set up ingress controller
- Configure storage classes
- Enable network policies (see the sketch after this list)
- Apply Pod Security Standards (PodSecurityPolicy was removed in Kubernetes 1.25)
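For the network policy item, a per-namespace default-deny ingress policy is a common starting point (the namespace is illustrative; the VPC CNI enforces NetworkPolicy only when its network policy support is enabled):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app   # illustrative namespace
spec:
  podSelector: {}     # selects every pod in the namespace
  policyTypes:
    - Ingress         # no ingress rules defined, so all inbound traffic is denied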
FAQ
Q: Should I use EKS or self-managed Kubernetes on EC2? A: EKS runs and patches the control plane for you. Use EKS unless you need low-level control plane access or can realize real savings from a fully manual setup.
Q: How do I update my EKS cluster? A: Run eksctl upgrade cluster --name my-cluster --approve to move the control plane up one minor version, then upgrade node groups (eksctl upgrade nodegroup) and add-ons separately; managed nodes are replaced with rolling updates.
Q: Can I mix On-Demand and Spot instances? A: Yes. Create separate node groups for each and use node selectors or node affinity (for example, on the node-type label above) to schedule workloads appropriately.