4. Installation with eksctl

Update: 2022-10-27 / 1H

Introduction to eksctl

eksctl is a simple CLI tool for creating clusters on EKS, AWS's managed Kubernetes service. It is written in Go by Weaveworks, uses CloudFormation under the hood, and can create a basic cluster in just a few minutes with a single command. eksctl is only one of several ways to provision EKS; you can also use the EKS UI in the AWS Management Console, CDK, Terraform, Rancher, and other tools.
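
For reference, this is roughly what a one-command cluster creation looks like. The cluster name and node count below are placeholders only; this workshop instead uses a config file so that the cluster lands in the VPC created earlier.

# Minimal example for reference only; the values are placeholders and not used in this workshop
eksctl create cluster --name my-cluster --region ap-northeast-2 --nodes 3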

Provisioning EKS with eksctl

1. Install eksctl

Install eksctl on Cloud9 as shown below and check the version.

# Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# eksctl autocompletion - bash
. <(eksctl completion bash)
eksctl version
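
To keep bash completion available in new terminal sessions as well, you can optionally persist it; a small optional sketch:

# Optional: persist eksctl bash completion for new shells
echo '. <(eksctl completion bash)' >> ~/.bashrc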

2. Check the VPC/Subnet information

Extract the resource IDs of the VPC resources created by the earlier CloudFormation stack and store them as environment variables in Cloud9. Run the shell script below.

~/environment/myeks/shell/eks_shell.sh

Run cat ~/.bash_profile to confirm that the environment variables were set correctly.

The VPC ID, subnet IDs, region, and master ARN are used when deploying the EKS cluster with eksctl.
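
You can also print a few of the variables directly. The variable names below are assumptions based on this workshop's script; use whatever names you saw in ~/.bash_profile.

# A sketch; variable names are assumptions, check ~/.bash_profile for the actual names
echo $AWS_REGION
echo $ACCOUNT_ID
echo $vpcID            # hypothetical name for the VPC ID
echo $PrivateSubnet01  # hypothetical name for one of the subnet IDs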

3. Edit the eksctl deployment YAML

Run the shell script below to generate the eksctl YAML. A file named eksworkshop.yaml is created automatically; it will be used as the manifest for provisioning the EKS cluster with eksctl. (File location: ~/environment/myeks)

# Generate the eksctl yaml
~/environment/myeks/shell/eksctl_shell.sh
 

If values such as the VPC/subnet IDs or the KMS CMK keyARN are wrong, the installation will fail. Also check the publicKeyPath for Cloud9, and be sure to review the file once more before moving on to the next step.
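
A quick way to review those values is to grep the generated manifest and compare the IDs against the environment variables from the previous step; a minimal sketch:

# A sketch: print the resource references in the generated manifest for review
grep -E "vpc-|subnet-|keyARN|publicKeyPath" ~/environment/myeks/eksworkshop.yaml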

Run a dry run against the generated eksctl YAML file to verify it.

eksctl create cluster --config-file=/home/ec2-user/environment/myeks/eksworkshop.yaml --dry-run
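
Since --dry-run prints the fully resolved configuration to standard output, you can also save it to a file and review it at your own pace; the output path below is arbitrary.

# Optional: save the dry-run output for review (output path is arbitrary)
eksctl create cluster --config-file=/home/ec2-user/environment/myeks/eksworkshop.yaml --dry-run > /tmp/eksworkshop-resolved.yaml
less /tmp/eksworkshop-resolved.yaml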

4. Create the cluster

Create the EKS cluster with eksctl.

# Create the cluster with eksctl
eksctl create cluster --config-file=/home/ec2-user/environment/myeks/eksworkshop.yaml
 

Creating the EKS cluster takes about 20 minutes.
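
While you wait, you can optionally follow the progress of the cluster stack from another terminal; the stack name matches the one shown in the example output below.

# Optional: check the CloudFormation stack status while the cluster is being created
aws cloudformation describe-stacks \
  --stack-name eksctl-eksworkshop-cluster \
  --query "Stacks[0].StackStatus" --output text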

Example output

2022-04-21 13:29:20 [ℹ]  eksctl version 0.93.0
2022-04-21 13:29:20 [ℹ]  using region ap-northeast-2
2022-04-21 13:29:20 [✔]  using existing VPC (vpc-03e394fb459d894fd) and subnets (private:map[PrivateSubnet01:{subnet-0c5f9613e45bbf12d ap-northeast-2a 10.11.64.0/20} PrivateSubnet02:{subnet-068c86b0cd8fc9bbc ap-northeast-2b 10.11.80.0/20} PrivateSubnet03:{subnet-0bf66408a64af0812 ap-northeast-2c 10.11.96.0/20}] public:map[PublicSubnet01:{subnet-0a7d18f788f913a4d ap-northeast-2a 10.11.0.0/20} PublicSubnet02:{subnet-01f00b8cd50a95661 ap-northeast-2b 10.11.16.0/20} PublicSubnet03:{subnet-0f6b932d0e8db351f ap-northeast-2c 10.11.32.0/20}])
2022-04-21 13:29:20 [ℹ]  nodegroup "ng-public-01" will use "ami-0faa1b4cd7d224b2d" [AmazonLinux2/1.21]
2022-04-21 13:29:20 [ℹ]  using SSH public key "/home/ec2-user/environment/eksworkshop.pub" as "eksctl-eksworkshop-nodegroup-ng-public-01-c7:3c:65:44:87:bc:7d:af:86:b5:e5:9a:c0:02:72:1f" 
2022-04-21 13:29:20 [ℹ]  nodegroup "ng-private-01" will use "ami-0faa1b4cd7d224b2d" [AmazonLinux2/1.21]
2022-04-21 13:29:20 [ℹ]  using SSH public key "/home/ec2-user/environment/eksworkshop.pub" as "eksctl-eksworkshop-nodegroup-ng-private-01-c7:3c:65:44:87:bc:7d:af:86:b5:e5:9a:c0:02:72:1f" 
2022-04-21 13:29:20 [ℹ]  nodegroup "managed-ng-public-01" will use "" [AmazonLinux2/1.21]
2022-04-21 13:29:20 [ℹ]  using SSH public key "/home/ec2-user/environment/eksworkshop.pub" as "eksctl-eksworkshop-nodegroup-managed-ng-public-01-c7:3c:65:44:87:bc:7d:af:86:b5:e5:9a:c0:02:72:1f" 
2022-04-21 13:29:21 [ℹ]  nodegroup "managed-ng-private-01" will use "" [AmazonLinux2/1.21]
2022-04-21 13:29:21 [ℹ]  using SSH public key "/home/ec2-user/environment/eksworkshop.pub" as "eksctl-eksworkshop-nodegroup-managed-ng-private-01-c7:3c:65:44:87:bc:7d:af:86:b5:e5:9a:c0:02:72:1f" 
2022-04-21 13:29:21 [ℹ]  using Kubernetes version 1.21
2022-04-21 13:29:21 [ℹ]  creating EKS cluster "eksworkshop" in "ap-northeast-2" region with managed nodes and un-managed nodes
2022-04-21 13:29:21 [ℹ]  4 nodegroups (managed-ng-private-01, managed-ng-public-01, ng-private-01, ng-public-01) were included (based on the include/exclude rules)
2022-04-21 13:29:21 [ℹ]  will create a CloudFormation stack for cluster itself and 2 nodegroup stack(s)
2022-04-21 13:29:21 [ℹ]  will create a CloudFormation stack for cluster itself and 2 managed nodegroup stack(s)
2022-04-21 13:29:21 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eksworkshop" in "ap-northeast-2"
2022-04-21 13:29:21 [ℹ]  configuring CloudWatch logging for cluster "eksworkshop" in "ap-northeast-2" (enabled types: api, audit, authenticator, controllerManager, scheduler & no types disabled)
2022-04-21 13:29:21 [ℹ]  
2 sequential tasks: { create cluster control plane "eksworkshop", 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        4 parallel sub-tasks: { 
            create nodegroup "ng-public-01",
            create nodegroup "ng-private-01",
            create managed nodegroup "managed-ng-public-01",
            create managed nodegroup "managed-ng-private-01",
        },
    } 
}
2022-04-21 13:29:21 [ℹ]  building cluster stack "eksctl-eksworkshop-cluster"
2022-04-21 13:29:21 [ℹ]  deploying stack "eksctl-eksworkshop-cluster"
2022-04-21 13:29:51 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-cluster"
2022-04-21 13:42:22 [ℹ]  building nodegroup stack "eksctl-eksworkshop-nodegroup-ng-public-01"
2022-04-21 13:42:22 [ℹ]  building nodegroup stack "eksctl-eksworkshop-nodegroup-ng-private-01"
2022-04-21 13:42:22 [ℹ]  building managed nodegroup stack "eksctl-eksworkshop-nodegroup-managed-ng-private-01"
2022-04-21 13:42:22 [ℹ]  building managed nodegroup stack "eksctl-eksworkshop-nodegroup-managed-ng-public-01"
2022-04-21 13:42:22 [ℹ]  deploying stack "eksctl-eksworkshop-nodegroup-managed-ng-private-01"
2022-04-21 13:42:22 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-nodegroup-managed-ng-private-01"
2022-04-21 13:42:22 [ℹ]  deploying stack "eksctl-eksworkshop-nodegroup-ng-public-01"
2022-04-21 13:42:22 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-nodegroup-ng-public-01"
2022-04-21 13:42:22 [ℹ]  deploying stack "eksctl-eksworkshop-nodegroup-ng-private-01"
2022-04-21 13:42:22 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-nodegroup-ng-private-01"
2022-04-21 13:42:22 [ℹ]  deploying stack "eksctl-eksworkshop-nodegroup-managed-ng-public-01"
2022-04-21 13:42:22 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-nodegroup-managed-ng-public-01"
2022-04-21 13:42:40 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-nodegroup-ng-public-01"
2022-04-21 13:42:42 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-nodegroup-managed-ng-private-01"
2022-04-21 13:42:42 [ℹ]  waiting for CloudFormation stack "eksctl-eksworkshop-nodegroup-ng-private-01"

2022-04-21 13:46:12 [ℹ]  waiting for the control plane availability...
2022-04-21 13:46:12 [✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
2022-04-21 13:46:12 [✔]  all EKS cluster resources for "eksworkshop" have been created
2022-04-21 13:46:12 [ℹ]  adding identity "arn:aws:iam::511357390802:role/eksctl-eksworkshop-nodegroup-ng-p-NodeInstanceRole-1WO7W7CJXOLYY" to auth ConfigMap
2022-04-21 13:46:13 [ℹ]  nodegroup "ng-public-01" has 0 node(s)
2022-04-21 13:46:13 [ℹ]  waiting for at least 3 node(s) to become ready in "ng-public-01"
2022-04-21 13:46:36 [ℹ]  nodegroup "ng-public-01" has 3 node(s)
2022-04-21 13:46:36 [ℹ]  node "ip-10-11-14-197.ap-northeast-2.compute.internal" is ready
2022-04-21 13:46:36 [ℹ]  node "ip-10-11-21-169.ap-northeast-2.compute.internal" is ready
2022-04-21 13:46:36 [ℹ]  node "ip-10-11-38-99.ap-northeast-2.compute.internal" is ready
2022-04-21 13:46:36 [ℹ]  adding identity "arn:aws:iam::511357390802:role/eksctl-eksworkshop-nodegroup-ng-p-NodeInstanceRole-MONX9B0T65E3" to auth ConfigMap
2022-04-21 13:46:36 [ℹ]  nodegroup "ng-private-01" has 0 node(s)
2022-04-21 13:46:36 [ℹ]  waiting for at least 3 node(s) to become ready in "ng-private-01"
2022-04-21 13:47:09 [ℹ]  nodegroup "ng-private-01" has 3 node(s)
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-73-205.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-95-138.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-99-40.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  nodegroup "managed-ng-public-01" has 3 node(s)
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-14-39.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-17-179.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-34-10.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  waiting for at least 3 node(s) to become ready in "managed-ng-public-01"
2022-04-21 13:47:09 [ℹ]  nodegroup "managed-ng-public-01" has 3 node(s)
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-14-39.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-17-179.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-34-10.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  nodegroup "managed-ng-private-01" has 3 node(s)
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-100-116.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-67-237.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-88-144.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  waiting for at least 3 node(s) to become ready in "managed-ng-private-01"
2022-04-21 13:47:09 [ℹ]  nodegroup "managed-ng-private-01" has 3 node(s)
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-100-116.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-67-237.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:09 [ℹ]  node "ip-10-11-88-144.ap-northeast-2.compute.internal" is ready
2022-04-21 13:47:10 [ℹ]  kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2022-04-21 13:47:10 [✔]  EKS cluster "eksworkshop" in "ap-northeast-2" region is ready

5. Verify cluster creation

Check that the cluster was created successfully.

kubectl get nodes

Example output

$  kubectl get nodes
NAME                                              STATUS   ROLES    AGE     VERSION
ip-10-11-1-225.ap-northeast-2.compute.internal    Ready    <none>   2m16s   v1.22.15-eks-fb459a0
ip-10-11-15-121.ap-northeast-2.compute.internal   Ready    <none>   5m26s   v1.22.15-eks-fb459a0
ip-10-11-27-197.ap-northeast-2.compute.internal   Ready    <none>   2m17s   v1.22.15-eks-fb459a0
ip-10-11-27-216.ap-northeast-2.compute.internal   Ready    <none>   5m14s   v1.22.15-eks-fb459a0
ip-10-11-33-12.ap-northeast-2.compute.internal    Ready    <none>   2m17s   v1.22.15-eks-fb459a0
ip-10-11-47-74.ap-northeast-2.compute.internal    Ready    <none>   5m25s   v1.22.15-eks-fb459a0
ip-10-11-50-128.ap-northeast-2.compute.internal   Ready    <none>   83s     v1.22.15-eks-fb459a0
ip-10-11-59-180.ap-northeast-2.compute.internal   Ready    <none>   5m34s   v1.22.15-eks-fb459a0
ip-10-11-69-123.ap-northeast-2.compute.internal   Ready    <none>   5m23s   v1.22.15-eks-fb459a0
ip-10-11-74-5.ap-northeast-2.compute.internal     Ready    <none>   84s     v1.22.15-eks-fb459a0
ip-10-11-80-166.ap-northeast-2.compute.internal   Ready    <none>   79s     v1.22.15-eks-fb459a0
ip-10-11-90-157.ap-northeast-2.compute.internal   Ready    <none>   5m30s   v1.22.15-eks-fb459a0
  • Check the VPC, Subnets, Internet Gateway, NAT Gateway, Route Tables, and other resources that were created.

  • Also check the EC2 worker nodes that were created.

  • Also check the CloudFormation stacks created by EKS and eksctl (see the CLI sketch below).
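
If you prefer the CLI, the commands below are a rough equivalent of these console checks; they assume the cluster name and region used in this workshop.

# A sketch of CLI equivalents for the console checks above
eksctl get cluster --region ap-northeast-2
eksctl get nodegroup --cluster eksworkshop --region ap-northeast-2
aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE \
  --query "StackSummaries[?starts_with(StackName, 'eksctl-eksworkshop')].StackName" --output text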

The following architecture is now in place.

6. Check the EKS configuration

You can view the created EKS cluster in the EKS console.

If you are signed in as an IAM user of the account, you cannot see the EKS cluster details as shown below, because that user has no permissions on the cluster. Add the user's permissions from Cloud9.
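
Before changing anything, it can help to confirm exactly which IAM principal your current session is using.

# Check which IAM principal the current credentials belong to
aws sts get-caller-identity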

Modify the aws-auth ConfigMap authentication settings

In the Cloud9 IDE terminal, inspect the aws-auth ConfigMap with the kubectl command.

kubectl get configmap -n kube-system aws-auth -o yaml

Generate an aws-auth.yaml file in the directory below.

kubectl get configmap -n kube-system aws-auth -o yaml | grep -v "creationTimestamp\|resourceVersion\|selfLink\|uid" | sed '/^  annotations:/,+2 d' > ~/environment/aws-auth.yaml
cp ~/environment/aws-auth.yaml ~/environment/aws-auth_backup.yaml

In Cloud9, open ~/environment/aws-auth.yaml and add the values below to the aws-auth file.

  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxx:user/{username}
      username: {username}
      groups:
        - system:masters

The xxxxxxxx portion of the user ARN is your account ID. As shown below, the account ID has already been stored in a shell environment variable.

echo $ACCOUNT_ID

Add the new user's permissions after the mapRoles section. (Alternatively, kubectl edit modifies the ConfigMap directly in a vi-style editor; the exact command is shown as a comment at the end of the example below.)

After the addition, the aws-auth.yaml file looks like this:

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-mana-NodeInstanceRole-1WKNHHYJD99CR
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-mana-NodeInstanceRole-WBJOTQC4HJOD
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-ng-p-NodeInstanceRole-122GCV62TF8AS
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-ng-p-NodeInstanceRole-1SRFNRXCSDVMW
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::027268078051:user/user01
      username: user01
      groups:
        - system:masters
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
# You can also edit the ConfigMap directly from the terminal:
# kubectl edit -n kube-system configmap/aws-auth

Apply aws-auth.yaml to grant the AWS IAM user access to the EKS cluster as well.

kubectl apply -f ~/environment/aws-auth.yaml
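
For reference, eksctl can make the same change for you with eksctl create iamidentitymapping. The sketch below assumes the IAM user is named user01 as in the example above; use it instead of, not in addition to, the manual edit.

# Alternative sketch: have eksctl add the identity mapping (user01 is an example user name)
eksctl create iamidentitymapping --cluster eksworkshop --region ap-northeast-2 \
  --arn arn:aws:iam::${ACCOUNT_ID}:user/user01 \
  --username user01 --group system:masters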

Check the actual values in the ConfigMap. The ConfigMap lives in the kube-system namespace.

kubectl describe configmap -n kube-system aws-auth

Example output:

Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-mana-NodeInstanceRole-1WKNHHYJD99CR
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-mana-NodeInstanceRole-WBJOTQC4HJOD
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-ng-p-NodeInstanceRole-122GCV62TF8AS
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::027268078051:role/eksctl-eksworkshop-nodegroup-ng-p-NodeInstanceRole-1SRFNRXCSDVMW
  username: system:node:{{EC2PrivateDNSName}}

mapUsers:
----
- userarn: arn:aws:iam::027268078051:user/user01
  username: user01
  groups:
    - system:masters

Events:  <none>

Checking the EKS cluster results

Go back to the console and check the EKS cluster again. You can now see all of the nodes. Select the cluster you created.

Select Compute and check the worker nodes that were created.

You can also inspect the workloads created inside the EKS cluster.

You can see the difference between the managed node type and the self-managed node type.

Select and inspect the Kubernetes resources.

An EKS cluster like the one below is now complete. Verify it with kubectl commands.

# Check the resources created in the kube-system namespace
kubectl -n kube-system get all

# Check details of a specific Pod
kubectl -n kube-system get pod <pod-name> -o wide

# Check node details
kubectl get nodes -o wide
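
To see the managed vs. self-managed distinction from kubectl as well, you can print the node-group labels. The labels below are the ones typically set by EKS managed node groups and eksctl node groups, though they can vary by version.

# Compare node groups by label (labels may vary by version)
kubectl get nodes -L eks.amazonaws.com/nodegroup -L alpha.eksctl.io/nodegroup-name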

After you create workloads in the next section, check them again in the EKS cluster menu.
