Oracle Kubernetes 1.22 Single-Node Deployment Guide

  • I. Preface

  • II. Base environment deployment

    • 1) Preparation
    • 2) Install the Docker container runtime
    • 3) Deploy k8s
  • III. k8s dashboard management platform deployment

    • 1) Deploy the dashboard
    • 2) Create a login user
    • 3) Configure hosts and log in to the dashboard web UI
  • IV. Harbor deployment

    • 1) Install Harbor

    • 2) Configure hosts

    • 3) Create the TLS certificate

    • 4) Install ingress

    • 5) Install NFS

I. Preface

Official site: https://kubernetes.io/
Official docs: https://kubernetes.io/zh-cn/docs/home/

Docker installation docs: https://docs.docker.com/engine/install/ubuntu/

k8s version: 1.22

docker version: 24.0.5

helm version: 3.14.2

ingress-nginx: 4.7.5

cert-manager: 1.11.1

Host OS: Oracle Linux 8

II. Base environment deployment

1) Preparation

1. Set the hostname

hostnamectl set-hostname k8s-master

2. Disable the firewall and set the iptables FORWARD policy

iptables -P FORWARD ACCEPT
# Oracle Linux 8 ships firewalld rather than ufw
systemctl stop firewalld
systemctl disable firewalld

3. Disable swap

swapoff -a
# prevent the swap partition from being mounted automatically on boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
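You can confirm swap is now off (the Swap line should show all zeros):

free -h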

4. Set kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF

modprobe br_netfilter

sysctl -p /etc/sysctl.d/k8s.conf
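A quick check that the module is loaded and the parameters took effect:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward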

5. Configure the Docker package repository

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

6. Set SELinux to permissive mode:

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

2) Install the Docker container runtime (all nodes)

Note: Kubernetes releases before v1.24 included a direct integration with Docker Engine through a component named dockershim. That direct integration is no longer part of Kubernetes (its removal was announced as part of the v1.20 release). You can read the upstream page "Check whether dockershim removal affects you" to understand how this might affect you, and see "Migrating from dockershim" for migration guidance.

1. Install Docker

sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

2. Set the Docker cgroup driver

mkdir -p /etc/docker
# write with > (not append with >>) so the file stays valid JSON
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://as94e9do.mirror.aliyuncs.com", "https://ap39i30q.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl daemon-reload

systemctl restart docker
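Verify that Docker picked up the systemd cgroup driver:

docker info | grep -i 'cgroup driver'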

3) Deploy k8s

1. Install kubeadm, kubelet, and kubectl

Reference: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

(1) Add the Kubernetes yum repository

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
## Alternatively (see https://blog.csdn.net/liangpangpangyo/article/details/126901766):
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
# pick the repo matching the host architecture (this node is aarch64; use kubernetes-el7-x86_64 on x86_64 hosts)
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


(2) Install kubelet, kubeadm, and kubectl, and enable kubelet so that it starts automatically at boot:

sudo yum install -y kubelet-1.22.0 kubeadm-1.22.0 kubectl-1.22.0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
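Confirm the pinned versions before initializing the cluster:

kubeadm version -o short
kubelet --version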


2. Initialize with kubeadm

# export the default initialization config
kubeadm config print init-defaults > kubeadm.yaml
# edit the config file as follows
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.125 # change to this node's private IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: master # the node name to register (defaults to the hostname)
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: master # the cluster name (not the node hostname; the node name is nodeRegistration.name above)
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.22.0 
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # add this line; must match flannel's pod CIDR (10.244.0.0/16 by default)
  serviceSubnet: 10.96.0.0/12
scheduler: {}

## Run the initialization
kubeadm init --config kubeadm.yaml

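After kubeadm init succeeds, it prints follow-up instructions; the usual steps to give kubectl access to the new cluster are:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config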

Since v1.24.0, Kubernetes no longer uses dockershim and instead uses containerd as the container runtime endpoint. containerd therefore needs to be set up (installing Docker above already pulled in containerd as a dependency). Docker here acts only as a client; the actual container engine is containerd.

containerd config default > /etc/containerd/config.toml

## Restart the containerd service
sudo systemctl restart containerd

The node still showed a problem; /var/log/messages contained:

"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

The next step is to install a network plugin.

3. Install flannel

https://github.com/flannel-io/flannel/tree/master

#1. Deploying Flannel with kubectl
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
#2. Deploying Flannel with helm
# Needs manual creation of namespace to avoid helm error
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged
helm repo add flannel https://flannel-io.github.io/flannel/
helm install flannel --set podCidr="10.244.0.0/16" --namespace kube-flannel flannel/flannel

#3. Install the CNI network plugins
mkdir -p /opt/cni/bin
curl -O -L https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-arm64-v1.4.1.tgz
tar -C /opt/cni/bin -xzf cni-plugins-linux-arm64-v1.4.1.tgz
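Once flannel is running, the node should become Ready:

kubectl get pods -n kube-flannel
kubectl get nodes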

4. Install helm

Download: https://github.com/helm/helm/releases. The node CPU here is ARM, so helm-v3.14.2-linux-arm64.tar.gz is used.

wget https://get.helm.sh/helm-v3.14.2-linux-arm64.tar.gz
# extract
tar -zxvf helm-v3.14.2-linux-arm64.tar.gz
# move the helm binary from the extracted directory to a directory on PATH
mv linux-arm64/helm /usr/local/bin/helm
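Confirm the binary works:

helm version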

5. Install ingress-nginx

Download: https://github.com/kubernetes/ingress-nginx/releases

Reference for the parameter changes: https://www.cnblogs.com/syushin/p/15271304.html

#1. Download and extract
wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.10.0/ingress-nginx-4.10.0.tgz
tar -zxvf ingress-nginx-4.10.0.tgz 


#2. Edit the following parameters in ./ingress-nginx/values.yaml
dnsPolicy: ClusterFirstWithHostNet

hostNetwork: true

# -- Use a `DaemonSet` or `Deployment`
kind: DaemonSet

ipFamilies:
  - IPv4
ports:
  http: 80
  https: 443
targetPorts:
  http: http
  https: https
type: ClusterIP ## change the service type to ClusterIP
    
    
## Default 404 backend
defaultBackend:
  ##
  enabled: true
  name: defaultbackend
  image:
    registry: registry.k8s.io
    image: defaultbackend-arm64
    ## for backwards compatibility consider setting the full image url via the repository value below
    ## use *either* current default registry/image or repository format or installing chart by providing the values.yaml will fail
    ## repository:
    tag: "1.5"
    
    
    
nodeSelector:
    kubernetes.io/os: linux
    ## ingress: "true"  # add this line to pin ingress-nginx to nodes carrying an "ingress" label; not needed here

#3. Deploy
helm install ingress-nginx ./ingress-nginx -f ./ingress-nginx/values.yaml
After deploying, the controller pods failed to schedule; the pod events showed:

1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/1 nodes are available:

By default, Kubernetes refuses to schedule ordinary pods on the master node for safety reasons, so the restriction has to be lifted; control-plane here refers to the node's ROLES value.

root@k8s-master:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   14h   v1.22.9

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
# general form: kubectl taint nodes --all node-role.kubernetes.io/<role>-
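With the taints removed, the ingress-nginx DaemonSet pods should now schedule on the master; a quick check (using the label the chart applies by default):

kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o wide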


6. Install cert-manager

Official installation docs:

https://cert-manager.io/docs/installation/helm/

https://cert-manager.io/docs/tutorials/acme/nginx-ingress/#step-1---install-helm
https://artifacthub.io/packages/helm/cert-manager/cert-manager

 
# 1. Install the CRDs
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.crds.yaml
# 2. Install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.14.4 --set "ingressShim.defaultIssuerName=letsencrypt-prod,ingressShim.defaultIssuerKind=ClusterIssuer"
# 3. Fetch the example Issuer manifest
wget https://raw.githubusercontent.com/cert-manager/website/master/content/docs/tutorials/acme/example/production-issuer.yaml
# 4. Edit the relevant parameters in production-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: zszxingchenid@gmail.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
 
# 5. Deploy
kubectl apply -f production-issuer.yaml
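Verify that the issuer registered with the ACME server (READY should be True):

kubectl get clusterissuer letsencrypt-prod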

7. Deploy a test app to verify ingress-nginx + cert-manager SSL

helm install nginx-test ./nginx-test

kubectl get svc 
kubectl get secret
kubectl get certificate
kubectl describe certificate  nginx-test-example-tls
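The nginx-test chart contents are not shown above; for reference, a minimal sketch of the kind of Ingress it would need for cert-manager's ingress-shim to issue the certificate checked above (the host and Service name are hypothetical placeholders; the secret name matches the certificate above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    # tells cert-manager which issuer to use for the TLS secret below
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - nginx-test.example.com            # hypothetical domain
      secretName: nginx-test-example-tls    # cert-manager stores the issued certificate here
  rules:
    - host: nginx-test.example.com          # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-test            # hypothetical Service name from the chart
                port:
                  number: 80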


8. Deploy the local-path provisioner for persistent storage

#1. Download the local-path-provisioner manifest
wget https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

#2. Change the namespace in local-path-storage.yaml to kube-system

apiVersion: v1
kind: Namespace
metadata:
  name: kube-system  # Set the namespace to kube-system
---
...
#3. Deploy
kubectl apply -f local-path-storage.yaml

#4. Usage example
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
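The local-path StorageClass uses WaitForFirstConsumer volume binding, so the PVC stays Pending until a pod mounts it; a minimal sketch of a consuming pod (names are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: local-path-test       # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc     # the PVC defined above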


IV. Harbor deployment

1) Install Harbor

1. Add the Harbor chart repository

helm repo add harbor https://helm.goharbor.io
helm repo update

2. Pull and extract the chart

helm pull harbor/harbor
tar -zxvf harbor-1.13.1.tgz

3. Edit the values.yaml configuration

expose:
  # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
  # and fill the information in the corresponding section
  type: ingress
  tls:
    # Enable TLS or not.
    # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
    # Note: if the "expose.type" is "ingress" and TLS is disabled,
    # the port must be included in the command when pulling/pushing images.
    # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
    enabled: true
    # The source of the tls certificate. Set as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: secret # set the source of the tls certificate to "secret"
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: ""
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: "harbor-tls" #指定secretName
  ingress:
    hosts:
      core: harbor.example.com # the domain name
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    # set to `alb` if using the ALB ingress controller
    # set to `f5-bigip` if using the F5 BIG-IP ingress controller
    controller: default
    ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
    kubeVersionOverride: ""
    className: "nginx" #指定代理类型,traefik或者nginx
    annotations:
      # note different ingress controllers may require a different ssl-redirect annotation
      # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below
      # the following is the configuration when using nginx
      nginx.ingress.kubernetes.io/proxy-body-size: "30M"
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
      cert-manager.io/issue-temporary-certificate: "true" 
      acme.cert-manager.io/http01-edit-in-place: "true"
      # the following is the configuration when using traefik
      # kubernetes.io/ingress.class: traefik
      # cert-manager.io/cluster-issuer: letsencrypt-prod


    harbor:
      # harbor ingress-specific annotations
      annotations: {}
      # harbor ingress-specific labels
      labels: {}
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    # Annotations on the ClusterIP service
    annotations: {}
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving HTTP
        port: 80
        # The node port Harbor listens on when serving HTTP
        nodePort: 30002
      https:
        # The service port Harbor listens on when serving HTTPS
        port: 443
        # The node port Harbor listens on when serving HTTPS
        nodePort: 30003
  loadBalancer:
    # The name of LoadBalancer service
    name: harbor
    # Set the IP if the LoadBalancer supports assigning IP
    IP: ""
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
    annotations: {}
    sourceRanges: []

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands showed on portal
# 2) populate the token service URL returned to docker client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://harbor.example.com # the external access URL



# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you already have existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  # (this does not apply for PVCs that are created for internal database
  # and redis components, i.e. they are never deleted automatically)
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used (the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "local-path" #指定持久化storageClass
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
    jobservice:
      jobLog:
        existingClaim: ""
        storageClass: "local-path"
        subPath: ""
        accessMode: ReadWriteOnce
        size: 1Gi
        annotations: {}
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: "local-path" #指定持久化storageClass
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "local-path" #指定持久化storageClass
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    trivy:
      existingClaim: ""
      storageClass: "local-path" #指定持久化storageClass
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
  


# The initial password of Harbor admin. Change it from portal after launching Harbor
# or give an existing secret for it
# key in secret is given via (default to HARBOR_ADMIN_PASSWORD)
# existingSecretAdminPassword:
existingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD
harborAdminPassword: "<password>" # the admin login password
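The chart can then be installed from the extracted directory (the harbor namespace here is a choice, not something the chart requires):

helm install harbor ./harbor -f ./harbor/values.yaml --namespace harbor --create-namespace

# once all pods are Running, Harbor is reachable at the externalURL set above
kubectl get pods -n harbor
docker login harbor.example.com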

V. k8s deployment problems and solutions

1. Problems when using cert-manager to issue certificates, and how to resolve them

On Kubernetes clusters at version 1.20 or later, certificate issuance can fail to complete:

kubectl get certificate shows READY as False, and it stays that way indefinitely.

Solution:

Add the following annotations to ingress.yaml:

cert-manager.io/issue-temporary-certificate: "true"
acme.cert-manager.io/http01-edit-in-place: "true"
# inspect the kubelet logs when debugging further node-level issues
journalctl -xefu kubelet