Deploying an Express Web App on Kubernetes (recently verified, step-by-step)


This document walks through the process of deploying a Node.js (Express) application on Kubernetes.

  1. Prepare the project

    Project name   Open port   Route 1       Route 2
    websvr1        3000        /web1/index   /web1/send
    websvr2        3001        /web2/index   /web2/send

    To get up and running quickly, websvr is scaffolded with express-generator:

    # Install express-generator:
    $ npm install express-generator -g
    # Scaffold an application named app
    $ express app
    # Install dependencies
    $ cd app && npm install
    

    Create a Dockerfile next to the app directory:

    # Pin the Node base image version
    FROM node:10.15.1
    # Image maintainer (the MAINTAINER instruction is deprecated; use a LABEL)
    LABEL maintainer="SCH"
    # Copy the sibling app folder into the image at the given path
    ADD app /opt/app
    # Set the working directory
    WORKDIR /opt/app
    # Document the port the app listens on
    EXPOSE 3000
    # Start command. The original used CMD ["nohup","npm","start","&"], but
    # exec-form CMD bypasses the shell, so "nohup" and "&" would be passed to
    # npm as literal arguments; the main process should run in the foreground:
    CMD ["npm", "start"]
    

    Put app and its sibling Dockerfile together in a folder named websvr1, then edit the route file app/routes/index.js:

    // The routes below use the `request` package (note: it is deprecated on
    // npm, but kept here as in the original project): npm install request
    const request = require('request');
    
    // Two GET endpoints and one POST endpoint are defined in this file.
    router.get('/web1/index', function (req, res, next) {
        res.render('index', {title: 'Express1'});
    });
    
    router.post('/web1/getIndex', function (req, res, next) {
        res.send("get index1");
    });
    
    // websvr1 calls the websvr2 Service here, to verify Service-to-Service
    // communication inside the cluster
    router.get('/web1/send', function (req, res, next) {
        request({
            url: `http://websvr2-service:3001/web2/getIndex`,
            method: "POST",
            timeout: 10000
        }, (error, response, body) => {
            if (error) {
                console.log(error);
                res.render('index', {title: "request failed 1"});
                return
            }
            res.render('index', {title: body});
        })
    });
    

    Make a copy of websvr, change the default port in app/bin/www and the EXPOSE port in the Dockerfile to 3001, and package it as websvr2:

    // Two GET endpoints and one POST endpoint, mirroring websvr1
    // (this file also requires the request package).
    router.get('/web2/index', function (req, res, next) {
        res.render('index', {title: 'Express2'});
    });
    
    router.post('/web2/getIndex', function (req, res, next) {
        res.send("get index2");
    });
    
    // websvr2 calls back to the websvr1 Service, to verify Service-to-Service
    // communication inside the cluster
    router.get('/web2/send', function (req, res, next) {
        request({
            url: `http://websvr1-service:3000/web1/getIndex`,
            method: "POST",
            timeout: 10000
        }, (error, response, body) => {
            if (error) {
                console.log(error);
                res.render('index', {title: "request failed 2"});
                return
            }
            res.render('index', {title: body});
        })
    });
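Rather than maintaining two copies that differ only in a hard-coded port, the port could be driven by an environment variable; the scaffold's bin/www already reads process.env.PORT before falling back to 3000. A sketch of that resolution logic (`resolvePort` is a name introduced here for illustration):

```javascript
// Sketch of the port-resolution logic in the scaffold's bin/www:
// prefer the PORT environment variable, fall back to a default.
// With this, one image can run as websvr1 (PORT=3000) or websvr2 (PORT=3001).
function resolvePort(env, fallback) {
  const raw = env.PORT || String(fallback);
  const port = parseInt(raw, 10);
  if (Number.isNaN(port) || port <= 0) {
    throw new Error(`invalid PORT value: ${raw}`);
  }
  return port;
}
```

In the Deployment manifest the variable would then be set with an `env:` entry on the container, so a single image tag could serve both workloads.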
    

    Kubernetes cluster layout:

    Node name    IP
    k8s-master   172.16.66.169
    k8s-node1    172.16.66.168
    k8s-node2    172.16.66.170
  2. Build the Docker images

    Upload websvr1 and websvr2 to the /opt directory on node1 and node2, then build the Docker images (since no registry is used here, the image must be present locally on every node that may run the pods):

    $ cd /opt/websvr1
    $ docker build -t websvr:v1 .
    
    $ cd /opt/websvr2
    $ docker build -t websvr:v2 .
    
    # List the Docker images
    $ docker images
    
    REPOSITORY                                                        TAG        IMAGE ID       CREATED          SIZE
    websvr                                                            v2         2a61bbea0d63   16 seconds ago   907MB
    websvr                                                            v1         a3adb933da80   32 seconds ago   907MB
    calico/node                                                       v3.20.1    355c1ee44040   4 weeks ago      156MB
    calico/pod2daemon-flexvol                                         v3.20.1    55fa5eb71e09   4 weeks ago      21.7MB
    calico/cni                                                        v3.20.1    e69ccb66d1b6   4 weeks ago      146MB
    registry.aliyuncs.com/google_containers/kube-apiserver            v1.21.0    4d217480042e   6 months ago     126MB
    registry.aliyuncs.com/google_containers/kube-proxy                v1.21.0    38ddd85fe90e   6 months ago     122MB
    registry.aliyuncs.com/google_containers/kube-scheduler            v1.21.0    62ad3129eca8   6 months ago     50.6MB
    registry.aliyuncs.com/google_containers/kube-controller-manager   v1.21.0    09708983cc37   6 months ago     120MB
    registry.aliyuncs.com/google_containers/pause                     3.4.1      0f8457a4c2ec   9 months ago     683kB
    coredns/coredns                                                   1.8.0      296a6d5035e2   12 months ago    42.5MB
    registry.aliyuncs.com/google_containers/coredns/coredns           v1.8.0     296a6d5035e2   12 months ago    42.5MB
    registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   13 months ago    253MB
    node                                                              10.15.1    8fc2110c6978   2 years ago      897MB
    
  3. Deploy websvr on Kubernetes

    Here websvr is deployed with a Kubernetes Deployment plus a Service:

    Deployment: creates and manages a group of containers; for a single websvr it can run several identical replicas, each reachable at its own pod IP and port.

    Service: once multiple replicas exist, a single stable entry point is needed to reach them; a Service provides that, and can be loosely understood as a load-balancing wrapper around the replicas.
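Conceptually, a ClusterIP Service is a stable virtual address plus a load-balancing rule over the pod endpoints behind it. Kubernetes implements this in kube-proxy (iptables/IPVS), not in application code; the toy dispatcher below only illustrates the idea:

```javascript
// Toy illustration of what a Service provides: one stable entry point
// that spreads calls across interchangeable pod endpoints.
function makeService(endpoints) {
  let next = 0;
  return {
    pick() {
      const endpoint = endpoints[next % endpoints.length];
      next += 1;
      return endpoint;
    },
  };
}

// e.g. const svc = makeService(['10.244.36.68:3000', '10.244.169.134:3000']);
// svc.pick(); // alternates between the two endpoints
```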

    $ vim websvr1.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: websvr1-deployment
    spec:
      selector:
        matchLabels:
          app: websvr1
      replicas: 3
      template:
        metadata:
          labels:
            app: websvr1
        spec:
          containers:
          - name: websvr1
            image: websvr:v1
            ports:
            - containerPort: 3000
           
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: websvr1-service
    spec:
      selector:
        app: websvr1
      ports:
      - protocol: TCP
        port: 3000
        targetPort: 3000
    
    $ kubectl apply -f websvr1.yaml
    $ kubectl get pods -o wide 
    
    NAME                                  READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
    websvr1-deployment-7cb5776d76-mzx96   1/1     Running   0          3m8s   10.244.169.134   k8s-node2   <none>           <none>
    websvr1-deployment-7cb5776d76-nzx7w   1/1     Running   0          3m8s   10.244.36.68     k8s-node1   <none>           <none>
    websvr1-deployment-7cb5776d76-zzhdb   1/1     Running   0          3m8s   10.244.169.135   k8s-node2   <none>           <none>
    

    Deploy websvr2 in the same way, with the exposed port changed to 3001:

    $ vim websvr2.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: websvr2-deployment
    spec:
      selector:
        matchLabels:
          app: websvr2
      replicas: 3
      template:
        metadata:
          labels:
            app: websvr2
        spec:
          containers:
          - name: websvr2
            image: websvr:v2
            ports:
            - containerPort: 3001
           
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: websvr2-service
    spec:
      selector:
        app: websvr2
      ports:
      - protocol: TCP
        port: 3001
        targetPort: 3001
    
    $ kubectl apply -f websvr2.yaml
    $ kubectl get pods -o wide 
    
    NAME                                  READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
    websvr1-deployment-7cb5776d76-mzx96   1/1     Running   0          7m35s   10.244.169.134   k8s-node2   <none>           <none>
    websvr1-deployment-7cb5776d76-nzx7w   1/1     Running   0          7m35s   10.244.36.68     k8s-node1   <none>           <none>
    websvr1-deployment-7cb5776d76-zzhdb   1/1     Running   0          7m35s   10.244.169.135   k8s-node2   <none>           <none>
    websvr2-deployment-58c8b7ffcd-57tsz   1/1     Running   0          7s      10.244.36.69     k8s-node1   <none>           <none>
    websvr2-deployment-58c8b7ffcd-9lg4c   1/1     Running   0          7s      10.244.36.70     k8s-node1   <none>           <none>
    websvr2-deployment-58c8b7ffcd-dgzl5   1/1     Running   0          7s      10.244.36.71     k8s-node1   <none>           <none>
    
    
  4. Verify

    Pods running on the various nodes are all reached through a single Service IP and port; the Service distributes each request to a pod on one of the nodes according to its load-balancing rules.

    $ kubectl get svc -o wide
    
    NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     SELECTOR
    kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP    135m    <none>
    websvr1-service   ClusterIP   10.102.171.58    <none>        3000/TCP   10m     app=websvr1
    websvr2-service   ClusterIP   10.104.188.128   <none>        3001/TCP   2m34s   app=websvr2
    
    # The two Services just created are listed: websvr1-service on 3000 and websvr2-service on 3001
    
  5. At this point the containers inside the cluster are still unreachable from the public network; the next step is to deploy an Ingress

    Deploying ingress-nginx

    Ingress-nginx version   Supported k8s versions   Alpine version   Nginx version
    v0.48.1                 1.21, 1.20, 1.19         3.13.5           1.20.1
    v0.47.0                 1.21, 1.20, 1.19         3.13.5           1.20.1
    v0.46.0                 1.21, 1.20, 1.19         3.13.2           1.19.6

    Run on the master and on every node:

    # Pull the required ingress-nginx version from an Aliyun mirror:
    $ docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes-fan/ingress-nginx:v0.48.1
    
    # Re-tag the mirrored image with the official image name:
    $ docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes-fan/ingress-nginx:v0.48.1 k8s.gcr.io/ingress-nginx/controller:v0.48.1
    
    # Remove the Aliyun-mirror tag:
    $ docker rmi registry.cn-hangzhou.aliyuncs.com/kubernetes-fan/ingress-nginx:v0.48.1
    

    Open the deploy.yaml for ingress-nginx 0.48.1 at the URL below and copy its full contents to a local file.

    https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/baremetal/deploy.yaml

    Edit the local deploy.yaml:

    image: k8s.gcr.io/ingress-nginx/controller:v0.48.1@sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899
    # change to
    image: k8s.gcr.io/ingress-nginx/controller:v0.48.1
    

    If the URL above is unreachable, use the copy of the yaml saved below:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
    
    ---
    # Source: ingress-nginx/templates/controller-serviceaccount.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx
      namespace: ingress-nginx
    automountServiceAccountToken: true
    ---
    # Source: ingress-nginx/templates/controller-configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
    ---
    # Source: ingress-nginx/templates/clusterrole.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
      name: ingress-nginx
    rules:
      - apiGroups:
          - ''
        resources:
          - configmaps
          - endpoints
          - nodes
          - pods
          - secrets
        verbs:
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - nodes
        verbs:
          - get
      - apiGroups:
          - ''
        resources:
          - services
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - extensions
          - networking.k8s.io   # k8s 1.14+
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - events
        verbs:
          - create
          - patch
      - apiGroups:
          - extensions
          - networking.k8s.io   # k8s 1.14+
        resources:
          - ingresses/status
        verbs:
          - update
      - apiGroups:
          - networking.k8s.io   # k8s 1.14+
        resources:
          - ingressclasses
        verbs:
          - get
          - list
          - watch
    ---
    # Source: ingress-nginx/templates/clusterrolebinding.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
      name: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ingress-nginx
    subjects:
      - kind: ServiceAccount
        name: ingress-nginx
        namespace: ingress-nginx
    ---
    # Source: ingress-nginx/templates/controller-role.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx
      namespace: ingress-nginx
    rules:
      - apiGroups:
          - ''
        resources:
          - namespaces
        verbs:
          - get
      - apiGroups:
          - ''
        resources:
          - configmaps
          - pods
          - secrets
          - endpoints
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - services
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - extensions
          - networking.k8s.io   # k8s 1.14+
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - extensions
          - networking.k8s.io   # k8s 1.14+
        resources:
          - ingresses/status
        verbs:
          - update
      - apiGroups:
          - networking.k8s.io   # k8s 1.14+
        resources:
          - ingressclasses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - configmaps
        resourceNames:
          - ingress-controller-leader-nginx
        verbs:
          - get
          - update
      - apiGroups:
          - ''
        resources:
          - configmaps
        verbs:
          - create
      - apiGroups:
          - ''
        resources:
          - events
        verbs:
          - create
          - patch
    ---
    # Source: ingress-nginx/templates/controller-rolebinding.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ingress-nginx
    subjects:
      - kind: ServiceAccount
        name: ingress-nginx
        namespace: ingress-nginx
    ---
    # Source: ingress-nginx/templates/controller-service-webhook.yaml
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
    spec:
      type: ClusterIP
      ports:
        - name: https-webhook
          port: 443
          targetPort: webhook
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    ---
    # Source: ingress-nginx/templates/controller-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: NodePort
      ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: http
        - name: https
          port: 443
          protocol: TCP
          targetPort: https
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    ---
    # Source: ingress-nginx/templates/controller-deployment.yaml
    apiVersion: apps/v1
    #kind: Deployment
    #apiVersion: extensions/v1beta1
    # Changed to DaemonSet: one controller pod is created (and removed) per
    # node; combined with taint tolerations this enables ingress-nginx HA
    kind: DaemonSet
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: controller
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
          app.kubernetes.io/instance: ingress-nginx
          app.kubernetes.io/component: controller
      revisionHistoryLimit: 10
      minReadySeconds: 0
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/component: controller
        spec:
          dnsPolicy: ClusterFirst
          # Use the host network so the controller binds ports 80/443 directly on each node
          hostNetwork: true
          containers:
            - name: controller
              image: k8s.gcr.io/ingress-nginx/controller:v0.48.1
              imagePullPolicy: IfNotPresent
              lifecycle:
                preStop:
                  exec:
                    command:
                      - /wait-shutdown
              args:
                - /nginx-ingress-controller
                - --election-id=ingress-controller-leader
                - --ingress-class=nginx
                - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
                - --validating-webhook=:8443
                - --validating-webhook-certificate=/usr/local/certificates/cert
                - --validating-webhook-key=/usr/local/certificates/key
                # If these host ports are already in use, set alternatives:
                #- --http-port=81
                #- --https-port=1444
                #- --status-port=18081
              securityContext:
                capabilities:
                  drop:
                    - ALL
                  add:
                    - NET_BIND_SERVICE
                runAsUser: 101
                allowPrivilegeEscalation: true
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: LD_PRELOAD
                  value: /usr/local/lib/libmimalloc.so
              livenessProbe:
                failureThreshold: 5
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              readinessProbe:
                failureThreshold: 3
                httpGet:
                  path: /healthz
                  port: 10254
                  scheme: HTTP
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 1
                timeoutSeconds: 1
              ports:
                - name: http
                  containerPort: 80
                  protocol: TCP
                - name: https
                  containerPort: 443
                  protocol: TCP
                - name: webhook
                  containerPort: 8443
                  protocol: TCP
              volumeMounts:
                - name: webhook-cert
                  mountPath: /usr/local/certificates/
                  readOnly: true
              resources:
                requests:
                  cpu: 100m
                  memory: 90Mi
          nodeSelector:
            kubernetes.io/os: linux
          serviceAccountName: ingress-nginx
          terminationGracePeriodSeconds: 300
          volumes:
            - name: webhook-cert
              secret:
                secretName: ingress-nginx-admission
    ---
    # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
    # before changing this value, check the required kubernetes version
    # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
      name: ingress-nginx-admission
    webhooks:
      - name: validate.nginx.ingress.kubernetes.io
        matchPolicy: Equivalent
        rules:
          - apiGroups:
              - networking.k8s.io
            apiVersions:
              - v1beta1
            operations:
              - CREATE
              - UPDATE
            resources:
              - ingresses
        failurePolicy: Fail
        sideEffects: None
        admissionReviewVersions:
          - v1
          - v1beta1
        clientConfig:
          service:
            namespace: ingress-nginx
            name: ingress-nginx-controller-admission
            path: /networking/v1beta1/ingresses
    ---
    # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ingress-nginx-admission
      namespace: ingress-nginx
      annotations:
        helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    ---
    # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: ingress-nginx-admission
      annotations:
        helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    rules:
      - apiGroups:
          - admissionregistration.k8s.io
        resources:
          - validatingwebhookconfigurations
        verbs:
          - get
          - update
    ---
    # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: ingress-nginx-admission
      annotations:
        helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: ingress-nginx-admission
    subjects:
      - kind: ServiceAccount
        name: ingress-nginx-admission
        namespace: ingress-nginx
    ---
    # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: ingress-nginx-admission
      namespace: ingress-nginx
      annotations:
        helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    rules:
      - apiGroups:
          - ''
        resources:
          - secrets
        verbs:
          - get
          - create
    ---
    # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: ingress-nginx-admission
      namespace: ingress-nginx
      annotations:
        helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ingress-nginx-admission
    subjects:
      - kind: ServiceAccount
        name: ingress-nginx-admission
        namespace: ingress-nginx
    ---
    # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ingress-nginx-admission-create
      namespace: ingress-nginx
      annotations:
        helm.sh/hook: pre-install,pre-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      template:
        metadata:
          name: ingress-nginx-admission-create
          labels:
            helm.sh/chart: ingress-nginx-3.34.0
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/version: 0.48.1
            app.kubernetes.io/managed-by: Helm
            app.kubernetes.io/component: admission-webhook
        spec:
          containers:
            - name: create
              image: docker.io/jettech/kube-webhook-certgen:v1.5.1
              imagePullPolicy: IfNotPresent
              args:
                - create
                - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
                - --namespace=$(POD_NAMESPACE)
                - --secret-name=ingress-nginx-admission
              env:
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
          restartPolicy: OnFailure
          serviceAccountName: ingress-nginx-admission
          securityContext:
            runAsNonRoot: true
            runAsUser: 2000
    ---
    # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: ingress-nginx-admission-patch
      namespace: ingress-nginx
      annotations:
        helm.sh/hook: post-install,post-upgrade
        helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
      labels:
        helm.sh/chart: ingress-nginx-3.34.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.48.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      template:
        metadata:
          name: ingress-nginx-admission-patch
          labels:
            helm.sh/chart: ingress-nginx-3.34.0
            app.kubernetes.io/name: ingress-nginx
            app.kubernetes.io/instance: ingress-nginx
            app.kubernetes.io/version: 0.48.1
            app.kubernetes.io/managed-by: Helm
            app.kubernetes.io/component: admission-webhook
        spec:
          containers:
            - name: patch
              image: docker.io/jettech/kube-webhook-certgen:v1.5.1
              imagePullPolicy: IfNotPresent
              args:
                - patch
                - --webhook-name=ingress-nginx-admission
                - --namespace=$(POD_NAMESPACE)
                - --patch-mutating=false
                - --secret-name=ingress-nginx-admission
                - --patch-failure-policy=Fail
              env:
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
          restartPolicy: OnFailure
          serviceAccountName: ingress-nginx-admission
          securityContext:
            runAsNonRoot: true
            runAsUser: 2000
    
    

    Run on the master:

    $ kubectl apply -f deploy.yaml
    
    $ kubectl get pod -o wide -n ingress-nginx
    
    NAME                                   READY   STATUS      RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
    ingress-nginx-admission-create-87rgx   0/1     Completed   0          72s   10.244.169.137   k8s-node2   <none>           <none>
    ingress-nginx-admission-patch-hq6b6    0/1     Completed   0          72s   10.244.36.74     k8s-node1   <none>           <none>
    ingress-nginx-controller-f7d7r         1/1     Running     0          72s   172.16.66.170    k8s-node2   <none>           <none>
    ingress-nginx-controller-p2z5t         1/1     Running     0          72s   172.16.66.168    k8s-node1   <none>           <none>
    
    # ingress-nginx has created one controller per worker node; each watches for nginx configuration changes and applies them
    
  6. Configure the Ingress

    With ingress-nginx installed, Ingress routing rules still need to be configured, much like nginx location rules:

    $ vim ingressRule.yaml
    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
      namespace: default
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
        - host: k8s.test.com                    # the external domain
          http:
            paths:
            - path: /web1                       # first-level route
              pathType: Prefix                  # match rule; Prefix matches on path prefix
              backend:
                service:
                  name: websvr1-service         # the Service to route to
                  port: 
                    number: 3000                # the port exposed by that Service
            - path: /web2
              pathType: Prefix
              backend:
                service:
                  name: websvr2-service
                  port: 
                    number: 3001
    
    $ kubectl apply -f ingressRule.yaml
    
    $ kubectl describe ingress
    
    Name:             my-ingress
    Namespace:        default
    Address:          
    Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
    Rules:
      Host            Path  Backends
      ----            ----  --------
      k8s.test.com  
                      /web1   websvr1-service:3000 (10.244.169.134:3000,10.244.169.135:3000,10.244.36.68:3000)
                      /web2   websvr2-service:3001 (10.244.169.136:3001,10.244.36.72:3001,10.244.36.73:3001)
    Annotations:      kubernetes.io/ingress.class: nginx
    Events:
      Type    Reason  Age   From                      Message
      ----    ------  ----  ----                      -------
      Normal  Sync    11s   nginx-ingress-controller  Scheduled for sync
      Normal  Sync    11s   nginx-ingress-controller  Scheduled for sync
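The rule set above amounts to prefix matching on the request path, much like nginx location blocks: the longest matching prefix wins, and unmatched paths fall through to the default backend. A conceptual sketch (not the controller's actual implementation):

```javascript
// Conceptual sketch of the Prefix pathType matching configured in
// ingressRule.yaml: longest matching prefix wins; no match means the
// request goes to the default backend (represented here by null).
const rules = [
  { path: '/web1', backend: 'websvr1-service:3000' },
  { path: '/web2', backend: 'websvr2-service:3001' },
];

function route(path) {
  const matches = rules.filter(
    (r) => path === r.path || path.startsWith(r.path + '/')
  );
  if (matches.length === 0) return null;
  matches.sort((a, b) => b.path.length - a.path.length);
  return matches[0].backend;
}
```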
    
  7. Verify

    Now verify the ingress-nginx routing rules with curl GET requests (this assumes k8s.test.com resolves to one of the node IPs, e.g. via an /etc/hosts entry):

    $ curl k8s.test.com/web1/index
    # rendered index page: title "Express1", body "Welcome to Express1"

    $ curl k8s.test.com/web1/send
    # websvr1 relays websvr2's reply: title "get index2", body "Welcome to get index2"

    $ curl k8s.test.com/web2/index
    # rendered index page: title "Express2", body "Welcome to Express2"

    $ curl k8s.test.com/web2/send
    # websvr2 relays websvr1's reply: title "get index1", body "Welcome to get index1"

    With that, the websvr instances inside the Kubernetes cluster are all reachable through a public domain name.

    Appendix: in real projects, per-user login state is sometimes held in the memory of a single process (e.g. server-side sessions). Under the Service's default distribution rules, consecutive requests from the same user may land on different pods and the session is lost. Can a Service be given distribution rules like nginx's, for example keyed on the client IP? That will be discussed in a separate document.
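On the closing question about sticky sessions: a Kubernetes Service does support source-IP affinity out of the box via `sessionAffinity: ClientIP`. A sketch of websvr1-service with affinity enabled (the timeout shown is the API default and is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: websvr1-service
spec:
  selector:
    app: websvr1
  sessionAffinity: ClientIP        # pin each client IP to one pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # affinity window; 10800 is the default
  ports:
  - protocol: TCP
    port: 3000
    targetPort: 3000
```

Note that for traffic arriving through ingress-nginx the Service sees the controller as the client, so ClientIP affinity alone may not stick end users; ingress-nginx offers cookie-based affinity (the `nginx.ingress.kubernetes.io/affinity: "cookie"` annotation) for that case.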

    Corrections and feedback are welcome.

Original article: http://outofmemory.cn/zaji/4663845.html