kubectl explain service.spec.type
ClusterIP: the default type, used for service-to-service access inside the Kubernetes cluster, i.e. services reach each other through the internal service IP. The service IP is only reachable from inside the cluster and cannot be accessed from outside.
NodePort: built on top of ClusterIP; a configurable host port (nodePort) is opened on every node to expose the service, which allows external clients to access services inside the cluster. The nodePort forwards the external client's request to the Service, which then handles it.
LoadBalancer: mainly used on public clouds such as Alibaba Cloud or AWS. LoadBalancer builds on NodePort and exposes services in the cluster to external clients through the load balancer provided by the cloud vendor.
ExternalName: maps a service outside the Kubernetes cluster into the cluster, so that pods inside the cluster can reach the external service through a fixed service name; it is also sometimes used to let pods in different namespaces reach each other through an ExternalName.
For a ClusterIP-type Service, traffic from the cluster IP to the pods uses TCP by default (TCP suits services such as MySQL and Redis); UDP and SCTP are also supported.
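A minimal sketch showing where the type is set (the names myapp / myapp-service and the ports are placeholders, not taken from this article):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  namespace: test
spec:
  type: NodePort          # or ClusterIP / LoadBalancer / ExternalName
  selector:
    app: myapp            # pods carrying this label receive the traffic
  ports:
  - name: http
    port: 80              # service port, reachable cluster-internally
    targetPort: 8080      # container port
    nodePort: 30080       # host port opened on every node (NodePort type only)
    protocol: TCP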
1.2 Ingress controller overview
The ways Kubernetes can expose a service:
- NodePort
- ClusterIP
- LoadBalancer
- Ingress
Ingress: https://kubernetes.io/zh/docs/concepts/services-networking/ingress/
Ingress is one of the standard resource types in the Kubernetes API. An Ingress defines rules that route client requests, matched by host name or URL path, to a specified Service; in other words, it forwards requests from outside the Kubernetes cluster to a Service inside the cluster, and the Service in turn forwards them to the pods that handle the client's request.
Ingress controller: https://kubernetes.io/zh/docs/concepts/services-networking/ingress-controllers/
An Ingress resource specifies the listen address, the request host, the URL path and other rules, and client requests are forwarded according to how they match those rules. The component that watches Ingress resources and actually forwards the traffic is called the ingress controller. The ingress controller is a Kubernetes add-on, similar to the dashboard or flannel, and has to be deployed separately.
Ingress controller selection: https://kubernetes.io/zh/docs/concepts/services-networking/ingress-controllers/
Deploying the ingress controller: https://kubernetes.github.io/ingress-nginx/deploy/
NodePort (bare-metal) method: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal
Official Ingress configuration docs: https://kubernetes.io/zh/docs/concepts/services-networking/ingress/
1.3 Deploy the ingress controller
# cat ingress-controller-deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - endpoints
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      nodePort: 40080
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      nodePort: 40444
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      hostNetwork: true
      containers:
        - name: controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=ingress-nginx/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
  namespace: ingress-nginx
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups:
          - extensions
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /extensions/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-2.4.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.33.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: jettech/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc
            - --namespace=ingress-nginx
            - --secret-name=ingress-nginx-admission
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-2.4.0
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.33.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: jettech/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=ingress-nginx
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.4.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.33.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
# kubectl apply -f ingress-controller-deploy.yaml
1.4 Deploy the web services
# cat tomcat-app1.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: test-tomcat-app1-deployment-label
  name: test-tomcat-app1-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: test-tomcat-app1-selector
    spec:
      containers:
      - name: test-tomcat-app1-container
        image: harbor.k8s.local/k8s/tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: app1-data
          mountPath: /data/tomcat/webapps/myapp
          readOnly: false
      volumes:
      - name: app1-data
        nfs:
          server: 172.16.244.141
          path: /data/k8sdata/tomcat-app1-data
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-tomcat-app1-service-label
  name: test-tomcat-app1-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: test-tomcat-app1-selector
# cat tomcat-app2.yaml
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: test-tomcat-app2-deployment-label
  name: test-tomcat-app2-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-tomcat-app2-selector
  template:
    metadata:
      labels:
        app: test-tomcat-app2-selector
    spec:
      containers:
      - name: test-tomcat-app2-container
        image: harbor.k8s.local/k8s/tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: app2-data
          mountPath: /data/tomcat/webapps/myapp
          readOnly: false
      volumes:
      - name: app2-data
        nfs:
          server: 172.16.244.141
          path: /data/k8sdata/tomcat-app2-data
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-tomcat-app2-service-label
  name: test-tomcat-app2-service
  namespace: test
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40004
  selector:
    app: test-tomcat-app2-selector
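Both manifests can then be applied and checked before wiring up the Ingress; a possible sequence (not part of the original walkthrough):

# kubectl apply -f tomcat-app1.yaml -f tomcat-app2.yaml
# kubectl get deployment,svc -n test
# kubectl get pod -n test -o wide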
1.5 Single-host and multi-host Ingress
- Single host:
# cat ingress_single-host.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tom-web
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"                        ## which Ingress Controller handles this resource
    nginx.ingress.kubernetes.io/use-regex: "true"               ## allow the paths defined under rules to use regular expressions
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"    ## connection timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"       ## timeout for sending data to the backend server, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"       ## timeout for the backend server response, default 60s
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"          ## maximum client upload size, default 20m
    #nginx.ingress.kubernetes.io/rewrite-target: /              ## URL rewrite
    nginx.ingress.kubernetes.io/app-root: /index.html
spec:
  rules:                              # routing rules
  - host: www.test.com                ## host name requested by the client
    http:
      paths:
      - path:
        backend:
          serviceName: test-tomcat-app1-service   # which Service to forward to
          servicePort: 80                         ## port on the Service to forward to
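A possible way to apply the rule and confirm the ingress controller picked it up (sketch, commands not taken from the original text):

# kubectl apply -f ingress_single-host.yaml
# kubectl get ingress -n test
# kubectl describe ingress tom-web -n test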
# On the client:
# cat /etc/hosts
...
172.16.244.203 www.test.com

# On the HA node:
root@ha1:~# cat /etc/haproxy/haproxy.cfg
...
listen app-ingress-80
  bind 172.16.244.203:80
  mode tcp
  server k8s1 172.16.244.111:40080 check inter 3s fall 3 rise 5
  server k8s2 172.16.244.112:40080 check inter 3s fall 3 rise 5
  server k8s3 172.16.244.113:40080 check inter 3s fall 3 rise 5

listen app-ingress-443
  bind 172.16.244.203:443 mode tcp
  server k8s1 172.16.244.111:40444 check inter 3s fall 3 rise 5
  server k8s2 172.16.244.112:40444 check inter 3s fall 3 rise 5
  server k8s3 172.16.244.113:40444 check inter 3s fall 3 rise 5
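With the hosts entry and HAProxy in place, the rule can be verified from the client; the /myapp/ path below is only an assumption based on the volume mount used in the tomcat deployments:

# curl -I http://www.test.com/myapp/
# or, without editing /etc/hosts, pin the name to the VIP directly:
# curl -I --resolve www.test.com:80:172.16.244.203 http://www.test.com/myapp/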
- Multiple hosts:
# cat ingress_multi-host.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tom-web
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"                        ## which Ingress Controller handles this resource
    nginx.ingress.kubernetes.io/use-regex: "true"               ## allow the paths defined under rules to use regular expressions
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"    ## connection timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"       ## timeout for sending data to the backend server, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"       ## timeout for the backend server response, default 60s
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"          ## maximum client upload size, default 20m
    #nginx.ingress.kubernetes.io/rewrite-target: /              ## URL rewrite
    nginx.ingress.kubernetes.io/app-root: /index.html
spec:
  rules:
  - host: www.test.com
    http:
      paths:
      - path:
        backend:
          serviceName: test-tomcat-app1-service
          servicePort: 80
  - host: mobile.test.com
    http:
      paths:
      - path:
        backend:
          serviceName: test-tomcat-app2-service
          servicePort: 80
1.6 URL-based Ingress
# cat ingress-url.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tom-web
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"                        ## which Ingress Controller handles this resource
    nginx.ingress.kubernetes.io/use-regex: "true"               ## allow the paths defined under rules to use regular expressions
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"    ## connection timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"       ## timeout for sending data to the backend server, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"       ## timeout for the backend server response, default 60s
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"          ## maximum client upload size, default 20m
    #nginx.ingress.kubernetes.io/rewrite-target: /              ## URL rewrite
    nginx.ingress.kubernetes.io/app-root: /index.html
spec:
  rules:
  - host: www.test.com
    http:
      paths:
      - path: /myapp/url1
        backend:
          serviceName: test-tomcat-app1-service
          servicePort: 80
      - path: /myapp/url2
        backend:
          serviceName: test-tomcat-app2-service
          servicePort: 80
  - host: mobile.test.com
    http:
      paths:
      - path: /myapp/url1
        backend:
          serviceName: test-tomcat-app1-service
          servicePort: 80
      - path: /myapp/url2
        backend:
          serviceName: test-tomcat-app2-service
          servicePort: 80
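A quick check of the URL-based rules, assuming url1/url2 directories actually exist under the NFS-backed myapp directories and that mobile.test.com also resolves to the load balancer VIP on the client:

# kubectl apply -f ingress-url.yaml
# curl http://www.test.com/myapp/url1/
# curl http://www.test.com/myapp/url2/
# curl http://mobile.test.com/myapp/url2/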
1.7 Single-host and multi-host Ingress over HTTPS
1.7.1 Issue certificates
certs# openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.test.com'
certs# openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.test.com'
certs# openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
1.7.2 Upload the key to Kubernetes
certs# kubectl create secret generic ingrees-www-tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key -n test
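Optionally confirm the secret exists with the expected data keys before referencing it from an Ingress:

certs# kubectl get secret ingrees-www-tls-secret -n test
certs# kubectl describe secret ingrees-www-tls-secret -n test   # should list the tls.crt and tls.key data keys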
1.7.3 Single-host Ingress
# cat ingress-https-single-host.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-web
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"               ## which Ingress Controller handles this resource
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'   # SSL redirect: force http requests to https, i.e. site-wide https as in nginx
spec:
  tls:
  - hosts:
    - www.test.com
    secretName: ingrees-www-tls-secret
  rules:
  - host: www.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-tomcat-app1-service
          servicePort: 80
1.7.4 Multi-host Ingress
1.7.4.1 Add a certificate
# openssl req -new -newkey rsa:4096 -keyout mobile.key -out mobile.csr -nodes -subj '/CN=mobile.test.com'
# openssl x509 -req -sha256 -days 3650 -in mobile.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out mobile.crt
1.7.4.2 Add the certificate to Kubernetes
certs# kubectl create secret generic ingress-mobile-tls-secret --from-file=tls.crt=mobile.crt --from-file=tls.key=mobile.key -n test
1.7.4.3 Create the new Ingress
# cat ingress-https-multi-host.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-web
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"               ## which Ingress Controller handles this resource
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - www.test.com
    secretName: ingrees-www-tls-secret
  - hosts:
    - mobile.test.com
    secretName: ingress-mobile-tls-secret
  rules:
  - host: www.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-tomcat-app1-service
          servicePort: 80
  - host: mobile.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-tomcat-app2-service
          servicePort: 80
2. Controlling the number of pod replicas with the HPA controller
2.1 Introduction to the HPA controller and metrics-server
kubectl autoscale automatically adjusts the number of pod replicas running in the Kubernetes cluster (horizontal autoscaling); the replica range and the trigger conditions have to be set in advance.
Starting with version 1.1, Kubernetes added a controller called HPA (Horizontal Pod Autoscaler) that automatically scales pods based on pod resource (CPU/memory) utilization. Early versions could only use the CPU utilization collected by the Heapster component as the trigger condition; since version 1.11, data collection is done by Metrics Server instead. The collected data is exposed through aggregated APIs such as metrics.k8s.io, custom.metrics.k8s.io and external.metrics.k8s.io, and the HPA controller queries these APIs so that pods can be scaled out or in based on the utilization of a given resource.
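Once a metrics API implementation such as metrics-server (deployed in the next section) is installed, the aggregated API can be queried directly as a quick sanity check, for example:

# confirm the metrics API group is registered and serving data
kubectl get apiservices | grep metrics
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"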
2.2 Controlling pod replicas with the HPA controller
2.2.1 Prepare metrics-server
By default the controller manager queries metrics resource usage every 15s (configurable with --horizontal-pod-autoscaler-sync-period; see kube-controller-manager --help | grep horizontal).
The following three kinds of metrics are supported:
- Predefined metrics (such as pod CPU), calculated as a utilization ratio
- Custom pod metrics, calculated as raw values
- Custom object metrics
Two metric query methods are supported:
- Heapster
- Custom REST API
Multiple metrics can be used at the same time.
metrics-server is used as the HPA data source:
https://github.com/kubernetes-sigs/metrics-server
2.2.1.1 Prepare the code and yaml template
Download the matching version from https://github.com/kubernetes-sigs/metrics-server/releases; a 0.4.x release is generally sufficient. The project page also documents the installation method; get the URL of the corresponding components.yaml file and download it.
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml
2.2.1.2 Prepare the image
The official yaml uses the Google image k8s.gcr.io/metrics-server/metrics-server:v0.4.4, which should be downloaded in advance and pushed to the private registry.
# docker pull k8s.gcr.io/metrics-server/metrics-server:v0.4.4
# docker save -o metrics-server-v0.4.4.tar.gz k8s.gcr.io/metrics-server/metrics-server:v0.4.4
# docker load -i metrics-server-v0.4.4.tar.gz
# docker tag k8s.gcr.io/metrics-server/metrics-server:v0.4.4 harbor.k8s.local/k8s/metrics-server:v0.4.4
# docker push harbor.k8s.local/k8s/metrics-server:v0.4.4
2.2.1.3 Modify the yaml file
# cat components.yaml
...
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.4   # change to the private registry address: harbor.k8s.local/k8s/metrics-server:v0.4.4
...
2.2.1.4 Create the metrics-server service
# kubectl apply -f components-v0.4.4.yaml
2.2.1.5 Verify the metrics-server pod
# kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
...
kube-system   metrics-server-65f768bf69-tmkrg   1/1     Running   0          12s
...
# kubectl top pod -n test    # CPU and memory data is now available
W1021 23:56:11.441727  138731 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
NAME                                           CPU(cores)   MEMORY(bytes)
deploy-devops-redis-5f874fd856-65z8f           2m           6Mi
mysql-0                                        14m          215Mi
test-consumer-deployment-745c65568-ctgxv       0m           9Mi
test-dubboadmin-deployment-549c67d886-zkmnn    134m         724Mi
test-jenkins-deployment-58d6bfbcfb-sst2c       3m           523Mi
test-provider-deployment-5b74694449-lzrx5      14m          11Mi
test-tomcat-app1-deployment-64d4bb4d69-sddw9   3m           154Mi
test-tomcat-app2-deployment-9775cc97b-b98k6    3m           124Mi
wordpress-app-deployment-9bcd446f8-kbvkk       1m           70Mi
zookeeper1-7bd866b955-jhzz2                    3m           73Mi
zookeeper2-ffd4c44c8-pfcxz                     4m           66Mi
zookeeper3-6dc789b469-bwxg2                    3m           55Mi
2.2.2 Configure scaling with a command
This method is recommended only for testing; in production, use the yaml file approach instead.
# kubectl autoscale --help
# kubectl autoscale deployment test-tomcat-app1-deployment --min=2 --max=5 --cpu-percent=30 -n test
# kubectl get hpa -n test
NAME                          REFERENCE                                TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
test-tomcat-app1-deployment   Deployment/test-tomcat-app1-deployment   0%/30%    2         5         2          54s
2.2.3 Define the scaling configuration in a yaml file
# cat hpa.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  namespace: test
  name: tomcat-app1-hpa
  labels:                        # labels other applications can use to select this object
    app: test-tomcat-app1-hpa
    version: v2beta1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    #apiVersion: extensions/v1beta1
    kind: Deployment
    name: test-tomcat-app1-deployment
  minReplicas: 2
  maxReplicas: 20
  targetCPUUtilizationPercentage: 60
  #metrics:
  #- type: Resource
  #  resource:
  #    name: cpu
  #    targetAverageUtilization: 60
  #- type: Resource
  #  resource:
  #    name: memory

# kubectl explain hpa.spec   # check the configuration parameters available in your specific version
2.2.4 Verify the HPA
root@deploy:~# kubectl apply -f hpa.yaml
horizontalpodautoscaler.autoscaling/tomcat-app1-hpa created
root@deploy:~# kubectl get pod -n test
NAME                                           READY   STATUS    RESTARTS   AGE
...
test-tomcat-app1-deployment-64d4bb4d69-g9bnh   1/1     Running   0          5s
test-tomcat-app1-deployment-64d4bb4d69-sddw9   1/1     Running   1          8h
...
root@deploy:~# kubectl get hpa -n test
NAME              REFERENCE                                TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
tomcat-app1-hpa   Deployment/test-tomcat-app1-deployment   0%/60%    2         20        2          31s
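One way to see the HPA react is to generate some CPU load against the app1 service; the commands below are only a sketch (they assume the NFS-backed /myapp/ path actually serves content and that a busybox image can be pulled in the cluster):

root@deploy:~# kubectl run load-generator --rm -it --image=busybox -n test -- /bin/sh -c "while true; do wget -q -O- http://test-tomcat-app1-service/myapp/ > /dev/null; done"
# in a second terminal, watch the target utilization and replica count change:
root@deploy:~# kubectl get hpa tomcat-app1-hpa -n test -w
root@deploy:~# kubectl get pod -n test -w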