There are many ways to do a canary deployment, and Istio is only one of them. A platform like Kubernetes already provides its own mechanisms for rolling out versions and doing canary releases, but those leave a number of problems unsolved, which makes Istio a good choice for a canary deployment scheme.
A canary deployment first brings up the new version, then routes a small share of user traffic to it for testing. If everything looks good, the ratio is increased until the old version is replaced; if problems appear, you simply roll back to the old version. The simplest scheme sends a random percentage of requests to the canary; more sophisticated schemes route based on the request's region, the user, or other attributes. This post covers only the simple scheme.
Experiment idea and environment

Traffic enters through the istio-ingressgateway, passes through the Gateway we create, is split according to the VirtualService's traffic weights, and is mapped to pods by the DestinationRule's subset rules.
Experiment steps:
1. Prepare two Deployments with two sets of labels, using the nginx image; to tell the versions apart, we exec into each container and edit the default page by hand
2. Create a Service selecting the shared label
3. Create the Istio Gateway
4. Create the VirtualService and split traffic between subsets by weight
5. Create the DestinationRule, defining subsets that point at the different labels
6. Test access
Kubernetes environment

[root@k8s-master istio-canary]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   41d   v1.22.0
k8s-node01   Ready    <none>                 41d   v1.22.0
k8s-node02   Ready    <none>                 41d   v1.22.0

Istio environment
[root@k8s-master istio-canary]# kubectl get pod,svc -n istio-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/istio-egressgateway-687f4db598-s6l8v    1/1     Running   0          103m
pod/istio-ingressgateway-78f69bd5db-kcgzb   1/1     Running   0          47m
pod/istiod-76d66d9876-5wnf2                 1/1     Running   0          115m

NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
service/istio-egressgateway    ClusterIP      10.108.16.157   <none>        80/TCP,443/TCP                                                               3d23h
service/istio-ingressgateway   LoadBalancer   10.99.29.225    <pending>     15021:30954/TCP,80:30869/TCP,443:32748/TCP,31400:31230/TCP,15443:30625/TCP   3d23h
service/istiod                 ClusterIP      10.109.41.38    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        3d23h
Create the Deployments

Prepare two Deployments with different labels, using the nginx image to simulate two versions of the application.
[root@k8s-master istio-canary]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv1
  labels:
    app: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v1
      apply: canary
  template:
    metadata:
      labels:
        app: v1
        apply: canary
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appv2
  labels:
    app: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: v2
      apply: canary
  template:
    metadata:
      labels:
        app: v2
        apply: canary
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
[root@k8s-master istio-canary]# kubectl apply -f deployment.yaml
deployment.apps/appv1 created
deployment.apps/appv2 created
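Worth noting the label scheme: both Deployments share apply: canary (which the Service will select on) while differing in app: v1 / app: v2 (which the DestinationRule subsets will select on later). A quick sanity check, not part of the original session:

[root@k8s-master istio-canary]# kubectl get pods -l apply=canary --show-labels

Both appv1 and appv2 pods should appear, with only the app label differing between them.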
To tell the two versions apart, we modify each one's default page content.
[root@k8s-master istio-canary]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
appv1-5cf75d8d8b-vdvzr   2/2     Running   0          22m
appv2-684dd44db7-r6k6k   2/2     Running   0          22m
[root@k8s-master istio-canary]# kubectl exec appv1-5cf75d8d8b-vdvzr -it /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@appv1-5cf75d8d8b-vdvzr:/# echo v1 > /usr/share/nginx/html/index.html
root@appv1-5cf75d8d8b-vdvzr:/# exit
exit
[root@k8s-master istio-canary]# kubectl exec appv2-684dd44db7-r6k6k -it /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@appv2-684dd44db7-r6k6k:/# echo v2 > /usr/share/nginx/html/index.html
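One caveat (my note, not from the original walkthrough): a page edited via kubectl exec lives only in the running container, so a recreated pod reverts to the nginx default page. A more durable variant is to serve the page from a ConfigMap; a minimal sketch for v1, where appv1-index is a name I made up:

apiVersion: v1
kind: ConfigMap
metadata:
  name: appv1-index
data:
  index.html: |
    v1

Then, in the appv1 Deployment's pod template, mount it over the nginx docroot:

    spec:
      volumes:
      - name: index
        configMap:
          name: appv1-index
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: index
          mountPath: /usr/share/nginx/html   # docroot now contains just index.html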
After the pages are modified, we can create a Service and curl its address to simulate access; requests should alternate between the two versions, roughly 50% each.
[root@k8s-master istio-canary]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    apply: canary
spec:
  selector:
    apply: canary
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
[root@k8s-master istio-canary]# kubectl apply -f service.yaml
service/canary created
[root@k8s-master istio-canary]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
canary       ClusterIP   10.97.182.219   <none>        80/TCP    7s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   3h11m
[root@k8s-master istio-canary]# curl 10.97.182.219
v2
[root@k8s-master istio-canary]# curl 10.97.182.219
v1
[root@k8s-master istio-canary]# curl 10.97.182.219
v2
[root@k8s-master istio-canary]# curl 10.97.182.219
v1
[root@k8s-master istio-canary]# curl 10.97.182.219
v2
[root@k8s-master istio-canary]# curl 10.97.182.219
v1
[root@k8s-master istio-canary]# curl 10.97.182.219
v2
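The roughly even split is expected: the Service matches apply=canary, which both pods carry, so kube-proxy spreads requests across the two endpoints. Rather than eyeballing individual curls, a larger sample can be tallied with something like:

[root@k8s-master istio-canary]# for i in $(seq 1 20); do curl -s 10.97.182.219; done | sort | uniq -c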
Create the Gateway
[root@k8s-master istio-canary]# cat gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: canary-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
[root@k8s-master istio-canary]# kubectl apply -f gateway.yaml
gateway.networking.istio.io/canary-gateway created
[root@k8s-master istio-canary]# kubectl get gateways.networking.istio.io
NAME             AGE
canary-gateway   7s
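The selector istio: ingressgateway ties this Gateway to the pods of the default ingress gateway deployment. If the Gateway appears to have no effect, it is worth confirming (assuming the standard istio-system install) that a pod actually carries that label:

[root@k8s-master istio-canary]# kubectl get pods -n istio-system -l istio=ingressgateway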
Create the VirtualService and DestinationRule. Note that the host in these resources must include .cluster.local (i.e. the fully qualified canary.default.svc.cluster.local); with the short name I kept getting 503 errors. I searched for a long time and still haven't figured out why the short form doesn't work; if you know, please leave a comment.
[root@k8s-master istio-canary]# cat virtualservice.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary
spec:
  hosts:
  - "*"
  gateways:
  - canary-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 90
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: canary
spec:
  host: canary.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      app: v1
  - name: v2
    labels:
      app: v2
[root@k8s-master istio-canary]# kubectl apply -f virtualservice.yaml
virtualservice.networking.istio.io/canary created
destinationrule.networking.istio.io/canary created
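If you run into 503s like the ones mentioned above, istioctl analyze is a useful first step: it flags common misconfigurations such as a VirtualService referencing a subset that no DestinationRule defines (this assumes istioctl is installed on the master):

[root@k8s-master istio-canary]# istioctl analyze -n default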
At this point the environment is ready and we can start testing. Since traffic enters through the istio-ingressgateway, first look up that Service's entry point. Its type is LoadBalancer, which is normally backed by a cloud provider's load balancer; we can also go in through a node directly, since port 80 is already mapped to node port 30869.
[root@k8s-master istio-canary]# kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
istio-egressgateway    ClusterIP      10.108.16.157   <none>        80/TCP,443/TCP                                                               4d1h
istio-ingressgateway   LoadBalancer   10.99.29.225    <pending>     15021:30954/TCP,80:30869/TCP,443:32748/TCP,31400:31230/TCP,15443:30625/TCP   4d1h
istiod                 ClusterIP      10.109.41.38    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        4d1h
[root@k8s-master istio-canary]# curl 192.168.3.50:30869
v1
[root@k8s-master istio-canary]# curl 192.168.3.50:30869
v1
[root@k8s-master istio-canary]# curl 192.168.3.50:30869
v1
[root@k8s-master istio-canary]# curl 192.168.3.50:30869
v1
[root@k8s-master istio-canary]# curl 192.168.3.50:30869
v1
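As an aside (my convenience, not from the original session), the node port can be extracted with jsonpath instead of reading it off the table by eye; in this cluster it prints 30869:

[root@k8s-master istio-canary]# kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'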
A handful of manual curls is not a good test because the sample is tiny, so we loop the request instead.
[root@k8s-master istio-canary]# for ((i=1;i<=100;i++)); do curl 192.168.3.50:30869; done > /tmp/curl.txt
The full output is too long to paste; the tally comes out at roughly 90:10.
[root@k8s-master istio-canary]# cat /tmp/curl.txt | sort | uniq -c
     89 v1
     11 v2
Adjust the traffic split
[root@k8s-master istio-canary]# kubectl edit virtualservices.networking.istio.io canary
...
    route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1
      weight: 70   # changed from 90 to 70
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
      weight: 30   # changed from 10 to 30
...
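kubectl edit is interactive, which is awkward to script; the same change can be applied non-interactively with a JSON patch (a sketch that assumes the route order shown above, v1 first and v2 second):

[root@k8s-master istio-canary]# kubectl patch virtualservices.networking.istio.io canary --type=json -p '[
  {"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 70},
  {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 30}
]'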
After the change, run the curl loop again; the tally is now:
[root@k8s-master istio-canary]# for ((i=1;i<=100;i++)); do curl 192.168.3.50:30869; done > /tmp/curl.txt
[root@k8s-master istio-canary]# cat /tmp/curl.txt | sort | uniq -c
     70 v1
     30 v2
That is as far as this experiment goes. Next I plan to try restricting canary access to a fixed group for internal testing (a sketch of that idea is below) and to work out a full release process. Comments and pointers are welcome; let's learn from each other.
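For reference, sending a fixed group of users (e.g. internal testers) to the canary can be expressed as a header match in the VirtualService. A minimal sketch of the direction I plan to explore, assuming testers send a canary: true request header (the header name is my invention); requests carrying the header go to v2, everything else stays on v1:

  http:
  - match:
    - headers:
        canary:
          exact: "true"
    route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: canary.default.svc.cluster.local
        subset: v1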