Load balancing is a key component of highly available network infrastructure: it distributes workload across multiple servers to improve an application's performance and reliability.
For gRPC load balancing, we use NGINX as a reverse proxy. NGINX has supported gRPC proxying since version 1.13.10. With this support, NGINX can proxy gRPC TCP connections, and it can also terminate, inspect, and track gRPC method calls.
All that is needed is to configure a gRPC proxy in the NGINX configuration file.
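A minimal sketch of such a configuration is shown below; the upstream name, backend addresses, and listen port are illustrative, not taken from the original setup. Note that the client must speak plaintext HTTP/2 (h2c) to this port, since no TLS is configured here.

```nginx
events {}

http {
    # Pool of gRPC backend servers; NGINX round-robins
    # across them by default, giving us load balancing.
    upstream grpc_backend {
        server 127.0.0.1:50051;
        server 127.0.0.1:50052;
    }

    server {
        # gRPC requires HTTP/2 on the listening socket.
        listen 8080 http2;

        location / {
            # Proxy all gRPC calls to the upstream group.
            grpc_pass grpc://grpc_backend;
        }
    }
}
```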
Then point a gRPC client at the proxied port.
Watching NGINX's access.log, you can see the corresponding proxy entries.
gRPC transfers data on top of HTTP/2, whose advantages need no introduction here. NGINX's gRPC load balancing is therefore also built on its HTTP/2 support, together with the gRPC module NGINX added for proxying gRPC requests.
A few NGINX parameters deserve attention:

The HTTP/2 module (ngx_http_v2_module)
http2_max_requests: the maximum number of requests served over one TCP connection (default 1000).
http2_max_concurrent_streams: the maximum number of concurrent streams in one TCP connection (default 128).
The gRPC module (ngx_http_grpc_module)
grpc_send_timeout: timeout for transmitting a request to the gRPC server; if it expires, NGINX closes the connection.
grpc_read_timeout: timeout for reading a response from the gRPC server; if it expires, NGINX closes the connection.
grpc_socket_keepalive: configures TCP keepalive on NGINX's connections to the gRPC server.
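These directives slot into the server and location blocks of the proxy configuration. The values below are illustrative, not recommendations, and grpc_socket_keepalive requires NGINX 1.15.6 or later:

```nginx
server {
    listen 8080 http2;

    # Raise the per-connection request and stream limits
    # so long-lived gRPC channels are not cycled too often.
    http2_max_requests 10000;
    http2_max_concurrent_streams 256;

    location / {
        grpc_pass grpc://grpc_backend;

        # Close the connection if sending to or reading from
        # the backend stalls longer than these timeouts.
        grpc_send_timeout 30s;
        grpc_read_timeout 30s;

        # Enable TCP keepalive probes on the upstream socket
        # (available since NGINX 1.15.6).
        grpc_socket_keepalive on;
    }
}
```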
Preface
SkyWalking is a very good APM product, but it has one painful problem in practice: with Elasticsearch-based storage, any trouble with the ES data makes the whole SkyWalking web UI unavailable. You then have to disable the agent service by service, and only after redeploying the services does everything work again, a full walk-through every time. The same problem shows up when upgrading SkyWalking versions. Moreover, APM data is transient and losing it is acceptable; by default SkyWalking keeps trace records for only 90 minutes. So I decided to containerize the SkyWalking deployment for one-step deploys and upgrades. What follows is the whole process of containerizing SkyWalking.
Goal: run the SkyWalking Docker image in a Kubernetes cluster and expose it as a service.
Building the Docker image
```dockerfile
FROM registry.cn-xx.xx.com/keking/jdk:1.8
ADD apache-skywalking-apm-incubating/ /opt/apache-skywalking-apm-incubating/
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && echo 'Asia/Shanghai' > /etc/timezone \
    && chmod +x /opt/apache-skywalking-apm-incubating/config/setApplicationEnv.sh \
    && chmod +x /opt/apache-skywalking-apm-incubating/webapp/setWebAppEnv.sh \
    && chmod +x /opt/apache-skywalking-apm-incubating/bin/startup.sh \
    && echo "tail -fn 100 /opt/apache-skywalking-apm-incubating/logs/webapp.log" >> /opt/apache-skywalking-apm-incubating/bin/startup.sh
EXPOSE 8080 10800 11800 12800
CMD /opt/apache-skywalking-apm-incubating/config/setApplicationEnv.sh \
    && sh /opt/apache-skywalking-apm-incubating/webapp/setWebAppEnv.sh \
    && /opt/apache-skywalking-apm-incubating/bin/startup.sh
```
Two questions need to be considered when writing the Dockerfile: which SkyWalking settings must be configurable dynamically (set at runtime), and how to keep the process running (SkyWalking's startup.sh behaves like Tomcat's startup.sh: it starts the server in the background and exits)?
application.yml
```yaml
#cluster:
#  zookeeper:
#    hostPort: localhost:2181
#    sessionTimeout: 100000
naming:
  jetty:
    # OS real network IP(binding required), for agent to find collector cluster
    host: 0.0.0.0
    port: 10800
    contextPath: /
cache:
#  guava:
  caffeine:
remote:
  gRPC:
    # OS real network IP(binding required), for collector nodes communicate with each other in cluster. collectorN --(gRPC) --> collectorM
    host: #real_host
    port: 11800
agent_gRPC:
  gRPC:
    # os real network ip(binding required), for agent to uplink data(trace/metrics) to collector. agent--(grpc)-->collector
    host: #real_host
    port: 11800
    # Set these two setting to open ssl
    #sslCertChainFile: $path
    #sslPrivateKeyFile: $path
    # Set your own token to active auth
    #authentication: xxxxxx
agent_jetty:
  jetty:
    # OS real network IP(binding required), for agent to uplink data(trace/metrics) to collector through HTTP. agent--(HTTP)-->collector
    # SkyWalking native Java/.Net/node.js agents don't use this.
    # Open this for other implementor.
    host: 0.0.0.0
    port: 12800
    contextPath: /
analysis_register:
  default:
analysis_jvm:
  default:
analysis_segment_parser:
  default:
    bufferFilePath: ../buffer/
    bufferOffsetMaxFileSize: 10M
    bufferSegmentMaxFileSize: 500M
    bufferFileCleanWhenRestart: true
ui:
  jetty:
    # Stay in `localhost` if UI starts up in default mode.
    # Change it to OS real network IP(binding required), if deploy collector in different machine.
    host: 0.0.0.0
    port: 12800
    contextPath: /
storage:
  elasticsearch:
    clusterName: #elasticsearch_clusterName
    clusterTransportSniffer: true
    clusterNodes: #elasticsearch_clusterNodes
    indexShardsNumber: 2
    indexReplicasNumber: 0
    highPerformanceMode: true
    # Batch process setting, refer to https://www.elastic.co/guide/en/elasticsearch/client/java-api/5.5/java-docs-bulk-processor.html
    bulkActions: 2000 # Execute the bulk every 2000 requests
    bulkSize: 20 # flush the bulk every 20mb
    flushInterval: 10 # flush the bulk every 10 seconds whatever the number of requests
    concurrentRequests: 2 # the number of concurrent requests
    # Set a timeout on metric data. After the timeout has expired, the metric data will automatically be deleted.
    traceDataTTL: 2880 # Unit is minute
    minuteMetricDataTTL: 90 # Unit is minute
    hourMetricDataTTL: 36 # Unit is hour
    dayMetricDataTTL: 45 # Unit is day
    monthMetricDataTTL: 18 # Unit is month
#storage:
#  h2:
#    url: jdbc:h2:~/memorydb
#    userName: sa
configuration:
  default:
    #namespace: xxxxx
    # alarm threshold
    applicationApdexThreshold: 2000
    serviceErrorRateThreshold: 10.00
    serviceAverageResponseTimeThreshold: 2000
    instanceErrorRateThreshold: 10.00
    instanceAverageResponseTimeThreshold: 2000
    applicationErrorRateThreshold: 10.00
    applicationAverageResponseTimeThreshold: 2000
    # thermodynamic
    thermodynamicResponseTimeStep: 50
    thermodynamicCountOfResponseTimeSteps: 40
    # max collection's size of worker cache collection, setting it smaller when collector OutOfMemory crashed.
    workerCacheMaxSize: 10000
#receiver_zipkin:
#  default:
#    host: localhost
#    port: 9411
#    contextPath: /
```
webapp.yml
Dynamic configuration: the password, and every IP that gRPC must bind to on the host, need to be set at runtime. Here, just before SkyWalking's startup.sh runs, we execute two configuration scripts that replace the dynamic parameters with environment variables that Kubernetes sets at runtime.
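The substitution idea can be sketched as a self-contained example; the file path /tmp/application.yml and the variable values here are hypothetical stand-ins for the real config file and the Kubernetes-injected environment:

```shell
#!/usr/bin/env sh
# Sketch of the placeholder-substitution approach: the YAML ships with
# a literal "#placeholder" token, which sed replaces with the value of
# an environment variable when the container starts.

# Hypothetical config file with the same placeholders as application.yml.
cat > /tmp/application.yml <<'EOF'
storage:
  elasticsearch:
    clusterName: #elasticsearch_clusterName
    clusterNodes: #elasticsearch_clusterNodes
EOF

# In Kubernetes these would come from the pod's env section.
elasticsearch_clusterName=elasticsearch
elasticsearch_clusterNodes=172.16.16.129:31300

sed -i "s/#elasticsearch_clusterName/${elasticsearch_clusterName}/g" /tmp/application.yml
sed -i "s/#elasticsearch_clusterNodes/${elasticsearch_clusterNodes}/g" /tmp/application.yml

cat /tmp/application.yml
```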
setApplicationEnv.sh
```shell
#!/usr/bin/env sh
sed -i "s/#elasticsearch_clusterNodes/${elasticsearch_clusterNodes}/g" /opt/apache-skywalking-apm-incubating/config/application.yml
sed -i "s/#elasticsearch_clusterName/${elasticsearch_clusterName}/g" /opt/apache-skywalking-apm-incubating/config/application.yml
sed -i "s/#real_host/${real_host}/g" /opt/apache-skywalking-apm-incubating/config/application.yml
```
setWebAppEnv.sh
```shell
#!/usr/bin/env sh
sed -i "s/#skywalking_password/${skywalking_password}/g" /opt/apache-skywalking-apm-incubating/webapp/webapp.yml
sed -i "s/#real_host/${real_host}/g" /opt/apache-skywalking-apm-incubating/webapp/webapp.yml
```
Keeping the process alive: appending "tail -fn 100 /opt/apache-skywalking-apm-incubating/logs/webapp.log" to the end of SkyWalking's startup.sh keeps a foreground process running in the container while continuously printing the webapp.log output.
Deploying to Kubernetes
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: skywalking
  namespace: uat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: skywalking
  template:
    metadata:
      labels:
        app: skywalking
    spec:
      imagePullSecrets:
        - name: registry-pull-secret
      nodeSelector:
        apm: skywalking
      containers:
        - name: skywalking
          image: registry.cn-xx.xx.com/keking/kk-skywalking:5.2
          imagePullPolicy: Always
          env:
            - name: elasticsearch_clusterName
              value: elasticsearch
            - name: elasticsearch_clusterNodes
              value: 172.16.16.129:31300
            - name: skywalking_password
              value: xxx
            - name: real_host
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          resources:
            limits:
              cpu: 1000m
              memory: 4Gi
            requests:
              cpu: 700m
              memory: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: skywalking
  namespace: uat
  labels:
    app: skywalking
spec:
  selector:
    app: skywalking
  ports:
    - name: web-a
      port: 8080
      targetPort: 8080
      nodePort: 31180
    - name: web-b
      port: 10800
      targetPort: 10800
      nodePort: 31181
    - name: web-c
      port: 11800
      targetPort: 11800
      nodePort: 31182
    - name: web-d
      port: 12800
      targetPort: 12800
      nodePort: 31183
  type: NodePort
```
The only subtlety in the Kubernetes manifest is obtaining the pod IP in the env section: several SkyWalking settings must be bound to the container's real IP, and that IP can be injected into the container as an environment variable via the Downward API (fieldRef on status.podIP, as in the real_host entry above).
Conclusion
The whole SkyWalking containerized deployment took about a day from testing to working. I spent an hour or so of that on Tan's skywalking-docker image (https://hub.docker.com/r/wutang/skywalking-docker/), but found a script with a permission problem (Tan reports it is fixed; I have not retested yet) and a few things I could not easily control, so I built my own image. The biggest problem was cluster networking: at first I bound all SkyWalking service IPs to 0.0.0.0 and exposed them through NodePort mappings. The agent could then reach the naming service at cluster-IP:31181, but the collector gRPC address returned by the naming service came back as 0.0.0.0:11800, an address at which the agent obviously cannot reach the collector. Binding the pod IP instead solved the problem.