A tutorial on quickly deploying a Nebula Graph cluster with Docker Swarm



This article explains how to quickly deploy a Nebula Graph cluster with Docker Swarm. It walks through the steps in detail and should be a useful reference for study or work; readers who need it can follow along.

1. Introduction

This article describes how to deploy a Nebula Graph cluster using Docker Swarm.

2. Building the Nebula Graph cluster

2.1 Environment preparation

Prepare the machines:

IP               Memory (GB)    CPU (cores)
192.168.1.166    16
192.168.1.167    16
192.168.1.168    16

Make sure Docker is installed on all machines before starting the deployment.
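A quick sanity check on each node might look like the following; the official convenience script is only one of several ways to install Docker if it is missing:

# verify that the Docker engine is present and running on this node
docker --version
systemctl status docker --no-pager
# if Docker is not installed, one option is the official convenience script:
# curl -fsSL https://get.docker.com | sh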

2.2 Initialize the swarm cluster

Run the following on the 192.168.1.166 machine:

$ docker swarm init --advertise-addr 192.168.1.166
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.1.166:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

2.3 Join the worker nodes

Following the hint printed by the init command, join the swarm worker nodes by running the following on 192.168.1.167 and 192.168.1.168 respectively:

docker swarm join \
--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
192.168.1.166:2377
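If the token from the init output is no longer at hand, it can be printed again at any time on the manager node with a standard Swarm command:

# run on the manager (192.168.1.166) to reprint the full worker join command
docker swarm join-token worker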

2.4 Verify the cluster

docker node ls

ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
h0az2wzqetpwhl9ybu76yxaen *   KF2-DATA-166   Ready    Active         Reachable        18.06.1-ce
q6jripaolxsl7xqv3cmv5pxji     KF2-DATA-167   Ready    Active         Leader           18.06.1-ce
h1iql1uvm7123h3gon9so69dy     KF2-DATA-168   Ready    Active                          18.06.1-ce

2.5 Configure the Docker stack

vi docker-stack.yml

Add the following content:

version: '3.6'
services:
  metad0:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad0:/data/meta
      - logs-metad0:/logs
    networks:
      - nebula-net
  metad1:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad1:/data/meta
      - logs-metad1:/logs
    networks:
      - nebula-net
  metad2:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad2:/data/meta
      - logs-metad2:/logs
    networks:
      - nebula-net
  storaged0:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12002
        protocol: tcp
        mode: host
    volumes:
      - data-storaged0:/data/storage
      - logs-storaged0:/logs
    networks:
      - nebula-net
  storaged1:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12004
        protocol: tcp
        mode: host
    volumes:
      - data-storaged1:/data/storage
      - logs-storaged1:/logs
    networks:
      - nebula-net
  storaged2:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12006
        protocol: tcp
        mode: host
    volumes:
      - data-storaged2:/data/storage
      - logs-storaged2:/logs
    networks:
      - nebula-net
  graphd1:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.166
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:13000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3699
        protocol: tcp
        mode: host
      - target: 13000
        published: 13000
        protocol: tcp
        # mode: host
      - target: 13002
        published: 13002
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd:/logs
    networks:
      - nebula-net
  graphd2:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.167
      - --log_dir=/logs
      - --v=2
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:13001/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3640
        protocol: tcp
        mode: host
      - target: 13000
        published: 13001
        protocol: tcp
        mode: host
      - target: 13002
        published: 13003
        protocol: tcp
        # mode: host
    volumes:
      - logs-graphd2:/logs
    networks:
      - nebula-net
  graphd3:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.168
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:13002/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3641
        protocol: tcp
        mode: host
      - target: 13000
        published: 13002
        protocol: tcp
        # mode: host
      - target: 13002
        published: 13004
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd3:/logs
    networks:
      - nebula-net
networks:
  nebula-net:
    external: true
    attachable: true
    name: host
volumes:
  data-metad0:
  logs-metad0:
  data-metad1:
  logs-metad1:
  data-metad2:
  logs-metad2:
  data-storaged0:
  logs-storaged0:
  data-storaged1:
  logs-storaged1:
  data-storaged2:
  logs-storaged2:
  logs-graphd:
  logs-graphd2:
  logs-graphd3:

docker-stack.yml
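Note that the networks section above marks nebula-net as external with name: host, so the services attach to Docker's built-in host network and Swarm does not create anything for you. If you would rather isolate the cluster on its own overlay network, a rough sketch is to create one beforehand and adjust the name: field in docker-stack.yml accordingly:

# run on the manager node: create an attachable overlay network named nebula-net
docker network create --driver overlay --attachable nebula-net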

Edit nebula.env

Add the following content:

TZ=UTC
USER=root

nebula.env

2.6 Start the Nebula Graph cluster

docker stack deploy nebula -c docker-stack.yml
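To confirm that every replica was scheduled and started, the stack can be inspected with standard Swarm commands (service names are prefixed with the stack name, nebula):

# list the services in the stack and their replica counts
docker service ls
# show each task of the stack and the node it was placed on
docker stack ps nebula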

3. Cluster load balancing and high availability configuration

At present the Nebula Graph client (1.x) does not provide load balancing; it simply picks a graphd at random to connect to. For production use you should therefore set up load balancing and high availability yourself.

Figure 3.1

The overall deployment architecture is divided into three layers, as shown in Figure 3.1: the data service layer, the load balancing layer, and the high availability layer.

Load balancing layer: balances client requests and distributes them to the data service layer below.

High availability layer: provides high availability for HAProxy itself, keeping the load balancing layer up so the cluster as a whole can keep serving.

3.1 Load balancing configuration

HAProxy is deployed with docker-compose. Edit the following three files:

Dockerfile: add the following content

FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EXPOSE 3640

Dockerfile

docker-compose.yml: add the following content

version:"3.2" services: haproxy: container_name:haproxy build:. volumes: -./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg ports: -3640:3640 restart:always networks: -app_net networks: app_net: external:true

docker-compose.yml
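Since app_net is declared as an external network, docker-compose will not create it automatically; create it once before starting the service:

# create the network referenced by docker-compose.yml
docker network create app_net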

haproxy.cfg: add the following content

global
    daemon
    maxconn 30000
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning

defaults
    log-format %hr\ %ST\ %B\ %Ts
    log global
    mode http
    option http-keep-alive
    timeout connect 5000ms
    timeout client 10000ms
    timeout server 50000ms
    timeout http-request 20000ms

# custom your own frontends && backends && listen conf
# CUSTOM

listen graphd-cluster
    bind *:3640
    mode tcp
    maxconn 300
    balance roundrobin
    server server1 192.168.1.166:3699 maxconn 300 check
    server server2 192.168.1.167:3699 maxconn 300 check
    server server3 192.168.1.168:3699 maxconn 300 check

listen stats
    bind *:1080
    stats refresh 30s
    stats uri /stats

3.2 Start HAProxy

docker-compose up -d
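To verify that HAProxy is up, a rough check is to confirm the container is running and that the published port 3640 accepts TCP connections (nc may need to be installed separately):

# confirm the haproxy container is running
docker-compose ps
# check that the graphd listener defined in haproxy.cfg is reachable
nc -zv 127.0.0.1 3640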

3.3 High availability configuration

Note: to configure keepalived you need to prepare a VIP (virtual IP) in advance. In the configuration below, 192.168.1.99 is the virtual IP.

Apply the following configuration on 192.168.1.166, 192.168.1.167, and 192.168.1.168.

Install keepalived

apt-get update && apt-get upgrade && apt-get install keepalived -y

Edit the keepalived configuration file /etc/keepalived/keepalived.conf (apply the configuration below on the three machines, setting a different priority value on each to determine the failover order).

Configuration on the 192.168.1.166 machine

global_defs {
    router_id lb01          # identification info, just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 52
    priority 999
    # interval, in seconds, between sync checks between the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # authentication password; within the same vrrp_instance, MASTER and BACKUP must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with alias ens160:1, the same on master and backup
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Configuration on the 192.168.1.167 machine

global_defs {
    router_id lb01          # identification info, just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 888
    # interval, in seconds, between sync checks between the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # authentication password; within the same vrrp_instance, MASTER and BACKUP must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with alias ens160:1, the same on master and backup
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Configuration on the 192.168.1.168 machine

global_defs {
    router_id lb01          # identification info, just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 777
    # interval, in seconds, between sync checks between the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # authentication password; within the same vrrp_instance, MASTER and BACKUP must use the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with alias ens160:1, the same on master and backup
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Common keepalived commands

# start keepalived
systemctl start keepalived
# enable keepalived at boot
systemctl enable keepalived
# restart keepalived
systemctl restart keepalived
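Once keepalived is running on all three machines, the node elected MASTER should hold the virtual IP. A simple way to check which node currently owns 192.168.1.99 (interface name as in the configuration above):

# run on each node; only the current MASTER should show the VIP on ens160
ip addr show ens160 | grep 192.168.1.99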

4. Other notes

How do you deploy offline? Simply point the images at a private image registry. If you run into any problems, feel free to reach out.
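For example, an offline workflow might look like the sketch below; registry.example.com is a placeholder for your own private registry, and the tags simply mirror the images used in docker-stack.yml:

# on a machine with internet access: pull, retag, and push to the private registry
docker pull vesoft/nebula-metad:nightly
docker tag vesoft/nebula-metad:nightly registry.example.com/vesoft/nebula-metad:nightly
docker push registry.example.com/vesoft/nebula-metad:nightly
# repeat for vesoft/nebula-storaged and vesoft/nebula-graphd, then point the
# image: fields in docker-stack.yml at the private registry instead of Docker Hub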

That concludes this article on quickly deploying a Nebula Graph cluster with Docker Swarm. For more on deploying Nebula Graph clusters with Docker, search our earlier articles or keep browsing the related articles. We hope you will continue to support us!
