linux – High I/O on DRBD disk drbd10 on the stacked site


Overview: We have 4 Red Hat boxes, Dell PowerEdge R630 (call them a, b, c, d), with the following OS/software packages:

RedHat EL 6.5
MySQL Enterprise 5.6
DRBD 8.4
Corosync 1.4.7

We set up a 4-way stacked DRBD resource as follows:

Cluster-1: servers a and b, connected to each other over the local LAN. Cluster-2: servers c and d.

Cluster-1 and Cluster-2 are connected via virtual IPs through stacked DRBD and are located in different data centers.

The drbd0 device was created locally on each server (1 GB) and is in turn attached to drbd10.

The overall setup consists of 4 layers: Tomcat front-end application -> RabbitMQ -> Memcached -> MySQL/DRBD.

We are experiencing very high disk I/O even though there is little activity. Traffic will increase within a few weeks, so we are worried it will hurt performance badly. I/O utilization is high only on the stacked site (sometimes 90% and above); the secondary site does not have this problem. Utilization stays high even when the application is idle.

Please share any suggestions or tuning guidance that could help us resolve this.

resource clusterdb {
  protocol C;
  handlers {
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
  }
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
    outdated-wfc-timeout 2;  # 2 seconds.
  }
  disk {
    on-io-error detach;
    no-disk-barrier;
    no-md-flushes;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "clusterdb";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  on server-1 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   10.170.26.28:7788;
    meta-disk internal;
  }
  on server-2 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   10.170.26.27:7788;
    meta-disk internal;
  }
}

Stacked configuration:

resource clusterdb_stacked {
  protocol A;
  handlers {
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
  }
  startup {
    degr-wfc-timeout 120;    # 2 minutes.
    outdated-wfc-timeout 2;  # 2 seconds.
  }
  disk {
    on-io-error detach;
    no-disk-barrier;
    no-md-flushes;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "clusterdb";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 10M;
    al-extents 257;
    on-no-data-accessible io-error;
  }
  stacked-on-top-of clusterdb {
    device  /dev/drbd10;
    address 10.170.26.28:7788;
  }
  stacked-on-top-of clusterdb_DR {
    device  /dev/drbd10;
    address 10.170.26.60:7788;
  }
}

Requested data:

Time      svctm   w_wait  %util
10:32:01  3.07    55.23   94.11
10:33:01  3.29    50.75   96.27
10:34:01  2.82    41.44   96.15
10:35:01  3.01    72.30   96.86
10:36:01  4.52    40.41   94.24
10:37:01  3.80    50.42   83.86
10:38:01  3.03    72.54   97.17
10:39:01  4.96    37.08   89.45
10:41:01  3.55    66.48   70.19
10:45:01  2.91    53.70   89.57
10:46:01  2.98    49.49   94.73
10:55:01  3.01    48.38   93.70
10:56:01  2.98    43.47   97.26
11:05:01  2.80    61.84   86.93
11:06:01  2.67    43.35   96.89
11:07:01  2.68    37.67   95.41

Question updated based on the comments:

Here is a comparison of the local link versus the stacked link.

Between the local servers:

[root@pri-site-valsql-a]# ping pri-site-valsql-b
PING pri-site-valsql-b.csn.infra.sm (10.170.24.23) 56(84) bytes of data.
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=1 ttl=64 time=0.143 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=2 ttl=64 time=0.145 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=3 ttl=64 time=0.132 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=4 ttl=64 time=0.145 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=5 ttl=64 time=0.150 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=6 ttl=64 time=0.145 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=7 ttl=64 time=0.132 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=8 ttl=64 time=0.127 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=9 ttl=64 time=0.134 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=10 ttl=64 time=0.149 ms
64 bytes from pri-site-valsql-b.csn.infra.sm (10.170.24.23): icmp_seq=11 ttl=64 time=0.147 ms
^C
--- pri-site-valsql-b.csn.infra.sm ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10323ms
rtt min/avg/max/mdev = 0.127/0.140/0.150/0.016 ms

Between the two stacked servers:

[root@pri-site-valsql-a]# ping dr-site-valsql-b
PING dr-site-valsql-b.csn.infra.sm (10.170.24.48) 56(84) bytes of data.
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=1 ttl=64 time=9.68 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=2 ttl=64 time=4.51 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=3 ttl=64 time=4.53 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=4 ttl=64 time=4.51 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=5 ttl=64 time=4.51 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=6 ttl=64 time=4.52 ms
64 bytes from dr-site-valsql-b.csn.infra.sm (10.170.24.48): icmp_seq=7 ttl=64 time=4.52 ms
^C
--- dr-site-valsql-b.csn.infra.sm ping statistics ---
7 packets transmitted, 7 received, time 6654ms
rtt min/avg/max/mdev = 4.510/5.258/9.686/1.808 ms
[root@pri-site-valsql-a]#

Output showing the high I/O:

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.06    0.00    0.00   99.94

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00    2.00     0.00    16.00     8.00     0.90    1.50 452.25  90.45

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.25    0.00    0.13    0.50    0.00   99.12

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    1.00   44.00     8.00   352.00     8.00     1.07    2.90  18.48  83.15

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.13    0.00    0.06    0.25    0.00   99.56

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00   31.00     0.00   248.00     8.00     1.01    2.42  27.00  83.70

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.19    0.00    0.06    0.00    0.00   99.75

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
drbd0             0.00     0.00    0.00    2.00     0.00    16.00     8.00     0.32    1.50 162.25  32.45
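To watch this continuously, something like the following one-liner works (a sketch: iostat comes from the sysstat package on RHEL 6, and the field positions assume the exact extended-stats column layout shown above):

```shell
# Extended device stats for drbd0, one sample per second, 10 samples.
# Assumed columns (iostat -x, sysstat 9.x):
#   $1=device  $5=w/s  $10=await  $11=svctm  $12=%util
iostat -x 1 10 | awk '/^drbd0/ {print $1, $5, $10, $11, $12}'
```

The svctm values in the hundreds of milliseconds for a handful of 8-sector writes are what make the utilization figure look so alarming.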

I edited the resource file as below, but still no luck:

disk {
  on-io-error detach;
  no-disk-barrier;
  no-disk-flushes;
  no-md-flushes;
  c-plan-ahead 0;
  c-fill-target 24M;
  c-min-rate 80M;
  c-max-rate 300M;
  al-extents 3833;
}
net {
  cram-hmac-alg "sha1";
  shared-secret "clusterdb";
  after-sb-0pri disconnect;
  after-sb-1pri disconnect;
  after-sb-2pri disconnect;
  rr-conflict disconnect;
  max-epoch-size 20000;
  max-buffers 20000;
  unplug-watermark 16;
}
syncer {
  rate 100M;
  on-no-data-accessible io-error;
}
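For what it's worth, after editing the resource file the changed disk/net options can be sanity-checked and applied without a full restart (a sketch; `clusterdb_stacked` is the resource name from the configs above):

```shell
# Parse-check the edited configuration for this resource.
drbdadm dump clusterdb_stacked
# Apply changed disk/net options to the running resource.
drbdadm adjust clusterdb_stacked
```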
Solution

I don't see the stacked resource in your configs. You also don't mention any version numbers, but settings this low make me think you're either running something ancient (8.3.x) or following some very old instructions.

In any case, assuming you're replicating to the stacked device with Protocol A (asynchronous): even with buffered I/O you will quickly fill the TCP send buffer, and then I/O sits in iowait while the buffer drains. DRBD has to put its replicated writes somewhere, and it can only keep so many unacknowledged replicated writes in flight.
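If the send buffer really is what's filling up, one knob to experiment with is its size in the net section. A sketch, not a recommendation: sndbuf-size is a standard DRBD net option, but the 10M value here is an arbitrary assumption you'd need to tune for your link:

```
net {
  # A larger send buffer lets more replicated writes stay in flight
  # before the application sees iowait; 0 means auto-tune.
  sndbuf-size 10M;
}
```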

I/O wait contributes to system load. Does the load go away if you temporarily disconnect the stacked resource? That's one way to verify this is the problem. You can also inspect the TCP buffers with netstat or ss while the load is high.
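Checking the send queue on the replication connection is straightforward; a sketch (port 7788 is taken from the configs above, and the awk filter simply keeps the two header lines plus matching sockets):

```shell
# Show established TCP sockets; a persistently large Send-Q on the
# DRBD port (7788 here) means writes are backing up behind the link.
netstat -tn | awk 'NR <= 2 || /:7788/'
```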

Unless the latency and throughput of the link between your sites are amazing (dark fiber or something), you'll probably need/want DRBD Proxy from LINBIT; it lets you buffer writes in system memory.
