ext4 – Pacemaker errors during failback with DRBD


Overview

My cluster has two nodes running DRBD, Pacemaker and Corosync. When the first node fails, the second node takes over the service and everything is fine, but when we have to fail back (node1 comes back online), some errors appear and the cluster stops working.

It is a CentOS 6 cluster with kernel 2.6.32-504.12.2.el6.x86_64 and these packages:

kmod-drbd83-8.3.16-3, drbd83-utils-8.3.16-1, corosynclib-1.4.7-1,
corosync-1.4.7-1, pacemaker-1.1.12-4, pacemaker-cluster-libs-1.1.12-4, pacemaker-libs-1.1.12-4, pacemaker-cli-1.1.12-4.

DRBD configuration:

resource r0 {
    startup {
        wfc-timeout 30;
        outdated-wfc-timeout 20;
        degr-wfc-timeout 30;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret sync_disk;
        max-buffers 512;
        sndbuf-size 0;
    }
    syncer {
        rate 100M;
        verify-alg sha1;
    }
    on XXX2 {
        device minor 1;
        disk /dev/sdb;
        address xx.xx.xx.xx:7789;
        meta-disk internal;
    }
    on XXX1 {
        device minor 1;
        disk /dev/sdb;
        address xx.xx.xx.xx:7789;
        meta-disk internal;
    }
}
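
For reference (not part of the original post), the state of this resource can be checked on each node with the standard drbd83-utils commands; during a clean failback both nodes should show Connected and UpToDate:

# DRBD connection state, roles and disk state for resource r0
cat /proc/drbd
drbdadm cstate r0    # expected: Connected
drbdadm role r0      # expected: Primary/Secondary as seen from the active node
drbdadm dstate r0    # expected: UpToDate/UpToDate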

Corosync:

compatibility: whitetank

totem {
    version: 2
    secauth: on
    interface {
        member {
            memberaddr: xx.xx.xx.1
        }
        member {
            memberaddr: xx.xx.xx.2
        }
        ringnumber: 0
        bindnetaddr: xx.xx.xx.1
        mcastport: 5405
        ttl: 1
    }
    transport: udpu
}

logging {
    fileline: off
    to_logfile: yes
    to_syslog: yes
    debug: on
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
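
Likewise, a quick way to confirm that corosync sees both ring members (again, not from the original post; standard corosync 1.4 tools) is:

# Ring status on the local node
corosync-cfgtool -s
# Members currently known to corosync
corosync-objctl | grep member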

Pacemaker:

node XXX1 \
        attributes standby=off
node XXX2 \
        attributes standby=off
primitive drbd_res ocf:linbit:drbd \
        params drbd_resource=r0 \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
primitive failover_ip IPaddr2 \
        params ip=172.16.2.49 cidr_netmask=32 \
        op monitor interval=30s nic=eth0 \
        meta is-managed=true
primitive fs_res Filesystem \
        params device="/dev/drbd1" directory="/data" fstype=ext4 \
        meta is-managed=true
primitive res_exportfs_export1 exportfs \
        params fsid=1 directory="/data/export" options="rw,async,insecure,no_subtree_check,no_root_squash,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
        op monitor interval=40s \
        op stop interval=0 timeout=120s \
        op start interval=0 timeout=120s \
        meta is-managed=true
primitive res_exportfs_export2 exportfs \
        params fsid=2 directory="/data/teste1" options="rw,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=false \
        op monitor interval=40s \
        op stop interval=0 timeout=120s \
        op start interval=0 timeout=120s \
        meta is-managed=true
primitive res_exportfs_root exportfs \
        params clientspec="*" options="rw,fsid=root,no_all_squash" directory="/data" fsid=0 unlock_on_stop=false wait_for_leasetime_on_stop=false \
        operations $id=res_exportfs_root-operations \
        op monitor interval=30 start-delay=0 \
        meta
group rg_export fs_res res_exportfs_export1 res_exportfs_export2 failover_ip
ms drbd_master_slave drbd_res \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
clone cl_exportfs_root res_exportfs_root \
        meta
colocation c_nfs_on_root inf: rg_export cl_exportfs_root
colocation fs_drbd_colo inf: rg_export drbd_master_slave:Master
order fs_after_drbd Mandatory: drbd_master_slave:promote rg_export:start
order o_root_before_nfs inf: cl_exportfs_root rg_export:start
property cib-bootstrap-options: \
        expected-quorum-votes=2 \
        last-lrm-refresh=1427814473 \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        dc-version=1.1.11-97629de \
        cluster-infrastructure="classic openais (with plugin)"
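
A sketch (not from the original post) of how this configuration can be validated and inspected once loaded, assuming the crm shell that the syntax above comes from:

# Validate the live configuration and print it back for review
crm_verify -L -V
crm configure show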

Errors:

res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, status=Timed Out, last-rc-change='Tue Mar 31 12:53:04 2015', queued=0ms, exec=20003ms
res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, exec=20003ms
res_exportfs_export2_stop_0 on xx.xx.xxx.2 'unknown error' (1): call=52, exec=20001ms
res_exportfs_export2_stop_0 on xx.xx.xx.2 'unknown error' (1): call=52, exec=20001ms
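
These entries come from the failed-actions section of the cluster status; a one-shot view including fail counts can be obtained with (an example, not from the original post):

# One-shot cluster status including inactive resources and fail counts
crm_mon -1 -r -f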

Are there any other logs I can check?

I checked that on the second node /dev/drbd1 was not unmounted during the failback.
If I restart the NFS service and apply the rules, everything works fine.
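
For context, the mount, export and DRBD role state mentioned here can be checked with standard commands such as:

# Is /dev/drbd1 still mounted where it should have been released?
mount | grep drbd1
# Which directories is the kernel NFS server currently exporting?
exportfs -v
# Which node currently holds the DRBD Primary role?
drbdadm role r0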

EDIT: Thanks to Dok it is working now. I just had to raise the timeout to 120 seconds and set a start timeout!

Solution
res_exportfs_export2_stop_0 on xx.xx.xx.1 'unknown error' (1): call=47, exec=20003ms

shows that your res_exportfs_export2 resource failed to stop because of a timeout. It may simply need a longer timeout. Try configuring a stop timeout for this resource, like this:

primitive res_exportfs_export2 exportfs \
        params fsid=2 directory="/data/teste1" options="rw,no_all_squash" clientspec="*" wait_for_leasetime_on_stop=true \
        op monitor interval=30s \
        op stop interval=0 timeout=60s
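
One way to apply such a change and then clear the recorded failures, assuming the crm shell used for the configuration above, would be:

# Edit the resource definition in place (opens in $EDITOR), then clear
# the recorded stop failures so Pacemaker retries the resource
crm configure edit res_exportfs_export2
crm resource cleanup res_exportfs_export2
crm configure show res_exportfs_export2    # confirm the new timeout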

If the timeout does not help, check the messages log and/or corosync.log for clues around the time shown in the errors (Tue Mar 31 12:53:04 2015).
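
For example (assuming default CentOS 6 log locations), something along these lines narrows the logs down to that window:

# Messages around the failed stop (Tue Mar 31 12:53:04 2015)
grep 'Mar 31 12:5' /var/log/messages
grep 'Mar 31 12:5' /var/log/cluster/corosync.log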
