Last edited by qqjb on 2013-11-14 17:06
I have two VMware ESXi hosts, both managed through vCenter at 192.168.5.71. On each host I created one guest running RHEL 6.1 x86_64. Under 6.1 the fence device cannot be added through the luci web interface, but the cluster software does ship with fence_vmware, so I configured the cluster without a fence device and then edited the configuration file by hand. The config file is as follows:
<?xml version="1.0"?>
<cluster config_version="5" name="rhelftp">
  <clusternodes>
    <clusternode name="192.168.5.91" nodeid="1">
      <fence>
        <method name="Method">
          <device name="vm91" port="192.168.5.91-rhel6-ha01"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="192.168.5.92" nodeid="2">
      <fence>
        <method name="Method">
          <device name="vm92" port="192.168.5.92-rhel6-ha02"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <rm>
    <failoverdomains>
      <failoverdomain name="rhelftp" nofailback="1" ordered="1" restricted="0">
        <failoverdomainnode name="192.168.5.91" priority="2"/>
        <failoverdomainnode name="192.168.5.92" priority="5"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.5.93" monitor_link="on" sleeptime="10"/>
      <script file="/etc/rc.d/init.d/vsftpd" name="vsftpd"/>
    </resources>
    <service domain="rhelftp" name="vsftp" recovery="relocate">
      <ip ref="192.168.5.93"/>
      <script ref="vsftpd"/>
    </service>
    <fencedevices>
      <fencedevice agent="fence_vmware" ipaddr="192.168.5.71" login="admin@System-Domain" name="vm91" passwd="qsQq#3Mx"/>
      <fencedevice agent="fence_vmware" ipaddr="192.168.5.71" login="admin@System-Domain" name="vm92" passwd="qsQq#3Mx"/>
    </fencedevices>
  </rm>
</cluster>
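One thing worth checking: in the stock cluster.conf schema, `<fencedevices>` is a direct child of `<cluster>` (a sibling of `<clusternodes>` and `<cman>`), not nested inside `<rm>`. fenced only reads the top-level section, which could explain the "agent none" errors in the log further down even though the agent works when run by hand. A minimal sketch of that layout, reusing the device definitions from the config above (untested against this cluster, so treat it as a suggestion, not a confirmed fix):

```xml
<?xml version="1.0"?>
<!-- Sketch only: same content as the config above, but with <fencedevices>
     moved up to the <cluster> level, where fenced expects to find it. -->
<cluster config_version="6" name="rhelftp">
  <clusternodes>
    <!-- ...same <clusternode>/<fence> blocks as above... -->
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_vmware" ipaddr="192.168.5.71"
                 login="admin@System-Domain" name="vm91" passwd="qsQq#3Mx"/>
    <fencedevice agent="fence_vmware" ipaddr="192.168.5.71"
                 login="admin@System-Domain" name="vm92" passwd="qsQq#3Mx"/>
  </fencedevices>
  <rm>
    <!-- ...same failoverdomains/resources/service blocks as above... -->
  </rm>
</cluster>
```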
Starting the service in the cluster and relocating it manually both work. However, when I test by shutting down a network interface, the cluster fails to fence the current primary node, even though running the fence agent manually succeeds. Below are the output of a manual fence and the "fence failed" log entries produced when I ifdown eth0 on .91. Has anyone run into this? Or do I also need to add each host's UUID to the config file? Any pointers would be appreciated, thanks!
PS: The VMware-vSphere-Perl-SDK must be installed on RHEL 6.1, otherwise fence_vmware errors out. My two ESXi hosts are versions 5.0 and 5.1 respectively, and I installed VMware-vSphere-Perl-SDK 5.1 in both guests. I'm not sure whether that mismatch matters, but manually running fence_vmware to reboot the other host works fine.
[root@rhelha02 ~]# clustat
Cluster Status for rhelftp @ Thu Nov 14 16:33:55 2013
Member Status: Quorate
 Member Name                      ID   Status
 ------ ----                      ---- ------
 192.168.5.91                        1 Online, rgmanager
 192.168.5.92                        2 Online, Local, rgmanager

 Service Name          Owner (Last)          State
 ------- ----          ------ ------         -----
 service:vsftp         192.168.5.91          started
[root@rhelha02 ~]# fence_vmware -a 192.168.5.71 -l admin@System-Domain -p qsQq#3Mx -n 192.168.5.91-rhel6-ha01 -o reboot
Success: Rebooted
[root@rhelha02 ~]#
[root@rhelha02 ~]# tail -f /var/log/messages
Nov 14 16:32:06 rhelha02 saslauthd[2241]: ipc_init : listening on socket: /var/run/saslauthd/mux
Nov 14 16:32:06 rhelha02 ricci: startup succeeded
Nov 14 16:32:07 rhelha02 rgmanager[2188]: I am node #2
Nov 14 16:32:07 rhelha02 rgmanager[2188]: Resource Group Manager Starting
Nov 14 16:32:07 rhelha02 rgmanager[2188]: Loading Service Data
Nov 14 16:32:09 rhelha02 rgmanager[2188]: Initializing Services
Nov 14 16:32:09 rhelha02 rgmanager[2835]: Executing /etc/rc.d/init.d/vsftpd stop
Nov 14 16:32:09 rhelha02 rgmanager[2188]: Services Initialized
Nov 14 16:32:09 rhelha02 rgmanager[2188]: State change: Local UP
Nov 14 16:32:09 rhelha02 rgmanager[2188]: State change: 192.168.5.91 UP
Nov 14 16:34:16 rhelha02 corosync[1673]: [TOTEM ] A processor failed, forming new configuration.
Nov 14 16:34:18 rhelha02 corosync[1673]: [QUORUM] Members[1]: 2
Nov 14 16:34:18 rhelha02 corosync[1673]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Nov 14 16:34:18 rhelha02 kernel: dlm: closing connection to node 1
Nov 14 16:34:18 rhelha02 corosync[1673]: [CPG ] downlist received left_list: 1
Nov 14 16:34:18 rhelha02 corosync[1673]: [CPG ] chosen downlist from node r(0) ip(192.168.5.92)
Nov 14 16:34:18 rhelha02 corosync[1673]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 14 16:34:18 rhelha02 fenced[1731]: fencing node 192.168.5.91
Nov 14 16:34:18 rhelha02 rgmanager[2188]: State change: 192.168.5.91 DOWN
Nov 14 16:34:18 rhelha02 fenced[1731]: fence 192.168.5.91 dev 0.0 agent none result: error config agent
Nov 14 16:34:18 rhelha02 fenced[1731]: fence 192.168.5.91 failed
Nov 14 16:34:21 rhelha02 fenced[1731]: fencing node 192.168.5.91
Nov 14 16:34:21 rhelha02 fenced[1731]: fence 192.168.5.91 dev 0.0 agent none result: error config agent
Nov 14 16:34:21 rhelha02 fenced[1731]: fence 192.168.5.91 failed
Nov 14 16:34:24 rhelha02 fenced[1731]: fencing node 192.168.5.91
Nov 14 16:34:24 rhelha02 fenced[1731]: fence 192.168.5.91 dev 0.0 agent none result: error config agent
Nov 14 16:34:24 rhelha02 fenced[1731]: fence 192.168.5.91 failed