PACEMAKER
Revision as of 7 September 2012, 19:33

Installation

[ALL] Initial setup

Install required packages:

sudo apt-get install pacemaker cman resource-agents fence-agents gfs2-utils gfs2-cluster ocfs2-tools-cman openais drbd8-utils
Make sure each host can resolve all other hosts. The best way to achieve this is to add their IPs and hostnames to /etc/hosts on all nodes. In this example, that would be:
eth0
192.168.244.161 fix
192.168.244.162 foxy
eth1
10.168.244.161   fix-ha
10.168.244.162   foxy-ha
Disable o2cb from starting:
update-rc.d -f o2cb remove

[ALL] Create /etc/cluster/cluster.conf

Paste this into /etc/cluster/cluster.conf:

<?xml version="1.0"?>
<cluster config_version="4" name="pacemaker">
    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
            <clusternode name="server1" nodeid="1" votes="1">
                <fence>
                        <method name="pcmk-redirect">
                                <device name="pcmk" port="server1"/>
                        </method>
                </fence>
            </clusternode>
            <clusternode name="server2" nodeid="2" votes="1">
                <fence>
                        <method name="pcmk-redirect">
                                <device name="pcmk" port="server2"/>
                        </method>
                </fence>
            </clusternode>
            <clusternode name="server3" nodeid="3" votes="1">
                <fence>
                        <method name="pcmk-redirect">
                                <device name="pcmk" port="server3"/>
                        </method>
                </fence>
            </clusternode>
    </clusternodes>
  <fencedevices>
    <fencedevice name="pcmk" agent="fence_pcmk"/>
  </fencedevices>
    <cman/>
</cluster> 

[ALL] Edit /etc/corosync/corosync.conf

Find the pacemaker service block in /etc/corosync/corosync.conf and bump its version to 1 (with ver: 1, corosync no longer launches Pacemaker itself; the pacemaker init script starts it separately):

service {
        # Load the Pacemaker Cluster Resource Manager
        ver:       1
        name:      pacemaker
}

Replace bindnetaddr with the network address of your cluster network (not a host IP). For example:

                bindnetaddr: 10.168.244.0

'0' is not a typo.

[ALL] Enable pacemaker init scripts

update-rc.d -f pacemaker remove
update-rc.d pacemaker start 50 1 2 3 4 5 . stop 01 0 6 .

[ALL] Start cman service and then pacemaker service

service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... [  OK  ]
   Waiting for quorum... [  OK  ]
   Starting fenced... [  OK  ]
   Starting dlm_controld... [  OK  ]
   Unfencing self... [  OK  ]
   Joining fence domain... [  OK  ]
service pacemaker start
Starting Pacemaker Cluster Manager: [  OK  ]

[ONE] Setup resources

Wait a minute until Pacemaker declares all nodes online:
# crm status
============
Last updated: Fri Sep  7 21:18:12 2012
Last change: Fri Sep  7 21:17:17 2012 via crmd on fix
Stack: cman
Current DC: fix - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
2 Nodes configured, unknown expected votes
0 Resources configured.
============ 

Online: [ fix foxy ]

Set up dlm_controld, gfs_controld and o2cb in the cluster's CIB. The easiest way to do this is to run

crm configure edit

node fix
node foxy
primitive resDLM ocf:pacemaker:controld \
        params daemon="dlm_controld" \
        op monitor interval="120s"
primitive resGFSD ocf:pacemaker:controld \
        params daemon="gfs_controld" args="" \
        op monitor interval="120s"
primitive resO2CB ocf:pacemaker:o2cb \
        params stack="cman" \
        op monitor interval="120s"
clone cloneDLM resDLM \
        meta globally-unique="false" interleave="true"
clone cloneGFSD resGFSD \
        meta globally-unique="false" interleave="true" target-role="Started"
clone cloneO2CB resO2CB \
        meta globally-unique="false" interleave="true"
colocation colGFSDDLM inf: cloneGFSD cloneDLM
colocation colO2CBDLM inf: cloneO2CB cloneDLM
order ordDLMGFSD 0: cloneDLM cloneGFSD
order ordDLMO2CB 0: cloneDLM cloneO2CB
property $id="cib-bootstrap-options" \
        dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
        cluster-infrastructure="cman" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"
EXTREMELY IMPORTANT: Notice that this example has STONITH disabled. This is just a HOWTO for a basic
setup. You shouldn't be running shared resources with STONITH disabled. Check Pacemaker's documentation
for guidance on setting this up. If you are not sure about this, stop right now!

Save and quit. Running

crm status

should now show all these services running:

# crm status
============
Last updated: Fri Sep  7 21:28:36 2012
Last change: Fri Sep  7 21:26:36 2012 via cibadmin on fix
Stack: cman
Current DC: fix - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
2 Nodes configured, unknown expected votes
6 Resources configured.
============ 

Online: [ fix foxy ] 

Clone Set: cloneDLM [resDLM]
    Started: [ fix foxy ]
Clone Set: cloneGFSD [resGFSD]
    Started: [ fix foxy ]
Clone Set: cloneO2CB [resO2CB]
    Started: [ fix foxy ]



Create GFS2 and OCFS2 filesystems:

mkfs.gfs2 -p lock_dlm -j4 -t pacemaker:pcmk /dev/vdc
mkfs.ocfs2 /dev/vdb
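The -t argument to mkfs.gfs2 is a lock table of the form clustername:fsname. A minimal shell sketch deriving it from the <cluster> element shown earlier (the fsname 'pcmk' is the one used above; the inline CONF string stands in for reading /etc/cluster/cluster.conf on a real node):

```shell
# Sketch: build the mkfs.gfs2 lock table ("clustername:fsname") from the
# name attribute of the <cluster> element. The inline string stands in for
# reading /etc/cluster/cluster.conf on a real node.
CONF='<cluster config_version="4" name="pacemaker">'
CLUSTER_NAME=$(printf '%s' "$CONF" | sed -n 's/.*name="\([^"]*\)".*/\1/p')
LOCKTABLE="${CLUSTER_NAME}:pcmk"
echo "$LOCKTABLE"   # pacemaker:pcmk
```

If the cluster name part does not match the name of the running cluster, the filesystem will fail to mount.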

When running mkfs.gfs2, make sure the cluster name is identical to the name configured in /etc/cluster/cluster.conf. In this case, that is 'pacemaker'. Now add the remaining resources (filesystems). Run

crm configure edit

and add:

primitive resFS ocf:heartbeat:Filesystem \
        params device="/dev/vdb" directory="/opt" fstype="ocfs2" \
        op monitor interval="120s"
primitive resFS2 ocf:heartbeat:Filesystem \
        params device="/dev/vdc" directory="/mnt" fstype="gfs2" \
        op monitor interval="120s"
clone cloneFS resFS \
        meta interleave="true" ordered="true" target-role="Started"
clone cloneFS2 resFS2 \
        meta interleave="true" ordered="true" target-role="Started"
colocation colFSGFSD inf: cloneFS2 cloneGFSD
colocation colFSO2CB inf: cloneFS cloneO2CB
order ordGFSDFS 0: cloneGFSD cloneFS2
order ordO2CBFS 0: cloneO2CB cloneFS

Once saved, the cluster will show all services running:

# crm status
============
Last updated: Thu Apr 26 20:28:21 2012
Last change: Thu Apr 26 20:06:11 2012 via crmd on server1
Stack: cman
Current DC: server1 - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
3 Nodes configured, unknown expected votes
15 Resources configured.
============

Online: [ server1 server2 server3 ]

Clone Set: cloneDLM [resDLM]
    Started: [ server1 server2 server3 ]
Clone Set: cloneO2CB [resO2CB]
    Started: [ server1 server2 server3 ]
Clone Set: cloneFS [resFS]
    Started: [ server1 server2 server3 ]
Clone Set: cloneGFSD [resGFSD]
    Started: [ server1 server2 server3 ]
Clone Set: cloneFS2 [resFS2]
    Started: [ server1 server2 server3 ]

[ALL] Install pacemaker

sudo apt-get install pacemaker
Edit /etc/default/corosync and enable corosync (START=yes).

[ONE] Generate corosync authkey

sudo corosync-keygen
(This can take a while if there is not enough entropy; downloading an Ubuntu ISO image on the same machine while generating, or typing on the keyboard, speeds it up.)
Copy /etc/corosync/authkey to all servers that will form this cluster (make sure it is owned by root:root and has 400 permissions).
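As a quick check before generating the key, the kernel's entropy pool can be inspected (Linux-specific; a sketch, not part of the original procedure):

```shell
# Print the bits of entropy currently available to /dev/random;
# corosync-keygen blocks while this value stays low.
cat /proc/sys/kernel/random/entropy_avail
```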

[ALL] Configure corosync

In /etc/corosync/corosync.conf, replace bindnetaddr (by default it is 127.0.0.1) with the network
address of your cluster network: take your server's IP and replace the last octet with 0 (valid for
a /24 netmask). For example, if your IP is 192.168.1.101, then you would put 192.168.1.0.
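The rule above can be sketched in shell (a minimal sketch, valid only for a /24 netmask as in the examples; other masks need a tool like ipcalc):

```shell
# Derive bindnetaddr from a host IP by replacing the last octet with 0
# (assumes a /24 netmask, as in the example addresses in this guide).
IP=192.168.1.101
BINDNETADDR="${IP%.*}.0"
echo "$BINDNETADDR"   # 192.168.1.0
```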

[ALL] These are what I found; no idea whether the missing package is among them

apt-get install drbd8-utils iscsitarget ocfs2-tools pacemaker corosync libdlm3 openais \ 
ocfs2-tools-pacemaker iscsitarget-dkms lm-sensors ocfs2-tools-cman resource-agents fence-agents
for X in {drbd,o2cb,ocfs2}; do update-rc.d -f ${X} disable; done
---------
These two are missing: libdlm3-pacemaker and dlm-pcmk

[ALL] start corosync

sudo /etc/init.d/corosync start

[ALL] Install DRBD and other needed tools

sudo apt-get install linux-headers-server psmisc drbd8-utils

[ALL] Pacemaker should manage drbd

sudo update-rc.d -f drbd remove

CRM

crm

crm_mon

cibadmin

Delete the complete CIB

cibadmin --force -E