<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>8.5. Configuring DRBD to replicate between two SAN-backed Pacemaker clusters</title><link rel="stylesheet" type="text/css" href="default.css" /><meta name="generator" content="DocBook XSL Stylesheets V1.79.1" /><link rel="home" href="drbd-users-guide.html" title="The DRBD User’s Guide" /><link rel="up" href="ch-pacemaker.html" title="Chapter 8. Integrating DRBD with Pacemaker clusters" /><link rel="prev" href="s-pacemaker-stacked-resources.html" title="8.4. Using stacked DRBD resources in Pacemaker clusters" /><link rel="next" href="ch-rhcs.html" title="Chapter 9. Integrating DRBD with Red Hat Cluster" /></head><body><div class="navheader"><table width="100%" summary="Navigation header"><tr><th colspan="3" align="center">8.5. Configuring DRBD to replicate between two SAN-backed Pacemaker clusters</th></tr><tr><td width="20%" align="left"><a accesskey="p" href="s-pacemaker-stacked-resources.html">Prev</a> </td><th width="60%" align="center">Chapter 8. Integrating DRBD with Pacemaker clusters</th><td width="20%" align="right"> <a accesskey="n" href="ch-rhcs.html">Next</a></td></tr></table><hr /></div><div class="section"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a id="s-pacemaker-floating-peers"></a>8.5. Configuring DRBD to replicate between two SAN-backed Pacemaker clusters</h2></div></div></div><p>This is a somewhat advanced setup usually employed in split-site
configurations. It involves two separate Pacemaker clusters, where
each cluster has access to a separate Storage Area Network (SAN). DRBD
is then used to replicate data stored on that SAN, across an IP link
between sites.</p><p>Consider the following illustration to describe the concept.</p><div class="figure"><a id="idm45883813911504"></a><p class="title"><strong>Figure 8.3. Using DRBD to replicate between SAN-based clusters</strong></p><div class="figure-contents"><div class="mediaobject"><img src="drbd-pacemaker-floating-peers.png" alt="drbd-pacemaker-floating-peers" /></div></div></div><br class="figure-break" /><p>Which of the individual nodes in each site currently acts as the DRBD
peer is not explicitly defined — the DRBD peers
<a class="link" href="s-floating-peers.html" title="2.16. Floating peers">are said to <span class="emphasis"><em>float</em></span></a>; that is, DRBD binds to
virtual IP addresses not tied to a specific physical machine.</p><div class="note" style="margin-left: 0.5in; margin-right: 0.5in;"><table border="0" summary="Note"><tr><td rowspan="2" align="center" valign="top" width="25"><img alt="[Note]" src="images/note.png" /></td><th align="left">Note</th></tr><tr><td align="left" valign="top"><p>This type of setup is usually deployed together with
<a class="link" href="s-drbd-proxy.html" title="2.14. Long-distance replication with DRBD Proxy">DRBD Proxy</a>and/or <a class="link" href="s-truck-based-replication.html" title="2.15. Truck based replication">truck based replication</a>.</p></td></tr></table></div><p>Since this type of setup deals with shared storage, configuring and
testing STONITH is absolutely vital for it to work properly.</p><div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="s-pacemaker-floating-peers-drbd-config"></a>8.5.1. DRBD resource configuration</h3></div></div></div><p>To enable your DRBD resource to float, configure it in <code class="literal">drbd.conf</code> in
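<p>How fencing is implemented is site specific and beyond the scope of this section. Purely as an illustration, a fencing device for a single node might be configured with the <code class="literal">crm</code> shell along the following lines. This is a minimal sketch assuming IPMI-based fencing through the <code class="literal">external/ipmi</code> STONITH plugin; the node name <code class="literal">alice</code>, the management address, and the credentials are placeholders to be replaced with values matching your hardware:</p><pre class="screen">crm configure
crm(live)configure# primitive p_fence_alice stonith:external/ipmi \
                      params hostname=alice ipaddr=192.168.0.10 \
                             userid=admin passwd=secret interface=lan
crm(live)configure# location l_fence_alice p_fence_alice -inf: alice
crm(live)configure# property stonith-enabled=true
crm(live)configure# commit
bye</pre><p>One such device would be needed for every node in both clusters, and fencing should be verified (for example, by deliberately fencing a test node) before the setup carries production data.</p>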
<div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="s-pacemaker-floating-peers-drbd-config"></a>8.5.1. DRBD resource configuration</h3></div></div></div><p>To enable your DRBD resource to float, configure it in <code class="literal">drbd.conf</code> in
the following fashion:</p><pre class="programlisting">resource &lt;resource&gt; {
  ...
  device    /dev/drbd0;
  disk      /dev/sda1;
  meta-disk internal;
  floating  10.9.9.100:7788;
  floating  10.9.10.101:7788;
}</pre><p>The <code class="literal">floating</code> keyword replaces the <code class="literal">on &lt;host&gt;</code> sections normally
found in the resource configuration. In this mode, DRBD identifies
peers by IP address and TCP port, rather than by host name. It is
important to note that the addresses specified must be virtual cluster
IP addresses, rather than physical node IP addresses, for floating to
function properly. As shown in the example, in split-site
configurations the two floating addresses can be expected to belong to
two separate IP networks — it is thus vital for routers and firewalls
to properly allow DRBD replication traffic between the nodes.</p>
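<p>For example, on a Linux router or firewall between the two sites, netfilter rules along the following lines would permit that traffic. This is only a sketch, assuming the replication addresses and port from the example above and a default-deny <code class="literal">FORWARD</code> chain; adapt it to whatever packet filter actually sits in the replication path:</p><pre class="screen"># allow DRBD replication traffic between the two floating peer addresses
iptables -A FORWARD -p tcp -s 10.9.9.100 -d 10.9.10.101 --dport 7788 -j ACCEPT
iptables -A FORWARD -p tcp -s 10.9.10.101 -d 10.9.9.100 --dport 7788 -j ACCEPT</pre>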
</div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="s-pacemaker-floating-peers-crm-config"></a>8.5.2. Pacemaker resource configuration</h3></div></div></div><p>A DRBD floating peers setup, in terms of Pacemaker configuration,
involves the following items (in each of the two Pacemaker clusters
involved):</p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem">
A virtual cluster IP address.
</li><li class="listitem">
A master/slave DRBD resource (using the DRBD OCF resource agent).
</li><li class="listitem">
Pacemaker constraints ensuring that resources are started on the
correct nodes, and in the correct order.
</li></ul></div><p>To configure a resource named <code class="literal">mysql</code> in a floating peers
configuration in a 2-node cluster, using the replication address
<code class="literal">10.9.9.100</code>, configure Pacemaker with the following <code class="literal">crm</code> commands:</p><pre class="screen">crm configure
crm(live)configure# primitive p_ip_float_left ocf:heartbeat:IPaddr2 \
                      params ip=10.9.9.100
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd \
                      params drbd_resource=mysql
crm(live)configure# ms ms_drbd_mysql p_drbd_mysql \
                      meta master-max="1" master-node-max="1" \
                           clone-max="1" clone-node-max="1" \
                           notify="true" target-role="Master"
crm(live)configure# order drbd_after_left \
                      inf: p_ip_float_left ms_drbd_mysql
crm(live)configure# colocation drbd_on_left \
                      inf: ms_drbd_mysql p_ip_float_left
crm(live)configure# commit
bye</pre><p>After adding this configuration to the CIB, Pacemaker will execute the
following actions:</p><div class="orderedlist"><ol class="orderedlist" type="1"><li class="listitem">
Bring up the IP address 10.9.9.100 (on either <code class="literal">alice</code> or <code class="literal">bob</code>).
</li><li class="listitem">
Bring up the DRBD resource according to the IP address configured.
</li><li class="listitem">
Promote the DRBD resource to the Primary role.
</li></ol></div>
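<p>Before configuring the second cluster, you may want to verify the result on this one. The following is a quick manual check, not part of the required configuration; output will vary with your cluster and DRBD versions:</p><pre class="screen">crm_mon -1      # p_ip_float_left should be started and ms_drbd_mysql promoted to Master
cat /proc/drbd  # the backing DRBD device should report the Primary role locally</pre>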
<p>Then, in order to create the matching configuration in the other
cluster, configure <span class="emphasis"><em>that</em></span> Pacemaker instance with the following
commands:</p><pre class="screen">crm configure
crm(live)configure# primitive p_ip_float_right ocf:heartbeat:IPaddr2 \
                      params ip=10.9.10.101
crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd \
                      params drbd_resource=mysql
crm(live)configure# ms ms_drbd_mysql p_drbd_mysql \
                      meta master-max="1" master-node-max="1" \
                           clone-max="1" clone-node-max="1" \
                           notify="true" target-role="Slave"
crm(live)configure# order drbd_after_right \
                      inf: p_ip_float_right ms_drbd_mysql
crm(live)configure# colocation drbd_on_right \
                      inf: ms_drbd_mysql p_ip_float_right
crm(live)configure# commit
bye</pre><p>After adding this configuration to the CIB, Pacemaker will execute the
following actions:</p><div class="orderedlist"><ol class="orderedlist" type="1"><li class="listitem">
Bring up the IP address 10.9.10.101 (on either <code class="literal">charlie</code> or
<code class="literal">daisy</code>).
</li><li class="listitem">
Bring up the DRBD resource according to the IP address configured.
</li><li class="listitem">
Leave the DRBD resource in the Secondary role (due to
<code class="literal">target-role="Slave"</code>).
</li></ol></div>
</div><div class="section"><div class="titlepage"><div><div><h3 class="title"><a id="s-pacemaker-floating-peers-site-fail-over"></a>8.5.3. Site fail-over</h3></div></div></div><p>In split-site configurations, it may be necessary to transfer services
from one site to another. This may be a consequence of a scheduled
transition, or of a disastrous event. In case the transition is a
normal, anticipated event, the recommended course of action is this:</p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem">
Connect to the cluster on the site about to relinquish resources,
and change the affected DRBD resource’s <code class="literal">target-role</code> attribute from
<code class="literal">Master</code> to <code class="literal">Slave</code> (see the sketch after this list). This will shut down any
resources depending on the Primary role of the DRBD resource and then
demote it; the DRBD resource itself continues to run in the Secondary
role, ready to receive updates from a new Primary.
</li><li class="listitem">
Connect to the cluster on the site about to take over resources, and
change the affected DRBD resource’s <code class="literal">target-role</code> attribute from
<code class="literal">Slave</code> to <code class="literal">Master</code>. This will promote the DRBD resources, start any
other Pacemaker resources depending on the Primary role of the DRBD
resource, and replicate updates to the remote site.
</li><li class="listitem">
To fail back, simply reverse the procedure.
</li></ul></div>
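<p>One way to change the <code class="literal">target-role</code> meta attribute is the <code class="literal">crm</code> shell’s <code class="literal">resource meta</code> subcommand; editing the configuration interactively with <code class="literal">crm configure edit</code> works just as well. A sketch, using the <code class="literal">ms_drbd_mysql</code> resource name from the examples above:</p><pre class="screen"># on the site about to relinquish resources
crm resource meta ms_drbd_mysql set target-role Slave

# on the site about to take over resources
crm resource meta ms_drbd_mysql set target-role Master</pre>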
<p>In the event of a catastrophic outage on the active site, it can
be expected that the site is offline and no longer replicating to the
backup site. In such an event:</p><div class="itemizedlist"><ul class="itemizedlist" style="list-style-type: disc; "><li class="listitem">
Connect to the cluster on the still-functioning site, and
change the affected DRBD resource’s <code class="literal">target-role</code> attribute from
<code class="literal">Slave</code> to <code class="literal">Master</code>. This will promote the DRBD resources, and start
any other Pacemaker resources depending on the Primary role of the
DRBD resource.
</li><li class="listitem">
When the original site is restored or rebuilt, you may connect the
DRBD resources again (as sketched below), and subsequently fail back
using the reverse procedure.
</li></ul></div>
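<p>If the DRBD resource on the rebuilt site is managed by Pacemaker as described above, starting it through the cluster normally re-establishes the connection. Should the connection need to be brought back manually (for instance, after the resource ended up <code class="literal">StandAlone</code>), a rough sketch using the <code class="literal">mysql</code> resource from the examples above is:</p><pre class="screen">drbdadm connect mysql   # re-establish the replication link
cat /proc/drbd          # watch connection state and resynchronization progress</pre>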
</li></ul></div></div></div><div class="navfooter"><hr /><table width="100%" summary="Navigation footer"><tr><td width="40%" align="left"><a accesskey="p" href="s-pacemaker-stacked-resources.html">Prev</a> </td><td width="20%" align="center"><a accesskey="u" href="ch-pacemaker.html">Up</a></td><td width="40%" align="right"> <a accesskey="n" href="ch-rhcs.html">Next</a></td></tr><tr><td width="40%" align="left" valign="top">8.4. Using stacked DRBD resources in Pacemaker clusters </td><td width="20%" align="center"><a accesskey="h" href="drbd-users-guide.html">Home</a></td><td width="40%" align="right" valign="top"> Chapter 9. Integrating DRBD with Red Hat Cluster</td></tr></table></div></body></html>