<?xml version="1.0" encoding="utf-8" ?>
<!-- Packaged as /usr/share/doc/xapian-doc/replication.html in xapian-doc 1.2.16-2ubuntu1 -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="generator" content="Docutils 0.8.1: http://docutils.sourceforge.net/" />
<title>Xapian Database Replication Users Guide</title>
<style type="text/css">
/*
:Author: David Goodger (goodger@python.org)
:Id: $Id: html4css1.css 7056 2011-06-17 10:50:48Z milde $
:Copyright: This stylesheet has been placed in the public domain.
Default cascading style sheet for the HTML output of Docutils.
See http://docutils.sf.net/docs/howto/html-stylesheets.html for how to
customize this style sheet.
*/
/* used to remove borders from tables and images */
.borderless, table.borderless td, table.borderless th {
border: 0 }
table.borderless td, table.borderless th {
/* Override padding for "table.docutils td" with "! important".
The right padding separates the table cells. */
padding: 0 0.5em 0 0 ! important }
.first {
/* Override more specific margin styles with "! important". */
margin-top: 0 ! important }
.last, .with-subtitle {
margin-bottom: 0 ! important }
.hidden {
display: none }
a.toc-backref {
text-decoration: none ;
color: black }
blockquote.epigraph {
margin: 2em 5em ; }
dl.docutils dd {
margin-bottom: 0.5em }
object[type="image/svg+xml"], object[type="application/x-shockwave-flash"] {
overflow: hidden;
}
/* Uncomment (and remove this text!) to get bold-faced definition list terms
dl.docutils dt {
font-weight: bold }
*/
div.abstract {
margin: 2em 5em }
div.abstract p.topic-title {
font-weight: bold ;
text-align: center }
div.admonition, div.attention, div.caution, div.danger, div.error,
div.hint, div.important, div.note, div.tip, div.warning {
margin: 2em ;
border: medium outset ;
padding: 1em }
div.admonition p.admonition-title, div.hint p.admonition-title,
div.important p.admonition-title, div.note p.admonition-title,
div.tip p.admonition-title {
font-weight: bold ;
font-family: sans-serif }
div.attention p.admonition-title, div.caution p.admonition-title,
div.danger p.admonition-title, div.error p.admonition-title,
div.warning p.admonition-title {
color: red ;
font-weight: bold ;
font-family: sans-serif }
/* Uncomment (and remove this text!) to get reduced vertical space in
compound paragraphs.
div.compound .compound-first, div.compound .compound-middle {
margin-bottom: 0.5em }
div.compound .compound-last, div.compound .compound-middle {
margin-top: 0.5em }
*/
div.dedication {
margin: 2em 5em ;
text-align: center ;
font-style: italic }
div.dedication p.topic-title {
font-weight: bold ;
font-style: normal }
div.figure {
margin-left: 2em ;
margin-right: 2em }
div.footer, div.header {
clear: both;
font-size: smaller }
div.line-block {
display: block ;
margin-top: 1em ;
margin-bottom: 1em }
div.line-block div.line-block {
margin-top: 0 ;
margin-bottom: 0 ;
margin-left: 1.5em }
div.sidebar {
margin: 0 0 0.5em 1em ;
border: medium outset ;
padding: 1em ;
background-color: #ffffee ;
width: 40% ;
float: right ;
clear: right }
div.sidebar p.rubric {
font-family: sans-serif ;
font-size: medium }
div.system-messages {
margin: 5em }
div.system-messages h1 {
color: red }
div.system-message {
border: medium outset ;
padding: 1em }
div.system-message p.system-message-title {
color: red ;
font-weight: bold }
div.topic {
margin: 2em }
h1.section-subtitle, h2.section-subtitle, h3.section-subtitle,
h4.section-subtitle, h5.section-subtitle, h6.section-subtitle {
margin-top: 0.4em }
h1.title {
text-align: center }
h2.subtitle {
text-align: center }
hr.docutils {
width: 75% }
img.align-left, .figure.align-left, object.align-left {
clear: left ;
float: left ;
margin-right: 1em }
img.align-right, .figure.align-right, object.align-right {
clear: right ;
float: right ;
margin-left: 1em }
img.align-center, .figure.align-center, object.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left }
.align-center {
clear: both ;
text-align: center }
.align-right {
text-align: right }
/* reset inner alignment in figures */
div.align-right {
text-align: inherit }
/* div.align-center * { */
/* text-align: left } */
ol.simple, ul.simple {
margin-bottom: 1em }
ol.arabic {
list-style: decimal }
ol.loweralpha {
list-style: lower-alpha }
ol.upperalpha {
list-style: upper-alpha }
ol.lowerroman {
list-style: lower-roman }
ol.upperroman {
list-style: upper-roman }
p.attribution {
text-align: right ;
margin-left: 50% }
p.caption {
font-style: italic }
p.credits {
font-style: italic ;
font-size: smaller }
p.label {
white-space: nowrap }
p.rubric {
font-weight: bold ;
font-size: larger ;
color: maroon ;
text-align: center }
p.sidebar-title {
font-family: sans-serif ;
font-weight: bold ;
font-size: larger }
p.sidebar-subtitle {
font-family: sans-serif ;
font-weight: bold }
p.topic-title {
font-weight: bold }
pre.address {
margin-bottom: 0 ;
margin-top: 0 ;
font: inherit }
pre.literal-block, pre.doctest-block, pre.math {
margin-left: 2em ;
margin-right: 2em }
span.classifier {
font-family: sans-serif ;
font-style: oblique }
span.classifier-delimiter {
font-family: sans-serif ;
font-weight: bold }
span.interpreted {
font-family: sans-serif }
span.option {
white-space: nowrap }
span.pre {
white-space: pre }
span.problematic {
color: red }
span.section-subtitle {
/* font-size relative to parent (h1..h6 element) */
font-size: 80% }
table.citation {
border-left: solid 1px gray;
margin-left: 1px }
table.docinfo {
margin: 2em 4em }
table.docutils {
margin-top: 0.5em ;
margin-bottom: 0.5em }
table.footnote {
border-left: solid 1px black;
margin-left: 1px }
table.docutils td, table.docutils th,
table.docinfo td, table.docinfo th {
padding-left: 0.5em ;
padding-right: 0.5em ;
vertical-align: top }
table.docutils th.field-name, table.docinfo th.docinfo-name {
font-weight: bold ;
text-align: left ;
white-space: nowrap ;
padding-left: 0 }
h1 tt.docutils, h2 tt.docutils, h3 tt.docutils,
h4 tt.docutils, h5 tt.docutils, h6 tt.docutils {
font-size: 100% }
ul.auto-toc {
list-style-type: none }
</style>
</head>
<body>
<div class="document" id="xapian-database-replication-users-guide">
<h1 class="title">Xapian Database Replication Users Guide</h1>
<!-- Copyright (C) 2008 Lemur Consulting Ltd -->
<!-- Copyright (C) 2008,2010,2011,2012 Olly Betts -->
<div class="contents topic" id="table-of-contents">
<p class="topic-title first">Table of contents</p>
<ul class="simple">
<li><a class="reference internal" href="#introduction" id="id1">Introduction</a></li>
<li><a class="reference internal" href="#backend-support" id="id2">Backend Support</a></li>
<li><a class="reference internal" href="#setting-up-replicated-databases" id="id3">Setting up replicated databases</a></li>
<li><a class="reference internal" href="#limitations" id="id4">Limitations</a><ul>
<li><a class="reference internal" href="#calling-reopen" id="id5">Calling reopen</a></li>
</ul>
</li>
<li><a class="reference internal" href="#alternative-approaches" id="id6">Alternative approaches</a><ul>
<li><a class="reference internal" href="#copying-database-after-each-update" id="id7">Copying database after each update</a></li>
<li><a class="reference internal" href="#synchronise-database-using-rsync" id="id8">Synchronise database using rsync</a></li>
<li><a class="reference internal" href="#use-a-binary-diff-algorithm" id="id9">Use a binary diff algorithm</a></li>
<li><a class="reference internal" href="#serve-database-from-master-to-slaves-over-nfs" id="id10">Serve database from master to slaves over NFS</a></li>
<li><a class="reference internal" href="#use-the-remote-database-backend-facility" id="id11">Use the "remote database backend" facility</a></li>
</ul>
</li>
</ul>
</div>
<div class="section" id="introduction">
<h1><a class="toc-backref" href="#id1">Introduction</a></h1>
<p>It is often desirable to maintain multiple copies of a Xapian database: a
"master" database on which modifications are made, and a set of secondary
(read-only, "slave") databases to which these modifications propagate. For
example, to support a high query load there may be many search servers, each
with a local copy of the database, and a single indexing server. In order to
allow scaling to a large number of search servers, with large databases and
frequent updates, we need a database replication implementation with the
following characteristics:</p>
<blockquote>
<ul class="simple">
<li>Data transfer is (at most) proportional to the size of the updates, rather
than the size of the database, to allow frequent small updates to large
databases to be replicated efficiently.</li>
<li>Searching (on the slave databases) and indexing (on the master database) can
continue during synchronisation.</li>
<li>Data cached (in memory) on the slave databases is not discarded (unless it's
actually out of date) as updates arrive, to ensure that searches continue to
be performed quickly during and after updates.</li>
<li>Synchronising each slave database involves low overhead (both IO and CPU) on
the server holding the master database, so that many slaves can be updated
from a single master without overloading it.</li>
<li>Database synchronisation can be recovered after network outages or server
failures without manual intervention and without excessive data transfer.</li>
</ul>
</blockquote>
<p>The database replication protocol is intended to support replicating a single
writable database to multiple (read-only) search servers, while satisfying all
of the above properties. It is not intended to support replication of multiple
writable databases - there must always be a single master database to which all
modifications are made.</p>
<p>This document gives an overview of how and why to use the replication protocol.
For technical details of the implementation of the replication protocol, see
the separate <a class="reference external" href="replication_protocol.html">Replication Protocol</a> document.</p>
</div>
<div class="section" id="backend-support">
<h1><a class="toc-backref" href="#id2">Backend Support</a></h1>
<p>Replication is supported by the chert, flint, and brass database backends,
and can cleanly handle the
master switching database type (a full copy is sent in this situation). It
doesn't make a lot of sense to support replication for the remote backend.
Replication of inmemory databases isn't currently available. We have a longer
term aim to replace the current inmemory backend with a variant of a disk-based
backend (e.g. chert) which stores its data in memory. Once this is done, it
would probably be easy to support replication of inmemory databases.</p>
</div>
<div class="section" id="setting-up-replicated-databases">
<h1><a class="toc-backref" href="#id3">Setting up replicated databases</a></h1>
<!-- FIXME - expand this section. -->
<p>To replicate a database efficiently from one master machine to other machines,
there is one configuration step to be performed on the master machine, and two
programs to run: a server on the master machine, and a client on each slave.</p>
<p>Firstly, on the master machine, the indexer must be run with the environment
variable <cite>XAPIAN_MAX_CHANGESETS</cite> set to a non-zero value, which will cause
changeset files to be created whenever a transaction is committed. A
changeset file allows the transaction to be replayed efficiently on a replica
of the database.</p>
<p>The value which <cite>XAPIAN_MAX_CHANGESETS</cite> is set to determines the maximum number
of changeset files which will be kept. The best number to keep depends on how
frequently you run replication and how big your transactions are - if all
the changeset files needed to update a replica aren't present, a full copy of
the database will be sent, but at some point that becomes more efficient
anyway. <cite>10</cite> is probably a good value to start with.</p>
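<p>For example, the variable can be set just for the indexing run (a sketch:
<cite>build-index</cite> here stands in for whatever program updates your
database):</p>
<pre class="literal-block">
# Keep up to 10 changeset files per database, so that recent
# transactions can be replayed efficiently on each replica.
XAPIAN_MAX_CHANGESETS=10 build-index /var/search/dbs/foo
</pre>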
<p>Secondly, also on the master machine, run the <cite>xapian-replicate-server</cite> server
to serve the databases which are to be replicated. This takes various
parameters to control the directory that databases are found in, and the
network interface to serve on. The <cite>--help</cite> option will cause usage
information to be displayed. For example, if <cite>/var/search/dbs</cite> contains a
set of Xapian databases to be replicated:</p>
<pre class="literal-block">
xapian-replicate-server /var/search/dbs -p 7010
</pre>
<p>would run a server allowing access to these databases, on port 7010.</p>
<p>Finally, on the client machine, run the <cite>xapian-replicate</cite> client to keep an
individual database up-to-date. This will contact the server on the specified
host and port, and copy the database with the name (on the master) specified in
the <cite>-m</cite> option to the client. One non-option argument is required - this is
the name under which the database will be stored on the slave machine. For
example, contacting the above server from the same machine:</p>
<pre class="literal-block">
xapian-replicate -h 127.0.0.1 -p 7010 -m foo foo2
</pre>
<p>would produce a database "foo2" containing a replica of the database
"/var/search/dbs/foo". Note that the first time it is run, this command
will create the foo2 directory and populate it with appropriate files; you
should not create this directory yourself.</p>
<p>As of 1.2.5, if you don't specify the master name, the same name is used
remotely and locally, so this will replicate remote database "foo2" to
local database "foo2":</p>
<pre class="literal-block">
xapian-replicate -h 127.0.0.1 -p 7010 foo2
</pre>
<p>Both the server and client can be run in "one-shot" mode, by passing <cite>-o</cite>.
This may be particularly useful for the client, to allow a shell script to be
used to cycle through a set of databases, updating each in turn (and then
probably sleeping for a period).</p>
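<p>Such a script might look like this (the host, port, database names and
sleep interval are all examples):</p>
<pre class="literal-block">
#!/bin/sh
# Refresh each replica once per cycle in one-shot mode, then sleep.
while true; do
    for db in articles products users; do
        xapian-replicate -o -h master.example.com -p 7010 "$db"
    done
    sleep 60
done
</pre>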
</div>
<div class="section" id="limitations">
<h1><a class="toc-backref" href="#id4">Limitations</a></h1>
<p>It is possible to confuse the replication system in some cases, such that an
invalid database will be produced on the client. However, this is easy to
avoid in practice.</p>
<p>To confuse the replication system, the following needs to happen:</p>
<blockquote>
<ul class="simple">
<li>Start with two databases, A and B.</li>
<li>Start a replication of database A.</li>
<li>While the replication is in progress, swap B in place of A (i.e., by moving
the files around, such that B is now at the path of A).</li>
<li>While the replication is still in progress, swap A back in place of B.</li>
</ul>
</blockquote>
<p>If this happens, the replication process will not detect the change in
database, and you are likely to end up with a database on the client which
contains parts of A and B mixed together. You will need to delete the damaged
database on the client, and re-run the replication.</p>
<p>To avoid this, simply avoid swapping a database back in place of another one.
Or at least, if you must do this, wait until any replications in progress when
you were using the original database have finished.</p>
<div class="section" id="calling-reopen">
<h2><a class="toc-backref" href="#id5">Calling reopen</a></h2>
<p><cite>Database::reopen()</cite> is usually an efficient way to ensure that a database is
up-to-date with the latest changes. Unfortunately, it does not currently work
as you might expect with databases which are being updated by the replication
client. The workaround is simple: don't use the reopen() method on such
databases; instead, close the database and open it
again from scratch.</p>
<p>Briefly, the issue is that the databases created by the replication client are
created in a subdirectory of the target path supplied to the client, rather
than at that path. A "stub database" file is then created in that directory,
pointing to the database. This allows the database which readers open to be
switched atomically after a database copy has occurred. The reopen() method
doesn't re-read the stub database file in this situation, so ends up
attempting to read the old database which has been deleted.</p>
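<p>Concretely, the layout is shaped something like the following sketch (the
file and subdirectory names here are made up - the real ones are internal
details of the replication client - but the <cite>auto</cite> line shows the
standard stub database syntax for pointing at another database):</p>
<pre class="literal-block">
# Recreate the shape of a replica directory by hand, purely for
# illustration - the replication client manages these files itself.
mkdir -p foo2/replica_1
printf 'auto replica_1\n' > foo2/XAPIANDB
cat foo2/XAPIANDB    # readers opening "foo2" follow this line
</pre>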
<p>We intend to fix this issue in the brass backend (currently under
development) by eliminating this hidden use of a stub database file.</p>
</div>
</div>
<div class="section" id="alternative-approaches">
<h1><a class="toc-backref" href="#id6">Alternative approaches</a></h1>
<p>Without using the database replication protocol, there are various ways in
which the "single master, multiple slaves" setup could be implemented.</p>
<blockquote>
<ul class="simple">
<li>Copy database from master to all slaves after each update, then swap the new
database for the old.</li>
<li>Synchronise databases from the master to the slaves using rsync.</li>
<li>Keep copy of database on master from before each update, and use a binary
diff algorithm (e.g., xdelta) to calculate the changes, and then apply these
same changes to the databases on each slave.</li>
<li>Serve database from master to slaves over NFS (or other remote file system).</li>
<li>Use the "remote database backend" facility of Xapian to allow slave servers
to search the database directly on the master.</li>
</ul>
</blockquote>
<p>All of these could be made to work but have various drawbacks, and fail to
satisfy all the desired characteristics. Let's examine them in detail:</p>
<div class="section" id="copying-database-after-each-update">
<h2><a class="toc-backref" href="#id7">Copying database after each update</a></h2>
<p>Databases could be pushed to the slaves after each update simply by copying the
entire database from the master (using scp, ftp, http or one of the many other
transfer options). After the copy is completed, the new database would be made
live by indirecting access through a stub database and switching what it points to.</p>
<p>After a sufficient interval to allow searches in progress on the old database to
complete, the old database would be removed. (On UNIX filesystems, the old
database could be unlinked immediately, and the resources used by it would be
automatically freed as soon as the current searches using it complete.)</p>
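<p>A sketch of the scheme, using a stub database file as the switch (the paths
are examples, and the copy would typically be made with scp, ftp or
similar):</p>
<pre class="literal-block">
# 1. Copy the new database from the master alongside the live one.
scp -r master:/var/search/dbs/foo /srv/search/foo.new
# 2. Atomically repoint the stub database which searchers open.
printf 'auto /srv/search/foo.new\n' > /srv/search/foo.stub.tmp
mv /srv/search/foo.stub.tmp /srv/search/foo.stub
# 3. Once searches on the old copy have drained, remove it.
rm -rf /srv/search/foo.old
</pre>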
<p>This approach has the advantage of simplicity, and also ensures that the
databases can be correctly re-synchronised after network outages or hardware
failure.</p>
<p>However, this approach would involve copying a large amount of data for each
update, however small the update was. Also, because the search server would
have to switch to access new files each time an update was pushed, the search
server is likely to experience poor performance due to commonly accessed
pages falling out of the disk cache during the update. In particular, although
some of the newly pushed data would be likely to be in the cache immediately
after the update, if the combination of the old and new database sizes exceeds
the size of the memory available on the search servers for caching, either some
of the live database will be dropped from the cache resulting in poor
performance during the update, or some of the new database will not initially
be present in the cache after update.</p>
</div>
<div class="section" id="synchronise-database-using-rsync">
<h2><a class="toc-backref" href="#id8">Synchronise database using rsync</a></h2>
<p>Rsync works by calculating hashes for the content on the client and the server,
sending the hashes from the client to the server, and then calculating (on the
server) which pieces of the file need to be sent to update the client. This
results in a fairly low amount of network traffic, but puts a fairly high CPU
load on the server. This would result in a large load being placed on the
master server if a large number of slaves tried to synchronise with it.</p>
<p>Also, rsync will not reliably update the database in a manner which allows the
database on a slave to be searched while being updated - therefore, a copy or
snapshot of the database would need to be taken first to allow searches to
continue (accessing the copy) while the database is being synchronised.</p>
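<p>As a sketch (paths illustrative): the master's database is synced into a
copy which is not currently being searched, and searchers are switched over
afterwards:</p>
<pre class="literal-block">
# Transfer only the changed pieces of each file from the master.
rsync -a --delete master:/var/search/dbs/foo/ /srv/search/foo.new/
# ...then switch searchers to /srv/search/foo.new, e.g. via a stub
# database file, as with the copying approach.
</pre>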
<p>If a copy is used, the caching problems discussed in the previous section would
apply again. If a snapshotting filesystem is used, it may be possible to take
a read-only snapshot copy cheaply (and without encountering poor caching
behaviour), but filesystems with support for this are not always available, and
may require considerable effort to set up even if they are available.</p>
</div>
<div class="section" id="use-a-binary-diff-algorithm">
<h2><a class="toc-backref" href="#id9">Use a binary diff algorithm</a></h2>
<p>If a copy of the database on the master before the update was kept, a binary
diff algorithm (such as "xdelta") could be used to compare the old and new
versions of the database. This would produce a patch file which could be
transferred to the slaves, and then applied - avoiding the need for specific
calculations to be performed for each slave.</p>
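<p>For instance (a hypothetical sketch using the <cite>xdelta3</cite> tool;
postlist.DB is one of the files making up a chert database, and each file
would be handled in the same way):</p>
<pre class="literal-block">
# On the master: encode a delta between the old and new copies.
xdelta3 -e -s old/foo/postlist.DB new/foo/postlist.DB postlist.xd3
# On each slave: apply the delta to a snapshot of the old file.
xdelta3 -d -s snapshot/foo/postlist.DB postlist.xd3 new/foo/postlist.DB
</pre>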
<p>However, this requires a copy or snapshot to be taken on the master - which has
the same problems as previously discussed. A copy or snapshot would also need
to be taken on the slave, since a patch from xdelta couldn't safely be applied
to a live database.</p>
</div>
<div class="section" id="serve-database-from-master-to-slaves-over-nfs">
<h2><a class="toc-backref" href="#id10">Serve database from master to slaves over NFS</a></h2>
<p>NFS allows a section of a filesystem to be exported to a remote host. Xapian
is quite capable of searching a database which is exported in such a manner,
and thus NFS can be used to quickly and easily share a database from the master
to multiple slaves.</p>
<p>A reasonable setup might be to use a powerful machine with a fast disk as the
master, and use that same machine as an NFS server. Then, multiple slaves can
connect to that NFS server for searching the database. This setup is quite
convenient, because it separates the indexing workload from the search workload
to a reasonable extent, but may lead to performance problems.</p>
<p>There are two main problems which are likely to be encountered. Firstly, in
order to work efficiently, NFS clients (or the OS filesystem layer above NFS)
cache information read from the remote file system in memory. If there is
insufficient memory available to cache the whole database in memory, searches
will occasionally need to access parts of the database which are held only on
the master server. Such searches will take a long time to complete, because
the round-trip time for an access to a disk block on the master is typically a
lot slower than the round-trip time for access to a local disk. Additionally,
if the local network experiences problems, or the master server fails (or gets
overloaded due to all the search requests), searches will fail to
complete.</p>
<p>Also, when a file is modified, the NFS protocol has no way of indicating that
only a small set of blocks in the file have been modified. The caching is all
implemented by NFS clients, which can do little other than check the file
modification time periodically, and invalidate all cached blocks for the file
if the modification time has changed. For the Linux client, the time between
checks can be configured by setting the acregmin and acregmax mount options,
but whatever these are set to, the whole file will be dropped from the cache
when any modification is found.</p>
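<p>For example, an <cite>/etc/fstab</cite> entry might tune these options as
follows (the values, in seconds, are purely illustrative):</p>
<pre class="literal-block">
# Cache attributes for 30-300 seconds before re-checking the file
# modification times on the server.
master:/var/search/dbs  /mnt/dbs  nfs  ro,acregmin=30,acregmax=300  0  0
</pre>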
<p>This means that, after every update to the database on the master, searches on
the slaves will have to fetch all the blocks required for their search across
the network, which will likely result in extremely slow search times until the
cache on the slaves gets populated properly again.</p>
</div>
<div class="section" id="use-the-remote-database-backend-facility">
<h2><a class="toc-backref" href="#id11">Use the "remote database backend" facility</a></h2>
<p>Xapian has supported a "remote" database backend since the very early days of
the project. This allows a search to be run against a database on a remote
machine, which may seem to be exactly what we want. However, the "remote"
database backend works by performing most of the work for a search on the
remote end - in the situation we're concerned with, this would mean that most
of the work was performed on the master, while slaves remain largely idle.</p>
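<p>(For reference, a database is typically served for remote searching with
the <cite>xapian-tcpsrv</cite> program shipped with xapian-core:)</p>
<pre class="literal-block">
# Serve /var/search/dbs/foo to remote searchers on port 33333.
xapian-tcpsrv --port 33333 /var/search/dbs/foo
</pre>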
<p>The "remote" database backend is intended to allow a large database to be
split, at the document level, between multiple hosts. This allows systems to
be built which search a very large database with some degree of parallelism
(and thus provide faster individual searches than a system searching a single
database locally). In contrast, the database replication protocol is intended
to allow a database to be copied to multiple machines to support a high
concurrent search load (and thus to allow a higher throughput of searches).</p>
<p>In some cases (i.e., a very large database and a high concurrent search load)
it may be perfectly reasonable to use the database replication protocol in
conjunction with the "remote" database backend, to get both of these advantages
- the two systems solve different problems.</p>
</div>
</div>
</div>
</body>
</html>