/usr/share/doc/slony1-2-doc/adminguide/slonyupgrade.html is in slony1-2-doc 2.0.7-3build1.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<HTML
><HEAD
><TITLE
> Slony-I Upgrade </TITLE
><META
NAME="GENERATOR"
CONTENT="Modular DocBook HTML Stylesheet Version 1.79"><LINK
REV="MADE"
HREF="mailto:slony1-general@lists.slony.info"><LINK
REL="HOME"
TITLE="Slony-I 2.0.7 Documentation"
HREF="index.html"><LINK
REL="UP"
TITLE="Advanced Topics"
HREF="advanced.html"><LINK
REL="PREVIOUS"
TITLE="Partitioning Support "
HREF="partitioning.html"><LINK
REL="NEXT"
TITLE="Log Analysis"
HREF="loganalysis.html"><LINK
REL="STYLESHEET"
TYPE="text/css"
HREF="stylesheet.css"><META
HTTP-EQUIV="Content-Type"
CONTENT="text/html; charset=ISO-8859-1"><META
NAME="creation"
CONTENT="2011-12-03T11:44:27"></HEAD
><BODY
CLASS="SECT1"
><DIV
CLASS="NAVHEADER"
><TABLE
SUMMARY="Header navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TH
COLSPAN="5"
ALIGN="center"
VALIGN="bottom"
><SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> 2.0.7 Documentation</TH
></TR
><TR
><TD
WIDTH="10%"
ALIGN="left"
VALIGN="top"
><A
HREF="partitioning.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="10%"
ALIGN="left"
VALIGN="top"
><A
HREF="advanced.html"
>Fast Backward</A
></TD
><TD
WIDTH="60%"
ALIGN="center"
VALIGN="bottom"
>Chapter 4. Advanced Topics</TD
><TD
WIDTH="10%"
ALIGN="right"
VALIGN="top"
><A
HREF="advanced.html"
>Fast Forward</A
></TD
><TD
WIDTH="10%"
ALIGN="right"
VALIGN="top"
><A
HREF="loganalysis.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
></TABLE
><HR
ALIGN="LEFT"
WIDTH="100%"></DIV
><DIV
CLASS="SECT1"
><H1
CLASS="SECT1"
><A
NAME="SLONYUPGRADE"
>4.6. <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> Upgrade</A
></H1
><A
NAME="AEN1904"
></A
><P
> Minor <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> versions can be upgraded using the slonik <A
HREF="stmtupdatefunctions.html"
>SLONIK UPDATE FUNCTIONS</A
> command. This includes upgrades from
a 2.0.x release to a newer 2.0.y release. </P
><P
> When upgrading <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
>, the installation on all nodes in a
cluster must be upgraded at once, using the <A
HREF="slonik.html"
><SPAN
CLASS="APPLICATION"
>slonik</SPAN
></A
>
command <A
HREF="stmtupdatefunctions.html"
>SLONIK UPDATE FUNCTIONS</A
>.</P
><P
> While this requires temporarily stopping replication, it does
not forcibly require an outage for applications that submit
updates. </P
><P
>The proper upgrade procedure is thus:</P
><P
></P
><UL
><LI
><P
> Stop the <A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
> processes on all nodes
(<SPAN
CLASS="emphasis"
><I
CLASS="EMPHASIS"
>i.e.</I
></SPAN
>, the old version of <A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
>).</P
></LI
><LI
><P
> Install the new version of <A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
> software on all
nodes.</P
></LI
><LI
><P
> Execute a <A
HREF="slonik.html"
><SPAN
CLASS="APPLICATION"
>slonik</SPAN
></A
> script containing the
command <TT
CLASS="COMMAND"
>update functions (id = [whatever]);</TT
> for
each node in the cluster.</P
><DIV
CLASS="NOTE"
><BLOCKQUOTE
CLASS="NOTE"
><P
><B
>Note: </B
>Remember that your slonik upgrade script, like all other
slonik scripts, must contain the proper preamble commands in order to function.</P
></BLOCKQUOTE
></DIV
></LI
><LI
><P
> Start all slons. </P
></LI
></UL
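><P
> For a two-node cluster, such an upgrade script might look like the following sketch (the cluster name and conninfo values here are placeholders): </P
><PRE
CLASS="PROGRAMLISTING"
>cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=server2 user=slony';

update functions (id = 1);
update functions (id = 2);</PRE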
><P
> The overall operation is relatively safe: If there is any
mismatch between component versions, the <A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
> will refuse to start
up, which provides protection against corruption. </P
><P
> You need to be sure that the C library containing SPI trigger
functions has been copied into place in the <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> build. There
are multiple possible approaches to this:</P
><P
> The easiest and safest way to handle this is to have two separate
<SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> builds, one for each <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> version, where the postmaster
is shut down and then restarted against the <SPAN
CLASS="QUOTE"
>"new"</SPAN
> build;
that approach requires a brief database outage on each node.</P
><P
> While that approach has been found to be easier and safer,
nothing prevents one from carefully copying <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> components for
the new version into place to overwrite the old version as
the <SPAN
CLASS="QUOTE"
>"install"</SPAN
> step. That might <SPAN
CLASS="emphasis"
><I
CLASS="EMPHASIS"
>not</I
></SPAN
>
work on <SPAN
CLASS="TRADEMARK"
>Windows</SPAN
>™ if it locks library files that
are in use. It is also important to make sure that any connections
to the database are restarted after the new binary is installed. </P
><P
></P
><DIV
CLASS="VARIABLELIST"
><DL
><DT
>Run <TT
CLASS="COMMAND"
>make install</TT
> to install new
<SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> components on top of the old</DT
><DD
><P
>If you build <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> on the same system on which it
is to be deployed, and build from sources, overwriting the old with
the new is as easy as <TT
CLASS="COMMAND"
>make install</TT
>. There is no
need to restart a database backend; just to stop <A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
> processes,
run the <TT
CLASS="COMMAND"
>UPDATE FUNCTIONS</TT
> script, and start new
<A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
> processes.</P
><P
> Unfortunately, this approach requires having a build
environment on the same host as the deployment. That may not be
consistent with efforts to use common <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> and <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> binaries
across a set of nodes. </P
></DD
><DT
>Create a new <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> and <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> build</DT
><DD
><P
>With this approach, the old <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> build with old
<SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> components persists after switching to a new <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> build
with new <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> components. In order to switch to the new <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
>
build, you need to restart the
<SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> <TT
CLASS="COMMAND"
>postmaster</TT
>, thereby interrupting
applications, in order to make it aware of the location of the
new components. </P
></DD
></DL
></DIV
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="AEN1974"
>4.6.1. Incompatibilities between 1.2 and 2.0</A
></H2
><A
NAME="AEN1976"
></A
><DIV
CLASS="SECT3"
><H3
CLASS="SECT3"
><A
NAME="AEN1978"
>4.6.1.1. TABLE ADD KEY issue in <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> 2.0</A
></H3
><P
> The TABLE ADD KEY slonik command has been removed in version 2.0.
This means that all tables must have a set of columns that form a
unique key for the table.
If you are upgrading from a previous <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> version and are using a
<SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
>-created primary key, then you will need to modify your table
to have its own primary key before installing <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> version 2.0.</P
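><P
> For example, a table that previously relied on a Slony-I-created key could be given a genuine primary key along these lines (table and column names are illustrative): </P
><PRE
CLASS="PROGRAMLISTING"
>-- add a surrogate key column and make it the primary key
ALTER TABLE mytable ADD COLUMN id serial;
ALTER TABLE mytable ADD PRIMARY KEY (id);</PRE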
></DIV
><DIV
CLASS="SECT3"
><H3
CLASS="SECT3"
><A
NAME="AEN1985"
>4.6.1.2. New Trigger Handling in <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> Version 2</A
></H3
><P
> One of the major changes to <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> is that enabling/disabling
of triggers and rules now takes place as plain SQL, supported by
<SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> 8.3+, rather than via <SPAN
CLASS="QUOTE"
>"hacking"</SPAN
> on the system
catalog. </P
><P
> As a result, <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> users should be aware of the <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
>
syntax for <TT
CLASS="COMMAND"
>ALTER TABLE</TT
>, as that is how they can
accomplish what was formerly accomplished via <A
HREF="stmtstoretrigger.html"
>SLONIK STORE TRIGGER</A
> and <A
HREF="stmtdroptrigger.html"
>SLONIK DROP TRIGGER</A
>. </P
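><P
> For instance (table and trigger names are illustrative): </P
><PRE
CLASS="PROGRAMLISTING"
>-- fire only when the session is in replica mode (on subscribers)
ALTER TABLE mytable ENABLE REPLICA TRIGGER my_trigger;

-- fire regardless of the session replication role
ALTER TABLE mytable ENABLE ALWAYS TRIGGER my_trigger;</PRE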
></DIV
><DIV
CLASS="SECT3"
><H3
CLASS="SECT3"
><A
NAME="AEN1998"
>4.6.1.3. SUBSCRIBE SET goes to the origin</A
></H3
><P
> New in 2.0.5 (but not older versions of 2.0.x) is that
<A
HREF="stmtsubscribeset.html"
>SLONIK SUBSCRIBE SET</A
> commands are submitted by
slonik to the set origin not the provider. This means that you
only need to issue <A
HREF="stmtwaitevent.html"
>SLONIK WAIT FOR EVENT</A
> on the set origin
to wait for the subscription process to complete.</P
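><P
> For example (node and set numbers are hypothetical), with set 1 originating on node 1: </P
><PRE
CLASS="PROGRAMLISTING"
>subscribe set (id = 1, provider = 2, receiver = 3);

-- wait on the origin, where slonik submitted the command
wait for event (origin = 1, confirmed = all, wait on = 1);</PRE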
></DIV
><DIV
CLASS="SECT3"
><H3
CLASS="SECT3"
><A
NAME="AEN2003"
>4.6.1.4. WAIT FOR EVENT requires WAIT ON</A
></H3
><P
> With version 2.0, the WAIT FOR EVENT slonik command requires
that the WAIT ON parameter be specified. Any slonik scripts that
were relying on a default value will need to be modified.</P
></DIV
></DIV
><DIV
CLASS="SECT2"
><H2
CLASS="SECT2"
><A
NAME="UPGRADE20"
>4.6.2. Upgrading to <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> version 2</A
></H2
><A
NAME="AEN2009"
></A
><P
> The version 2 branch is <SPAN
CLASS="emphasis"
><I
CLASS="EMPHASIS"
>substantially</I
></SPAN
>
different from earlier releases, dropping support for versions of
<SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> prior to 8.3; version 8.3 added support for a
<SPAN
CLASS="QUOTE"
>"session replication role"</SPAN
>, thereby eliminating
the need for system catalog hacks as well as the
not-entirely-well-supported <TT
CLASS="ENVAR"
>xxid</TT
> data type. </P
><P
> As a result of the replacement of the <TT
CLASS="ENVAR"
>xxid</TT
> type
with a (native-to-8.3) <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> transaction XID type, the <A
HREF="slonik.html"
><SPAN
CLASS="APPLICATION"
>slonik</SPAN
></A
>
command <A
HREF="stmtupdatefunctions.html"
>SLONIK UPDATE FUNCTIONS</A
> is not sufficient for
upgrading earlier versions of <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> to version
2.</P
><P
> In version 2.0.2, we have added a new option to <A
HREF="stmtsubscribeset.html"
>SLONIK SUBSCRIBE SET</A
>, <TT
CLASS="COMMAND"
>OMIT COPY</TT
>, which
allows taking an alternative approach to upgrade which amounts to:</P
><P
></P
><UL
><LI
><P
> Uninstall old version of <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> </P
><P
> When <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> uninstalls itself, the system catalog modifications it made are repaired.</P
></LI
><LI
><P
> Install <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> version 2 </P
></LI
><LI
><P
> Resubscribe, with <TT
CLASS="COMMAND"
>OMIT COPY</TT
></P
></LI
></UL
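><P
> The resubscription step, assuming set 1 originates on node 1 with node 2 as the subscriber (hypothetical numbers), would use the new option: </P
><PRE
CLASS="PROGRAMLISTING"
>subscribe set (id = 1, provider = 1, receiver = 2, omit copy = true);</PRE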
><DIV
CLASS="WARNING"
><P
></P
><TABLE
CLASS="WARNING"
BORDER="1"
WIDTH="100%"
><TR
><TD
ALIGN="CENTER"
><B
>Warning</B
></TD
></TR
><TR
><TD
ALIGN="LEFT"
><P
> There is a large <SPAN
CLASS="QUOTE"
>"foot gun"</SPAN
> here: during
part of the process, <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> is not installed in any form, and if an
application updates one or another of the databases, the
resubscription, omitting copying data, will be left with data
<SPAN
CLASS="emphasis"
><I
CLASS="EMPHASIS"
>out of sync.</I
></SPAN
> </P
><P
> The administrator <SPAN
CLASS="emphasis"
><I
CLASS="EMPHASIS"
>must take care</I
></SPAN
>; <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
>
has no way to help ensure the integrity of the data during this
process.</P
></TD
></TR
></TABLE
></DIV
><P
> The following process is suggested to help make the upgrade
process as safe as possible, given the above risks. </P
><P
></P
><UL
><LI
><P
> Use <A
HREF="appendix.html#SLONIKCONFDUMP"
>Section 5.1.10</A
> to generate a
<A
HREF="slonik.html"
><SPAN
CLASS="APPLICATION"
>slonik</SPAN
></A
> script to recreate the replication cluster. </P
><P
> Be sure to verify the <A
HREF="admconninfo.html"
>SLONIK ADMIN CONNINFO</A
> statements,
as the values are drawn from the PATH configuration, which
may not necessarily be suitable for running <A
HREF="slonik.html"
><SPAN
CLASS="APPLICATION"
>slonik</SPAN
></A
>. </P
><P
> This step may be done before the application outage. </P
></LI
><LI
><P
> Determine what triggers have <A
HREF="stmtstoretrigger.html"
>SLONIK STORE TRIGGER</A
> configuration on subscriber nodes.</P
><P
>Trigger handling has
fundamentally changed between <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> 1.2 and 2.0. </P
><P
> Generally speaking, what needs to happen is to query
<TT
CLASS="ENVAR"
>sl_table</TT
> on each node, and, for any triggers found in
<TT
CLASS="ENVAR"
>sl_table</TT
>, it is likely to be appropriate to set up a
script indicating either <TT
CLASS="COMMAND"
>ENABLE REPLICA TRIGGER</TT
> or
<TT
CLASS="COMMAND"
>ENABLE ALWAYS TRIGGER</TT
> for these triggers.</P
><P
> This step may be done before the application outage. </P
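><P
> A query along the following lines is one possible sketch; it assumes the cluster is named testcluster, so that the Slony-I schema is _testcluster: </P
><PRE
CLASS="PROGRAMLISTING"
>-- list all triggers on replicated tables;
-- filter out the Slony-I-generated ones by name
SELECT t.tab_relname, tg.tgname
  FROM _testcluster.sl_table t
  JOIN pg_catalog.pg_trigger tg ON tg.tgrelid = t.tab_reloid;</PRE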
></LI
><LI
><P
> Begin an application outage during which updates should no longer be applied to the database. </P
></LI
><LI
><P
> To ensure that applications cease to make changes, it would be appropriate to lock them out via modifications to <TT
CLASS="FILENAME"
>pg_hba.conf</TT
>. </P
></LI
><LI
><P
> Ensure replication is entirely caught up, via examination of the <TT
CLASS="ENVAR"
>sl_status</TT
> view, and any application data that may seem appropriate. </P
></LI
><LI
><P
> Shut down <A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
> processes. </P
></LI
><LI
><P
> Uninstall the old version of <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> from the database. </P
><P
> This involves running a <A
HREF="slonik.html"
><SPAN
CLASS="APPLICATION"
>slonik</SPAN
></A
> script that runs <A
HREF="stmtuninstallnode.html"
>SLONIK UNINSTALL NODE</A
> against each node in the cluster. </P
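><P
> Such a script might look like the following sketch (cluster name and conninfo values are placeholders); as with all slonik scripts, the preamble is required: </P
><PRE
CLASS="PROGRAMLISTING"
>cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=server2 user=slony';

uninstall node (id = 2);
uninstall node (id = 1);</PRE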
></LI
><LI
><P
> Ensure new <SPAN
CLASS="PRODUCTNAME"
>Slony-I</SPAN
> binaries are in place. </P
><P
> A convenient way to handle this is to have old and new in different directories alongside two <SPAN
CLASS="PRODUCTNAME"
>PostgreSQL</SPAN
> builds, stop the <SPAN
CLASS="APPLICATION"
>postmaster</SPAN
>, repoint to the new directory, and restart the <SPAN
CLASS="APPLICATION"
>postmaster</SPAN
>. </P
></LI
><LI
><P
> Run the script that reconfigures replication as generated earlier. </P
><P
> This script should probably be split into two portions to be run separately:</P
><P
></P
><UL
><LI
><P
> Firstly, set up nodes, paths, sets, and such </P
></LI
><LI
><P
> At this point, start up <A
HREF="slon.html"
><SPAN
CLASS="APPLICATION"
>slon</SPAN
></A
> processes </P
></LI
><LI
><P
> Then, run the portion which runs <A
HREF="stmtsubscribeset.html"
>SLONIK SUBSCRIBE SET</A
> </P
></LI
></UL
><P
> Splitting the <A
HREF="appendix.html#SLONIKCONFDUMP"
>Section 5.1.10</A
> script as described above is left as an exercise for the reader.</P
></LI
><LI
><P
> If there were triggers that needed to be activated on subscriber nodes, this is the time to activate them. </P
></LI
><LI
><P
> At this point, the cluster should be back up and running, ready to be reconfigured so that applications may access it again. </P
></LI
></UL
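><P
> Schematically, the two portions of the reconfiguration script might look like the following sketch, where the cluster, node, and table names are hypothetical: </P
><PRE
CLASS="PROGRAMLISTING"
>-- part 1: recreate nodes, paths, and sets; run before starting slons
cluster name = testcluster;
node 1 admin conninfo = 'dbname=mydb host=server1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=server2 user=slony';

init cluster (id = 1, comment = 'origin');
store node (id = 2, comment = 'subscriber', event node = 1);
store path (server = 1, client = 2, conninfo = 'dbname=mydb host=server1 user=slony');
store path (server = 2, client = 1, conninfo = 'dbname=mydb host=server2 user=slony');
create set (id = 1, origin = 1, comment = 'replicated tables');
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.mytable');

-- part 2: run as a separate script, after the slons are started
subscribe set (id = 1, provider = 1, receiver = 2, omit copy = true);</PRE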
></DIV
></DIV
><DIV
CLASS="NAVFOOTER"
><HR
ALIGN="LEFT"
WIDTH="100%"><TABLE
SUMMARY="Footer navigation table"
WIDTH="100%"
BORDER="0"
CELLPADDING="0"
CELLSPACING="0"
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
><A
HREF="partitioning.html"
ACCESSKEY="P"
>Prev</A
></TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
><A
HREF="index.html"
ACCESSKEY="H"
>Home</A
></TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
><A
HREF="loganalysis.html"
ACCESSKEY="N"
>Next</A
></TD
></TR
><TR
><TD
WIDTH="33%"
ALIGN="left"
VALIGN="top"
>Partitioning Support</TD
><TD
WIDTH="34%"
ALIGN="center"
VALIGN="top"
><A
HREF="advanced.html"
ACCESSKEY="U"
>Up</A
></TD
><TD
WIDTH="33%"
ALIGN="right"
VALIGN="top"
>Log Analysis</TD
></TR
></TABLE
></DIV
></BODY
></HTML
>