Cedar Backup 3 Software Manual
Kenneth J. Pronovici
Copyright 2005-2008,2013-2015 Kenneth J. Pronovici
This work is free; you can redistribute it and/or modify it under the terms of
the GNU General Public License (the "GPL"), Version 2, as published by the Free
Software Foundation.
For the purposes of the GPL, the "preferred form of modification" for this work
is the original Docbook XML text files. If you choose to distribute this work
in a compiled form (i.e. if you distribute HTML, PDF or Postscript documents
based on the original Docbook XML text files), you must also consider image
files to be "source code" if those images are required in order to construct a
complete and readable compiled version of the work.
This work is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
Copies of the GNU General Public License are available from the Free Software
Foundation website, http://www.gnu.org/. You may also write the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Table of Contents
Preface
Purpose
Audience
Conventions Used in This Book
Typographic Conventions
Icons
Organization of This Manual
Acknowledgments
1. Introduction
What is Cedar Backup?
Migrating from Version 2 to Version 3
How to Get Support
History
2. Basic Concepts
General Architecture
Data Recovery
Cedar Backup Pools
The Backup Process
The Collect Action
The Stage Action
The Store Action
The Purge Action
The All Action
The Validate Action
The Initialize Action
The Rebuild Action
Coordination between Master and Clients
Managed Backups
Media and Device Types
Incremental Backups
Extensions
3. Installation
Background
Installing on a Debian System
Installing from Source
Installing Dependencies
Installing the Source Package
4. Command Line Tools
Overview
The cback3 command
Introduction
Syntax
Switches
Actions
The cback3-amazons3-sync command
Introduction
Syntax
Switches
The cback3-span command
Introduction
Syntax
Switches
Using cback3-span
Sample run
5. Configuration
Overview
Configuration File Format
Sample Configuration File
Reference Configuration
Options Configuration
Peers Configuration
Collect Configuration
Stage Configuration
Store Configuration
Purge Configuration
Extensions Configuration
Setting up a Pool of One
Step 1: Decide when you will run your backup.
Step 2: Make sure email works.
Step 3: Configure your writer device.
Step 4: Configure your backup user.
Step 5: Create your backup tree.
Step 6: Create the Cedar Backup configuration file.
Step 7: Validate the Cedar Backup configuration file.
Step 8: Test your backup.
Step 9: Modify the backup cron jobs.
Setting up a Client Peer Node
Step 1: Decide when you will run your backup.
Step 2: Make sure email works.
Step 3: Configure the master in your backup pool.
Step 4: Configure your backup user.
Step 5: Create your backup tree.
Step 6: Create the Cedar Backup configuration file.
Step 7: Validate the Cedar Backup configuration file.
Step 8: Test your backup.
Step 9: Modify the backup cron jobs.
Setting up a Master Peer Node
Step 1: Decide when you will run your backup.
Step 2: Make sure email works.
Step 3: Configure your writer device.
Step 4: Configure your backup user.
Step 5: Create your backup tree.
Step 6: Create the Cedar Backup configuration file.
Step 7: Validate the Cedar Backup configuration file.
Step 8: Test connectivity to client machines.
Step 9: Test your backup.
Step 10: Modify the backup cron jobs.
Configuring your Writer Device
Device Types
Devices identified by device name
Devices identified by SCSI id
Linux Notes
Finding your Linux CD Writer
Mac OS X Notes
Optimized Blanking Strategy
6. Official Extensions
System Information Extension
Amazon S3 Extension
Subversion Extension
MySQL Extension
PostgreSQL Extension
Mbox Extension
Encrypt Extension
Split Extension
Capacity Extension
A. Extension Architecture Interface
B. Dependencies
C. Data Recovery
Finding your Data
Recovering Filesystem Data
Full Restore
Partial Restore
Recovering MySQL Data
Recovering Subversion Data
Recovering Mailbox Data
Recovering Data split by the Split Extension
D. Securing Password-less SSH Connections
E. Copyright
Preface
Table of Contents
Purpose
Audience
Conventions Used in This Book
Typographic Conventions
Icons
Organization of This Manual
Acknowledgments
Purpose
This software manual has been written to document version 3 of Cedar Backup,
originally released in 2015.
Audience
This manual has been written for computer-literate administrators who need to
use and configure Cedar Backup on their Linux or UNIX-like system. The examples
in this manual assume the reader is relatively comfortable with UNIX and
command-line interfaces.
Conventions Used in This Book
This section covers the various conventions used in this manual.
Typographic Conventions
Term
Used for first use of important terms.
Command
Used for commands, command output, and switches.
Replaceable
Used for replaceable items in code and text.
Filenames
Used for file and directory names.
Icons
Note
This icon designates a note relating to the surrounding text.
Tip
This icon designates a helpful tip relating to the surrounding text.
Warning
This icon designates a warning relating to the surrounding text.
Organization of This Manual
Chapter 1, Introduction
Provides some general history about Cedar Backup, what needs it is
intended to meet, how to get support, and how to migrate from version 2 to
version 3.
Chapter 2, Basic Concepts
Discusses the basic concepts of a Cedar Backup infrastructure, and
specifies terms used throughout the rest of the manual.
Chapter 3, Installation
Explains how to install the Cedar Backup package either from the Python
source distribution or from the Debian package.
Chapter 4, Command Line Tools
Discusses the various Cedar Backup command-line tools, including the
primary cback3 command.
Chapter 5, Configuration
Provides detailed information about how to configure Cedar Backup.
Chapter 6, Official Extensions
Describes each of the officially-supported Cedar Backup extensions.
Appendix A, Extension Architecture Interface
Specifies the Cedar Backup extension architecture interface, through which
third party developers can write extensions to Cedar Backup.
Appendix B, Dependencies
Provides some additional information about the packages which Cedar Backup
relies on, including information about how to find documentation and
packages on non-Debian systems.
Appendix C, Data Recovery
Cedar Backup provides no facility for restoring backups; it assumes that the
administrator can handle this infrequent task. This appendix provides some
notes for administrators to work from.
Appendix D, Securing Password-less SSH Connections
Password-less SSH connections are a necessary evil when remote backup
processes need to execute without human interaction. This appendix
describes some ways that you can reduce the risk to your backup pool should
your master machine be compromised.
Acknowledgments
The structure of this manual and some of the basic boilerplate has been taken
from the book Version Control with Subversion. Thanks to the authors (and
O'Reilly) for making this excellent reference available under a free and open
license.
Chapter 1. Introduction
Table of Contents
What is Cedar Backup?
Migrating from Version 2 to Version 3
How to Get Support
History
“Only wimps use tape backup: real men just upload their important stuff on
ftp, and let the rest of the world mirror it.”— Linus Torvalds, at the
release of Linux 2.0.8 in July of 1996.
What is Cedar Backup?
Cedar Backup is a software package designed to manage system backups for a pool
of local and remote machines. Cedar Backup understands how to back up
filesystem data as well as MySQL and PostgreSQL databases and Subversion
repositories. It can also be easily extended to support other kinds of data
sources.
Cedar Backup is focused around weekly backups to a single CD or DVD disc, with
the expectation that the disc will be changed or overwritten at the beginning
of each week. If your hardware is new enough (and almost all hardware is
today), Cedar Backup can write multisession discs, allowing you to add
incremental data to a disc on a daily basis.
Alternatively, Cedar Backup can write your backups to the Amazon S3 cloud rather
than relying on physical media.
Besides offering command-line utilities to manage the backup process, Cedar
Backup provides a well-organized library of backup-related functionality,
written in the Python 3 programming language.
There are many different backup software implementations out there in the open
source world. Cedar Backup aims to fill a niche: it aims to be a good fit for
people who need to back up a limited amount of important data on a regular
basis. Cedar Backup isn't for you if you want to back up your huge MP3
collection every night, or if you want to back up a few hundred machines.
However, if you administer a small set of machines and you want to run daily
incremental backups for things like system configuration, current email, small
web sites, Subversion or Mercurial repositories, or small MySQL databases, then
Cedar Backup is probably worth your time.
Cedar Backup has been developed on a Debian GNU/Linux system and is primarily
supported on Debian and other Linux systems. However, since it is written in
portable Python 3, it should run without problems on just about any UNIX-like
operating system. In particular, full Cedar Backup functionality is known to
work on Debian and SuSE Linux systems, and client functionality is also known
to work on FreeBSD and Mac OS X systems.
To run a Cedar Backup client, you really just need a working Python 3
installation. To run a Cedar Backup master, you will also need a set of other
executables, most of which are related to building and writing CD/DVD images or
talking to the Amazon S3 infrastructure. A full list of dependencies is
provided in the section called “Installing Dependencies”.
Migrating from Version 2 to Version 3
The main difference between Cedar Backup version 2 and Cedar Backup version 3
is the targeted Python interpreter. Cedar Backup version 2 was designed for
Python 2, while version 3 is a conversion of the original code to Python 3.
Other than that, both versions are functionally equivalent. The configuration
format is unchanged, and you can mix-and-match masters and clients of different
versions in the same backup pool. Both versions will be fully supported until
around the time of the Python 2 end-of-life in 2020, but you should plan to
migrate sooner than that if possible.
A major design goal for version 3 was to facilitate easy migration testing for
users, by making it possible to install version 3 on the same server where
version 2 was already in use. A side effect of this design choice is that all
of the executables, configuration files, and logs changed names in version 3.
Where version 2 used "cback", version 3 uses "cback3": cback3.conf instead of
cback.conf, cback3.log instead of cback.log, etc.
So, while migrating from version 2 to version 3 is relatively straightforward,
you will have to make some changes manually. You will need to create a new
configuration file (or soft link to the old one), modify your cron jobs to use
the new executable name, etc. You can migrate one server at a time in your pool
with no ill effects, or even incrementally migrate a single server by using
version 2 and version 3 on different days of the week or for different parts of
the backup.
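For example, here is a minimal sketch of the manual migration steps, assuming
your version 2 configuration lives in /etc/cback.conf and your cron entries
live in a hypothetical /etc/cron.d/cedar-backup file (adjust both paths to
match your own system):

    # Reuse the version 2 configuration under the new name; this works
    # because the configuration format is unchanged between versions
    ln -s /etc/cback.conf /etc/cback3.conf

    # Update cron jobs to invoke the new executable name
    sed -i 's/\bcback\b/cback3/g' /etc/cron.d/cedar-backup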
How to Get Support
Cedar Backup is open source software that is provided to you at no cost. It is
provided with no warranty, not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. That said, someone can usually help you solve whatever
problems you might see.
If you experience a problem, your best bet is to file an issue in the issue
tracker at BitBucket. ^[1] When the source code was hosted at SourceForge,
there was a mailing list. However, it was very lightly used in the last years
before I abandoned SourceForge, and I have decided not to replace it.
If you are not comfortable discussing your problem in public or listing it in a
public database, or if you need to send along information that you do not want
made public, then you can write <support@cedar-solutions.com>. That mail will
go directly to me. If you write the support address about a bug, a “scrubbed”
bug report will eventually end up in the public bug database anyway, so if at
all possible you should use the public reporting mechanisms. One of the
strengths of the open-source software development model is its transparency.
Regardless of how you report your problem, please try to provide as much
information as possible about the behavior you observed and the environment in
which the problem behavior occurred. ^[2]
In particular, you should provide: the version of Cedar Backup that you are
using; how you installed Cedar Backup (i.e. Debian package, source package,
etc.); the exact command line that you executed; any error messages you
received, including Python stack traces (if any); and relevant sections of the
Cedar Backup log. It would be even better if you could describe exactly how to
reproduce the problem, for instance by including your entire configuration file
and/or specific information about your system that might relate to the problem.
However, please do not provide huge sections of debugging logs unless you are
sure they are relevant or unless someone asks for them.
Tip
Sometimes, the error that Cedar Backup displays can be rather cryptic. This is
because under internal error conditions, the text related to an exception might
get propagated all the way up to the user interface. If the message you
receive doesn't make much sense, or if you suspect that it results from an
internal error, you might want to re-run Cedar Backup with the --stack option.
This forces Cedar Backup to dump the entire Python stack trace associated with
the error, rather than just printing the last message it received. This is good
information to include along with a bug report, as well.
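For example, if a backup run fails with a confusing message, you can re-run the
same actions with the stack trace enabled:

    cback3 --stack all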
History
Cedar Backup began life in late 2000 as a set of Perl scripts called kbackup.
These scripts met an immediate need (which was to back up skyjammer.com and
some personal machines) but proved to be unstable, overly verbose and rather
difficult to maintain.
In early 2002, work began on a rewrite of kbackup. The goal was to address many
of the shortcomings of the original application, as well as to clean up the
code and make it available to the general public. While doing research related
to code I could borrow or base the rewrite on, I discovered that there was
already an existing backup package with the name kbackup, so I decided to
change the name to Cedar Backup instead.
Because I had become fed up with the prospect of maintaining a large volume of
Perl code, I decided to abandon that language in favor of Python. ^[3] At the
time, I chose Python mostly because I was interested in learning it, but in
retrospect it turned out to be a very good decision. From my perspective,
Python has almost all of the strengths of Perl, but few of its inherent
weaknesses (I feel that primarily, Python code often ends up being much more
readable than Perl code).
Around this same time, skyjammer.com and cedar-solutions.com were converted to
run Debian GNU/Linux (potato) ^[4] and I entered the Debian new maintainer
queue, so I also made it a goal to implement Debian packages along with a
Python source distribution for the new release.
Version 1.0 of Cedar Backup was released in June of 2002. We immediately began
using it to back up skyjammer.com and cedar-solutions.com, where it proved to
be much more stable than the original code.
In the meantime, I continued to improve as a Python programmer and also started
doing a significant amount of professional development in Java. It soon became
obvious that the internal structure of Cedar Backup 1.0, while much better than
kbackup, still left something to be desired. In November 2003, I began an
attempt at cleaning up the codebase. I converted all of the internal
documentation to use Epydoc, ^[5] and updated the code to use the
newly-released Python logging package ^[6] after having a good experience with
Java's log4j. However, I was still not satisfied with the code, which did not
lend itself to the automated regression testing I had used when working with
junit in my Java code.
So, rather than releasing the cleaned-up code, I instead began another
ground-up rewrite in May 2004. With this rewrite, I applied everything I had
learned from other Java and Python projects I had undertaken over the last few
years. I structured the code to take advantage of Python's unique ability to
blend procedural code with object-oriented code, and I made automated unit
testing a primary requirement. The result was the 2.0 release, which is
cleaner, more compact, better focused, and better documented than any release
before it. Utility code is less application-specific, and is now usable as a
general-purpose library. The 2.0 release also includes a complete regression
test suite of over 3000 tests, which will help to ensure that quality is
maintained as development continues into the future. ^[7]
The 3.0 release of Cedar Backup is a Python 3 conversion of the 2.0 release,
with minimal additional functionality. The conversion from Python 2 to Python 3
started in mid-2015, about 5 years before the anticipated deprecation of Python
2 in 2020. Most users should consider transitioning to the 3.0 release.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
^[1] See https://bitbucket.org/cedarsolutions/cedar-backup3/issues.
^[2] See Simon Tatham's excellent bug reporting tutorial:
http://www.chiark.greenend.org.uk/~sgtatham/bugs.html.
^[3] See http://www.python.org/.
^[4] Debian's stable releases are named after characters in the Toy Story
movie.
^[5] Epydoc is a Python code documentation tool. See
http://epydoc.sourceforge.net/.
^[6] See http://docs.python.org/lib/module-logging.html.
^[7] Tests are implemented using Python's unit test framework. See
http://docs.python.org/lib/module-unittest.html.
Chapter 2. Basic Concepts
Table of Contents
General Architecture
Data Recovery
Cedar Backup Pools
The Backup Process
The Collect Action
The Stage Action
The Store Action
The Purge Action
The All Action
The Validate Action
The Initialize Action
The Rebuild Action
Coordination between Master and Clients
Managed Backups
Media and Device Types
Incremental Backups
Extensions
General Architecture
Cedar Backup is architected as a Python package (library) and a single
executable (a Python script). The Python package provides both
application-specific code and general utilities that can be used by programs
other than Cedar Backup. It also includes modules that can be used by third
parties to extend Cedar Backup or provide related functionality.
The cback3 script is designed to run as root, since otherwise it's difficult to
back up system directories or write to the CD/DVD device. However, pains are
taken to use the backup user's effective user id (specified in configuration)
when appropriate. Note: this does not mean that cback3 runs setuid^[8] or
setgid. However, all files on disk will be owned by the backup user, and all
rsh-based network connections will take place as the backup user.
The cback3 script is configured via command-line options and an XML
configuration file on disk. The configuration file is normally stored in /etc/
cback3.conf, but this path can be overridden at runtime. See Chapter 5,
Configuration for more information on how Cedar Backup is configured.
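For example, assuming the --config switch described in Chapter 4, Command Line
Tools, you can point cback3 at a configuration file in a non-standard location:

    cback3 --config /home/backup/cback3.conf validate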
Warning
You should be aware that backups to CD/DVD media can probably be read by any
user who has permission to mount the CD/DVD writer. If you intend to leave
the backup disc in the drive at all times, you may want to consider this when
setting up device permissions on your machine. See also the section called
“Encrypt Extension”.
Data Recovery
Cedar Backup does not include any facility to restore backups. Instead, it
assumes that the administrator (using the procedures and references in
Appendix C, Data Recovery) can handle the task of restoring their own system,
using the standard system tools at hand.
If I were to maintain recovery code in Cedar Backup, I would almost certainly
end up in one of two situations. Either Cedar Backup would only support simple
recovery tasks, and those via an interface a lot like that of the underlying
system tools; or Cedar Backup would have to include a hugely complicated
interface to support more specialized (and hence useful) recovery tasks like
restoring individual files as of a certain point in time. In either case, I
would end up trying to maintain critical functionality that would be rarely
used, and hence would also be rarely tested by end-users. I am uncomfortable
asking anyone to rely on functionality that falls into this category.
My primary goal is to keep the Cedar Backup codebase as simple and focused as
possible. I hope you can understand how the choice of providing documentation,
but not code, seems to strike the best balance between managing code complexity
and providing the functionality that end-users need.
Cedar Backup Pools
There are two kinds of machines in a Cedar Backup pool. One machine (the master
) has a CD or DVD writer on it and writes the backup to disc. The others (
clients) collect data to be written to disc by the master. Collectively, the
master and client machines in a pool are called peer machines.
Cedar Backup has been designed primarily for situations where there is a single
master and a set of other clients that the master interacts with. However, it
will just as easily work for a single machine (a backup pool of one), and in
fact more users seem to use it this way than any other.
The Backup Process
The Cedar Backup backup process is structured in terms of a set of decoupled
actions which execute independently (based on a schedule in cron) rather than
through some highly coordinated flow of control.
This design decision has both positive and negative consequences. On the one
hand, the code is much simpler and can choose to simply abort or log an error
if its expectations are not met. On the other hand, the administrator must
coordinate the various actions during initial set-up. See the section called
“Coordination between Master and Clients” (later in this chapter) for more
information on this subject.
A standard backup run consists of four steps (actions), some of which execute
on the master machine, and some of which execute on one or more client
machines. These actions are: collect, stage, store and purge.
In general, more than one action may be specified on the command-line. If more
than one action is specified, then actions will be taken in a sensible order
(generally collect, stage, store, purge). A special all action is also allowed,
which implies all of the standard actions in the same sensible order.
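For example, the following two invocations are equivalent; in both cases the
actions execute in the sensible order described above:
$ cback3 collect stage store purge
$ cback3 all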
The cback3 command also supports several actions that are not part of the
standard backup run and cannot be executed along with any other actions. These
actions are validate, initialize and rebuild. All of the various actions are
discussed further below.
See Chapter 5, Configuration for more information on how a backup run is
configured.
Flexibility
Cedar Backup was designed to be flexible. It allows you to decide for yourself
which backup steps you care about executing (and when you execute them), based
on your own situation and your own priorities.
As an example, I always back up every machine I own. I typically keep 7-10 days
of staging directories around, but switch CD/DVD media roughly every week. That
way, I can periodically take a disc off-site in case the machine gets stolen or
damaged.
If you're not worried about these risks, then there's no need to write to disc.
In fact, some users prefer to use their master machine as a simple “
consolidation point”. They don't back up any data on the master, and don't
write to disc at all. They just use Cedar Backup to handle the mechanics of
moving backed-up data to a central location. This isn't quite what Cedar Backup
was written to do, but it is flexible enough to meet their needs.
The Collect Action
The collect action is the first action in a standard backup run. It executes on
both master and client nodes. Based on configuration, this action traverses the
peer's filesystem and gathers files to be backed up. Each configured high-level
directory is collected up into its own tar file in the collect directory. The
tarfiles can either be uncompressed (.tar) or compressed with either gzip
(.tar.gz) or bzip2 (.tar.bz2).
There are three supported collect modes: daily, weekly and incremental.
Directories configured for daily backups are backed up every day. Directories
configured for weekly backups are backed up on the first day of the week.
Directories configured for incremental backups are traversed every day, but
only the files which have changed (based on a saved-off SHA hash) are actually
backed up.
Collect configuration also allows for a variety of ways to filter files and
directories out of the backup. For instance, administrators can configure an
ignore indicator file ^[9] or specify absolute paths or filename patterns ^[10]
to be excluded. You can even configure a backup “link farm” rather than
explicitly listing files and directories in configuration.
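For example, assuming the ignore indicator file has been configured as
.cbignore (as in the sample configuration file in Chapter 5, Configuration),
you can exclude a directory from the backup just by dropping the indicator file
into it; the directory shown is hypothetical:
$ touch /var/cache/scratch/.cbignore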
This action is optional on the master. You only need to configure and execute
the collect action on the master if you have data to back up on that machine.
If you plan to use the master only as a “consolidation point” to collect data
from other machines, then there is no need to execute the collect action there.
If you run the collect action on the master, it behaves the same there as
anywhere else, and you have to stage the master's collected data just like any
other client (typically by configuring a local peer in the stage action).
The Stage Action
The stage action is the second action in a standard backup run. It executes on
the master peer node. The master works down the list of peers in its backup
pool and stages (copies) the collected backup files from each of them into a
daily staging directory by peer name.
For the purposes of this action, the master node can be configured to treat
itself as a client node. If you intend to back up data on the master, configure
the master as a local peer. Otherwise, just configure each of the clients as a
remote peer.
Local and remote client peers are treated differently. Local peer collect
directories are assumed to be accessible via normal copy commands (i.e. on a
mounted filesystem) while remote peer collect directories are accessed via an
RSH-compatible command such as ssh.
If a given peer is not ready to be staged, the stage process will log an error,
abort the backup for that peer, and then move on to its other peers. This way,
one broken peer cannot break a backup for other peers which are up and running.
Keep in mind that Cedar Backup is flexible about what actions must be executed
as part of a backup. If you would prefer, you can stop the backup process at
this step, and skip the store step. In this case, the staged directories will
represent your backup rather than a disc.
Note
Directories “collected” by another process can be staged by Cedar Backup. If
the file cback.collect exists in a collect directory when the stage action is
taken, then that directory will be staged.
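For example, a custom script might build its own archive in the configured
collect directory and then mark that directory as ready for staging. The paths
below are illustrative, using the collect directory from the sample
configuration in Chapter 5, Configuration:
$ tar czf /opt/backup/collect/mydb.tar.gz /var/lib/mydb
$ touch /opt/backup/collect/cback.collect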
The Store Action
The store action is the third action in a standard backup run. It executes on
the master peer node. The master machine determines the location of the current
staging directory, and then writes the contents of that staging directory to
disc. After the contents of the directory have been written to disc, an
optional validation step ensures that the write was successful.
If the backup is running on the first day of the week, if the drive does not
support multisession discs, or if the --full option is passed to the cback3
command, the disc will be rebuilt from scratch. Otherwise, a new ISO session
will be added to the disc each day the backup runs.
This action is entirely optional. If you would prefer to just stage backup data
from a set of peers to a master machine, and have the staged directories
represent your backup rather than a disc, this is fine.
Warning
The store action is not supported on the Mac OS X (darwin) platform. On that
platform, the “automount” function of the Finder interferes significantly with
Cedar Backup's ability to mount and unmount media and write to the CD or DVD
hardware. The Cedar Backup writer and image functionality works on this
platform, but the effort required to fight the operating system about who owns
the media and the device makes it nearly impossible to execute the store action
successfully.
Current Staging Directory
The store action tries to be smart about finding the current staging directory.
It first checks the current day's staging directory. If that directory exists,
and it has not yet been written to disc (i.e. there is no store indicator),
then it will be used. Otherwise, the store action will look for an unused
staging directory for either the previous day or the next day, in that order. A
warning will be written to the log under these circumstances (controlled by the
<warn_midnite> configuration value).
This behavior varies slightly when the --full option is in effect. Under these
circumstances, any existing store indicator will be ignored. Also, the store
action will always attempt to use the current day's staging directory, ignoring
any staging directories for the previous day or the next day. This way, running
a full store action more than once in succession will always produce the same
results. (You might imagine a use case where a person wants to make several
copies of the same full backup.)
The Purge Action
The purge action is the fourth and final action in a standard backup run. It
executes both on the master and client peer nodes. Configuration specifies how
long to retain files in certain directories, and older files and empty
directories are purged.
Typically, collect directories are purged daily, and stage directories are
purged weekly or slightly less often (if a disc gets corrupted, older backups
may still be available on the master). Some users also choose to purge the
configured working directory (which is used for temporary files) to eliminate
any leftover files which might have resulted from changes to configuration.
The All Action
The all action is a pseudo-action which causes all of the actions in a standard
backup run to be executed together in order. It cannot be combined with any
other actions on the command line.
Extensions cannot be executed as part of the all action. If you need to execute
an extended action, you must specify the other actions you want to run
individually on the command line. ^[11]
The all action does not have its own configuration. Instead, it relies on the
individual configuration sections for all of the other actions.
The Validate Action
The validate action is used to validate configuration on a particular peer
node, either master or client. It cannot be combined with any other actions on
the command line.
The validate action checks that the configuration file can be found, that the
configuration file is valid, and that certain portions of the configuration
file make sense (for instance, making sure that specified users exist,
directories are readable and writable as necessary, etc.).
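For example, after editing configuration on a peer, you might run the
following; the command will report any problems it finds with the configuration
file:
$ cback3 validate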
The Initialize Action
The initialize action is used to initialize media for use with Cedar Backup.
This is an optional step. By default, Cedar Backup does not need to use
initialized media and will write to whatever media exists in the writer device.
However, if the “check media” store configuration option is set to true, Cedar
Backup will check the media before writing to it and will error out if the
media has not been initialized.
Initializing the media consists of writing a mostly-empty image using a known
media label (the media label will begin with “CEDAR BACKUP”).
Note that only rewritable media (CD-RW, DVD+RW) can be initialized. It doesn't
make any sense to initialize media that cannot be rewritten (CD-R, DVD+R),
since Cedar Backup would then not be able to use that media for a backup. You
can still configure Cedar Backup to check non-rewritable media; in this case,
the check will also pass if the media is apparently unused (i.e. has no media
label).
The Rebuild Action
The rebuild action is an exception-handling action that is executed
independently of a standard backup run. It cannot be combined with any other
actions on the
command line.
The rebuild action attempts to rebuild “this week's” disc from any remaining
unpurged staging directories. Typically, it is used to make a copy of a backup,
replace lost or damaged media, or to switch to new media mid-week for some
other reason.
To decide what data to write to disc again, the rebuild action looks back and
finds the first day of the current week. Then, it finds any remaining staging
directories between that date and the current date. If any staging directories
are found, they are all written to disc in one big ISO session.
The rebuild action does not have its own configuration. It relies on
configuration for other actions, especially the store action.
Coordination between Master and Clients
Unless you are using Cedar Backup to manage a “pool of one”, you will need to
set up some coordination between your clients and master to make everything
work properly. This coordination isn't difficult — it mostly consists of making
sure that operations happen in the right order — but some users are surprised
that it is required and want to know why Cedar Backup can't just “take care of
it for me”.
Essentially, each client must finish collecting all of its data before the
master begins staging it, and the master must finish staging data from a client
before that client purges its collected data. Administrators may need to
experiment with the time between the collect and purge cron entries so that the
master has enough time to stage data before it is purged.
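As a sketch of what this coordination might look like, here are hypothetical
cron entries in the /etc/cron.d style (with a user field before the command).
The times are examples only and must be adjusted so that each step has enough
time to finish before the next one begins:
# On each client: collect after midnight, purge only after staging is done
30 0 * * * root cback3 collect
30 5 * * * root cback3 purge
# On the master: stage after the clients have collected, then store and purge
0 2 * * * root cback3 stage
0 4 * * * root cback3 store
0 6 * * * root cback3 purge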
Managed Backups
Cedar Backup also supports an optional feature called the “managed backup”.
This feature is intended for use with remote clients where cron is not
available.
When managed backups are enabled, managed clients must still be configured as
usual. However, rather than using a cron job on the client to execute the
collect and purge actions, the master executes these actions on the client via
a remote shell.
To make this happen, first set up one or more managed clients in Cedar Backup
configuration. Then, invoke Cedar Backup with the --managed command-line
option. Whenever Cedar Backup invokes an action locally, it will invoke the
same action on each of the managed clients.
Technically, this feature works for any client, not just clients that don't
have cron available. Used this way, it can simplify the setup process, because
cron only has to be configured on the master. For some users, that may be
motivation enough to use this feature all of the time.
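For instance, with managed clients configured, a single hypothetical cron entry
on the master could drive collection for every peer; the master executes the
collect action locally and then invokes the same action on each managed client
over the remote shell:
30 0 * * * root cback3 --managed collect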
However, please keep in mind that this feature depends on a stable network. If
your network connection drops, your backup will be interrupted and will not be
complete. It is even possible that some of the Cedar Backup metadata (like
incremental backup state) will be corrupted. The risk is not high, but it is
something you need to be aware of if you choose to use this optional feature.
Media and Device Types
Cedar Backup is focused around writing backups to CD or DVD media using a
standard SCSI or IDE writer. In Cedar Backup terms, the disc itself is referred
to as the media, and the CD/DVD drive is referred to as the device or sometimes
the backup device. ^[12]
When using a new enough backup device, a new “multisession” ISO image ^[13] is
written to the media on the first day of the week, and then additional
multisession images are added to the media each day that Cedar Backup runs.
This way, the media is complete and usable at the end of every backup run, but
a single disc can be used all week long. If your backup device does not support
multisession images — which is really unusual today — then a new ISO image will
be written to the media each time Cedar Backup runs (and you should probably
confine yourself to the “daily” backup mode to avoid losing data).
Cedar Backup currently supports four different kinds of CD media:
cdr-74
74-minute non-rewritable CD media
cdrw-74
74-minute rewritable CD media
cdr-80
80-minute non-rewritable CD media
cdrw-80
80-minute rewritable CD media
I have chosen to support just these four types of CD media because they seem to
be the most “standard” of the various types commonly sold in the U.S. as of
this writing (early 2005). If you regularly use an unsupported media type and
would like Cedar Backup to support it, send me information about the capacity
of the media in megabytes (MB) and whether it is rewritable.
Cedar Backup also supports two kinds of DVD media:
dvd+r
Single-layer non-rewritable DVD+R media
dvd+rw
Single-layer rewritable DVD+RW media
The underlying growisofs utility does support other kinds of media (including
DVD-R, DVD-RW and Blu-ray) which work somewhat differently than standard DVD+R
and DVD+RW media. I don't support these other kinds of media because I haven't
had any opportunity to work with them. The same goes for dual-layer media of
any type.
Incremental Backups
Cedar Backup supports three different kinds of backups for individual collect
directories. These are daily, weekly and incremental backups. Directories using
the daily mode are backed up every day. Directories using the weekly mode are
only backed up on the first day of the week, or when the --full option is used.
Directories using the incremental mode are always backed up on the first day of
the week (like a weekly backup), but after that only the files which have
changed are actually backed up on a daily basis.
In Cedar Backup, incremental backups are not based on date, but are instead
based on saved checksums, one for each backed-up file. When a full backup is
run, Cedar Backup gathers a checksum value ^[14] for each backed-up file. The
next time an incremental backup is run, Cedar Backup checks its list of file/
checksum pairs for each file that might be backed up. If the file's checksum
value does not match the saved value, or if the file does not appear in the
list of file/checksum pairs, then it will be backed up and a new checksum value
will be placed into the list. Otherwise, the file will be ignored and the
checksum value will be left unchanged.
Cedar Backup stores the file/checksum pairs in .sha files in its working
directory, one file per configured collect directory. The mappings in these
files are reset at the start of the week or when the --full option is used.
Because these files are used for an entire week, you should never purge the
working directory more frequently than once per week.
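If you ever suspect that the saved incremental state is stale or corrupted (for
example, after an interrupted backup), you can reset it explicitly by forcing a
full backup, since the --full option causes the existing file/checksum list to
be ignored and rewritten:
$ cback3 --full collect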
Extensions
Imagine that there is a third party developer who understands how to back up a
certain kind of database repository. This third party might want to integrate
his or her specialized backup into the Cedar Backup process, perhaps thinking
of the database backup as a sort of “collect” step.
Prior to Cedar Backup version 2, any such integration would have been
completely independent of Cedar Backup itself. The “external” backup
functionality would have had to maintain its own configuration and would not
have had access to any Cedar Backup configuration.
Starting with version 2, Cedar Backup allows extensions to the backup process.
An extension is an action that isn't part of the standard backup process (i.e.
not collect, stage, store or purge), but can be executed by Cedar Backup when
properly configured.
Extension authors implement an “action process” function with a certain
interface, and are allowed to add their own sections to the Cedar Backup
configuration file, so that all backup configuration can be centralized. Then,
the action process function is associated with an action name which can be
executed from the cback3 command line like any other action.
Hopefully, as the Cedar Backup user community grows, users will contribute
their own extensions back to the community. Well-written general-purpose
extensions will be accepted into the official codebase.
Note
Users should see Chapter 5, Configuration for more information on how
extensions are configured, and Chapter 6, Official Extensions for details on
all of the officially-supported extensions.
Developers may be interested in Appendix A, Extension Architecture Interface.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
^[8] See http://en.wikipedia.org/wiki/Setuid
^[9] Analogous to .cvsignore in CVS
^[10] In terms of Python regular expressions
^[11] Some users find this surprising, because extensions are configured with
sequence numbers. I did it this way because I felt that running extensions as
part of the all action would sometimes result in surprising behavior. I am not
planning to change the way this works.
^[12] My original backup device was an old Sony CRX140E 4X CD-RW drive. It has
since died, and I currently develop using a Lite-On 1673S DVD±RW drive.
^[13] An ISO image is the standard way of creating a filesystem to be copied to
a CD or DVD. It is essentially a “filesystem-within-a-file” and many UNIX
operating systems can actually mount ISO image files just like hard drives,
floppy disks or actual CDs. See Wikipedia for more information: http://
en.wikipedia.org/wiki/ISO_image.
^[14] The checksum is actually an SHA cryptographic hash. See Wikipedia for
more information: http://en.wikipedia.org/wiki/SHA-1.
Chapter 3. Installation
Table of Contents
Background
Installing on a Debian System
Installing from Source
Installing Dependencies
Installing the Source Package
Background
There are two different ways to install Cedar Backup. The easiest way is to
install the pre-built Debian packages. This method is painless and ensures that
all of the correct dependencies are available, etc.
If you are running a Linux distribution other than Debian or you are running
some other platform like FreeBSD or Mac OS X, then you must use the Python
source distribution to install Cedar Backup. When using this method, you need
to manage all of the dependencies yourself.
Non-Linux Platforms
Cedar Backup has been developed on a Debian GNU/Linux system and is primarily
supported on Debian and other Linux systems. However, since it is written in
portable Python 3, it should run without problems on just about any UNIX-like
operating system. In particular, full Cedar Backup functionality is known to
work on Debian and SuSE Linux systems, and client functionality is also known
to work on FreeBSD and Mac OS X systems.
To run a Cedar Backup client, you really just need a working Python 3
installation. To run a Cedar Backup master, you will also need a set of other
executables, most of which are related to building and writing CD/DVD images. A
full list of dependencies is provided further on in this chapter.
Installing on a Debian System
The easiest way to install Cedar Backup onto a Debian system is by using a tool
such as apt-get or aptitude.
If you are running a Debian release which contains Cedar Backup, you can use
your normal Debian mirror as an APT data source. (The Debian “jessie” release
is the first release to contain Cedar Backup 3.) Otherwise, you need to install
from the Cedar Solutions APT data source. ^[15] To do this, add the Cedar
Solutions APT data source to your /etc/apt/sources.list file.
After you have configured the proper APT data source, install Cedar Backup
using this set of commands:
$ apt-get update
$ apt-get install cedar-backup3 cedar-backup3-doc
Several of the Cedar Backup dependencies are listed as “recommended” rather
than required. If you are installing Cedar Backup on a master machine, you must
install some or all of the recommended dependencies, depending on which actions
you intend to execute. The stage action normally requires ssh, and the store
action requires eject and either cdrecord/mkisofs or dvd+rw-tools. Clients must
also install some sort of ssh server if a remote master will collect backups
from them.
If you would prefer, you can also download the .deb files and install them by
hand with a tool such as dpkg. You can find these files in the Cedar Solutions
APT source.
In either case, once the package has been installed, you can proceed to
configuration as described in Chapter 5, Configuration.
Note
The Debian package-management tools must generally be run as root. It is safe
to install Cedar Backup to a non-standard location and run it as a non-root
user. However, to do this, you must install the source distribution instead of
the Debian package.
Installing from Source
On platforms other than Debian, Cedar Backup is installed from a Python source
distribution. ^[16] You will have to manage dependencies on your own.
Tip
Many UNIX-like distributions provide an automatic or semi-automatic way to
install packages like the ones Cedar Backup requires (think RPMs for Mandrake
or RedHat, Gentoo's Portage system, the Fink project for Mac OS X, or the BSD
ports system). If you are not sure how to install these packages on your
system, you might want to check out Appendix B, Dependencies. This appendix
provides links to “upstream” source packages, plus as much information as I
have been able to gather about packages for non-Debian platforms.
Installing Dependencies
Cedar Backup requires a number of external packages in order to function
properly. Before installing Cedar Backup, you must make sure that these
dependencies are met.
Cedar Backup is written in Python 3 and requires version 3.4 or greater of the
language.
Additionally, remote client peer nodes must be running an RSH-compatible
server, such as the ssh server, and master nodes must have an RSH-compatible
client installed if they need to connect to remote peer machines.
Master machines also require several other system utilities, most having to do
with writing and validating CD/DVD media. On master machines, you must make
sure that these utilities are available if you want to run the store action:
• mkisofs
• eject
• mount
• umount
• volname
Then, you need this utility if you are writing CD media:
• cdrecord
or this utility if you are writing DVD media:
• growisofs
All of these utilities are common and are easy to find for almost any UNIX-like
operating system.
Installing the Source Package
Python source packages are fairly easy to install. They are distributed as
.tar.gz files which contain Python source code, a manifest and an installation
script called setup.py.
Once you have downloaded the source package from the Cedar Solutions website, ^
[15] untar it:
$ zcat CedarBackup3-3.0.0.tar.gz | tar xvf -
This will create a directory called (in this case) CedarBackup3-3.0.0. The
version number in the directory will always match the version number in the
filename.
If you have root access and want to install the package to the “standard”
Python location on your system, then you can install the package in two simple
steps:
$ cd CedarBackup3-3.0.0
$ python3 setup.py install
Make sure that you are using Python 3.4 or better to execute setup.py.
You may also wish to run the unit tests before actually installing anything.
Run them like so:
$ python3 util/test.py
If any unit test reports a failure on your system, please email me the output
from the unit test, so I can fix the problem. ^[17] This is particularly
important for non-Linux platforms where I do not have a test system available
to me.
Some users might want to choose a different install location or change other
install parameters. To get more information about how setup.py works, use the
--help option:
$ python3 setup.py --help
$ python3 setup.py install --help
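For example, users without root access sometimes install into their home
directory instead; --prefix is a standard distutils option, and the path shown
is only illustrative:
$ python3 setup.py install --prefix=/home/myuser/opt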
In any case, once the package has been installed, you can proceed to
configuration as described in Chapter 5, Configuration.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
^[15] See http://cedar-solutions.com/debian.html
^[16] See http://docs.python.org/lib/module-distutils.html .
^[17] <support@cedar-solutions.com>
Chapter 4. Command Line Tools
Table of Contents
Overview
The cback3 command
Introduction
Syntax
Switches
Actions
The cback3-amazons3-sync command
Introduction
Syntax
Switches
The cback3-span command
Introduction
Syntax
Switches
Using cback3-span
Sample run
Overview
Cedar Backup comes with three command-line programs: cback3,
cback3-amazons3-sync, and cback3-span.
The cback3 command is the primary command line interface and the only Cedar
Backup program that most users will ever need.
The cback3-amazons3-sync tool is used for synchronizing entire directories of
files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar
Backup process.
Users who have a lot of data to back up — more than will fit on a single CD or
DVD — can use the interactive cback3-span tool to split their data between
multiple discs.
The cback3 command
Introduction
Cedar Backup's primary command-line interface is the cback3 command. It
controls the entire backup process.
Syntax
The cback3 command has the following syntax:
Usage: cback3 [switches] action(s)
The following switches are accepted:
-h, --help Display this usage/help listing
-V, --version Display version information
-b, --verbose Print verbose output as well as logging to disk
-q, --quiet Run quietly (display no output to the screen)
-c, --config Path to config file (default: /etc/cback3.conf)
-f, --full Perform a full backup, regardless of configuration
-M, --managed Include managed clients when executing actions
-N, --managed-only Include ONLY managed clients when executing actions
-l, --logfile Path to logfile (default: /var/log/cback3.log)
-o, --owner Logfile ownership, user:group (default: root:adm)
-m, --mode Octal logfile permissions mode (default: 640)
-O, --output Record some sub-command (i.e. cdrecord) output to the log
-d, --debug Write debugging information to the log (implies --output)
-s, --stack Dump a Python stack trace instead of swallowing exceptions
-D, --diagnostics Print runtime diagnostics to the screen and exit
The following actions may be specified:
all Take all normal actions (collect, stage, store, purge)
collect Take the collect action
stage Take the stage action
store Take the store action
purge Take the purge action
rebuild Rebuild "this week's" disc if possible
validate Validate configuration only
initialize Initialize media for use with Cedar Backup
You may also specify extended actions that have been defined in
configuration.
You must specify at least one action to take. More than one of
the "collect", "stage", "store" or "purge" actions and/or
extended actions may be specified in any arbitrary order; they
will be executed in a sensible order. The "all", "rebuild",
"validate", and "initialize" actions may not be combined with
other actions.
Note that the all action only executes the standard four actions. It never
executes any of the configured extensions. ^[18]
Switches
-h, --help
Display usage/help listing.
-V, --version
Display version information.
-b, --verbose
Print verbose output to the screen as well as writing to the logfile. When
this option is enabled, most information that would normally be written to
the logfile will also be written to the screen.
-q, --quiet
Run quietly (display no output to the screen).
-c, --config
Specify the path to an alternate configuration file. The default
configuration file is /etc/cback3.conf.
-f, --full
Perform a full backup, regardless of configuration. For the collect action,
this means that any existing information related to incremental backups
will be ignored and rewritten; for the store action, this means that a new
disc will be started.
-M, --managed
Include managed clients when executing actions. If the action being
executed is listed as a managed action for a managed client, execute the
action on that client after executing the action locally.
-N, --managed-only
Include only managed clients when executing actions. If the action being
executed is listed as a managed action for a managed client, execute the
action on that client — but do not execute the action locally.
-l, --logfile
Specify the path to an alternate logfile. The default logfile is
/var/log/cback3.log.
-o, --owner
Specify the ownership of the logfile, in the form user:group. The default
ownership is root:adm, to match the Debian standard for most logfiles. This
value will only be used when creating a new logfile. If the logfile already
exists when the cback3 command is executed, it will retain its existing
ownership and mode. Only user and group names may be used, not numeric uid
and gid values.
-m, --mode
Specify the permissions for the logfile, using the numeric mode as in chmod
(1). The default mode is 0640 (-rw-r-----). This value will only be used
when creating a new logfile. If the logfile already exists when the cback3
command is executed, it will retain its existing ownership and mode.
-O, --output
Record some sub-command output to the logfile. When this option is enabled,
all output from system commands will be logged. This might be useful for
debugging or just for reference.
-d, --debug
Write debugging information to the logfile. This option produces a high
volume of output, and would generally only be needed when debugging a
problem. This option implies the --output option, as well.
-s, --stack
Dump a Python stack trace instead of swallowing exceptions. This forces
Cedar Backup to dump the entire Python stack trace associated with an
error, rather than just propagating the last message it received back up to the
user interface. Under some circumstances, this is useful information to
include along with a bug report.
-D, --diagnostics
Display runtime diagnostic information and then exit. This diagnostic
information is often useful when filing a bug report.
Actions
You can find more information about the various actions in the section called
“The Backup Process” (in Chapter 2, Basic Concepts). In general, you may
specify any combination of the collect, stage, store or purge actions, and the
specified actions will be executed in a sensible order. Or, you can specify one
of the all, rebuild, validate, or initialize actions (but these actions may not
be combined with other actions).
If you have configured any Cedar Backup extensions, then the actions associated
with those extensions may also be specified on the command line. If you specify
any other actions along with an extended action, the actions will be executed
in a sensible order per configuration. The all action never executes extended
actions, however.
The cback3-amazons3-sync command
Introduction
The cback3-amazons3-sync tool is used for synchronizing entire directories of
files up to an Amazon S3 cloud storage bucket, outside of the normal Cedar
Backup process.
This might be a good option for some types of data, as long as you understand
the limitations around retrieving previous versions of objects that get
modified or deleted as part of a sync. S3 does support versioning, but it won't
be quite as easy to get at those previous versions as with an explicit
incremental backup like cback3 provides. Cedar Backup does not provide any
tooling that would help you retrieve previous versions.
The underlying functionality relies on the AWS CLI toolset. Before you use this
extension, you need to set up your Amazon S3 account and configure AWS CLI as
detailed in Amazon's setup guide. The aws command will be executed as the same
user that is executing the cback3-amazons3-sync command, so make sure you
configure it as the proper user. (This is different from the amazons3
extension, which is designed to execute as root and switches over to the
configured backup user to execute AWS CLI commands.)
Syntax
The cback3-amazons3-sync command has the following syntax:
Usage: cback3-amazons3-sync [switches] sourceDir s3bucketUrl
Cedar Backup Amazon S3 sync tool.
This Cedar Backup utility synchronizes a local directory to an Amazon S3
bucket. After the sync is complete, a validation step is taken. An
error is reported if the contents of the bucket do not match the
source directory, or if the indicated size for any file differs.
This tool is a wrapper over the AWS CLI command-line tool.
The following arguments are required:
sourceDir The local source directory on disk (must exist)
s3BucketUrl The URL to the target Amazon S3 bucket
The following switches are accepted:
-h, --help Display this usage/help listing
-V, --version Display version information
-b, --verbose Print verbose output as well as logging to disk
-q, --quiet Run quietly (display no output to the screen)
-l, --logfile Path to logfile (default: /var/log/cback3.log)
-o, --owner Logfile ownership, user:group (default: root:adm)
-m, --mode Octal logfile permissions mode (default: 640)
-O, --output Record some sub-command (i.e. aws) output to the log
-d, --debug Write debugging information to the log (implies --output)
-s, --stack Dump Python stack trace instead of swallowing exceptions
-D, --diagnostics Print runtime diagnostics to the screen and exit
-v, --verifyOnly Only verify the S3 bucket contents, do not make changes
-w, --ignoreWarnings Ignore warnings about problematic filename encodings
Typical usage would be something like:
cback3-amazons3-sync /home/myuser s3://example.com-backup/myuser
This will sync the contents of /home/myuser into the indicated bucket.
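If you just want to check whether the bucket still matches the local directory,
without transferring any files, add the --verifyOnly switch described below:
$ cback3-amazons3-sync --verifyOnly /home/myuser s3://example.com-backup/myuser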
Switches
-h, --help
Display usage/help listing.
-V, --version
Display version information.
-b, --verbose
Print verbose output to the screen as well as writing to the logfile. When
this option is enabled, most information that would normally be written to
the logfile will also be written to the screen.
-q, --quiet
Run quietly (display no output to the screen).
-l, --logfile
Specify the path to an alternate logfile. The default logfile is
/var/log/cback3.log.
-o, --owner
Specify the ownership of the logfile, in the form user:group. The default
ownership is root:adm, to match the Debian standard for most logfiles. This
value will only be used when creating a new logfile. If the logfile already
exists when the cback3-amazons3-sync command is executed, it will retain
its existing ownership and mode. Only user and group names may be used, not
numeric uid and gid values.
-m, --mode
Specify the permissions for the logfile, using the numeric mode as in chmod
(1). The default mode is 0640 (-rw-r-----). This value will only be used
when creating a new logfile. If the logfile already exists when the
cback3-amazons3-sync command is executed, it will retain its existing
ownership and mode.
-O, --output
Record some sub-command output to the logfile. When this option is enabled,
all output from system commands will be logged. This might be useful for
debugging or just for reference.
-d, --debug
Write debugging information to the logfile. This option produces a high
volume of output, and would generally only be needed when debugging a
problem. This option implies the --output option, as well.
-s, --stack
Dump a Python stack trace instead of swallowing exceptions. This forces
Cedar Backup to dump the entire Python stack trace associated with an
error, rather than just propagating the last message it received back up to the
user interface. Under some circumstances, this is useful information to
include along with a bug report.
-D, --diagnostics
Display runtime diagnostic information and then exit. This diagnostic
information is often useful when filing a bug report.
-v, --verifyOnly
Only verify the S3 bucket contents against the directory on disk. Do not
make any changes to the S3 bucket or transfer any files. This is intended
as a quick check to see whether the sync is up-to-date.
Although no files are transferred, the tool will still execute the source
filename encoding check, discussed below along with --ignoreWarnings.
-w, --ignoreWarnings
The AWS CLI S3 sync process is very picky about filename encoding. Files
that the Linux filesystem handles with no problems can cause problems in S3
if the filename cannot be encoded properly in your configured locale. As of
this writing, filenames like this will cause the sync process to abort
without transferring all files as expected.
To avoid confusion, the cback3-amazons3-sync tool tries to guess which files in
the source directory will cause problems, and refuses to execute the AWS
CLI S3 sync if any problematic files exist. If you'd rather proceed anyway,
use --ignoreWarnings.
If problematic files are found, then you have basically two options: either
correct your locale (i.e. if you have set LANG=C) or rename the file so it
can be encoded properly in your locale. The error messages will tell you
the expected encoding (from your locale) and the actual detected encoding
for the filename.
The cback3-span command
Introduction
Cedar Backup was designed — and is still primarily focused — around weekly
backups to a single CD or DVD. Most users who back up more data than fits on a
single disc seem to stop their backup process at the stage step, using Cedar
Backup as an easy way to collect data.
However, some users have expressed a need to write these large kinds of backups
to disc — if not every day, then at least occasionally. The cback3-span tool
was written to meet those needs. If you have staged more data than fits on a
single CD or DVD, you can use cback3-span to split that data between multiple
discs.
cback3-span is not a general-purpose disc-splitting tool. It is a specialized
program that requires Cedar Backup configuration to run. All it can do is read
Cedar Backup configuration, find any staging directories that have not yet been
written to disc, and split the files in those directories between discs.
cback3-span accepts many of the same command-line options as cback3, but must
be run interactively. It cannot be run from cron. This is intentional. It is
intended to be a useful tool, not a new part of the backup process (that is the
purpose of an extension).
In order to use cback3-span, you must configure your backup such that the
largest individual backup file can fit on a single disc. The command will not
split a single file onto more than one disc. All it can do is split large
directories onto multiple discs, distributing the files in those directories
among the discs so that space is utilized most efficiently.
Syntax
The cback3-span command has the following syntax:
Usage: cback3-span [switches]
Cedar Backup 'span' tool.
This Cedar Backup utility spans staged data between multiple discs.
It is a utility, not an extension, and requires user interaction.
The following switches are accepted, mostly to set up underlying
Cedar Backup functionality:
-h, --help Display this usage/help listing
-V, --version Display version information
-b, --verbose Print verbose output as well as logging to disk
-c, --config Path to config file (default: /etc/cback3.conf)
-l, --logfile Path to logfile (default: /var/log/cback3.log)
-o, --owner Logfile ownership, user:group (default: root:adm)
-m, --mode Octal logfile permissions mode (default: 640)
-O, --output Record some sub-command (i.e. cdrecord) output to the log
-d, --debug Write debugging information to the log (implies --output)
-s, --stack Dump a Python stack trace instead of swallowing exceptions
Switches
-h, --help
Display usage/help listing.
-V, --version
Display version information.
-b, --verbose
Print verbose output to the screen as well as writing to the logfile. When
this option is enabled, most information that would normally be written to
the logfile will also be written to the screen.
-c, --config
Specify the path to an alternate configuration file. The default
configuration file is /etc/cback3.conf.
-l, --logfile
Specify the path to an alternate logfile. The default logfile is
/var/log/cback3.log.
-o, --owner
Specify the ownership of the logfile, in the form user:group. The default
ownership is root:adm, to match the Debian standard for most logfiles. This
value will only be used when creating a new logfile. If the logfile already
exists when the cback3 command is executed, it will retain its existing
ownership and mode. Only user and group names may be used, not numeric uid
and gid values.
-m, --mode
Specify the permissions for the logfile, using the numeric mode as in chmod
(1). The default mode is 0640 (-rw-r-----). This value will only be used
when creating a new logfile. If the logfile already exists when the cback3
command is executed, it will retain its existing ownership and mode.
-O, --output
Record some sub-command output to the logfile. When this option is enabled,
all output from system commands will be logged. This might be useful for
debugging or just for reference. Cedar Backup uses system commands mostly
for dealing with the CD/DVD recorder and its media.
-d, --debug
Write debugging information to the logfile. This option produces a high
volume of output, and would generally only be needed when debugging a
problem. This option implies the --output option, as well.
-s, --stack
Dump a Python stack trace instead of swallowing exceptions. This forces
Cedar Backup to dump the entire Python stack trace associated with an
error, rather than just propagating the last message it received back up to the
user interface. Under some circumstances, this is useful information to
include along with a bug report.
Using cback3-span
As discussed above, cback3-span is an interactive command. It cannot be run
from cron.
You can typically use the default answer for most questions. The only two
questions that you may not want the default answer for are the fit algorithm
and the cushion percentage.
The cushion percentage is used by cback3-span to determine what capacity to
shoot for when splitting up your staging directories. A 650 MB disc does not
actually hold a full 650 MB of data; it's usually more like 627 MB. The cushion
percentage tells cback3-span how much overhead to reserve for the filesystem.
The default of 4% is usually OK, but if you have problems you may need to
increase it slightly.
The fit algorithm tells cback3-span how it should determine which items should
be placed on each disc. If you don't like the result from one algorithm, you
can reject that solution and choose a different algorithm.
The four available fit algorithms are:
worst
The worst-fit algorithm.
The worst-fit algorithm proceeds through a sorted list of items (sorted
from smallest to largest) until running out of items or meeting capacity
exactly. If capacity is exceeded, the item that caused capacity to be
exceeded is thrown away and the next one is tried. The algorithm
effectively includes the maximum number of items possible in its search for
optimal capacity utilization. It tends to be somewhat slower than either
the best-fit or alternate-fit algorithm, probably because on average it has
to look at more items before completing.
best
The best-fit algorithm.
The best-fit algorithm proceeds through a sorted list of items (sorted from
largest to smallest) until running out of items or meeting capacity
exactly. If capacity is exceeded, the item that caused capacity to be
exceeded is thrown away and the next one is tried. The algorithm
effectively includes the minimum number of items possible in its search for
optimal capacity utilization. For large lists of mixed-size items, it's not
unusual to see the algorithm achieve 100% capacity utilization by including
fewer than 1% of the items. Probably because it often has to look at fewer
of the items before completing, it tends to be a little faster than the
worst-fit or alternate-fit algorithms.
first
The first-fit algorithm.
The first-fit algorithm proceeds through an unsorted list of items until
running out of items or meeting capacity exactly. If capacity is exceeded,
the item that caused capacity to be exceeded is thrown away and the next
one is tried. This algorithm generally performs more poorly than the other
algorithms both in terms of capacity utilization and item utilization, but
can be as much as an order of magnitude faster on large lists of items
because it doesn't require any sorting.
alternate
A hybrid algorithm that I call alternate-fit.
This algorithm tries to balance small and large items to achieve better
end-of-disk performance. Instead of just working one direction through a
list, it alternately works from the start and end of a sorted list (sorted
from smallest to largest), throwing away any item which causes capacity to
be exceeded. The algorithm tends to be slower than the best-fit and
first-fit algorithms, and slightly faster than the worst-fit algorithm,
probably because of the number of items it considers on average before
completing. It often achieves slightly better capacity utilization than the
worst-fit algorithm, while including slightly fewer items.
Sample run
Below is a log showing a sample cback3-span run.
================================================
Cedar Backup 'span' tool
================================================
This the Cedar Backup span tool. It is used to split up staging
data when that staging data does not fit onto a single disc.
This utility operates using Cedar Backup configuration. Configuration
specifies which staging directory to look at and which writer device
and media type to use.
Continue? [Y/n]:
===
Cedar Backup store configuration looks like this:
Source Directory...: /tmp/staging
Media Type.........: cdrw-74
Device Type........: cdwriter
Device Path........: /dev/cdrom
Device SCSI ID.....: None
Drive Speed........: None
Check Data Flag....: True
No Eject Flag......: False
Is this OK? [Y/n]:
===
Please wait, indexing the source directory (this may take a while)...
===
The following daily staging directories have not yet been written to disc:
/tmp/staging/2007/02/07
/tmp/staging/2007/02/08
/tmp/staging/2007/02/09
/tmp/staging/2007/02/10
/tmp/staging/2007/02/11
/tmp/staging/2007/02/12
/tmp/staging/2007/02/13
/tmp/staging/2007/02/14
The total size of the data in these directories is 1.00 GB.
Continue? [Y/n]:
===
Based on configuration, the capacity of your media is 650.00 MB.
Since estimates are not perfect and there is some uncertainly in
media capacity calculations, it is good to have a "cushion",
a percentage of capacity to set aside. The cushion reduces the
capacity of your media, so a 1.5% cushion leaves 98.5% remaining.
What cushion percentage? [4.00]:
===
The real capacity, taking into account the 4.00% cushion, is 627.25 MB.
It will take at least 2 disc(s) to store your 1.00 GB of data.
Continue? [Y/n]:
===
Which algorithm do you want to use to span your data across
multiple discs?
The following algorithms are available:
first....: The "first-fit" algorithm
best.....: The "best-fit" algorithm
worst....: The "worst-fit" algorithm
alternate: The "alternate-fit" algorithm
If you don't like the results you will have a chance to try a
different one later.
Which algorithm? [worst]:
===
Please wait, generating file lists (this may take a while)...
===
Using the "worst-fit" algorithm, Cedar Backup can split your data
into 2 discs.
Disc 1: 246 files, 615.97 MB, 98.20% utilization
Disc 2: 8 files, 412.96 MB, 65.84% utilization
Accept this solution? [Y/n]: n
===
Which algorithm do you want to use to span your data across
multiple discs?
The following algorithms are available:
first....: The "first-fit" algorithm
best.....: The "best-fit" algorithm
worst....: The "worst-fit" algorithm
alternate: The "alternate-fit" algorithm
If you don't like the results you will have a chance to try a
different one later.
Which algorithm? [worst]: alternate
===
Please wait, generating file lists (this may take a while)...
===
Using the "alternate-fit" algorithm, Cedar Backup can split your data
into 2 discs.
Disc 1: 73 files, 627.25 MB, 100.00% utilization
Disc 2: 181 files, 401.68 MB, 64.04% utilization
Accept this solution? [Y/n]: y
===
Please place the first disc in your backup device.
Press return when ready.
===
Initializing image...
Writing image to disc...
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
^[18] Some users find this surprising, because extensions are configured with
sequence numbers. I did it this way because I felt that running extensions as
part of the all action would sometimes result in “surprising” behavior. Better
to be definitive than confusing.
Chapter 5. Configuration
Table of Contents
Overview
Configuration File Format
Sample Configuration File
Reference Configuration
Options Configuration
Peers Configuration
Collect Configuration
Stage Configuration
Store Configuration
Purge Configuration
Extensions Configuration
Setting up a Pool of One
Step 1: Decide when you will run your backup.
Step 2: Make sure email works.
Step 3: Configure your writer device.
Step 4: Configure your backup user.
Step 5: Create your backup tree.
Step 6: Create the Cedar Backup configuration file.
Step 7: Validate the Cedar Backup configuration file.
Step 8: Test your backup.
Step 9: Modify the backup cron jobs.
Setting up a Client Peer Node
Step 1: Decide when you will run your backup.
Step 2: Make sure email works.
Step 3: Configure the master in your backup pool.
Step 4: Configure your backup user.
Step 5: Create your backup tree.
Step 6: Create the Cedar Backup configuration file.
Step 7: Validate the Cedar Backup configuration file.
Step 8: Test your backup.
Step 9: Modify the backup cron jobs.
Setting up a Master Peer Node
Step 1: Decide when you will run your backup.
Step 2: Make sure email works.
Step 3: Configure your writer device.
Step 4: Configure your backup user.
Step 5: Create your backup tree.
Step 6: Create the Cedar Backup configuration file.
Step 7: Validate the Cedar Backup configuration file.
Step 8: Test connectivity to client machines.
Step 9: Test your backup.
Step 10: Modify the backup cron jobs.
Configuring your Writer Device
Device Types
Devices identified by device name
Devices identified by SCSI id
Linux Notes
Finding your Linux CD Writer
Mac OS X Notes
Optimized Blanking Strategy
Overview
Configuring Cedar Backup is unfortunately somewhat complicated. The good news
is that once you get through the initial configuration process, you'll hardly
ever have to change anything. Even better, the most typical changes (i.e.
adding and removing directories from a backup) are easy.
First, familiarize yourself with the concepts in Chapter 2, Basic Concepts. In
particular, be sure that you understand the differences between a master and a
client. (If you only have one machine, then your machine will act as both a
master and a client, and we'll refer to your setup as a pool of one.) Then,
install Cedar Backup per the instructions in Chapter 3, Installation.
Once everything has been installed, you are ready to begin configuring Cedar
Backup. Look over the section called “The cback3 command” (in Chapter 4,
Command Line Tools) to become familiar with the command line interface. Then,
look over the section called “Configuration File Format” (below) and create a
configuration file for each peer in your backup pool. To start with, create a
very simple configuration file, then expand it later. Decide now whether you
will store the configuration file in the standard place (/etc/cback3.conf) or
in some other location.
After you have all of the configuration files in place, configure each of your
machines, following the instructions in the appropriate section below (for
master, client or pool of one). Since the master and client(s) must communicate
over the network, you won't be able to fully configure the master without
configuring each client and vice-versa. The instructions are clear on what
needs to be done.
Which Platform?
Cedar Backup has been designed for use on all UNIX-like systems. However, since
it was developed on a Debian GNU/Linux system, and because I am a Debian
developer, the packaging is prettier and the setup is somewhat simpler on a
Debian system than on a system where you install from source.
The configuration instructions below have been generalized so they should work
well regardless of what platform you are running (i.e. RedHat, Gentoo, FreeBSD,
etc.). If instructions vary for a particular platform, you will find a note
related to that platform.
I am always open to adding more platform-specific hints and notes, so write me
if you find problems with these instructions.
Configuration File Format
Cedar Backup is configured through an XML ^[19] configuration file, usually
called /etc/cback3.conf. The configuration file contains the following
sections: reference, options, peers, collect, stage, store, purge and
extensions.
All configuration files must contain the two general configuration sections,
the reference section and the options section. Besides that, administrators
need only configure actions they intend to use. For instance, on a client
machine, administrators will generally only configure the collect and purge
sections, while on a master machine they will have to configure all four
action-related sections. ^[20] The extensions section is always optional and
can be omitted unless extensions are in use.
Note
Even though the Mac OS X (darwin) filesystem is not case-sensitive, Cedar
Backup configuration is generally case-sensitive on that platform, just like on
all other platforms. For instance, even though the files “Ken” and “ken” might
be the same on the Mac OS X filesystem, an exclusion in Cedar Backup
configuration for “ken” will only match the file if it is actually on the
filesystem with a lower-case “k” as its first letter. This won't surprise the
typical UNIX user, but might surprise someone who's gotten into the “Mac
Mindset”.
Sample Configuration File
Both the Python source distribution and the Debian package come with a sample
configuration file. The Debian package includes its sample in /usr/share/doc/
cedar-backup3/examples/cback3.conf.sample.
This is a sample configuration file similar to the one provided in the source
package. Documentation below provides more information about each of the
individual configuration sections.
<?xml version="1.0"?>
<cb_config>
<reference>
<author>Kenneth J. Pronovici</author>
<revision>1.3</revision>
<description>Sample</description>
</reference>
<options>
<starting_day>tuesday</starting_day>
<working_dir>/opt/backup/tmp</working_dir>
<backup_user>backup</backup_user>
<backup_group>group</backup_group>
<rcp_command>/usr/bin/scp -B</rcp_command>
</options>
<peers>
<peer>
<name>debian</name>
<type>local</type>
<collect_dir>/opt/backup/collect</collect_dir>
</peer>
</peers>
<collect>
<collect_dir>/opt/backup/collect</collect_dir>
<collect_mode>daily</collect_mode>
<archive_mode>targz</archive_mode>
<ignore_file>.cbignore</ignore_file>
<dir>
<abs_path>/etc</abs_path>
<collect_mode>incr</collect_mode>
</dir>
<file>
<abs_path>/home/root/.profile</abs_path>
<collect_mode>weekly</collect_mode>
</file>
</collect>
<stage>
<staging_dir>/opt/backup/staging</staging_dir>
</stage>
<store>
<source_dir>/opt/backup/staging</source_dir>
<media_type>cdrw-74</media_type>
<device_type>cdwriter</device_type>
<target_device>/dev/cdrw</target_device>
<target_scsi_id>0,0,0</target_scsi_id>
<drive_speed>4</drive_speed>
<check_data>Y</check_data>
<check_media>Y</check_media>
<warn_midnite>Y</warn_midnite>
</store>
<purge>
<dir>
<abs_path>/opt/backup/staging</abs_path>
<retain_days>7</retain_days>
</dir>
<dir>
<abs_path>/opt/backup/collect</abs_path>
<retain_days>0</retain_days>
</dir>
</purge>
</cb_config>
Reference Configuration
The reference configuration section contains free-text elements that exist
only for reference. The section itself is required, but the individual
elements may be left blank if desired.
This is an example reference configuration section:
<reference>
<author>Kenneth J. Pronovici</author>
<revision>Revision 1.3</revision>
<description>Sample</description>
<generator>Yet to be Written Config Tool (tm)</generator>
</reference>
The following elements are part of the reference configuration section:
author
Author of the configuration file.
Restrictions: None
revision
Revision of the configuration file.
Restrictions: None
description
Description of the configuration file.
Restrictions: None
generator
Tool that generated the configuration file, if any.
Restrictions: None
Options Configuration
The options configuration section contains configuration options that are not
specific to any one action.
This is an example options configuration section:
<options>
<starting_day>tuesday</starting_day>
<working_dir>/opt/backup/tmp</working_dir>
<backup_user>backup</backup_user>
<backup_group>backup</backup_group>
<rcp_command>/usr/bin/scp -B</rcp_command>
<rsh_command>/usr/bin/ssh</rsh_command>
<cback_command>/usr/bin/cback3</cback_command>
<managed_actions>collect, purge</managed_actions>
<override>
<command>cdrecord</command>
<abs_path>/opt/local/bin/cdrecord</abs_path>
</override>
<override>
<command>mkisofs</command>
<abs_path>/opt/local/bin/mkisofs</abs_path>
</override>
<pre_action_hook>
<action>collect</action>
<command>echo "I AM A PRE-ACTION HOOK RELATED TO COLLECT"</command>
</pre_action_hook>
<post_action_hook>
<action>collect</action>
<command>echo "I AM A POST-ACTION HOOK RELATED TO COLLECT"</command>
</post_action_hook>
</options>
The following elements are part of the options configuration section:
starting_day
Day that starts the week.
Cedar Backup is built around the idea of weekly backups. The starting day
of week is the day that media will be rebuilt from scratch and that
incremental backup information will be cleared.
Restrictions: Must be a day of the week in English, i.e. monday, tuesday,
etc. The validation is case-sensitive.
working_dir
Working (temporary) directory to use for backups.
This directory is used for writing temporary files, such as tar file or ISO
filesystem images as they are being built. It is also used to store
day-to-day information about incremental backups.
The working directory should contain enough free space to hold temporary
tar files (on a client) or to build an ISO filesystem image (on a master).
Restrictions: Must be an absolute path
backup_user
Effective user that backups should run as.
This user must exist on the machine which is being configured and should
not be root (although that restriction is not enforced).
This value is also used as the default remote backup user for remote peers.
Restrictions: Must be non-empty
backup_group
Effective group that backups should run as.
This group must exist on the machine which is being configured, and should
not be root or some other “powerful” group (although that restriction is
not enforced).
Restrictions: Must be non-empty
rcp_command
Default rcp-compatible copy command for staging.
The rcp command should be the exact command used for remote copies,
including any required options. If you are using scp, you should pass it
the -B option, so scp will not ask for any user input (which could hang the
backup). A common example is something like /usr/bin/scp -B.
This value is used as the default value for all remote peers. Technically,
this value is not needed by clients, but we require it for all config files
anyway.
Restrictions: Must be non-empty
rsh_command
Default rsh-compatible command to use for remote shells.
The rsh command should be the exact command used for remote shells,
including any required options.
This value is used as the default value for all managed clients. It is
optional, because it is only used when executing actions on managed
clients. However, each managed client must either be able to read the value
from options configuration or must set the value explicitly.
Restrictions: Must be non-empty
cback_command
Default cback-compatible command to use on managed remote clients.
The cback command should be the exact command used for executing cback
on a remote managed client, including any required command-line options. Do
not list any actions in the command line, and do not include the --full
command-line option.
This value is used as the default value for all managed clients. It is
optional, because it is only used when executing actions on managed
clients. However, each managed client must either be able to read the value
from options configuration or must set the value explicitly.
Note: if this command-line is complicated, it is often better to create a
simple shell script on the remote host to encapsulate all of the options.
Then, just reference the shell script in configuration.
Restrictions: Must be non-empty
managed_actions
Default set of actions that are managed on remote clients.
This is a comma-separated list of actions that the master will manage on
behalf of remote clients. Typically, it would include only collect-like
actions and purge.
This value is used as the default value for all managed clients. It is
optional, because it is only used when executing actions on managed
clients. However, each managed client must either be able to read the value
from options configuration or must set the value explicitly.
Restrictions: Must be non-empty.
override
Command to override with a customized path.
This is a subsection which contains a command to override with a customized
path. This functionality would be used if root's $PATH does not include a
particular required command, or if there is a need to use a version of a
command that is different than the one listed on the $PATH. Most users will
only use this section when directed to, in order to fix a problem.
This section is optional, and can be repeated as many times as necessary.
This subsection must contain the following two fields:
command
Name of the command to be overridden, i.e. “cdrecord”.
Restrictions: Must be a non-empty string.
abs_path
The absolute path where the overridden command can be found.
Restrictions: Must be an absolute path.
pre_action_hook
Hook configuring a command to be executed before an action.
This is a subsection which configures a command to be executed immediately
before a named action. It provides a way for administrators to associate
their own custom functionality with standard Cedar Backup actions or with
arbitrary extensions.
This section is optional, and can be repeated as many times as necessary.
This subsection must contain the following two fields:
action
Name of the Cedar Backup action that the hook is associated with. The
action can be a standard backup action (collect, stage, etc.) or can be
an extension action. No validation is done to ensure that the
configured action actually exists.
Restrictions: Must be a non-empty string.
command
Name of the command to be executed. This item can either specify the
path to a shell script of some sort (the recommended approach) or can
include a complete shell command.
Note: if you choose to provide a complete shell command rather than the
path to a script, you need to be aware of some limitations of Cedar
Backup's command-line parser. You cannot use a subshell (via the
`command` or $(command) syntaxes) or any shell variable in your command
line. Additionally, the command-line parser only recognizes the
double-quote character (") to delimit groupings or strings on the
command-line. The bottom line is, you are probably best off writing a
shell script of some sort for anything more sophisticated than very
simple shell commands.
Restrictions: Must be a non-empty string.
post_action_hook
Hook configuring a command to be executed after an action.
This is a subsection which configures a command to be executed immediately
after a named action. It provides a way for administrators to associate
their own custom functionality with standard Cedar Backup actions or with
arbitrary extensions.
This section is optional, and can be repeated as many times as necessary.
This subsection must contain the following two fields:
action
Name of the Cedar Backup action that the hook is associated with. The
action can be a standard backup action (collect, stage, etc.) or can be
an extension action. No validation is done to ensure that the
configured action actually exists.
Restrictions: Must be a non-empty string.
command
Name of the command to be executed. This item can either specify the
path to a shell script of some sort (the recommended approach) or can
include a complete shell command.
Note: if you choose to provide a complete shell command rather than the
path to a script, you need to be aware of some limitations of Cedar
Backup's command-line parser. You cannot use a subshell (via the
`command` or $(command) syntaxes) or any shell variable in your command
line. Additionally, the command-line parser only recognizes the
double-quote character (") to delimit groupings or strings on the
command-line. The bottom line is, you are probably best off writing a
shell script of some sort for anything more sophisticated than very
simple shell commands.
Restrictions: Must be a non-empty string.
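Putting the hook fields together, this is a minimal sketch of a hook
subsection that runs a script before the collect action. The script path
/usr/local/bin/pre-collect.sh is hypothetical; per the notes above, wrapping
your commands in a simple script avoids the limitations of the command-line
parser:
<pre_action_hook>
<action>collect</action>
<command>/usr/local/bin/pre-collect.sh</command>
</pre_action_hook>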
Peers Configuration
The peers configuration section contains a list of the peers managed by a
master. This section is only required on a master.
This is an example peers configuration section:
<peers>
<peer>
<name>machine1</name>
<type>local</type>
<collect_dir>/opt/backup/collect</collect_dir>
</peer>
<peer>
<name>machine2</name>
<type>remote</type>
<backup_user>backup</backup_user>
<collect_dir>/opt/backup/collect</collect_dir>
<ignore_failures>all</ignore_failures>
</peer>
<peer>
<name>machine3</name>
<type>remote</type>
<managed>Y</managed>
<backup_user>backup</backup_user>
<collect_dir>/opt/backup/collect</collect_dir>
<rcp_command>/usr/bin/scp</rcp_command>
<rsh_command>/usr/bin/ssh</rsh_command>
<cback_command>/usr/bin/cback3</cback_command>
<managed_actions>collect, purge</managed_actions>
</peer>
</peers>
The following elements are part of the peers configuration section:
peer (local version)
Local client peer in a backup pool.
This is a subsection which contains information about a specific local
client peer managed by a master.
This section can be repeated as many times as is necessary. At least one
remote or local peer must be configured.
The local peer subsection must contain the following fields:
name
Name of the peer, typically a valid hostname.
For local peers, this value is only used for reference. However, it is
good practice to list the peer's hostname here, for consistency with
remote peers.
Restrictions: Must be non-empty, and unique among all peers.
type
Type of this peer.
This value identifies the type of the peer. For a local peer, it must
always be local.
Restrictions: Must be local.
collect_dir
Collect directory to stage from for this peer.
The master will copy all files in this directory into the appropriate
staging directory. Since this is a local peer, the directory is assumed
to be reachable via normal filesystem operations (i.e. cp).
Restrictions: Must be an absolute path.
ignore_failures
Ignore failure mode for this peer.
The ignore failure mode indicates whether “not ready to be staged”
errors should be ignored for this peer. This option is intended to be
used for peers that are up only intermittently, to cut down on the
number of error emails received by the Cedar Backup administrator.
The "none" mode means that all errors will be reported. This is the
default behavior. The "all" mode means to ignore all failures. The
"weekly" mode means to ignore failures for a start-of-week or full
backup. The "daily" mode means to ignore failures for any backup that
is not either a full backup or a start-of-week backup.
Restrictions: If set, must be one of "none", "all", "daily", or
"weekly".
peer (remote version)
Remote client peer in a backup pool.
This is a subsection which contains information about a specific remote
client peer managed by a master. A remote peer is one which can be reached
via an rsh-based network call.
This section can be repeated as many times as is necessary. At least one
remote or local peer must be configured.
The remote peer subsection must contain the following fields:
name
Hostname of the peer.
For remote peers, this must be a valid DNS hostname or IP address which
can be resolved during an rsh-based network call.
Restrictions: Must be non-empty, and unique among all peers.
type
Type of this peer.
This value identifies the type of the peer. For a remote peer, it must
always be remote.
Restrictions: Must be remote.
managed
Indicates whether this peer is managed.
A managed peer (or managed client) is a peer for which the master
manages all of the backup activities via a remote shell.
This field is optional. If it doesn't exist, then N will be assumed.
Restrictions: Must be a boolean (Y or N).
collect_dir
Collect directory to stage from for this peer.
The master will copy all files in this directory into the appropriate
staging directory. Since this is a remote peer, the directory is
assumed to be reachable via rsh-based network operations (i.e. scp or
the configured rcp command).
Restrictions: Must be an absolute path.
ignore_failures
Ignore failure mode for this peer.
The ignore failure mode indicates whether “not ready to be staged”
errors should be ignored for this peer. This option is intended to be
used for peers that are up only intermittently, to cut down on the
number of error emails received by the Cedar Backup administrator.
The "none" mode means that all errors will be reported. This is the
default behavior. The "all" mode means to ignore all failures. The
"weekly" mode means to ignore failures for a start-of-week or full
backup. The "daily" mode means to ignore failures for any backup that
is not either a full backup or a start-of-week backup.
Restrictions: If set, must be one of "none", "all", "daily", or
"weekly".
backup_user
Name of backup user on the remote peer.
This username will be used when copying files from the remote peer via
an rsh-based network connection.
This field is optional. If it doesn't exist, the backup will use the
default backup user from the options section.
Restrictions: Must be non-empty.
rcp_command
The rcp-compatible copy command for this peer.
The rcp command should be the exact command used for remote copies,
including any required options. If you are using scp, you should pass
it the -B option, so scp will not ask for any user input (which could
hang the backup). A common example is something like /usr/bin/scp -B.
This field is optional. If it doesn't exist, the backup will use the
default rcp command from the options section.
Restrictions: Must be non-empty.
rsh_command
The rsh-compatible command for this peer.
The rsh command should be the exact command used for remote shells,
including any required options.
This value only applies if the peer is managed.
This field is optional. If it doesn't exist, the backup will use the
default rsh command from the options section.
Restrictions: Must be non-empty
cback_command
The cback-compatible command for this peer.
The cback command should be the exact command used for executing
cback on the peer as part of a managed backup. This value must include
any required command-line options. Do not list any actions in the
command line, and do not include the --full command-line option.
This value only applies if the peer is managed.
This field is optional. If it doesn't exist, the backup will use the
default cback command from the options section.
Note: if this command-line is complicated, it is often better to create
a simple shell script on the remote host to encapsulate all of the
options. Then, just reference the shell script in configuration (see the
sketch at the end of this section).
Restrictions: Must be non-empty
managed_actions
Set of actions that are managed for this peer.
This is a comma-separated list of actions that the master will manage
on behalf of this peer. Typically, it would include only collect-like
actions and purge.
This value only applies if the peer is managed.
This field is optional. If it doesn't exist, the backup will use the
default list of managed actions from the options section.
Restrictions: Must be non-empty.
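As the cback_command notes suggest, a complicated command line is often best
encapsulated in a shell script on the managed client. This is a minimal
sketch of such a wrapper, assuming a hypothetical path
/usr/local/bin/remote-cback.sh; adjust the options to whatever your
installation actually requires:
#!/bin/sh
# Hypothetical wrapper referenced by <cback_command> in peer configuration.
# Include any options your installation needs, but do not list any actions
# here and do not include the --full option.
exec /usr/bin/cback3 --config /etc/cback3.conf "$@"
The peer would then be configured with a cback_command of
/usr/local/bin/remote-cback.sh.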
Collect Configuration
The collect configuration section contains configuration options related to
the collect action. This section contains a variable number of elements,
including an optional exclusion section and a repeating subsection used to
specify which directories and/or files to collect. You can also configure an
ignore indicator file, which lets users mark their own directories as not
backed up.
Using a Link Farm
Sometimes, it's not very convenient to list directories one by one in the Cedar
Backup configuration file. For instance, when backing up your home directory,
you often exclude as many directories as you include. The ignore file mechanism
can be of some help, but it still isn't very convenient if there are a lot of
directories to ignore (or if new directories pop up all of the time).
In this situation, one option is to use a link farm rather than listing all of
the directories in configuration. A link farm is a directory that contains
nothing but a set of soft links to other files and directories. Normally, Cedar
Backup does not follow soft links, but you can override this behavior for
individual directories using the link_depth and dereference options (see
below).
When using a link farm, you still have to deal with each backed-up directory
individually, but you don't have to modify configuration. Some users find that
this works better for them.
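As a concrete sketch of this approach, suppose a hypothetical link farm at
/opt/backup/linkfarm, populated with commands like ln -s /home/user/photos
/opt/backup/linkfarm/photos. Configuring that one directory with the
link_depth and dereference options (both described below) makes Cedar Backup
follow and expand each top-level link:
<dir>
<abs_path>/opt/backup/linkfarm</abs_path>
<link_depth>1</link_depth>
<dereference>Y</dereference>
</dir>
To add or drop a directory from the backup, you then just add or remove a
link, without touching configuration.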
In order to actually execute the collect action, you must have configured at
least one collect directory or one collect file. However, if you are only
including collect configuration for use by an extension, then it's OK to leave
out these sections. The validation will take place only when the collect action
is executed.
This is an example collect configuration section:
<collect>
<collect_dir>/opt/backup/collect</collect_dir>
<collect_mode>daily</collect_mode>
<archive_mode>targz</archive_mode>
<ignore_file>.cbignore</ignore_file>
<exclude>
<abs_path>/etc</abs_path>
<pattern>.*\.conf</pattern>
</exclude>
<file>
<abs_path>/home/root/.profile</abs_path>
</file>
<dir>
<abs_path>/etc</abs_path>
</dir>
<dir>
<abs_path>/var/log</abs_path>
<collect_mode>incr</collect_mode>
</dir>
<dir>
<abs_path>/opt</abs_path>
<collect_mode>weekly</collect_mode>
<exclude>
<abs_path>/opt/large</abs_path>
<rel_path>backup</rel_path>
<pattern>.*tmp</pattern>
</exclude>
</dir>
</collect>
The following elements are part of the collect configuration section:
collect_dir
Directory to collect files into.
On a client, this is the directory which tarfiles for individual collect
directories are written into. The master then stages files from this
directory into its own staging directory.
This field is always required. It must contain enough free space to collect
all of the backed-up files on the machine in a compressed form.
Restrictions: Must be an absolute path
collect_mode
Default collect mode.
The collect mode describes how frequently a directory is backed up. See the
section called “The Collect Action” (in Chapter 2, Basic Concepts) for more
information.
This value is the collect mode that will be used by default during the
collect process. Individual collect directories (below) may override this
value. If all individual directories provide their own value, then this
default value may be omitted from configuration.
Note: if your backup device does not support multisession discs, then you
should probably use the daily collect mode to avoid losing data.
Restrictions: Must be one of daily, weekly or incr.
archive_mode
Default archive mode for collect files.
The archive mode maps to the way that a backup file is stored. A value tar
means just a tarfile (file.tar); a value targz means a gzipped tarfile
(file.tar.gz); and a value tarbz2 means a bzipped tarfile (file.tar.bz2).
This value is the archive mode that will be used by default during the
collect process. Individual collect directories (below) may override this
value. If all individual directories provide their own value, then this
default value may be omitted from configuration.
Restrictions: Must be one of tar, targz or tarbz2.
ignore_file
Default ignore file name.
The ignore file is an indicator file. If it exists in a given directory,
then that directory will be recursively excluded from the backup as if it
were explicitly excluded in configuration.
The ignore file provides a way for individual users (who might not have
access to Cedar Backup configuration) to control which of their own
directories get backed up. For instance, users with a ~/tmp directory might
not want it backed up. If they create an ignore file in their directory
(e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.
This value is the ignore file name that will be used by default during the
collect process. Individual collect directories (below) may override this
value. If all individual directories provide their own value, then this
default value may be omitted from configuration.
Restrictions: Must be non-empty
recursion_level
Recursion level to use when collecting directories.
This is an integer value that Cedar Backup will consider when generating
archive files for a configured collect directory.
Normally, Cedar Backup generates one archive file per collect directory.
So, if you collect /etc you get etc.tar.gz. Most of the time, this is what
you want. However, you may sometimes wish to generate multiple archive
files for a single collect directory.
The most obvious example is for /home. By default, Cedar Backup will
generate home.tar.gz. If, instead, you want one archive file per home
directory, you can set a recursion level of 1. Cedar Backup will generate
home-user1.tar.gz, home-user2.tar.gz, etc.
Higher recursion levels (2, 3, etc.) are legal, and it doesn't matter if
the configured recursion level is deeper than the directory tree that is
being collected. You can use a negative recursion level (like -1) to
specify an infinite level of recursion. This will exhaust the tree in the
same way as if the recursion level is set too high.
This field is optional. If it doesn't exist, the backup will use the
default recursion level of zero. (A configuration sketch appears at the
end of this section.)
Restrictions: Must be an integer.
exclude
List of paths or patterns to exclude from the backup.
This is a subsection which contains a set of absolute paths and patterns to
be excluded across all configured directories. For a given directory, the
set of absolute paths and patterns to exclude is built from this list and
any list that exists on the directory itself. Directories cannot override
or remove entries that are in this list, however.
This section is optional, and if it exists can also be empty.
The exclude subsection can contain one or more of each of the following
fields:
abs_path
An absolute path to be recursively excluded from the backup.
If a directory is excluded, then all of its children are also
recursively excluded. For instance, a value /var/log/apache would
exclude any files within /var/log/apache as well as files within other
directories under /var/log/apache.
This field can be repeated as many times as is necessary.
Restrictions: Must be an absolute path.
pattern
A pattern to be recursively excluded from the backup.
The pattern must be a Python regular expression. ^[21] It is assumed to
be bounded at front and back by the beginning and end of the string
(i.e. it is treated as if it begins with ^ and ends with $).
If the pattern causes a directory to be excluded, then all of the
children of that directory are also recursively excluded. For instance,
a value .*apache.* might match the /var/log/apache directory. This
would exclude any files within /var/log/apache as well as files within
other directories under /var/log/apache.
This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty
file
A file to be collected.
This is a subsection which contains information about a specific file to be
collected (backed up).
This section can be repeated as many times as is necessary. At least one
collect directory or collect file must be configured when the collect
action is executed.
The collect file subsection contains the following fields:
abs_path
Absolute path of the file to collect.
Restrictions: Must be an absolute path.
collect_mode
Collect mode for this file
The collect mode describes how frequently a file is backed up. See the
section called “The Collect Action” (in Chapter 2, Basic Concepts) for
more information.
This field is optional. If it doesn't exist, the backup will use the
default collect mode.
Note: if your backup device does not support multisession discs, then
you should probably confine yourself to the daily collect mode, to
avoid losing data.
Restrictions: Must be one of daily, weekly or incr.
archive_mode
Archive mode for this file.
The archive mode maps to the way that a backup file is stored. A value
tar means just a tarfile (file.tar); a value targz means a gzipped
tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile
(file.tar.bz2).
This field is optional. If it doesn't exist, the backup will use the
default archive mode.
Restrictions: Must be one of tar, targz or tarbz2.
dir
A directory to be collected.
This is a subsection which contains information about a specific directory
to be collected (backed up).
This section can be repeated as many times as is necessary. At least one
collect directory or collect file must be configured when the collect
action is executed.
The collect directory subsection contains the following fields:
abs_path
Absolute path of the directory to collect.
The path may be either a directory, a soft link to a directory, or a
hard link to a directory. All three are treated the same at this level.
The contents of the directory will be recursively collected. The backup
will contain all of the files in the directory, as well as the contents
of all of the subdirectories within the directory, etc.
Soft links within the directory are treated as files, i.e. they are
copied verbatim (as a link) and their contents are not backed up.
Restrictions: Must be an absolute path.
collect_mode
Collect mode for this directory
The collect mode describes how frequently a directory is backed up. See
the section called “The Collect Action” (in Chapter 2, Basic Concepts)
for more information.
This field is optional. If it doesn't exist, the backup will use the
default collect mode.
Note: if your backup device does not support multisession discs, then
you should probably confine yourself to the daily collect mode, to
avoid losing data.
Restrictions: Must be one of daily, weekly or incr.
archive_mode
Archive mode for this directory.
The archive mode maps to the way that a backup file is stored. A value
tar means just a tarfile (file.tar); a value targz means a gzipped
tarfile (file.tar.gz); and a value tarbz2 means a bzipped tarfile
(file.tar.bz2).
This field is optional. If it doesn't exist, the backup will use the
default archive mode.
Restrictions: Must be one of tar, targz or tarbz2.
ignore_file
Ignore file name for this directory.
The ignore file is an indicator file. If it exists in a given
directory, then that directory will be recursively excluded from the
backup as if it were explicitly excluded in configuration.
The ignore file provides a way for individual users (who might not have
access to Cedar Backup configuration) to control which of their own
directories get backed up. For instance, users with a ~/tmp directory
might not want it backed up. If they create an ignore file in their
directory (e.g. ~/tmp/.cbignore), then Cedar Backup will ignore it.
This field is optional. If it doesn't exist, the backup will use the
default ignore file name.
Restrictions: Must be non-empty
link_depth
Link depth value to use for this directory.
The link depth is the maximum depth of the tree at which soft links should
be followed. So, a depth of 0 does not follow any soft links within the
collect directory, a depth of 1 follows only links immediately within
the collect directory, a depth of 2 follows the links at the next level
down, etc.
This field is optional. If it doesn't exist, the backup will assume a
value of zero, meaning that soft links within the collect directory
will never be followed.
Restrictions: If set, must be an integer ≥ 0.
dereference
Whether to dereference soft links.
If this flag is set, links that are being followed will be dereferenced
before being added to the backup. The link will be added (as a link),
and then the directory or file that the link points at will be added as
well.
This value only applies to a directory where soft links are being
followed (per the link_depth configuration option). It never applies to
a configured collect directory itself, only to other directories within
the collect directory.
This field is optional. If it doesn't exist, the backup will assume
that links should never be dereferenced.
Restrictions: Must be a boolean (Y or N).
exclude
List of paths or patterns to exclude from the backup.
This is a subsection which contains a set of paths and patterns to be
excluded within this collect directory. This list is combined with the
program-wide list to build a complete list for the directory.
This section is entirely optional, and if it exists can also be empty.
The exclude subsection can contain one or more of each of the following
fields:
abs_path
An absolute path to be recursively excluded from the backup.
If a directory is excluded, then all of its children are also
recursively excluded. For instance, a value /var/log/apache would
exclude any files within /var/log/apache as well as files within
other directories under /var/log/apache.
This field can be repeated as many times as is necessary.
Restrictions: Must be an absolute path.
rel_path
A relative path to be recursively excluded from the backup.
The path is assumed to be relative to the collect directory itself.
For instance, if the configured directory is /opt/web a configured
relative path of something/else would exclude the path /opt/web/
something/else.
If a directory is excluded, then all of its children are also
recursively excluded. For instance, a value something/else would
exclude any files within something/else as well as files within
other directories under something/else.
This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.
pattern
A pattern to be excluded from the backup.
The pattern must be a Python regular expression. ^[21] It is
assumed to be bounded at front and back by the beginning and end of
the string (i.e. it is treated as if it begins with ^ and ends with
$).
If the pattern causes a directory to be excluded, then all of the
children of that directory are also recursively excluded. For
instance, a value .*apache.* might match the /var/log/apache
directory. This would exclude any files within /var/log/apache as
well as files within other directories under /var/log/apache.
This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty
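To close out this section, here is the recursion level example from above
expressed as configuration: a minimal sketch that generates one archive per
home directory rather than a single home.tar.gz:
<collect>
<collect_dir>/opt/backup/collect</collect_dir>
<collect_mode>daily</collect_mode>
<archive_mode>targz</archive_mode>
<ignore_file>.cbignore</ignore_file>
<recursion_level>1</recursion_level>
<dir>
<abs_path>/home</abs_path>
</dir>
</collect>
With this configuration, Cedar Backup generates home-user1.tar.gz,
home-user2.tar.gz, etc., one archive per directory immediately under /home.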
Stage Configuration
The stage configuration section contains configuration options related to the
stage action. The section indicates where data from peers is staged.
This section can also (optionally) override the list of peers so that not all
peers are staged. If you provide any peers in this section, then the list of
peers here completely replaces the list of peers in the peers configuration
section for the purposes of staging.
This is an example stage configuration section for the simple case where the
list of peers is taken from peers configuration:
<stage>
<staging_dir>/opt/backup/stage</staging_dir>
</stage>
This is an example stage configuration section that overrides the default list
of peers:
<stage>
<staging_dir>/opt/backup/stage</staging_dir>
<peer>
<name>machine1</name>
<type>local</type>
<collect_dir>/opt/backup/collect</collect_dir>
</peer>
<peer>
<name>machine2</name>
<type>remote</type>
<backup_user>backup</backup_user>
<collect_dir>/opt/backup/collect</collect_dir>
</peer>
</stage>
The following elements are part of the stage configuration section:
staging_dir
Directory to stage files into.
This is the directory into which the master stages collected data from each
of the clients. Within the staging directory, data is staged into
date-based directories by peer name. For instance, peer “daystrom” backed
up on 19 Feb 2005 would be staged into something like 2005/02/19/daystrom
relative to the staging directory itself.
This field is always required. The directory must contain enough free space
to stage all of the files collected from all of the various machines in a
backup pool. Many administrators set up purging to keep staging directories
around for a week or more, which requires even more space.
Restrictions: Must be an absolute path
peer (local version)
Local client peer in a backup pool.
This is a subsection which contains information about a specific local
client peer to be staged (backed up). A local peer is one whose collect
directory can be reached without requiring any rsh-based network calls. It
is possible that a remote peer might be staged as a local peer if its
collect directory is mounted to the master via NFS, AFS or some other
method.
This section can be repeated as many times as is necessary. At least one
remote or local peer must be configured.
Remember, if you provide any local or remote peer in staging configuration,
the global peer configuration is completely replaced by the staging peer
configuration.
The local peer subsection must contain the following fields:
name
Name of the peer, typically a valid hostname.
For local peers, this value is only used for reference. However, it is
good practice to list the peer's hostname here, for consistency with
remote peers.
Restrictions: Must be non-empty, and unique among all peers.
type
Type of this peer.
This value identifies the type of the peer. For a local peer, it must
always be local.
Restrictions: Must be local.
collect_dir
Collect directory to stage from for this peer.
The master will copy all files in this directory into the appropriate
staging directory. Since this is a local peer, the directory is assumed
to be reachable via normal filesystem operations (i.e. cp).
Restrictions: Must be an absolute path.
peer (remote version)
Remote client peer in a backup pool.
This is a subsection which contains information about a specific remote
client peer to be staged (backed up). A remote peer is one whose collect
directory can only be reached via an rsh-based network call.
This section can be repeated as many times as is necessary. At least one
remote or local peer must be configured.
Remember, if you provide any local or remote peer in staging configuration,
the global peer configuration is completely replaced by the staging peer
configuration.
The remote peer subsection must contain the following fields:
name
Hostname of the peer.
For remote peers, this must be a valid DNS hostname or IP address which
can be resolved during an rsh-based network call.
Restrictions: Must be non-empty, and unique among all peers.
type
Type of this peer.
This value identifies the type of the peer. For a remote peer, it must
always be remote.
Restrictions: Must be remote.
collect_dir
Collect directory to stage from for this peer.
The master will copy all files in this directory into the appropriate
staging directory. Since this is a remote peer, the directory is
assumed to be reachable via rsh-based network operations (i.e. scp or
the configured rcp command).
Restrictions: Must be an absolute path.
backup_user
Name of backup user on the remote peer.
This username will be used when copying files from the remote peer via
an rsh-based network connection.
This field is optional. If it doesn't exist, the backup will use the
default backup user from the options section.
Restrictions: Must be non-empty.
rcp_command
The rcp-compatible copy command for this peer.
The rcp command should be the exact command used for remote copies,
including any required options. If you are using scp, you should pass
it the -B option, so scp will not ask for any user input (which could
hang the backup). A common example is something like /usr/bin/scp -B.
This field is optional. If it doesn't exist, the backup will use the
default rcp command from the options section.
Restrictions: Must be non-empty.
Store Configuration
The store configuration section contains configuration options related to the
store action. This section contains several optional fields. Most fields
control the way media is written using the writer device.
This is an example store configuration section:
<store>
<source_dir>/opt/backup/stage</source_dir>
<media_type>cdrw-74</media_type>
<device_type>cdwriter</device_type>
<target_device>/dev/cdrw</target_device>
<target_scsi_id>0,0,0</target_scsi_id>
<drive_speed>4</drive_speed>
<check_data>Y</check_data>
<check_media>Y</check_media>
<warn_midnite>Y</warn_midnite>
<no_eject>N</no_eject>
<refresh_media_delay>15</refresh_media_delay>
<eject_delay>2</eject_delay>
<blank_behavior>
<mode>weekly</mode>
<factor>1.3</factor>
</blank_behavior>
</store>
The following elements are part of the store configuration section:
source_dir
Directory whose contents should be written to media.
This directory must be a Cedar Backup staging directory, as configured in
the staging configuration section. Only certain data from that directory
(typically, data from the current day) will be written to disc.
Restrictions: Must be an absolute path
device_type
Type of the device used to write the media.
This field controls which type of writer device will be used by Cedar
Backup. Currently, Cedar Backup supports CD writers (cdwriter) and DVD
writers (dvdwriter).
This field is optional. If it doesn't exist, the cdwriter device type is
assumed.
Restrictions: If set, must be either cdwriter or dvdwriter.
media_type
Type of the media in the device.
Unless you want to throw away a backup disc every week, you are probably
best off using rewritable media.
You must choose a media type that is appropriate for the device type you
chose above. For more information on media types, see the section called
“Media and Device Types” (in Chapter 2, Basic Concepts).
Restrictions: Must be one of cdr-74, cdrw-74, cdr-80 or cdrw-80 if device
type is cdwriter; or one of dvd+r or dvd+rw if device type is dvdwriter.
target_device
Filesystem device name for writer device.
This value is required for both CD writers and DVD writers.
This is the UNIX device name for the writer drive, for instance /dev/scd0
or a symlink like /dev/cdrw.
In some cases, this device name is used to directly write to media. This is
true all of the time for DVD writers, and is true for CD writers when a
SCSI id (see below) has not been specified.
Besides this, the device name is also needed in order to do several
pre-write checks (such as whether the device might already be mounted) as
well as the post-write consistency check, if enabled.
Note: some users have reported intermittent problems when using a symlink
as the target device on Linux, especially with DVD media. If you experience
problems, try using the real device name rather than the symlink.
Restrictions: Must be an absolute path.
target_scsi_id
SCSI id for the writer device.
This value is optional for CD writers and is ignored for DVD writers.
If you have configured your CD writer hardware to work through the normal
filesystem device path, then you can leave this parameter unset. Cedar
Backup will just use the target device (above) when talking to cdrecord.
Otherwise, if you have SCSI CD writer hardware or you have configured your
non-SCSI hardware to operate like a SCSI device, then you need to provide
Cedar Backup with a SCSI id it can use when talking with cdrecord.
For the purposes of Cedar Backup, a valid SCSI identifier must either be in
the standard SCSI identifier form scsibus,target,lun or in the
specialized-method form <method>:scsibus,target,lun.
An example of a standard SCSI identifier is 1,6,2. Today, the two most
common examples of the specialized-method form are ATA:scsibus,target,lun
and ATAPI:scsibus,target,lun, but you may occasionally see other values
(like OLDATAPI in some forks of cdrecord).
See the section called “Configuring your Writer Device” for more
information on writer devices and how they are configured.
Restrictions: If set, must be a valid SCSI identifier.
drive_speed
Speed of the drive, i.e. 2 for a 2x device.
This field is optional. If it doesn't exist, the underlying device-related
functionality will use the default drive speed.
For DVD writers, it is best to leave this value unset, so growisofs can
pick an appropriate speed. For CD writers, since media can be
speed-sensitive, it is probably best to set a sensible value based on your
specific writer and media.
Restrictions: If set, must be an integer ≥ 1.
check_data
Whether the media should be validated.
This field indicates whether a resulting image on the media should be
validated after the write completes, by running a consistency check against
it. If this check is enabled, the contents of the staging directory are
directly compared to the media, and an error is reported if there is a
mismatch.
Practice shows that some drives can encounter an error when writing a
multisession disc, but not report any problems. This consistency check
allows us to catch the problem. By default, the consistency check is
disabled, but most users should choose to enable it unless they have a good
reason not to.
This field is optional. If it doesn't exist, then N will be assumed.
Restrictions: Must be a boolean (Y or N).
check_media
Whether the media should be checked before writing to it.
By default, Cedar Backup does not check its media before writing to it. It
will write to any media in the backup device. If you set this flag to Y,
Cedar Backup will make sure that the media has been initialized before
writing to it. (Rewritable media is initialized using the initialize
action.)
If the configured media is not rewritable (like CD-R), then this behavior
is modified slightly. For this kind of media, the check passes either if
the media has been initialized or if the media appears unused.
This field is optional. If it doesn't exist, then N will be assumed.
Restrictions: Must be a boolean (Y or N).
warn_midnite
Whether to generate warnings for crossing midnite.
This field indicates whether warnings should be generated if the store
operation has to cross a midnite boundary in order to find data to write to
disc. For instance, a warning would be generated if valid store data was
only found in the day before or day after the current day.
Configuration for some users is such that the store operation will always
cross a midnite boundary, so they will not care about this warning. Other
users will expect to never cross a boundary, and want to be notified that
something “strange” might have happened.
This field is optional. If it doesn't exist, then N will be assumed.
Restrictions: Must be a boolean (Y or N).
no_eject
Indicates that the writer device should not be ejected.
Under some circumstances, Cedar Backup ejects (opens and closes) the writer
device. This is done because some writer devices need to re-load the media
before noticing a media state change (like a new session).
For most writer devices this is safe, because they have a tray that can be
opened and closed. If your writer device does not have a tray and Cedar
Backup does not properly detect this, then set this flag. Cedar Backup will
not ever issue an eject command to your writer.
Note: this could cause problems with your backup. For instance, with many
writers, the check data step may fail if the media is not reloaded first.
If this happens to you, you may need to get a different writer device.
This field is optional. If it doesn't exist, then N will be assumed.
Restrictions: Must be a boolean (Y or N).
refresh_media_delay
Number of seconds to delay after refreshing media.
This field is optional. If it doesn't exist, no delay will occur.
Some devices seem to take a little while to stabilize after refreshing the
media (i.e. closing and opening the tray). During this period, operations
on the media may fail. If your device behaves like this, you can try
setting a delay of 10-15 seconds.
Restrictions: If set, must be an integer ≥ 1.
eject_delay
Number of seconds to delay after ejecting the tray.
This field is optional. If it doesn't exist, no delay will occur.
If your system seems to have problems opening and closing the tray, one
possibility is that the open/close sequence is happening too quickly —
either the tray isn't fully open when Cedar Backup tries to close it, or it
doesn't report being open. To work around that problem, set an eject delay
of a few seconds.
Restrictions: If set, must be an integer ≥ 1.
blank_behavior
Optimized blanking strategy.
For more information about Cedar Backup's optimized blanking strategy, see
the section called “Optimized Blanking Strategy”.
This entire configuration section is optional. However, if you choose to
provide it, you must configure both a blanking mode and a blanking factor.
blank_mode
Blanking mode.
Restrictions: Must be one of "daily" or "weekly".
blank_factor
Blanking factor.
Restrictions: Must be a floating point number ≥ 0.
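Because the example above covers only a CD writer, here is a hedged sketch
for a DVD writer. The device path /dev/sr0 is an assumption (use whatever
path your system provides); target_scsi_id is omitted because it is ignored
for DVD writers, and drive_speed is left unset so growisofs can pick an
appropriate speed:
<store>
<source_dir>/opt/backup/stage</source_dir>
<media_type>dvd+rw</media_type>
<device_type>dvdwriter</device_type>
<target_device>/dev/sr0</target_device>
<check_data>Y</check_data>
<check_media>Y</check_media>
<warn_midnite>Y</warn_midnite>
</store>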
Purge Configuration
The purge configuration section contains configuration options related to the
purge action. This section contains a set of directories to be purged, along
with information about the schedule at which they should be purged.
Typically, Cedar Backup should be configured to purge collect directories daily
(retain days of 0).
If you are tight on space, staging directories can also be purged daily.
However, if you have space to spare, you should consider purging about once per
week. That way, if your backup media is damaged, you will be able to recreate
the week's backup using the rebuild action.
You should also purge the working directory periodically, once every few weeks
or once per month. This way, if any unneeded files are left around, perhaps
because a backup was interrupted or because configuration changed, they will
eventually be removed. The working directory should not be purged any more
frequently than once per week, otherwise you will risk destroying data used for
incremental backups.
This is an example purge configuration section:
<purge>
<dir>
<abs_path>/opt/backup/stage</abs_path>
<retain_days>7</retain_days>
</dir>
<dir>
<abs_path>/opt/backup/collect</abs_path>
<retain_days>0</retain_days>
</dir>
</purge>
The following elements are part of the purge configuration section:
dir
A directory to purge within.
This is a subsection which contains information about a specific directory
to purge within.
This section can be repeated as many times as is necessary. At least one
purge directory must be configured.
The purge directory subsection contains the following fields:
abs_path
Absolute path of the directory to purge within.
The contents of the directory will be purged based on age. The purge
will remove any files that were last modified more than “retain days”
days ago. Empty directories will also eventually be removed. The purge
directory itself will never be removed.
The path may be either a directory, a soft link to a directory, or a
hard link to a directory. Soft links within the directory (if any) are
treated as files.
Restrictions: Must be an absolute path.
retain_days
Number of days to retain old files.
Once it has been more than this many days since a file was last
modified, it is a candidate for removal.
Restrictions: Must be an integer ≥ 0.
Extensions Configuration
The extensions configuration section is used to configure third-party
extensions to Cedar Backup. If you don't intend to use any extensions, or don't
know what extensions are, then you can safely leave this section out of your
configuration file. It is optional.
Extensions configuration is used to specify “extended actions” implemented by
code external to Cedar Backup. An administrator can use this section to map
command-line Cedar Backup actions to third-party extension functions.
Each extended action has a name, which is mapped to a Python function within a
particular module. Each action also has an index associated with it. This index
is used to properly order execution when more than one action is specified on
the command line. The standard actions have predefined indexes, and extended
actions are interleaved into the normal order of execution using those indexes.
The collect action has index 100, the stage action has index 200, the store
action has index 300 and the purge action has index 400.
Warning
Extended actions should always be configured to run before the standard action
they are associated with. This is because of the way indicator files are used
in Cedar Backup. For instance, the staging process considers the collect action
to be complete for a peer if the file cback.collect can be found in that peer's
collect directory.
If you were to run the standard collect action before your other collect-like
actions, the indicator file would be written after the collect action completes
but before all of the other actions even run. Because of this, there's a chance
the stage process might back up the collect directory before the entire set of
collect-like actions have completed — and you would get no warning about this
in your email!
So, imagine that a third-party developer provided a Cedar Backup extension to
back up a certain kind of database repository, and you wanted to map that
extension to the “database” command-line action. You have been told that this
function is called “foo.bar()”. You think of this backup as a “collect” kind of
action, so you want it to be performed immediately before the collect action.
To configure this extension, you would list an action with a name “database”, a
module “foo”, a function name “bar” and an index of “99”.
This is how the hypothetical action would be configured:
<extensions>
<action>
<name>database</name>
<module>foo</module>
<function>bar</function>
<index>99</index>
</action>
</extensions>
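Given that configuration, a run such as the following would execute the
database action ahead of the collect action, because its index (99) sorts
before the collect action's index (100):
cback3 database collect stage store purge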
The following elements are part of the extensions configuration section:
action
This is a subsection that contains configuration related to a single
extended action.
This section can be repeated as many times as is necessary.
The action subsection contains the following fields:
name
Name of the extended action.
Restrictions: Must be a non-empty string consisting of only lower-case
letters and digits.
module
Name of the Python module associated with the extension function.
Restrictions: Must be a non-empty string and a valid Python identifier.
function
Name of the Python extension function within the module.
Restrictions: Must be a non-empty string and a valid Python identifier.
index
Index of action, for execution ordering.
Restrictions: Must be an integer ≥ 0.
Setting up a Pool of One
Cedar Backup has been designed primarily for situations where there is a single
master and a set of other clients that the master interacts with. However, it
will just as easily work for a single machine (a backup pool of one).
Once you complete all of these configuration steps, your backups will run as
scheduled out of cron. Any errors that occur will be reported in daily emails
to your root user (or the user that receives root's email). If you don't
receive any emails, then you know your backup worked.
Note: all of these configuration steps should be run as the root user, unless
otherwise indicated.
Tip
This setup procedure discusses how to set up Cedar Backup in the “normal case”
for a pool of one. If you would like to modify the way Cedar Backup works (for
instance, by ignoring the store stage and just letting your backup sit in a
staging directory), you can do that. You'll just have to modify the procedure
below based on information in the remainder of the manual.
Step 1: Decide when you will run your backup.
There are four parts to a Cedar Backup run: collect, stage, store and purge.
The usual way of setting off these steps is through a set of cron jobs.
Although you won't create your cron jobs just yet, you should decide now when
you will run your backup so you are prepared for later.
Backing up large directories and creating ISO filesystem images can be
intensive operations, and could slow your computer down significantly. Choose a
backup time that will not interfere with normal use of your computer. Usually,
you will want the backup to occur every day, but it is possible to configure
cron to execute the backup only one day per week, three days per week, etc.
Warning
Because of the way Cedar Backup works, you must ensure that your backup always
runs on the first day of your configured week. This is because Cedar Backup
will only clear incremental backup information and re-initialize your media
when running on the first day of the week. If you skip running Cedar Backup on
the first day of the week, your backups will likely be “confused” until the
next week begins, or until you re-run the backup using the --full flag.
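If you do settle on a less-than-daily schedule, make sure it honors this
warning. As a sketch for later reference, the /etc/crontab entry below
assumes a starting_day of tuesday (as in the sample configuration) and runs
the backup three days per week, with the first run of each week falling on
Tuesday:
# Run at 00:30 on Tuesday, Thursday and Saturday (cron day-of-week 2,4,6).
30 00 * * 2,4,6 root cback3 all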
Step 2: Make sure email works.
Cedar Backup relies on email for problem notification. This notification works
through the magic of cron. Cron will email any output from each job it executes
to the user associated with the job. Since by default Cedar Backup only writes
output to the terminal if errors occur, this ensures that notification emails
will only be sent out if errors occur.
In order to receive problem notifications, you must make sure that email works
for the user which is running the Cedar Backup cron jobs (typically root).
Refer to your distribution's documentation for information on how to configure
email on your system. Note that you may prefer to configure root's email to
forward to some other user, so you do not need to check the root user's mail in
order to see Cedar Backup errors.
Step 3: Configure your writer device.
Before using Cedar Backup, your writer device must be properly configured. If
you have configured your CD/DVD writer hardware to work through the normal
filesystem device path, then you just need to know the path to the device on
disk (something like /dev/cdrw). Cedar Backup will use this device path
both when talking to a command like cdrecord and when doing filesystem
operations like running media validation.
Your other option is to configure your CD writer hardware like a SCSI device
(either because it is a SCSI device or because you are using some sort of
interface that makes it look like one). In this case, Cedar Backup will use the
SCSI id when talking to cdrecord and the device path when running filesystem
operations.
See the section called “Configuring your Writer Device” for more information on
writer devices and how they are configured.
Note
There is no need to set up your CD/DVD device if you have decided not to
execute the store action.
Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be
used for CD writers, not DVD writers.
Step 4: Configure your backup user.
Choose a user to be used for backups. Some platforms may come with a “ready
made” backup user. For other platforms, you may have to create a user yourself.
You may choose any id you like, but a descriptive name such as backup or cback
is a good choice. See your distribution's documentation for information on how
to add a user.
Note
Standard Debian systems come with a user named backup. You may choose to stay
with this user or create another one.
Step 5: Create your backup tree.
Cedar Backup requires a backup directory tree on disk. This directory tree must
be roughly three times as big as the amount of data that will be backed up on a
nightly basis, to allow for the data to be collected, staged, and then placed
into an ISO filesystem image on disk. (This is one disadvantage to using Cedar
Backup in single-machine pools, but in this day of really large hard drives, it
might not be an issue.) Note that if you elect not to purge the staging
directory every night, you will need even more space.
You should create a collect directory, a staging directory and a working
(temporary) directory. One recommended layout is this:
/opt/
backup/
collect/
stage/
tmp/
If you will be backing up sensitive information (i.e. password files), it is
recommended that these directories be owned by the backup user (whatever you
named it), with permissions 700.
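For example, assuming the layout above and a backup user and group both named
backup (adjust the names and paths to your setup):
mkdir -p /opt/backup/collect /opt/backup/stage /opt/backup/tmp
chown -R backup:backup /opt/backup
chmod -R 700 /opt/backup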
Note
You don't have to use /opt as the root of your directory structure. Use
anything you would like. I use /opt because it is my “dumping ground” for
filesystems that Debian does not manage.
Some users have requested that the Debian packages set up a more “standard”
location for backups right out-of-the-box. I have resisted doing this because
it's difficult to choose an appropriate backup location from within the
package. If you would prefer, you can create the backup directory structure
within some existing Debian directory such as /var/backups or /var/tmp.
Step 6: Create the Cedar Backup configuration file.
Following the instructions in the section called “Configuration File Format”
(above) create a configuration file for your machine. Since you are working
with a pool of one, you must configure all four action-specific sections:
collect, stage, store and purge.
The usual location for the Cedar Backup config file is /etc/cback3.conf. If you
change the location, make sure you edit your cronjobs (below) to point the
cback3 script at the correct config file (using the --config option).
Warning
Configuration files should always be writable only by root (or by the file
owner, if the owner is not root).
If you intend to place confidential information into the Cedar Backup
configuration file, make sure that you set the filesystem permissions on the
file appropriately. For instance, if you configure any extensions that require
passwords or other similar information, you should make the file readable only
to root or to the file owner (if the owner is not root).
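For instance, assuming the default location, something like this locks the
file down so that only root can read or write it:
chown root:root /etc/cback3.conf
chmod 600 /etc/cback3.conf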
Step 7: Validate the Cedar Backup configuration file.
Use the command cback3 validate to validate your configuration file. This
command checks that the configuration file can be found and parsed, and also
checks for typical configuration problems, such as invalid CD/DVD device
entries.
Note: the most common cause of configuration problems is in not closing XML
tags properly. Any XML tag that is “opened” must be “closed” appropriately.
Step 8: Test your backup.
Place a valid CD/DVD disc in your drive, and then use the command cback3 --full
all. You should execute this command as root. If the command completes with no
output, then the backup was run successfully.
Just to be sure that everything worked properly, check the logfile (/var/log/
cback3.log) for errors and also mount the CD/DVD disc to be sure it can be
read.
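A quick sketch of that verification might look like this (the mount point is
arbitrary, and your device path may differ):
grep -i error /var/log/cback3.log
mkdir -p /mnt/cdrw
mount /dev/cdrw /mnt/cdrw
ls -lR /mnt/cdrw
umount /mnt/cdrw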
If Cedar Backup ever completes “normally” but the disc that is created is not
usable, please report this as a bug. ^[22] To be safe, always enable the
consistency check option in the store configuration section.
Step 9: Modify the backup cron jobs.
Since Cedar Backup should be run as root, one way to configure the cron job is
to add a line like this to your /etc/crontab file:
30 00 * * * root cback3 all
Or, you can create an executable script containing just these lines and place
that file in the /etc/cron.daily directory:
#!/bin/sh
cback3 all
You should consider adding the --output or -O switch to your cback3
command-line in cron. This will result in larger logs, but could help diagnose
problems when commands like cdrecord or mkisofs fail mysteriously.
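For example, the crontab entry above would then become:
30 00 * * * root cback3 --output all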
Note
For general information about using cron, see the manpage for crontab(5).
On a Debian system, execution of daily backups is controlled by the file /etc/
cron.d/cedar-backup3. As installed, this file contains several different
settings, all commented out. Uncomment the “Single machine (pool of one)” entry
in the file, and change the line so that the backup goes off when you want it
to.
Setting up a Client Peer Node
Cedar Backup has been designed to back up entire “pools” of machines. In any
given pool, there is one master and some number of clients. Most of the work
takes place on the master, so configuring a client is a little simpler than
configuring a master.
Backups are designed to take place over an RSH or SSH connection. Because RSH
is generally considered insecure, you are encouraged to use SSH rather than
RSH. This document will only describe how to configure Cedar Backup to use SSH;
if you want to use RSH, you're on your own.
Once you complete all of these configuration steps, your backups will run as
scheduled out of cron. Any errors that occur will be reported in daily emails
to your root user (or the user that receives root's email). If you don't
receive any emails, then you know your backup worked.
Note: all of these configuration steps should be run as the root user, unless
otherwise indicated.
Note
See Appendix D, Securing Password-less SSH Connections for some important notes
on how to optionally further secure password-less SSH connections to your
clients.
Step 1: Decide when you will run your backup.
There are four parts to a Cedar Backup run: collect, stage, store and purge.
The usual way of setting off these steps is through a set of cron jobs.
Although you won't create your cron jobs just yet, you should decide now when
you will run your backup so you are prepared for later.
Backing up large directories and creating ISO filesystem images can be
intensive operations, and could slow your computer down significantly. Choose a
backup time that will not interfere with normal use of your computer. Usually,
you will want the backup to occur every day, but it is possible to configure
cron to execute the backup only one day per week, three days per week, etc.
Warning
Because of the way Cedar Backup works, you must ensure that your backup always
runs on the first day of your configured week. This is because Cedar Backup
will only clear incremental backup information and re-initialize your media
when running on the first day of the week. If you skip running Cedar Backup on
the first day of the week, your backups will likely be “confused” until the
next week begins, or until you re-run the backup using the --full flag.
Step 2: Make sure email works.
Cedar Backup relies on email for problem notification. This notification works
through the magic of cron. Cron will email any output from each job it executes
to the user associated with the job. Since by default Cedar Backup only writes
output to the terminal if errors occur, this neatly ensures that notification
emails will only be sent out if errors occur.
In order to receive problem notifications, you must make sure that email works
for the user which is running the Cedar Backup cron jobs (typically root).
Refer to your distribution's documentation for information on how to configure
email on your system. Note that you may prefer to configure root's email to
forward to some other user, so you do not need to check the root user's mail in
order to see Cedar Backup errors.
Step 3: Configure the master in your backup pool.
You will not be able to complete the client configuration until at least step 4
of the master's configuration (creating the master's backup user and its SSH
keypair) has been completed. In particular, you will need to know the master's
public SSH identity to fully configure a client.
To find the master's public SSH identity, log in as the backup user on the
master and cat the public identity file ~/.ssh/id_rsa.pub:
user@machine> cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0vOKjlfwohPg1oPRdrmwHk75l3mI9Tb/WRZfVnu2Pw69
uyphM9wBLRo6QfOC2T8vZCB8o/ZIgtAM3tkM0UgQHxKBXAZ+H36TOgg7BcI20I93iGtzpsMA/uXQy8kH
HgZooYqQ9pw+ZduXgmPcAAv2b5eTm07wRqFt/U84k6bhTzs= user@machine
Step 4: Configure your backup user.
Choose a user to be used for backups. Some platforms may come with a “ready
made” backup user. For other platforms, you may have to create a user yourself.
You may choose any id you like, but a descriptive name such as backup or cback
is a good choice. See your distribution's documentation for information on how
to add a user.
Note
Standard Debian systems come with a user named backup. You may choose to stay
with this user or create another one.
Once you have created your backup user, you must create an SSH keypair for it.
Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f
~/.ssh/id_rsa:
user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
The default permissions for this directory should be fine. However, if the
directory existed before you ran ssh-keygen, then you may need to modify the
permissions. Make sure that the ~/.ssh directory is readable only by the backup
user (i.e. mode 700), that the ~/.ssh/id_rsa file is readable and writable
only by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is
writable only by the backup user (i.e. mode 600 or mode 644).
Finally, take the master's public SSH identity (which you found in step 3) and
cut-and-paste it into the file ~/.ssh/authorized_keys. Make sure the identity
value is pasted into the file all on one line, and that the authorized_keys
file is owned by your backup user and has permissions 600.
If you have other preferences or standard ways of setting up your users' SSH
configuration (i.e. different key type, etc.), feel free to do things your way.
The important part is that the master must be able to SSH into a client with no
password entry required.
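As a sketch, assuming you saved the master's identity into a file named
master-key.pub (a placeholder name), you could run this as the backup user on
the client:
umask 077
mkdir -p ~/.ssh
cat master-key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys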
Step 5: Create your backup tree.
Cedar Backup requires a backup directory tree on disk. This directory tree must
be roughly as big as the amount of data that will be backed up on a nightly
basis (more if you elect not to purge it all every night).
You should create a collect directory and a working (temporary) directory. One
recommended layout is this:
/opt/
backup/
collect/
tmp/
If you will be backing up sensitive information (i.e. password files), it is
recommended that these directories be owned by the backup user (whatever you
named it), with permissions 700.
Note
You don't have to use /opt as the root of your directory structure. Use
anything you would like. I use /opt because it is my “dumping ground” for
filesystems that Debian does not manage.
Some users have requested that the Debian packages set up a more “standard”
location for backups right out-of-the-box. I have resisted doing this because
it's difficult to choose an appropriate backup location from within the
package. If you would prefer, you can create the backup directory structure
within some existing Debian directory such as /var/backups or /var/tmp.
Step 6: Create the Cedar Backup configuration file.
Following the instructions in the section called “Configuration File Format”
(above), create a configuration file for your machine. Since you are working
with a client, you need only configure the action-specific sections for the
collect and purge actions.
The usual location for the Cedar Backup config file is /etc/cback3.conf. If you
change the location, make sure you edit your cronjobs (below) to point the
cback3 script at the correct config file (using the --config option).
Warning
Configuration files should always be writable only by root (or by the file
owner, if the owner is not root).
If you intend to place confidential information into the Cedar Backup
configuration file, make sure that you set the filesystem permissions on the
file appropriately. For instance, if you configure any extensions that require
passwords or other similar information, you should make the file readable only
to root or to the file owner (if the owner is not root).
Step 7: Validate the Cedar Backup configuration file.
Use the command cback3 validate to validate your configuration file. This
command checks that the configuration file can be found and parsed, and also
checks for typical configuration problems. This command only validates
configuration on the one client, not the master or any other clients in a pool.
Note: the most common cause of configuration problems is in not closing XML
tags properly. Any XML tag that is “opened” must be “closed” appropriately.
Step 8: Test your backup.
Use the command cback3 --full collect purge. If the command completes with no
output, then the backup was run successfully. Just to be sure that everything
worked properly, check the logfile (/var/log/cback3.log) for errors.
Step 9: Modify the backup cron jobs.
Since Cedar Backup should be run as root, you should add a set of lines like
this to your /etc/crontab file:
30 00 * * * root cback3 collect
30 06 * * * root cback3 purge
You should consider adding the --output or -O switch to your cback3
command-line in cron. This will result in larger logs, but could help diagnose
problems when commands like cdrecord or mkisofs fail mysteriously.
You will need to coordinate the collect and purge actions on the client so that
the collect action completes before the master attempts to stage, and so that
the purge action does not begin until after the master has completed staging.
Usually, allowing an hour or two between steps should be sufficient. ^[23]
Note
For general information about using cron, see the manpage for crontab(5).
On a Debian system, execution of daily backups is controlled by the file /etc/
cron.d/cedar-backup3. As installed, this file contains several different
settings, all commented out. Uncomment the “Client machine” entries in the
file, and change the lines so that the backup goes off when you want it to.
Setting up a Master Peer Node
Cedar Backup has been designed to back up entire “pools” of machines. In any
given pool, there is one master and some number of clients. Most of the work
takes place on the master, so configuring a master is somewhat more complicated
than configuring a client.
Backups are designed to take place over an RSH or SSH connection. Because RSH
is generally considered insecure, you are encouraged to use SSH rather than
RSH. This document will only describe how to configure Cedar Backup to use SSH;
if you want to use RSH, you're on your own.
Once you complete all of these configuration steps, your backups will run as
scheduled out of cron. Any errors that occur will be reported in daily emails
to your root user (or whichever other user receives root's email). If you don't
receive any emails, then you know your backup worked.
Note: all of these configuration steps should be run as the root user, unless
otherwise indicated.
Tip
This setup procedure discusses how to set up Cedar Backup in the “normal case”
for a master. If you would like to modify the way Cedar Backup works (for
instance, by ignoring the store stage and just letting your backup sit in a
staging directory), you can do that. You'll just have to modify the procedure
below based on information in the remainder of the manual.
Step 1: Decide when you will run your backup.
There are four parts to a Cedar Backup run: collect, stage, store and purge.
The usual way of setting off these steps is through a set of cron jobs.
Although you won't create your cron jobs just yet, you should decide now when
you will run your backup so you are prepared for later.
Keep in mind that you do not necessarily have to run the collect action on the
master. See notes further below for more information.
Backing up large directories and creating ISO filesystem images can be
intensive operations, and could slow your computer down significantly. Choose a
backup time that will not interfere with normal use of your computer. Usually,
you will want the backup to occur every day, but it is possible to configure
cron to execute the backup only one day per week, three days per week, etc.
Warning
Because of the way Cedar Backup works, you must ensure that your backup always
runs on the first day of your configured week. This is because Cedar Backup
will only clear incremental backup information and re-initialize your media
when running on the first day of the week. If you skip running Cedar Backup on
the first day of the week, your backups will likely be “confused” until the
next week begins, or until you re-run the backup using the --full flag.
Step 2: Make sure email works.
Cedar Backup relies on email for problem notification. This notification works
through the magic of cron. Cron will email any output from each job it executes
to the user associated with the job. Since by default Cedar Backup only writes
output to the terminal if errors occur, this neatly ensures that notification
emails will only be sent out if errors occur.
In order to receive problem notifications, you must make sure that email works
for the user which is running the Cedar Backup cron jobs (typically root).
Refer to your distribution's documentation for information on how to configure
email on your system. Note that you may prefer to configure root's email to
forward to some other user, so you do not need to check the root user's mail in
order to see Cedar Backup errors.
Step 3: Configure your writer device.
Before using Cedar Backup, your writer device must be properly configured. If
you have configured your CD/DVD writer hardware to work through the normal
filesystem device path, then you just need to know the path to the device on
disk (something like /dev/cdrw). Cedar Backup will use this device path
both when talking to a command like cdrecord and when doing filesystem
operations like running media validation.
Your other option is to configure your CD writer hardware like a SCSI device
(either because it is a SCSI device or because you are using some sort of
interface that makes it look like one). In this case, Cedar Backup will use the
SCSI id when talking to cdrecord and the device path when running filesystem
operations.
See the section called “Configuring your Writer Device” for more information on
writer devices and how they are configured.
Note
There is no need to set up your CD/DVD device if you have decided not to
execute the store action.
Due to the underlying utilities that Cedar Backup uses, the SCSI id may only be
used for CD writers, not DVD writers.
Step 4: Configure your backup user.
Choose a user to be used for backups. Some platforms may come with a “ready
made” backup user. For other platforms, you may have to create a user yourself.
You may choose any id you like, but a descriptive name such as backup or cback
is a good choice. See your distribution's documentation for information on how
to add a user.
Note
Standard Debian systems come with a user named backup. You may choose to stay
with this user or create another one.
Once you have created your backup user, you must create an SSH keypair for it.
Log in as your backup user, and then run the command ssh-keygen -t rsa -N "" -f
~/.ssh/id_rsa:
user@machine> ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/user/.ssh'.
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
11:3e:ad:72:95:fe:96:dc:1e:3b:f4:cc:2c:ff:15:9e user@machine
The default permissions for this directory should be fine. However, if the
directory existed before you ran ssh-keygen, then you may need to modify the
permissions. Make sure that the ~/.ssh directory is readable only by the backup
user (i.e. mode 700), that the ~/.ssh/id_rsa file is only readable and writable
by the backup user (i.e. mode 600) and that the ~/.ssh/id_rsa.pub file is
writable only by the backup user (i.e. mode 600 or mode 644).
If you have other preferences or standard ways of setting up your users' SSH
configuration (i.e. different key type, etc.), feel free to do things your way.
The important part is that the master must be able to SSH into a client with no
password entry required.
Step 5: Create your backup tree.
Cedar Backup requires a backup directory tree on disk. This directory tree must
be roughly large enough to hold twice as much data as will be backed up from the
entire pool on a given night, plus space for whatever is collected on the
master itself. This will allow for all three operations - collect, stage and
store - to have enough space to complete. Note that if you elect not to purge
the staging directory every night, you will need even more space.
You should create a collect directory, a staging directory and a working
(temporary) directory. One recommended layout is this:
/opt/
backup/
collect/
stage/
tmp/
If you will be backing up sensitive information (i.e. password files), it is
recommended that these directories be owned by the backup user (whatever you
named it), with permissions 700.
Note
You don't have to use /opt as the root of your directory structure. Use
anything you would like. I use /opt because it is my “dumping ground” for
filesystems that Debian does not manage.
Some users have requested that the Debian packages set up a more “standard”
location for backups right out-of-the-box. I have resisted doing this because
it's difficult to choose an appropriate backup location from within the
package. If you would prefer, you can create the backup directory structure
within some existing Debian directory such as /var/backups or /var/tmp.
Step 6: Create the Cedar Backup configuration file.
Following the instructions in the section called “Configuration File Format”
(above), create a configuration file for your machine. Since you are working
with a master machine, you would typically configure all four action-specific
sections: collect, stage, store and purge.
Note
Note that the master can treat itself as a “client” peer for certain actions.
As an example, if you run the collect action on the master, then you will stage
that data by configuring a local peer representing the master.
Something else to keep in mind is that you do not really have to run the
collect action on the master. For instance, you may prefer to use your master
machine as a “consolidation point” that gathers data from the other client
machines in a backup pool. In that case, there is no need to collect data on
the master itself.
The usual location for the Cedar Backup config file is /etc/cback3.conf. If you
change the location, make sure you edit your cronjobs (below) to point the
cback3 script at the correct config file (using the --config option).
Warning
Configuration files should always be writable only by root (or by the file
owner, if the owner is not root).
If you intend to place confidential information into the Cedar Backup
configuration file, make sure that you set the filesystem permissions on the
file appropriately. For instance, if you configure any extensions that require
passwords or other similar information, you should make the file readable only
to root or to the file owner (if the owner is not root).
Step 7: Validate the Cedar Backup configuration file.
Use the command cback3 validate to validate your configuration file. This
command checks that the configuration file can be found and parsed, and also
checks for typical configuration problems, such as invalid CD/DVD device
entries. This command only validates configuration on the master, not any
clients that the master might be configured to connect to.
Note: the most common cause of configuration problems is in not closing XML
tags properly. Any XML tag that is “opened” must be “closed” appropriately.
Step 8: Test connectivity to client machines.
This step must wait until after your client machines have been at least
partially configured. Once the backup user(s) have been configured on the
client machine(s) in a pool, attempt an SSH connection to each client.
Log in as the backup user on the master, and then use the command ssh
user@machine, where user is the name of the backup user on the client machine,
and
machine is the name of the client machine.
If you are able to log in successfully to each client without entering a
password, then things have been configured properly. Otherwise, double-check
that you followed the user setup instructions for the master and the clients.
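For instance, a quick way to check every client at once (the hostnames and the
backup user name are examples):
for host in client1 client2; do ssh backup@$host true && echo "$host: OK"; done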
Step 9: Test your backup.
Make sure that you have configured all of the clients in your backup pool. On
all of the clients, execute cback3 --full collect. (You will probably have
already tested this command on each of the clients, so it should succeed.)
When all of the client backups have completed, place a valid CD/DVD disc in
your drive, and then use the command cback3 --full all. You should execute this
command as root. If the command completes with no output, then the backup was
run successfully.
Just to be sure that everything worked properly, check the logfile (/var/log/
cback3.log) on the master and each of the clients, and also mount the CD/DVD
disc on the master to be sure it can be read.
You may also want to run cback3 purge on the master and each client once you
have finished validating that everything worked.
If Cedar Backup ever completes “normally” but the disc that is created is not
usable, please report this as a bug. ^[22] To be safe, always enable the
consistency check option in the store configuration section.
Step 10: Modify the backup cron jobs.
Since Cedar Backup should be run as root, you should add a set of lines like
this to your /etc/crontab file:
30 00 * * * root cback3 collect
30 02 * * * root cback3 stage
30 04 * * * root cback3 store
30 06 * * * root cback3 purge
You should consider adding the --output or -O switch to your cback3
command-line in cron. This will result in larger logs, but could help diagnose
problems when commands like cdrecord or mkisofs fail mysteriously.
You will need to coordinate the collect and purge actions on clients so that
their collect actions complete before the master attempts to stage, and so that
their purge actions do not begin until after the master has completed staging.
Usually, allowing an hour or two between steps should be sufficient. ^[23]
Note
For general information about using cron, see the manpage for crontab(5).
On a Debian system, execution of daily backups is controlled by the file /etc/
cron.d/cedar-backup3. As installed, this file contains several different
settings, all commented out. Uncomment the “Master machine” entries in the
file, and change the lines so that the backup goes off when you want it to.
Configuring your Writer Device
Device Types
In order to execute the store action, you need to know how to identify your
writer device. Cedar Backup supports two kinds of device types: CD writers and
DVD writers. DVD writers are always referenced through a filesystem device name
(i.e. /dev/dvd). CD writers can be referenced either through a SCSI id, or
through a filesystem device name. Which you use depends on your operating
system and hardware.
Devices identified by device name
For all DVD writers, and for CD writers on certain platforms, you will
configure your writer device using only a device name. If your writer device
works this way, you should just specify <target_device> in configuration. You
can either leave <target_scsi_id> blank or remove it completely. The writer
device will be used both to write to the device and for filesystem operations —
for instance, when the media needs to be mounted to run the consistency check.
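For example, the device-related part of the store configuration section for a
DVD writer might look like this fragment (the device path is an example; the
rest of the store section is omitted here):
<store>
<target_device>/dev/dvd</target_device>
</store>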
Devices identified by SCSI id
Cedar Backup can use devices identified by SCSI id only when configured to use
the cdwriter device type.
In order to use a SCSI device with Cedar Backup, you must know both the SCSI id
<target_scsi_id> and the device name <target_device>. The SCSI id will be used
to write to media using cdrecord; and the device name will be used for other
filesystem operations.
A true SCSI device will always have an address scsibus,target,lun (i.e. 1,6,2).
This should hold true on most UNIX-like systems including Linux and the various
BSDs (although I do not have a BSD system to test with currently). The SCSI
address represents the location of your writer device on the one or more SCSI
buses that you have available on your system.
On some platforms, it is possible to reference non-SCSI writer devices (i.e. an
IDE CD writer) using an emulated SCSI id. If you have configured your non-SCSI
writer device to have an emulated SCSI id, provide the filesystem device path
in <target_device> and the SCSI id in <target_scsi_id>, just like for a real
SCSI device.
You should note that in some cases, an emulated SCSI id takes the same form as
a normal SCSI id, while in other cases you might see a method name prepended to
the normal SCSI id (i.e. “ATA:1,1,1”).
Linux Notes
On a Linux system, IDE writer devices often have an emulated SCSI address, which
allows SCSI-based software to access the device through an IDE-to-SCSI
interface. Under these circumstances, the first IDE writer device typically has
an address 0,0,0. However, support for the IDE-to-SCSI interface has been
deprecated and is not well-supported in newer kernels (kernel 2.6.x and later).
Newer Linux kernels can address ATA or ATAPI drives without SCSI emulation by
prepending a “method” indicator to the emulated device address. For instance,
ATA:0,0,0 or ATAPI:0,0,0 are typical values.
However, even this interface is deprecated as of late 2006, so with relatively
new kernels you may be better off using the filesystem device path directly
rather than relying on any SCSI emulation.
Finding your Linux CD Writer
Here are some hints about how to find your Linux CD writer hardware. First, try
to reference your device using the filesystem device path:
cdrecord -prcap dev=/dev/cdrom
Running this command on my hardware gives output that looks like this (just the
top few lines):
Device type : Removable CD-ROM
Version : 0
Response Format: 2
Capabilities :
Vendor_info : 'LITE-ON '
Identification : 'DVDRW SOHW-1673S'
Revision : 'JS02'
Device seems to be: Generic mmc2 DVD-R/DVD-RW.
Drive capabilities, per MMC-3 page 2A:
If this works, and the identifying information at the top of the output looks
like your CD writer device, you've probably found a working configuration.
Place the device path into <target_device> and leave <target_scsi_id> blank.
If this doesn't work, you should try to find an ATA or ATAPI device:
cdrecord -scanbus dev=ATA
cdrecord -scanbus dev=ATAPI
On my development system, I get a result that looks something like this for
ATA:
scsibus1:
1,0,0 100) 'LITE-ON ' 'DVDRW SOHW-1673S' 'JS02' Removable CD-ROM
1,1,0 101) *
1,2,0 102) *
1,3,0 103) *
1,4,0 104) *
1,5,0 105) *
1,6,0 106) *
1,7,0 107) *
Again, if you get a result that you recognize, you have probably found a
working configuration. Place the associated device path (in my case, /dev/cdrom)
into <target_device> and put the emulated SCSI id (in this case, ATA:1,0,0)
into <target_scsi_id>.
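In configuration, that result would look like this fragment of the store
section:
<target_device>/dev/cdrom</target_device>
<target_scsi_id>ATA:1,0,0</target_scsi_id>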
Any further discussion of how to configure your CD writer hardware is outside
the scope of this document. If you have tried the hints above and still can't
get things working, you may want to reference the Linux CDROM HOWTO (http://
www.tldp.org/HOWTO/CDROM-HOWTO) or the ATA RAID HOWTO (http://www.tldp.org/
HOWTO/ATA-RAID-HOWTO/index.html) for more information.
Mac OS X Notes
On a Mac OS X (darwin) system, things get strange. Apple has abandoned
traditional SCSI device identifiers in favor of a system-wide resource id. So,
on a Mac, your writer device will have a name something like
IOCompactDiscServices (for a CD writer) or IODVDServices (for a DVD writer). If
you have multiple drives, the second drive probably has a number appended, i.e.
IODVDServices/2 for the second DVD writer. You can try to figure out what the
name of your device is by grepping through the output of the command ioreg -l.^
[24]
Unfortunately, even if you can figure out what device to use, I can't really
support the store action on this platform. In OS X, the “automount” function of
the Finder interferes significantly with Cedar Backup's ability to mount and
unmount media and write to the CD or DVD hardware. The Cedar Backup writer and
image functionality does work on this platform, but the effort required to
fight the operating system about who owns the media and the device makes it
nearly impossible to execute the store action successfully.
Optimized Blanking Strategy
When the optimized blanking strategy has not been configured, Cedar Backup uses
a simplistic approach: rewritable media is blanked at the beginning of every
week, period.
Since rewritable media can be blanked only a finite number of times before
becoming unusable, some users — especially users of rewritable DVD media with
its large capacity — may prefer to blank the media less often.
If the optimized blanking strategy is configured, Cedar Backup will use a
blanking factor and attempt to determine whether future backups will fit on the
current media. If it looks like backups will fit, then the media will not be
blanked.
This feature will only be useful (assuming a single disc is used for the whole
week's backups) if the estimated total size of the weekly backup is
considerably smaller than the capacity of the media (no more than 50% of the
total media capacity), and only if the size of the backup can be expected to
remain fairly constant over time (no frequent rapid growth expected).
There are two blanking modes: daily and weekly. If the weekly blanking mode is
set, Cedar Backup will only estimate future capacity (and potentially blank the
disc) once per week, on the starting day of the week. If the daily blanking
mode is set, Cedar Backup will estimate future capacity (and potentially blank
the disc) every time it is run. You should only use the daily blanking mode in
conjunction with daily collect configuration, otherwise you will risk losing
data.
If you are using the daily blanking mode, you can typically set the blanking
value to 1.0. This will cause Cedar Backup to blank the media whenever there is
not enough space to store the current day's backup.
If you are using the weekly blanking mode, then finding the correct blanking
factor will require some experimentation. Cedar Backup estimates future
capacity based on the configured blanking factor. The disc will be blanked if
the following relationship is true:
bytes available / (1 + bytes required) ≤ blanking factor
Another way to look at this is to consider the blanking factor as a sort of
(upper) backup growth estimate:
Total size of weekly backup / Full backup size at the start of the week
This ratio can be estimated using a week or two of previous backups. For
instance, take this example, where March 10 is the start of the week and March
4 through March 9 represent the incremental backups from the previous week:
/opt/backup/staging# du -s 2007/03/*
3040 2007/03/01
3044 2007/03/02
6812 2007/03/03
3044 2007/03/04
3152 2007/03/05
3056 2007/03/06
3060 2007/03/07
3056 2007/03/08
4776 2007/03/09
6812 2007/03/10
11824 2007/03/11
In this case, the ratio is approximately 4:
(6812 + 3044 + 3152 + 3056 + 3060 + 3056 + 4776) / 6812 = 3.9571
To be safe, you might choose to configure a factor of 5.0.
Setting a higher value reduces the risk of exceeding media capacity mid-week
but might result in blanking the media more often than is necessary.
If you run out of space mid-week, then the solution is to run the rebuild
action. If this happens frequently, a higher blanking factor value should be
used.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
^[19] See http://www.xml.com/pub/a/98/10/guide0.html for a basic introduction
to XML.
^[20] See the section called “The Backup Process”, in Chapter 2, Basic Concepts
.
^[21] See http://docs.python.org/lib/re-syntax.html
^[22] See https://bitbucket.org/cedarsolutions/cedar-backup3/issues.
^[23] See the section called “Coordination between Master and Clients” in
Chapter 2, Basic Concepts.
^[24] Thanks to the file README.macosX in the cdrtools-2.01+01a01 source tree
for this information
Chapter 6. Official Extensions
Table of Contents
System Information Extension
Amazon S3 Extension
Subversion Extension
MySQL Extension
PostgreSQL Extension
Mbox Extension
Encrypt Extension
Split Extension
Capacity Extension
System Information Extension
The System Information Extension is a simple Cedar Backup extension used to
save off important system recovery information that might be useful when
reconstructing a “broken” system. It is intended to be run either immediately
before or immediately after the standard collect action.
This extension saves off the following information to the configured Cedar
Backup collect directory. The saved-off data is always compressed using bzip2.
• Currently-installed Debian packages via dpkg --get-selections
• Disk partition information via fdisk -l
• System-wide mounted filesystem contents, via ls -laR
The Debian-specific information is only collected on systems where /usr/bin/
dpkg exists.
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>sysinfo</name>
<module>CedarBackup3.extend.sysinfo</module>
<function>executeAction</function>
<index>99</index>
</action>
</extensions>
This extension relies on the options and collect configuration sections in the
standard Cedar Backup configuration file, but requires no new configuration of
its own.
Amazon S3 Extension
The Amazon S3 extension writes data to Amazon S3 cloud storage rather than to
physical media. It is intended to replace the store action, but you can also
use it alongside the store action if you'd prefer to back up your data in more
than one place. This extension must be run after the stage action.
The underlying functionality relies on the AWS CLI toolset. Before you use this
extension, you need to set up your Amazon S3 account and configure AWS CLI as
detailed in Amazon's setup guide. The extension assumes that the backup is
being executed as root, and switches over to the configured backup user to run
the aws program. So, make sure you configure the AWS CLI tools as the backup
user and not root. (This is different from the amazons3 sync tool, which
executes AWS CLI commands as the same user that is running the tool.)
When using physical media via the standard store action, there is an implicit
limit to the size of a backup, since a backup must fit on a single disc. Since
there is no physical media, no such limit exists for Amazon S3 backups. This
leaves open the possibility that Cedar Backup might construct an
unexpectedly-large backup that the administrator is not aware of. Over time,
this might become expensive, either in terms of network bandwidth or in terms
of Amazon S3 storage and I/O charges. To mitigate this risk, set a reasonable
maximum size using the configuration elements shown below. If the backup fails,
you have a chance to review what made the backup larger than you expected, and
you can either correct the problem (i.e. remove a large temporary directory
that got inadvertently included in the backup) or change configuration to take
into account the new "normal" maximum size.
You can optionally configure Cedar Backup to encrypt data before sending it to
S3. To do that, provide a complete command line using the ${input} and
${output} variables to represent the original input file and the encrypted
output file. This command will be executed as the backup user.
For instance, you can use something like this with GPG:
/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}
The GPG mechanism depends on a strong passphrase for security. One way to
generate a strong passphrase is using your system random number generator,
i.e.:
dd if=/dev/urandom count=20 bs=1 | xxd -ps
(See StackExchange for more details about that advice.) If you decide to use
encryption, make sure you save off the passphrase in a safe place, so you can
get at your backup data later if you need to. And obviously, make sure to set
permissions on the passphrase file so it can only be read by the backup user.
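For instance, continuing the example above, something like this restricts the
passphrase file to the backup user:
chown backup:backup /home/backup/.passphrase
chmod 600 /home/backup/.passphrase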
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>amazons3</name>
<module>CedarBackup3.extend.amazons3</module>
<function>executeAction</function>
<index>201</index> <!-- just after stage -->
</action>
</extensions>
This extension relies on the options and staging configuration sections in the
standard Cedar Backup configuration file, and then also requires its own
amazons3 configuration section. This is an example configuration section with
encryption disabled:
<amazons3>
<s3_bucket>example.com-backup/staging</s3_bucket>
</amazons3>
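And here is a sketch of a fuller section that enables encryption and size
limits, using the elements described below (all of the values are
illustrative):
<amazons3>
<warn_midnite>Y</warn_midnite>
<s3_bucket>example.com-backup/staging</s3_bucket>
<encrypt>/usr/bin/gpg -c --no-use-agent --batch --yes --passphrase-file /home/backup/.passphrase -o ${output} ${input}</encrypt>
<full_size_limit>1.1 GB</full_size_limit>
<incr_size_limit>250 MB</incr_size_limit>
</amazons3>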
The following elements are part of the Amazon S3 configuration section:
warn_midnite
Whether to generate warnings for crossing midnite.
This field indicates whether warnings should be generated if the Amazon S3
operation has to cross a midnite boundary in order to find data to write to
the cloud. For instance, a warning would be generated if valid data was
only found in the day before or day after the current day.
Configuration for some users is such that the amazons3 operation will
always cross a midnite boundary, so they will not care about this warning.
Other users will expect to never cross a boundary, and want to be notified
that something “strange” might have happened.
This field is optional. If it doesn't exist, then N will be assumed.
Restrictions: Must be a boolean (Y or N).
s3_bucket
The name of the Amazon S3 bucket that data will be written to.
This field configures the S3 bucket that your data will be written to. In
S3, buckets are named globally. For uniqueness, you would typically use the
name of your domain followed by some suffix, such as example.com-backup. If
you want, you can specify a subdirectory within the bucket, such as
example.com-backup/staging.
Restrictions: Must be non-empty.
encrypt
Command used to encrypt backup data before upload to S3
If this field is provided, then data will be encrypted before it is
uploaded to Amazon S3. You must provide the entire command used to encrypt
a file, including the ${input} and ${output} variables. An example GPG
command is shown above, but you can use any mechanism you choose. The
command will be run as the configured backup user.
Restrictions: If provided, must be non-empty.
full_size_limit
Maximum size of a full backup
If this field is provided, then a size limit will be applied to full
backups. If the total size of the selected staging directory is greater
than the limit, then the backup will fail.
You can enter this value in two different forms. It can either be a simple
number, in which case the value is assumed to be in bytes; or it can be a
number followed by a unit (KB, MB, GB).
Valid examples are “10240”, “250 MB” or “1.1 GB”.
Restrictions: Must be a value as described above, greater than zero.
incr_size_limit
Maximum size of an incremental backup
If this field is provided, then a size limit will be applied to incremental
backups. If the total size of the selected staging directory is greater
than the limit, then the backup will fail.
You can enter this value in two different forms. It can either be a simple
number, in which case the value is assumed to be in bytes; or it can be a
number followed by a unit (KB, MB, GB).
Valid examples are “10240”, “250 MB” or “1.1 GB”.
Restrictions: Must be a value as described above, greater than zero.
Subversion Extension
The Subversion Extension is a Cedar Backup extension used to back up Subversion
^[25] version control repositories via the Cedar Backup command line. It is
intended to be run either immediately before or immediately after the standard
collect action.
Each configured Subversion repository can be backed up using the same collect
modes allowed for filesystems in the standard Cedar Backup collect action
(weekly, daily, incremental) and the output can be compressed using either gzip
or bzip2.
There are two different kinds of Subversion repositories at this writing: BDB
(Berkeley Database) and FSFS (a “filesystem within a filesystem”). This
extension backs up both kinds of repositories in the same way, using svnadmin
dump in an incremental mode.
It turns out that FSFS repositories can also be backed up just like any other
filesystem directory. If you would rather do the backup that way, then use the
normal collect action rather than this extension. If you decide to do that, be
sure to consult the Subversion documentation and make sure you understand the
limitations of this kind of backup.
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>subversion</name>
<module>CedarBackup3.extend.subversion</module>
<function>executeAction</function>
<index>99</index>
</action>
</extensions>
This extension relies on the options and collect configuration sections in the
standard Cedar Backup configuration file, and then also requires its own
subversion configuration section. This is an example Subversion configuration
section:
<subversion>
<collect_mode>incr</collect_mode>
<compress_mode>bzip2</compress_mode>
<repository>
<abs_path>/opt/public/svn/docs</abs_path>
</repository>
<repository>
<abs_path>/opt/public/svn/web</abs_path>
<compress_mode>gzip</compress_mode>
</repository>
<repository_dir>
<abs_path>/opt/private/svn</abs_path>
<collect_mode>daily</collect_mode>
</repository_dir>
</subversion>
The following elements are part of the Subversion configuration section:
collect_mode
Default collect mode.
The collect mode describes how frequently a Subversion repository is backed
up. The Subversion extension recognizes the same collect modes as the
standard Cedar Backup collect action (see Chapter 2, Basic Concepts).
This value is the collect mode that will be used by default during the
backup process. Individual repositories (below) may override this value. If
all individual repositories provide their own value, then this default
value may be omitted from configuration.
Note: if your backup device does not support multisession discs, then you
should probably use the daily collect mode to avoid losing data.
Restrictions: Must be one of daily, weekly or incr.
compress_mode
Default compress mode.
Subversion repository backups are just specially-formatted text files,
and often compress quite well using gzip or bzip2. The compress mode
describes how the backed-up data will be compressed, if at all.
This value is the compress mode that will be used by default during the
backup process. Individual repositories (below) may override this value. If
all individual repositories provide their own value, then this default
value may be omitted from configuration.
Restrictions: Must be one of none, gzip or bzip2.
repository
A Subversion repository to be collected.
This is a subsection which contains information about a specific Subversion
repository to be backed up.
This section can be repeated as many times as is necessary. At least one
repository or repository directory must be configured.
The repository subsection contains the following fields:
collect_mode
Collect mode for this repository.
This field is optional. If it doesn't exist, the backup will use the
default collect mode.
Restrictions: Must be one of daily, weekly or incr.
compress_mode
Compress mode for this repository.
This field is optional. If it doesn't exist, the backup will use the
default compress mode.
Restrictions: Must be one of none, gzip or bzip2.
abs_path
Absolute path of the Subversion repository to back up.
Restrictions: Must be an absolute path.
repository_dir
A Subversion parent repository directory to be collected.
This is a subsection which contains information about a Subversion parent
repository directory to be backed up. Any subdirectory immediately within
this directory is assumed to be a Subversion repository, and will be backed
up.
This section can be repeated as many times as is necessary. At least one
repository or repository directory must be configured.
The repository_dir subsection contains the following fields:
collect_mode
Collect mode for this repository.
This field is optional. If it doesn't exist, the backup will use the
default collect mode.
Restrictions: Must be one of daily, weekly or incr.
compress_mode
Compress mode for this repository.
This field is optional. If it doesn't exist, the backup will use the
default compress mode.
Restrictions: Must be one of none, gzip or bzip2.
abs_path
Absolute path of the Subversion parent repository directory to back up.
Restrictions: Must be an absolute path.
exclude
List of paths or patterns to exclude from the backup.
This is a subsection which contains a set of paths and patterns to be
excluded within this subversion parent directory.
This section is entirely optional, and if it exists can also be empty.
The exclude subsection can contain one or more of each of the following
fields:
rel_path
A relative path to be excluded from the backup.
The path is assumed to be relative to the subversion parent
directory itself. For instance, if the configured subversion parent
directory is /opt/svn a configured relative path of software would
exclude the path /opt/svn/software.
This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.
pattern
A pattern to be excluded from the backup.
The pattern must be a Python regular expression. ^[21] It is
assumed to be bounded at front and back by the beginning and end of
the string (i.e. it is treated as if it begins with ^ and ends with
$).
This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty
MySQL Extension
The MySQL Extension is a Cedar Backup extension used to back up MySQL ^[26]
databases via the Cedar Backup command line. It is intended to be run either
immediately before or immediately after the standard collect action.
Note
This extension always produces a full backup. There is currently no facility
for making incremental backups. If/when someone has a need for this and can
describe how to do it, I will update this extension or provide another.
The backup is done via the mysqldump command included with the MySQL product.
Output can be compressed using gzip or bzip2. Administrators can configure the
extension either to back up all databases or to back up only specific
databases.
The extension assumes that all configured databases can be backed up by a
single user. Often, the “root” database user will be used. An alternative is to
create a separate MySQL “backup” user and grant that user rights to read (but
not write) various databases as needed. This second option is probably your
best choice.
Warning
The extension accepts a username and password in configuration. However, you
probably do not want to list those values in Cedar Backup configuration. This
is because Cedar Backup will provide these values to mysqldump via the
command-line --user and --password switches, which will be visible to other
users in the process listing.
Instead, you should configure the username and password in one of MySQL's
configuration files. Typically, that would be done by putting a stanza like
this in /root/.my.cnf:
[mysqldump]
user = root
password = <secret>
Of course, if you are executing the backup as a user other than root, then you
would create the file in that user's home directory instead.
As a side note, it is also possible to configure .my.cnf such that Cedar Backup
can back up a remote database server:
[mysqldump]
host = remote.host
For this to work, you will also need to grant privileges properly for the user
which is executing the backup. See your MySQL documentation for more
information about how this can be done.
Regardless of whether you are using ~/.my.cnf or /etc/cback3.conf to store
database login and password information, you should be careful about who is
allowed to view that information. Typically, this means locking down
permissions so that only the file owner can read the file contents (i.e. use
mode 0600).
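For example:
chown root:root /root/.my.cnf
chmod 0600 /root/.my.cnf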
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>mysql</name>
<module>CedarBackup3.extend.mysql</module>
<function>executeAction</function>
<index>99</index>
</action>
</extensions>
This extension relies on the options and collect configuration sections in the
standard Cedar Backup configuration file, and then also requires its own mysql
configuration section. This is an example MySQL configuration section:
<mysql>
<compress_mode>bzip2</compress_mode>
<all>Y</all>
</mysql>
If you have decided to configure login information in Cedar Backup rather than
using MySQL configuration, then you would add the username and password fields
to configuration:
<mysql>
<user>root</user>
<password>password</password>
<compress_mode>bzip2</compress_mode>
<all>Y</all>
</mysql>
The following elements are part of the MySQL configuration section:
user
Database user.
The database user that the backup should be executed as. Even if you list
more than one database (below) all backups must be done as the same user.
Typically, this would be root (i.e. the database root user, not the system
root user).
This value is optional. You should probably configure the username and
password in MySQL configuration instead, as discussed above.
Restrictions: If provided, must be non-empty.
password
Password associated with the database user.
This value is optional. You should probably configure the username and
password in MySQL configuration instead, as discussed above.
Restrictions: If provided, must be non-empty.
compress_mode
Compress mode.
MySQL database dumps are just specially-formatted text files, and often
compress quite well using gzip or bzip2. The compress mode describes how
the backed-up data will be compressed, if at all.
Restrictions: Must be one of none, gzip or bzip2.
all
Indicates whether to back up all databases.
If this value is Y, then all MySQL databases will be backed up. If this
value is N, then one or more specific databases must be specified (see
below).
If you choose this option, the entire database backup will go into one big
dump file.
Restrictions: Must be a boolean (Y or N).
database
Named database to be backed up.
If you choose to specify individual databases rather than all databases,
then each database will be backed up into its own dump file.
This field can be repeated as many times as is necessary. At least one
database must be configured if the all option (above) is set to N. You may
not configure any individual databases if the all option is set to Y.
Restrictions: Must be non-empty.
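For instance, a sketch of per-database configuration with the all option set
to N might look like this (the database names are examples):
<mysql>
<compress_mode>bzip2</compress_mode>
<all>N</all>
<database>db1</database>
<database>db2</database>
</mysql>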
PostgreSQL Extension
Community-contributed Extension
This is a community-contributed extension provided by Antoine Beaupre ("The
Anarcat"). I have added regression tests around the configuration parsing code
and I will maintain this section in the user manual based on his source code
documentation.
Unfortunately, I don't have any PostgreSQL databases with which to test the
functional code. While I have code-reviewed the code and it looks both sensible
and safe, I have to rely on the author to ensure that it works properly.
The PostgreSQL Extension is a Cedar Backup extension used to back up PostgreSQL
^[27] databases via the Cedar Backup command line. It is intended to be run
either immediately before or immediately after the standard collect action.
The backup is done via the pg_dump or pg_dumpall commands included with the
PostgreSQL product. Output can be compressed using gzip or bzip2.
Administrators can configure the extension either to back up all databases or
to back up only specific databases.
The extension assumes that the current user has passwordless access to the
database since there is no easy way to pass a password to the pg_dump client.
This can be accomplished using appropriate configuration in the pg_hba.conf
file.
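As an illustration only, a permissive pg_hba.conf line granting passwordless
local access to a user named backup might look like this. Note that the trust
method is very permissive; consult the PostgreSQL documentation for safer
alternatives such as peer:
# TYPE  DATABASE  USER    METHOD
local   all       backup  trust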
This extension always produces a full backup. There is currently no facility
for making incremental backups.
Warning
Once you place PostgreSQL configuration into the Cedar Backup configuration
file, you should be careful about who is allowed to see that information. This
is because PostgreSQL configuration will contain information about available
PostgreSQL databases and usernames. Typically, you might want to lock down
permissions so that only the file owner can read the file contents (i.e. use
mode 0600).
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>postgresql</name>
<module>CedarBackup3.extend.postgresql</module>
<function>executeAction</function>
<index>99</index>
</action>
</extensions>
This extension relies on the options and collect configuration sections in the
standard Cedar Backup configuration file, and then also requires its own
postgresql configuration section. This is an example PostgreSQL configuration
section:
<postgresql>
<compress_mode>bzip2</compress_mode>
<user>username</user>
<all>Y</all>
</postgresql>
If you decide to back up specific databases, then you would list them
individually, like this:
<postgresql>
<compress_mode>bzip2</compress_mode>
<user>username</user>
<all>N</all>
<database>db1</database>
<database>db2</database>
</postgresql>
The following elements are part of the PostgreSQL configuration section:
user
Database user.
The database user that the backup should be executed as. Even if you list
more than one database (below), all backups must be done as the same user.
This value is optional.
Consult your PostgreSQL documentation for information on how to configure a
default database user outside of Cedar Backup, and for information on how
to specify a database password when you configure a user within Cedar
Backup. You will probably want to modify pg_hba.conf.
Restrictions: If provided, must be non-empty.
compress_mode
Compress mode.
PostgreSQL database dumps are just specially-formatted text files, and
often compress quite well using gzip or bzip2. The compress mode describes
how the backed-up data will be compressed, if at all.
Restrictions: Must be one of none, gzip or bzip2.
all
Indicates whether to back up all databases.
If this value is Y, then all PostgreSQL databases will be backed up. If
this value is N, then one or more specific databases must be specified (see
below).
If you choose this option, the entire database backup will go into one big
dump file.
Restrictions: Must be a boolean (Y or N).
database
Named database to be backed up.
If you choose to specify individual databases rather than all databases,
then each database will be backed up into its own dump file.
This field can be repeated as many times as is necessary. At least one
database must be configured if the all option (above) is set to N. You may
not configure any individual databases if the all option is set to Y.
Restrictions: Must be non-empty.
Mbox Extension
The Mbox Extension is a Cedar Backup extension used to incrementally back up
UNIX-style “mbox” mail folders via the Cedar Backup command line. It is
intended to be run either immediately before or immediately after the standard
collect action.
Mbox mail folders are not well-suited to being backed up by the normal Cedar
Backup incremental backup process. This is because active folders are typically
appended to on a daily basis. This forces the incremental backup process to
back them up every day in order to avoid losing data. This can result in quite
a bit of wasted space when backing up large mail folders.
What the mbox extension does is leverage the grepmail utility to back up only
email messages which have been received since the last incremental backup. This
way, even if a folder is added to every day, only the recently-added messages
are backed up. This can potentially save a lot of space.
Each configured mbox file or directory can be backed up using the same collect
modes allowed for filesystems in the standard Cedar Backup collect action
(weekly, daily, incremental) and the output can be compressed using either gzip
or bzip2.
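To get a feel for the mechanism, the sketch below pulls only messages received
since a cutoff date out of a folder. The grepmail -d option shown is a standard
date-range filter, but the exact command the extension builds internally may
differ:
import subprocess
# grepmail -d accepts date ranges like "since 24 Jun 2006".  Note that,
# like grep, grepmail may exit non-zero when nothing matches.
with open("/tmp/greylist-recent.mbox", "wb") as out:
    subprocess.run(["grepmail", "-d", "since 24 Jun 2006",
                    "/home/user1/mail/greylist"], stdout=out)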
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>mbox</name>
<module>CedarBackup3.extend.mbox</module>
<function>executeAction</function>
<index>99</index>
</action>
</extensions>
This extension relies on the options and collect configuration sections in the
standard Cedar Backup configuration file, and then also requires its own mbox
configuration section. This is an example mbox configuration section:
<mbox>
<collect_mode>incr</collect_mode>
<compress_mode>gzip</compress_mode>
<file>
<abs_path>/home/user1/mail/greylist</abs_path>
<collect_mode>daily</collect_mode>
</file>
<dir>
<abs_path>/home/user2/mail</abs_path>
</dir>
<dir>
<abs_path>/home/user3/mail</abs_path>
<exclude>
<rel_path>spam</rel_path>
<pattern>.*debian.*</pattern>
</exclude>
</dir>
</mbox>
Configuration is much like the standard collect action. Differences come from
the fact that mbox directories are not collected recursively.
Unlike collect configuration, exclusion information can only be configured at
the mbox directory level (there are no global exclusions). Another difference
is that no absolute exclusion paths are allowed — only relative path exclusions
and patterns.
The following elements are part of the mbox configuration section:
collect_mode
Default collect mode.
The collect mode describes how frequently an mbox file or directory is
backed up. The mbox extension recognizes the same collect modes as the
standard Cedar Backup collect action (see Chapter 2, Basic Concepts).
This value is the collect mode that will be used by default during the
backup process. Individual files or directories (below) may override this
value. If all individual files or directories provide their own value, then
this default value may be omitted from configuration.
Note: if your backup device does not support multisession discs, then you
should probably use the daily collect mode to avoid losing data.
Restrictions: Must be one of daily, weekly or incr.
compress_mode
Default compress mode.
Mbox file or directory backups are just text, and often compress quite well
using gzip or bzip2. The compress mode describes how the backed-up data
will be compressed, if at all.
This value is the compress mode that will be used by default during the
backup process. Individual files or directories (below) may override this
value. If all individual files or directories provide their own value, then
this default value may be omitted from configuration.
Restrictions: Must be one of none, gzip or bzip2.
file
An individual mbox file to be collected.
This is a subsection which contains information about an individual mbox
file to be backed up.
This section can be repeated as many times as is necessary. At least one
mbox file or directory must be configured.
The file subsection contains the following fields:
collect_mode
Collect mode for this file.
This field is optional. If it doesn't exist, the backup will use the
default collect mode.
Restrictions: Must be one of daily, weekly or incr.
compress_mode
Compress mode for this file.
This field is optional. If it doesn't exist, the backup will use the
default compress mode.
Restrictions: Must be one of none, gzip or bzip2.
abs_path
Absolute path of the mbox file to back up.
Restrictions: Must be an absolute path.
dir
An mbox directory to be collected.
This is a subsection which contains information about an mbox directory to
be backed up. An mbox directory is a directory containing mbox files. Every
file in an mbox directory is assumed to be an mbox file. Mbox directories
are not collected recursively. Only the files immediately within the
configured directory will be backed up, and any subdirectories will be
ignored.
This section can be repeated as many times as is necessary. At least one
mbox file or directory must be configured.
The dir subsection contains the following fields:
collect_mode
Collect mode for this directory.
This field is optional. If it doesn't exist, the backup will use the
default collect mode.
Restrictions: Must be one of daily, weekly or incr.
compress_mode
Compress mode for this directory.
This field is optional. If it doesn't exist, the backup will use the
default compress mode.
Restrictions: Must be one of none, gzip or bzip2.
abs_path
Absolute path of the mbox directory to back up.
Restrictions: Must be an absolute path.
exclude
List of paths or patterns to exclude from the backup.
This is a subsection which contains a set of paths and patterns to be
excluded within this mbox directory.
This section is entirely optional, and if it exists can also be empty.
The exclude subsection can contain one or more of each of the following
fields:
rel_path
A relative path to be excluded from the backup.
The path is assumed to be relative to the mbox directory itself.
For instance, if the configured mbox directory is /home/user2/mail
a configured relative path of SPAM would exclude the path /home/
user2/mail/SPAM.
This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.
pattern
A pattern to be excluded from the backup.
The pattern must be a Python regular expression. ^[21] It is
assumed to be bounded at front and back by the beginning and end of
the string (i.e. it is treated as if it begins with ^ and ends with
$).
This field can be repeated as many times as is necessary.
Restrictions: Must be non-empty.
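In other words, a pattern must match an entire relative file name, not just a
substring. A quick illustration using Python's re module:
import re
# Because patterns are bounded at front and back, matching behaves like
# re.fullmatch rather than re.search.
print(bool(re.fullmatch(".*debian.*", "lists.debian.user")))  # True
print(bool(re.fullmatch(".*debian.*", "inbox")))              # False
print(bool(re.fullmatch("debian", "lists.debian.user")))      # False: partial match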
Encrypt Extension
The Encrypt Extension is a Cedar Backup extension used to encrypt backups. It
does this by encrypting the contents of a master's staging directory each day
after the stage action is run. This way, backed-up data is encrypted both when
sitting on the master and when written to disc. This extension must be run
before the standard store action, otherwise unencrypted data will be written to
disc.
There are several different ways encryption could have been built into or
layered onto Cedar Backup. I asked the mailing list for opinions on the
subject in January 2007 and did not get a lot of feedback, so I chose the
option that was simplest to understand and simplest to implement. If other
encryption use cases make themselves known in the future, this extension can be
enhanced or replaced.
Currently, this extension supports only GPG. However, it would be
straightforward to support other public-key encryption mechanisms, such as
OpenSSL.
Warning
If you decide to encrypt your backups, be absolutely sure that you have your
GPG secret key saved off someplace safe — someplace other than on your backup
disc. If you lose your secret key, your backup will be useless.
I suggest that before you rely on this extension, you should execute a dry run
and make sure you can successfully decrypt the backup that is written to disc.
Before configuring the Encrypt extension, you must configure GPG. Either create
a new keypair or use an existing one. Determine which user will execute your
backup (typically root) and have that user import and lsign the public half of
the keypair. Then, save off the secret half of the keypair someplace safe,
apart from your backup (e.g. on a floppy disk or USB drive). Make sure you know
the recipient name associated with the public key because you'll need it to
configure Cedar Backup. (If you can run gpg -e -r "Recipient Name" file.txt and
it executes cleanly with no user interaction required, you should be OK.)
An encrypted backup has the same file structure as a normal backup, so all of
the instructions in Appendix C, Data Recovery apply. The only difference is
that encrypted files will have an additional .gpg extension (so for instance
file.tar.gz becomes file.tar.gz.gpg). To recover the data, simply log on
as a user which has access to the secret key and decrypt the .gpg file that you
are interested in. Then, recover the data as usual.
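If you would rather script the decryption step, something like this sketch
works, assuming gpg is installed and run as a user with access to the secret
key:
import subprocess
# gpg writes the decrypted plaintext to stdout; capture it under the
# original name (file.tar.gz.gpg becomes file.tar.gz again).
with open("file.tar.gz", "wb") as out:
    subprocess.run(["gpg", "--decrypt", "file.tar.gz.gpg"],
                   stdout=out, check=True)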
Note: I am being intentionally vague about how to configure and use GPG,
because I do not want to encourage neophytes to blindly use this extension. If
you do not already understand GPG well enough to follow the two paragraphs
above, do not use this extension. Instead, before encrypting your backups,
check out the excellent GNU Privacy Handbook at http://www.gnupg.org/gph/en/
manual.html and gain an understanding of how encryption can help you or hurt
you.
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>encrypt</name>
<module>CedarBackup3.extend.encrypt</module>
<function>executeAction</function>
<index>301</index>
</action>
</extensions>
This extension relies on the options and staging configuration sections in the
standard Cedar Backup configuration file, and then also requires its own
encrypt configuration section. This is an example Encrypt configuration
section:
<encrypt>
<encrypt_mode>gpg</encrypt_mode>
<encrypt_target>Backup User</encrypt_target>
</encrypt>
The following elements are part of the Encrypt configuration section:
encrypt_mode
Encryption mode.
This value specifies which encryption mechanism will be used by the
extension.
Currently, only the GPG public-key encryption mechanism is supported.
Restrictions: Must be gpg.
encrypt_target
Encryption target.
The value in this field is dependent on the encryption mode. For the gpg
mode, this is the name of the recipient whose public key will be used to
encrypt the backup data, i.e. the value accepted by gpg -r.
Split Extension
The Split Extension is a Cedar Backup extension used to split up large files
within staging directories. It is probably only useful in combination with the
cback3-span command, which requires individual files within staging directories
to each be smaller than a single disc.
You would normally run this action immediately after the standard stage action,
but you could also choose to run it by hand immediately before running
cback3-span.
The split extension uses the standard UNIX split tool to split the large files
up. This tool simply splits the files at fixed byte boundaries. It has no
knowledge of file formats.
Note: this means that in order to recover the data in your original large file,
you must have every file that the original file was split into. Think carefully
about whether this is what you want. It might not sound like a huge limitation.
However, cback3-span might put an individual file on any disc in a set — the
files split from one larger file will not necessarily be together. That means
you will probably need every disc in your backup set in order to recover any
data from the backup set.
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>split</name>
<module>CedarBackup3.extend.split</module>
<function>executeAction</function>
<index>299</index>
</action>
</extensions>
This extension relies on the options and staging configuration sections in the
standard Cedar Backup configuration file, and then also requires its own split
configuration section. This is an example Split configuration section:
<split>
<size_limit>250 MB</size_limit>
<split_size>100 MB</split_size>
</split>
The following elements are part of the Split configuration section:
size_limit
Size limit.
Files with a size strictly larger than this limit will be split by the
extension.
You can enter this value in two different forms. It can either be a simple
number, in which case the value is assumed to be in bytes; or it can be a
number followed by a unit (KB, MB, GB).
Valid examples are “10240”, “250 MB” or “1.1 GB”.
Restrictions: Must be a size as described above.
split_size
Split size.
This is the size of the chunks that a large file will be split into. The
final chunk may be smaller if the split size doesn't divide evenly into the
file size. For example, splitting a 250 MB file with a 100 MB split size
yields two 100 MB chunks and one 50 MB chunk.
You can enter this value in two different forms. It can either be a simple
number, in which case the value is assumed to be in bytes; or it can be a
number followed by a unit (KB, MB, GB).
Valid examples are “10240”, “250 MB” or “1.1 GB”.
Restrictions: Must be a size as described above.
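To make the size format concrete, here is a rough sketch of how such strings
can be interpreted. The assumption that 1 KB is 1024 bytes is mine; the actual
Cedar Backup parsing code may differ:
# Hypothetical helper, for illustration only.
UNITS = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def parse_size(value):
    parts = value.split()
    if len(parts) == 1:
        return float(parts[0])        # bare number: already in bytes
    number, unit = parts
    return float(number) * UNITS[unit]

print(parse_size("10240"))    # 10240.0
print(parse_size("250 MB"))   # 262144000.0
print(parse_size("1.1 GB"))   # roughly 1181116006.4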
Capacity Extension
The capacity extension checks the current capacity of the media in the writer
and prints a warning if the media exceeds an indicated capacity. The capacity
is indicated either by a maximum percentage utilized or by a minimum number of
bytes that must remain unused.
This action can be run at any time, but is probably best run as the last action
on any given day, so you get as much notice as possible that your media is full
and needs to be replaced.
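The check itself is simple arithmetic. This sketch shows the logic behind both
configuration styles; it is illustrative only, since the real extension reads
utilization from the configured writer:
def media_full(used_bytes, total_bytes, max_percentage=None, min_bytes=None):
    # Warn when utilization exceeds the percentage limit...
    if max_percentage is not None:
        return 100.0 * used_bytes / total_bytes > max_percentage
    # ...or when less than min_bytes remains unused.
    return total_bytes - used_bytes < min_bytes

print(media_full(62 * 1024 ** 2, 64 * 1024 ** 2, max_percentage=95.5))      # True
print(media_full(40 * 1024 ** 2, 64 * 1024 ** 2, min_bytes=16 * 1024 ** 2)) # False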
To enable this extension, add the following section to the Cedar Backup
configuration file:
<extensions>
<action>
<name>capacity</name>
<module>CedarBackup3.extend.capacity</module>
<function>executeAction</function>
<index>299</index>
</action>
</extensions>
This extension relies on the options and store configuration sections in the
standard Cedar Backup configuration file, and then also requires its own
capacity configuration section. This is an example Capacity configuration
section that configures the extension to warn if the media is more than 95.5%
full:
<capacity>
<max_percentage>95.5</max_percentage>
</capacity>
This example configures the extension to warn if the media has fewer than 16 MB
free:
<capacity>
<min_bytes>16 MB</min_bytes>
</capacity>
The following elements are part of the Capacity configuration section:
max_percentage
Maximum percentage of the media that may be utilized.
You must provide either this value or the min_bytes value.
Restrictions: Must be a floating point number between 0.0 and 100.0.
min_bytes
Minimum number of free bytes that must be available.
You can enter this value in two different forms. It can either be a simple
number, in which case the value is assumed to be in bytes; or it can be a
number followed by a unit (KB, MB, GB).
Valid examples are “10240”, “250 MB” or “1.1 GB”.
You must provide either this value or the max_percentage value.
Restrictions: Must be a byte quantity as described above.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
^[25] See http://subversion.org
^[26] See http://www.mysql.com
^[27] See http://www.postgresql.org/
Appendix A. Extension Architecture Interface
The Cedar Backup Extension Architecture Interface is the application
programming interface used by third-party developers to write Cedar Backup
extensions. This appendix briefly specifies the interface in enough detail for
someone to successfully implement an extension.
You will recall that Cedar Backup extensions are third-party pieces of code
which extend Cedar Backup's functionality. Extensions can be invoked from the
Cedar Backup command line and are allowed to place their configuration in Cedar
Backup's configuration file.
There is a one-to-one mapping between a command-line extended action and an
extension function. The mapping is configured in the Cedar Backup configuration
file using a section something like this:
<extensions>
<action>
<name>database</name>
<module>foo</module>
<function>bar</function>
<index>101</index>
</action>
</extensions>
In this case, the action “database” has been mapped to the extension function
foo.bar().
Extension functions can take any actions they would like to once they have been
invoked, but must abide by these rules:
1. Extensions may not write to stdout or stderr using functions such as print
or sys.stdout.write.
2. All logging must take place using the Python logging facility.
Flow-of-control logging should happen on the CedarBackup3.log topic.
Authors can assume that ERROR will always go to the terminal, that INFO and
WARN will always be logged, and that DEBUG will be ignored unless debugging
is enabled.
3. Any time an extension invokes a command-line utility, it must be done
through the CedarBackup3.util.executeCommand function. This will help keep
Cedar Backup safer from format-string attacks, and will make it easier to
consistently log command-line process output.
4. Extensions may not return any value.
5. Extensions must throw a Python exception containing a descriptive message
if processing fails. Extension authors can use their judgement as to what
constitutes failure; however, any problems during execution should result
in either a thrown exception or a logged message.
6. Extensions may rely only on Cedar Backup functionality that is advertised
as being part of the public interface. This means that extensions cannot
directly make use of methods, functions or values starting with the _
character. Furthermore, extensions should only rely on parts of the public
interface that are documented in the online Epydoc documentation.
7. Extension authors are encouraged to extend the Cedar Backup public
interface through normal methods of inheritance. However, no extension is
allowed to directly change Cedar Backup code in a way that would affect how
Cedar Backup itself executes when the extension has not been invoked. For
instance, extensions would not be allowed to add new command-line options
or new writer types.
8. Extensions must be written to assume an empty locale set (no $LC_*
settings) and $LANG=C. For the typical open-source software project, this
would imply writing output-parsing code against the English localization
(if any). The executeCommand function does sanitize the environment to
enforce this configuration.
Extension functions take three arguments: the path to configuration on disk, a
CedarBackup3.cli.Options object representing the command-line options in
effect, and a CedarBackup3.config.Config object representing parsed standard
configuration.
def function(configPath, options, config):
"""Sample extension function."""
pass
This interface is structured so that simple extensions can use standard
configuration without having to parse it for themselves, but more complicated
extensions can get at the configuration file on disk and parse it again as
needed.
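For illustration, here is a minimal extension function that follows the rules
above. Everything specific to the hypothetical “database” action is made up,
and the attribute names on the config object are assumptions based on the
configuration sections described earlier:
import logging

# Child of the required CedarBackup3.log topic (rule 2).
logger = logging.getLogger("CedarBackup3.log.extend.database")

def executeAction(configPath, options, config):
    """Hypothetical 'database' extended action."""
    logger.info("Executing database extended action.")
    if config.options is None or config.collect is None:
        raise ValueError("Cedar Backup configuration is not properly filled in.")
    # A real extension would do its work here, invoking any command-line
    # utilities through CedarBackup3.util.executeCommand (rule 3), returning
    # nothing (rule 4) and raising an exception on failure (rule 5).
    logger.info("Completed database extended action.")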
The interface to the CedarBackup3.cli.Options and CedarBackup3.config.Config
classes has been thoroughly documented using Epydoc, and the documentation is
available on the Cedar Backup website. The interface is guaranteed to change
only in backwards-compatible ways unless the Cedar Backup major version number
is bumped (i.e. from 2 to 3).
If an extension needs to add its own configuration information to the Cedar
Backup configuration file, this extra configuration must be added in a new
configuration section using a name that does not conflict with standard
configuration or other known extensions.
For instance, our hypothetical database extension might require configuration
indicating the path to some repositories to back up. This information might go
into a section something like this:
<database>
<repository>/path/to/repo1</repository>
<repository>/path/to/repo2</repository>
</database>
In order to read this new configuration, the extension code can either inherit
from the Config object and create a subclass that knows how to parse the new
database config section, or can write its own code to parse whatever it needs
out of the file. Either way, the resulting code is completely independent of
the standard Cedar Backup functionality.
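As a sketch of the second approach, the hypothetical database extension could
read its own section with nothing more than the standard library, assuming the
section sits at the top level of the configuration file as shown above:
import xml.etree.ElementTree as ET

def parseDatabaseConfig(configPath):
    # Pull the <repository> values out of our own <database> section,
    # ignoring the rest of the Cedar Backup configuration file.
    tree = ET.parse(configPath)
    section = tree.getroot().find("database")
    if section is None:
        raise ValueError("Configuration contains no database section.")
    return [element.text for element in section.findall("repository")]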
Appendix B. Dependencies
Python 3.4 (or later)
┌────────┬──────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼──────────────────────────────────────────────────────────┤
│upstream│http://www.python.org │
├────────┼──────────────────────────────────────────────────────────┤
│Debian │http://packages.debian.org/stable/python/python3.4 │
├────────┼──────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=python3│
└────────┴──────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
RSH Server and Client
Although Cedar Backup will technically work with any RSH-compatible server
and client pair (such as the classic “rsh” client), most users should only
use an SSH (secure shell) server and client.
The de facto standard today is OpenSSH. Some systems package the server and
the client together, and others package the server and the client
separately. Note that master nodes need an SSH client, and client nodes
need to run an SSH server.
┌────────┬──────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼──────────────────────────────────────────────────────────┤
│upstream│http://www.openssh.com/ │
├────────┼──────────────────────────────────────────────────────────┤
│Debian │http://packages.debian.org/stable/net/ssh │
├────────┼──────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=openssh│
└────────┴──────────────────────────────────────────────────────────┘
If you can't find SSH client or server packages for your system, install
from the package source, using the “upstream” link.
mkisofs
The mkisofs command is used to create ISO filesystem images that can later be
written to backup media.
On Debian platforms, mkisofs is not distributed and genisoimage is used
instead. The Debian package takes care of this for you.
┌────────┬──────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼──────────────────────────────────────────────────────────┤
│upstream│https://en.wikipedia.org/wiki/Cdrtools │
├────────┼──────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=mkisofs│
└────────┴──────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
cdrecord
The cdrecord command is used to write ISO images to CD media in a backup
device.
On Debian platforms, cdrecord is not distributed and wodim is used instead.
The Debian package takes care of this for you.
┌────────┬───────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼───────────────────────────────────────────────────────────┤
│upstream│https://en.wikipedia.org/wiki/Cdrtools │
├────────┼───────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=cdrecord│
└────────┴───────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
dvd+rw-tools
The dvd+rw-tools package provides the growisofs utility, which is used to
write ISO images to DVD media in a backup device.
┌────────┬───────────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼───────────────────────────────────────────────────────────────┤
│upstream│http://fy.chalmers.se/~appro/linux/DVD+RW/ │
├────────┼───────────────────────────────────────────────────────────────┤
│Debian │http://packages.debian.org/stable/utils/dvd+rw-tools │
├────────┼───────────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=dvd+rw-tools│
└────────┴───────────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
eject and volname
The eject command is used to open and close the tray on a backup device (if
the backup device has a tray). Sometimes, the tray must be opened and
closed in order to "reset" the device so it notices recent changes to a
disc.
The volname command is used to determine the volume name of media in a
backup device.
┌────────┬────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼────────────────────────────────────────────────────────┤
│upstream│http://sourceforge.net/projects/eject │
├────────┼────────────────────────────────────────────────────────┤
│Debian │http://packages.debian.org/stable/utils/eject │
├────────┼────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=eject│
└────────┴────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
mount and umount
The mount and umount commands are used to mount and unmount CD/DVD media
after it has been written, in order to run a consistency check.
┌────────┬────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼────────────────────────────────────────────────────────┤
│upstream│https://www.kernel.org/pub/linux/utils/util-linux/ │
├────────┼────────────────────────────────────────────────────────┤
│Debian │http://packages.debian.org/stable/base/mount │
├────────┼────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=mount│
└────────┴────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
grepmail
The grepmail command is used by the mbox extension to pull out only recent
messages from mbox mail folders.
┌────────┬───────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼───────────────────────────────────────────────────────────┤
│upstream│http://sourceforge.net/projects/grepmail/ │
├────────┼───────────────────────────────────────────────────────────┤
│Debian │http://packages.debian.org/stable/mail/grepmail │
├────────┼───────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=grepmail│
└────────┴───────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
gpg
The gpg command is used by the encrypt extension to encrypt files.
┌────────┬────────────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼────────────────────────────────────────────────────────┤
│upstream│https://www.gnupg.org/ │
├────────┼────────────────────────────────────────────────────────┤
│Debian │http://packages.debian.org/stable/utils/gnupg │
├────────┼────────────────────────────────────────────────────────┤
│RPM │http://rpmfind.net/linux/rpm2html/search.php?query=gnupg│
└────────┴────────────────────────────────────────────────────────┘
If you can't find a package for your system, install from the package
source, using the “upstream” link.
split
The split command is used by the split extension to split up large files.
This command is typically part of the core operating system install and is
not distributed in a separate package.
AWS CLI
AWS CLI is Amazon's official command-line tool for interacting with the
Amazon Web Services infrastructure. Cedar Backup uses AWS CLI to copy backup
data up to Amazon S3 cloud storage.
After you install AWS CLI, you need to configure your connection to AWS
with an appropriate access id and access key. Amazon provides a good setup
guide.
┌────────┬─────────────────────────────────────────┐
│ Source │ URL │
├────────┼─────────────────────────────────────────┤
│upstream│http://aws.amazon.com/documentation/cli/ │
├────────┼─────────────────────────────────────────┤
│Debian │https://packages.debian.org/stable/awscli│
└────────┴─────────────────────────────────────────┘
The initial implementation of the amazons3 extension was written using AWS
CLI 1.4. As of this writing, not all Linux distributions include a package
for this version. On these platforms, the easiest way to install it is via
pip: apt-get install python3-pip, and then pip3 install awscli. The Debian
package includes an appropriate dependency starting with the jessie
release.
Chardet
The cback3-amazons3-sync command relies on the Chardet Python package to
check filename encoding. You only need this package if you are going to use
the sync tool.
┌────────┬──────────────────────────────────────────────────┐
│ Source │ URL │
├────────┼──────────────────────────────────────────────────┤
│upstream│https://github.com/chardet/chardet │
├────────┼──────────────────────────────────────────────────┤
│Debian │https://packages.debian.org/stable/python3-chardet│
└────────┴──────────────────────────────────────────────────┘
Appendix C. Data Recovery
Table of Contents
Finding your Data
Recovering Filesystem Data
Full Restore
Partial Restore
Recovering MySQL Data
Recovering Subversion Data
Recovering Mailbox Data
Recovering Data split by the Split Extension
Finding your Data
The first step in data recovery is finding the data that you want to recover.
You need to decide whether you are going to restore off backup media, or out
of some existing staging data that has not yet been purged. The only difference
is, if you purge staging data less frequently than once per week, you might
have some data available in the staging directories which would not be found on
your backup media, depending on how you rotate your media. (And of course, if
your system is trashed or stolen, you probably will not have access to your old
staging data in any case.)
Regardless of the data source you choose, you will find the data organized in
the same way. The remainder of these examples will work off an example backup
disc, but the contents of the staging directory will look pretty much like the
contents of the disc, with data organized first by date and then by backup peer
name.
This is the root directory of my example disc:
root:/mnt/cdrw# ls -l
total 4
drwxr-x--- 3 backup backup 4096 Sep 01 06:30 2005/
In this root directory is one subdirectory for each year represented in the
backup. In this example, the backup represents data entirely from the year
2005. If your configured backup week happens to span a year boundary, there
would be two subdirectories here (for example, one for 2005 and one for 2006).
Within each year directory is one subdirectory for each month represented in
the backup.
root:/mnt/cdrw/2005# ls -l
total 2
dr-xr-xr-x 6 root root 2048 Sep 11 05:30 09/
In this example, the backup represents data entirely from the month of
September, 2005. If your configured backup week happens to span a month
boundary, there would be two subdirectories here (for example, one for August
2005 and one for September 2005).
Within each month directory is one subdirectory for each day represented in the
backup.
root:/mnt/cdrw/2005/09# ls -l
total 8
dr-xr-xr-x 5 root root 2048 Sep 7 05:30 07/
dr-xr-xr-x 5 root root 2048 Sep 8 05:30 08/
dr-xr-xr-x 5 root root 2048 Sep 9 05:30 09/
dr-xr-xr-x 5 root root 2048 Sep 11 05:30 11/
Depending on how far into the backup week the media is, you might have as
few as one daily directory in here, or as many as seven.
Within each daily directory is a stage indicator (indicating when the directory
was staged) and one directory for each peer configured in the backup:
root:/mnt/cdrw/2005/09/07# ls -l
total 10
dr-xr-xr-x 2 root root 2048 Sep 7 02:31 host1/
-r--r--r-- 1 root root 0 Sep 7 03:27 cback.stage
dr-xr-xr-x 2 root root 4096 Sep 7 02:30 host2/
dr-xr-xr-x 2 root root 4096 Sep 7 03:23 host3/
In this case, you can see that my backup includes three machines, and that the
backup data was staged on September 7, 2005 at 03:27.
Within the directory for a given host are all of the files collected on that
host. This might just include tarfiles from a normal Cedar Backup collect run,
and might also include files “collected” from Cedar Backup extensions or by
other third-party processes on your system.
root:/mnt/cdrw/2005/09/07/host1# ls -l
total 157976
-r--r--r-- 1 root root 11206159 Sep 7 02:30 boot.tar.bz2
-r--r--r-- 1 root root 0 Sep 7 02:30 cback.collect
-r--r--r-- 1 root root 3199 Sep 7 02:30 dpkg-selections.txt.bz2
-r--r--r-- 1 root root 908325 Sep 7 02:30 etc.tar.bz2
-r--r--r-- 1 root root 389 Sep 7 02:30 fdisk-l.txt.bz2
-r--r--r-- 1 root root 1003100 Sep 7 02:30 ls-laR.txt.bz2
-r--r--r-- 1 root root 19800 Sep 7 02:30 mysqldump.txt.bz2
-r--r--r-- 1 root root 4133372 Sep 7 02:30 opt-local.tar.bz2
-r--r--r-- 1 root root 44794124 Sep 8 23:34 opt-public.tar.bz2
-r--r--r-- 1 root root 30028057 Sep 7 02:30 root.tar.bz2
-r--r--r-- 1 root root 4747070 Sep 7 02:30 svndump-0:782-opt-svn-repo1.txt.bz2
-r--r--r-- 1 root root 603863 Sep 7 02:30 svndump-0:136-opt-svn-repo2.txt.bz2
-r--r--r-- 1 root root 113484 Sep 7 02:30 var-lib-jspwiki.tar.bz2
-r--r--r-- 1 root root 19556660 Sep 7 02:30 var-log.tar.bz2
-r--r--r-- 1 root root 14753855 Sep 7 02:30 var-mail.tar.bz2
As you can see, I back up a variety of different things on host1. I run the
normal collect action, as well as the sysinfo, mysql and subversion extensions.
The resulting backup files are named in a way that makes it easy to determine
what they represent.
Files of the form *.tar.bz2 represent directories backed up by the collect
action. The first part of the name (before “.tar.bz2”) represents the path to
the directory. For example, boot.tar.bz2 contains data from /boot, and
var-lib-jspwiki.tar.bz2 contains data from /var/lib/jspwiki.
The fdisk-l.txt.bz2, ls-laR.txt.bz2 and dpkg-selections.txt.bz2 files are
produced by the sysinfo extension.
The mysqldump.txt.bz2 file is produced by the mysql extension. It represents a
system-wide database dump, because I use the “all” flag in configuration. If I
were to configure Cedar Backup to dump individual databases, then the filename
would contain the database name (something like mysqldump-bugs.txt.bz2).
Finally, the files of the form svndump-*.txt.bz2 are produced by the subversion
extension. There is one dump file for each configured repository, and the dump
file name represents the name of the repository and the revisions in that dump.
So, the file svndump-0:782-opt-svn-repo1.txt.bz2 represents revisions 0-782 of
the repository at /opt/svn/repo1. You can tell that this file contains a full
backup of the repository to this point, because the starting revision is zero.
Later incremental backups would have a non-zero starting revision, i.e. perhaps
783-785, followed by 786-800, etc.
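Because the layout is strictly date-based, it is also easy to locate
directories programmatically. This sketch finds the newest daily directory on
the mounted media and lists the peers within it (the mount point is an
assumption):
import glob, os
# Daily directories sort chronologically because the layout is YYYY/MM/DD.
dailies = sorted(glob.glob("/mnt/cdrw/[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]"))
latest = dailies[-1]
peers = [entry for entry in os.listdir(latest)
         if os.path.isdir(os.path.join(latest, entry))]
print(latest, peers)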
Recovering Filesystem Data
Filesystem data is gathered by the standard Cedar Backup collect action. This
data is placed into files of the form *.tar. The first part of the name (before
“.tar”) represents the path to the directory. For example, boot.tar would
contain data from /boot, and var-lib-jspwiki.tar would contain data from /var/
lib/jspwiki. (As a special case, data from the root directory would be placed
in -.tar). Remember that your tarfile might have a bzip2 (.bz2) or gzip (.gz)
extension, depending on what compression you specified in configuration.
If you are using full backups every day, the latest backup data is always
within the latest daily directory stored on your backup media or within your
staging directory. If you have some or all of your directories configured to do
incremental backups, then the first day of the week holds the full backups and
the other days represent incremental differences relative to that first day of
the week.
Where to extract your backup
If you are restoring a home directory or some other non-system directory as
part of a full restore, it is probably fine to extract the backup directly into
the filesystem.
If you are restoring a system directory like /etc as part of a full restore,
extracting directly into the filesystem is likely to break things, especially
if you re-installed a newer version of your operating system than the one you
originally backed up. It's better to extract directories like this to a
temporary location and pick out only the files you find you need.
When doing a partial restore, I suggest always extracting to a temporary
location. Doing it this way gives you more control over what you restore, and
helps you avoid compounding your original problem with another one (like
overwriting the wrong file, oops).
Full Restore
To do a full system restore, find the newest applicable full backup and extract
it. If you have some incremental backups, extract them into the same place as
the full backup, one by one starting from oldest to newest. (This way, if a
file changed every day you will always get the latest one.)
All of the backed-up files are stored in the tar file in a relative fashion, so
you can extract from the tar file either directly into the filesystem, or into
a temporary location.
For example, to restore boot.tar.bz2 directly into /boot, execute tar from your
root directory (/):
root:/# bzcat boot.tar.bz2 | tar xvf -
Of course, use zcat or just cat, depending on what kind of compression is in
use.
If you want to extract boot.tar.bz2 into a temporary location like /tmp/boot
instead, just change directories first. In this case, you'd execute the tar
command from within /tmp instead of /.
root:/tmp# bzcat boot.tar.bz2 | tar xvf -
Again, use zcat or just cat as appropriate.
For more information, you might want to check out the manpage or GNU info
documentation for the tar command.
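The same procedure can be scripted. This sketch uses Python's tarfile module
to apply a full backup and then each incremental, oldest to newest, into /tmp;
the paths follow the example disc layout shown earlier:
import tarfile
archives = [
    "/mnt/cdrw/2005/09/07/host1/boot.tar.bz2",  # full backup (first day of week)
    "/mnt/cdrw/2005/09/08/host1/boot.tar.bz2",  # incrementals, oldest first
    "/mnt/cdrw/2005/09/09/host1/boot.tar.bz2",
]
for archive in archives:
    with tarfile.open(archive, "r:bz2") as tar:
        # Members are stored relative (boot/...), so this lands in /tmp/boot;
        # later archives overwrite older versions of the same file.
        tar.extractall("/tmp")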
Partial Restore
Most users will need to do a partial restore much more frequently than a full
restore. Perhaps you accidentally removed your home directory, or forgot to
check in some version of a file before deleting it. Or, perhaps the person who
packaged Apache for your system blew away your web server configuration on
upgrade (it happens). The solution to these and other kinds of problems is a
partial restore (assuming you've backed up the proper things).
The procedure is similar to a full restore. The specific steps depend on how
much information you have about the file you are looking for. Where with a full
restore, you can confidently extract the full backup followed by each of the
incremental backups, this might not be what you want when doing a partial
restore. You may need to take more care in finding the right version of a file
— since the same file, if changed frequently, would appear in more than one
backup.
Start by finding the backup media that contains the file you are looking for.
If you rotate your backup media, and your last known “contact” with the file
was a while ago, you may need to look on older media to find it. This may take
some effort if you are not sure when the change you are trying to correct took
place.
Once you have decided to look at a particular piece of backup media, find the
correct peer (host), and look for the file in the full backup:
root:/tmp# bzcat boot.tar.bz2 | tar tvf - path/to/file
Of course, use zcat or just cat, depending on what kind of compression is in
use.
The tvf tells tar to search for the file in question and just list the results
rather than extracting the file. Note that the filename is relative (with no
starting /). Alternatively, you can omit the path/to/file and search through
the output using more or less.
If you haven't found what you are looking for, work your way through the
incremental files for the directory in question. One of them may also have the
file if it changed during the course of the backup. Or, move to older or newer
media and see if you can find the file there.
Once you have found your file, extract it using xvf:
root:/tmp# bzcat boot.tar.bz2 | tar xvf - path/to/file
Again, use zcat or just cat as appropriate.
Inspect the file and make sure it's what you're looking for. Again, you may
need to move to older or newer media to find the exact version of your file.
For more information, you might want to check out the manpage or GNU info
documentation for the tar command.
Recovering MySQL Data
MySQL data is gathered by the Cedar Backup mysql extension. This extension
always creates a full backup each time it runs. This wastes some space, but
makes it easy to restore database data. The following procedure describes how
to restore your MySQL database from the backup.
Warning
I am not a MySQL expert. I am providing this information for reference. I have
tested these procedures on my own MySQL installation; however, I only have a
single database for use by Bugzilla, and I may have misunderstood something
with regard to restoring individual databases as a user other than root. If you
have any doubts, test the procedure below before relying on it!
MySQL experts and/or knowledgeable Cedar Backup users: feel free to write me and
correct any part of this procedure.
First, find the backup you are interested in. If you have specified “all
databases” in configuration, you will have a single backup file, called
mysqldump.txt. If you have specified individual databases in configuration,
then you will have files with names like mysqldump-database.txt instead. In
either case, your file might have a .gz or .bz2 extension depending on what
kind of compression you specified in configuration.
If you are restoring an “all databases” backup, make sure that you have
correctly created the root user and know its password. Then, execute:
daystrom:/# bzcat mysqldump.txt.bz2 | mysql -p -u root
Of course, use zcat or just cat, depending on what kind of compression is in
use.
Because the database backup includes CREATE DATABASE SQL statements, this
command should take care of creating all of the databases within the backup, as
well as populating them.
If you are restoring a backup for a specific database, you have two choices. If
you have a root login, you can use the same command as above:
daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u root
Otherwise, you can create the database and its login first (or have someone
create it) and then use a database-specific login to execute the restore:
daystrom:/# bzcat mysqldump-database.txt.bz2 | mysql -p -u user database
Again, use zcat or just cat as appropriate.
For more information on using MySQL, see the documentation on the MySQL web
site, http://mysql.org/, or the manpages for the mysql and mysqldump commands.
Recovering Subversion Data
Subversion data is gathered by the Cedar Backup subversion extension. Cedar
Backup will create either full or incremental backups, but the procedure for
restoring is the same for both. Subversion backups are always taken on a
per-repository basis. If you need to restore more than one repository, follow
the procedures below for each repository you are interested in.
First, find the backup or backups you are interested in. Typically, you will
need the full backup from the first day of the week and each incremental backup
from the other days of the week.
The subversion extension creates files of the form svndump-*.txt. These files
might have a .gz or .bz2 extension depending on what kind of compression you
specified in configuration. There is one dump file for each configured
repository, and the dump file name represents the name of the repository and
the revisions in that dump. So, the file svndump-0:782-opt-svn-repo1.txt.bz2
represents revisions 0-782 of the repository at /opt/svn/repo1. You can tell
that this file contains a full backup of the repository to this point, because
the starting revision is zero. Later incremental backups would have a non-zero
starting revision, i.e. perhaps 783-785, followed by 786-800, etc.
Next, if you still have the old Subversion repository around, you might want to
just move it off (rename the top-level directory) before executing the restore.
Or, you can restore into a temporary directory and rename it later to its real
name once you've checked it out. That is what my example below will show.
Next, you need to create a new Subversion repository to hold the restored data.
This example shows an FSFS repository, but that is an arbitrary choice. You can
restore from an FSFS backup into an FSFS repository or a BDB repository. The
Subversion dump format is “backend-agnostic”.
root:/tmp# svnadmin create --fs-type=fsfs testrepo
Next, load the full backup into the repository:
root:/tmp# bzcat svndump-0:782-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
Of course, use zcat or just cat, depending on what kind of compression is in
use.
Follow that with loads for each of the incremental backups:
root:/tmp# bzcat svndump-783:785-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
root:/tmp# bzcat svndump-786:800-opt-svn-repo1.txt.bz2 | svnadmin load testrepo
Again, use zcat or just cat as appropriate.
When this is done, your repository will be restored to the point of the last
commit indicated in the svndump file (in this case, to revision 800).
Note
Don't be surprised if, when you test this, the restored directory doesn't
have exactly the same contents as the original directory. I can't explain why
this happens, but if you execute svnadmin dump on both old and new
repositories, the results are identical. This means that the repositories do
contain the same content.
For more information on using Subversion, see the book Version Control with
Subversion (http://svnbook.red-bean.com/) or the Subversion FAQ (http://
subversion.tigris.org/faq.html).
Recovering Mailbox Data
Mailbox data is gathered by the Cedar Backup mbox extension. Cedar Backup will
create either full or incremental backups, but both kinds of backups are
treated identically when restoring.
Individual mbox files and mbox directories are treated a little differently,
since individual files are just compressed, but directories are collected into
a tar archive.
First, find the backup or backups you are interested in. Typically, you will
need the full backup from the first day of the week and each incremental backup
from the other days of the week.
The mbox extension creates files of the form mbox-*. Backup files for
individual mbox files might have a .gz or .bz2 extension depending on what kind
of compression you specified in configuration. Backup files for mbox
directories will have a .tar, .tar.gz or .tar.bz2 extension, again depending on
what kind of compression you specified in configuration.
There is one backup file for each configured mbox file or directory. The backup
file name represents the name of the file or directory and the date it was
backed up. So, the file mbox-20060624-home-user-mail-greylist represents the
backup for /home/user/mail/greylist run on 24 Jun 2006. Likewise,
mbox-20060624-home-user-mail.tar represents the backup for the /home/user/mail
directory run on that same date.
Once you have found the files you are looking for, the restoration procedure is
fairly simple. First, concatenate all of the backup files together. Then, use
grepmail to eliminate duplicate messages (if any).
Here is an example for a single backed-up file:
root:/tmp# rm restore.mbox # make sure it's not left over
root:/tmp# cat mbox-20060624-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060625-home-user-mail-greylist >> restore.mbox
root:/tmp# cat mbox-20060626-home-user-mail-greylist >> restore.mbox
root:/tmp# grepmail -a -u restore.mbox > nodups.mbox
At this point, nodups.mbox contains all of the backed-up messages from /home/
user/mail/greylist.
Of course, if your backups are compressed, you'll have to use zcat or bzcat
rather than just cat.
If you are backing up mbox directories rather than individual files, see the
filesystem instructions for notes on how to extract the individual files from
inside tar archives. Extract the files you are interested in, and then
concatenate them together just as shown above for the individual case.
Recovering Data split by the Split Extension
The Split extension takes large files and splits them up into smaller files.
Typically, it would be used in conjunction with the cback3-span command.
The split-up files are not difficult to work with. Simply find all of the files
— which could be split between multiple discs — and concatenate them together.
root:/tmp# rm usr-src-software.tar.gz # make sure it's not there
root:/tmp# cat usr-src-software.tar.gz_00001 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00002 >> usr-src-software.tar.gz
root:/tmp# cat usr-src-software.tar.gz_00003 >> usr-src-software.tar.gz
Then, use the resulting file like usual.
Remember, you need to have all of the files that the original large file was
split into before this will work. If you are missing a file, the result of the
concatenation step will be either a corrupt file or a truncated file (depending
on which chunks you did not include).
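Before concatenating, it is worth checking that the numbered chunks form an
unbroken sequence. A sketch, assuming the _NNNNN suffix convention shown above:
import glob
chunks = sorted(glob.glob("usr-src-software.tar.gz_*"))
expected = ["usr-src-software.tar.gz_%05d" % i for i in range(1, len(chunks) + 1)]
if chunks != expected:
    raise SystemExit("Chunks are missing or misnamed: %s" % chunks)
with open("usr-src-software.tar.gz", "wb") as out:
    for chunk in chunks:  # concatenate in order, just like cat
        with open(chunk, "rb") as part:
            out.write(part.read())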
Appendix D. Securing Password-less SSH Connections
Cedar Backup relies on password-less public key SSH connections to make various
parts of its backup process work. Password-less scp is used to stage files from
remote clients to the master, and password-less ssh is used to execute actions
on managed clients.
Normally, it is a good idea to avoid password-less SSH connections in favor of
using an SSH agent. The SSH agent manages your SSH connections so that you
don't need to type your passphrase over and over. You get most of the benefits
of a password-less connection without the risk. Unfortunately, because Cedar
Backup has to execute without human involvement (through a cron job), use of an
agent really isn't feasible. We have to rely on true password-less public keys
to give the master access to the client peers.
Traditionally, Cedar Backup has relied on a “segmenting” strategy to minimize
the risk. Although the backup typically runs as root — so that all parts of the
filesystem can be backed up — we don't use the root user for network
connections. Instead, we use a dedicated backup user on the master to initiate
network connections, and dedicated users on each of the remote peers to accept
network connections.
With this strategy in place, an attacker with access to the backup user on the
master (or even root access, really) can at best only get access to the backup
user on the remote peers. We still concede a local attack vector, but at least
that vector is restricted to an unprivileged user.
Some Cedar Backup users may not be comfortable with this risk, and others may
not be able to implement the segmentation strategy — they simply may not have a
way to create a login which is only used for backups.
So, what are these users to do? Fortunately there is a solution. The SSH
authorized keys file supports a way to put a “filter” in place on an SSH
connection. This excerpt is from the AUTHORIZED_KEYS FILE FORMAT section of man
8 sshd:
command="command"
Specifies that the command is executed whenever this key is used for
authentication. The command supplied by the user (if any) is ignored. The
command is run on a pty if the client requests a pty; otherwise it is run
without a tty. If an 8-bit clean channel is required, one must not request
a pty or should specify no-pty. A quote may be included in the command by
quoting it with a backslash. This option might be useful to restrict
certain public keys to perform just a specific operation. An example might
be a key that permits remote backups but nothing else. Note that the client
may specify TCP and/or X11 forwarding unless they are explicitly prohibited.
Note that this option applies to shell, command or subsystem execution.
Essentially, this gives us a way to authenticate the commands that are being
executed. We can either accept or reject commands, and we can even provide a
readable error message for commands we reject. The filter is applied on the
remote peer, to the key that provides the master access to the remote peer.
So, let's imagine that we have two hosts: master “mickey”, and peer “minnie”.
Here is the original ~/.ssh/authorized_keys file for the backup user on minnie
(remember, this is all on one line in the file):
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp3MsSpVB9q9iZ+awek120391k;mm0c221=3=km
=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9FtAZ9U+MmpL901231asdkl;ai1-923ma9s=9=
1-2341=-a0sd=-sa0=1z= backup@mickey
This line is the public key that minnie can use to identify the backup user on
mickey. Assuming that there is no passphrase on the private key back on mickey,
the backup user on mickey can get direct access to minnie.
To put the filter in place, we add a command option to the key, like this:
command="/opt/backup/validate-backup" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAxw7EnqVULBFgPcut3WYp
3MsSpVB9q9iZ+awek120391k;mm0c221=3=km=m=askdalkS82mlF7SusBTcXiCk1BGsg7axZ2sclgK+FfWV1Jm0/I9yo9F
tAZ9U+MmpL901231asdkl;ai1-923ma9s=9=1-2341=-a0sd=-sa0=1z= backup@mickey
Basically, the command option says that whenever this key is used to
successfully initiate a connection, the /opt/backup/validate-backup command
will be run instead of the real command that came over the SSH connection.
Fortunately, the interface gives the command access to certain shell variables
that can be used to invoke the original command if you want to.
A very basic validate-backup script might look something like this:
#!/bin/bash
# sshd places the command the client asked for in SSH_ORIGINAL_COMMAND.
if [[ "${SSH_ORIGINAL_COMMAND}" == "ls -l" ]] ; then
    ${SSH_ORIGINAL_COMMAND}   # allowed, so execute the original command
else
    echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
    exit 1                    # reject everything else
fi
This script allows exactly ls -l and nothing else. If the user attempts some
other command, they get a nice error message telling them that their command
has been disallowed.
For remote commands executed over ssh, the original command is exactly what the
caller attempted to invoke. For remote copies, the commands are either scp -f
file (copy from the peer to the master) or scp -t file (copy to the peer from
the master).
If you want, you can see what command SSH thinks it is executing by using ssh
-v or scp -v. The command will be right at the top, something like this:
Executing: program /usr/bin/ssh host mickey, user (unspecified), command scp -v -f .profile
OpenSSH_4.3p2 Debian-9, OpenSSL 0.9.8c 05 Sep 2006
debug1: Reading configuration data /home/backup/.ssh/config
debug1: Applying options for daystrom
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
Omit the -v and you have your command: scp -f .profile.
For a normal, non-managed setup, you need to allow the following commands,
where /path/to/collect/ is replaced with the real path to the collect directory
on the remote peer:
scp -f /path/to/collect/cback.collect
scp -f /path/to/collect/*
scp -t /path/to/collect/cback.stage
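To make this concrete, here is a hypothetical sketch (an illustration only,
not a recommendation, since every setup differs) of how a validate-backup
script for a normal setup might whitelist exactly these commands using a case
statement. As in the list above, /path/to/collect is a placeholder for your
real collect directory:
#!/bin/bash
# Hypothetical sketch: permit only the commands a normal (non-managed)
# Cedar Backup peer needs.  Replace /path/to/collect with the real path.
case "${SSH_ORIGINAL_COMMAND}" in
   "scp -f /path/to/collect/cback.collect" | \
   "scp -f /path/to/collect/*" | \
   "scp -t /path/to/collect/cback.stage")
      ${SSH_ORIGINAL_COMMAND}   # run the command the master asked for
      ;;
   *)
      echo "Security policy does not allow command [${SSH_ORIGINAL_COMMAND}]."
      exit 1
      ;;
esac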
If you are configuring a managed client, then you also need to list the exact
command lines that the master will be invoking on the managed client. You are
guaranteed that the master will invoke one action at a time, so if you list two
lines per action (full and non-full) you should be fine. Here's an example for
the collect action:
/usr/bin/cback3 --full collect
/usr/bin/cback3 collect
Of course, you would have to list the actual path to the cback3 executable —
exactly the one listed in the <cback_command> configuration option for your
managed peer.
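Continuing the hypothetical sketch above, the managed-client command lines
would simply become two more patterns in the same case statement, placed
before the catch-all *) arm:
   "/usr/bin/cback3 --full collect" | \
   "/usr/bin/cback3 collect")
      ${SSH_ORIGINAL_COMMAND}
      ;;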
I hope that there is enough information here for interested users to implement
something that makes them comfortable. I have deliberately stopped short of
providing a complete, drop-in script (the sketches above are illustrations
only), because I think everyone's setup will be different. However, feel free
to write if you are working through this and you have questions.
Appendix E. Copyright
Copyright (c) 2004-2011,2013-2015
Kenneth J. Pronovici
This work is free; you can redistribute it and/or modify it under
the terms of the GNU General Public License (the "GPL"), Version 2,
as published by the Free Software Foundation.
For the purposes of the GPL, the "preferred form of modification"
for this work is the original Docbook XML text files. If you
choose to distribute this work in a compiled form (i.e. if you
distribute HTML, PDF or Postscript documents based on the original
Docbook XML text files), you must also consider image files to be
"source code" if those images are required in order to construct a
complete and readable compiled version of the work.
This work is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Copies of the GNU General Public License are available from
the Free Software Foundation website, http://www.gnu.org/.
You may also write the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
====================================================================
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
====================================================================