<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
  <title>SphinxTrain Documentation</title>
  <style type="text/css">
     pre { font-size: medium; background: #f0f8ff; padding: 2mm; border-style: ridge ; color: teal}
     code {font-size: medium; color: teal}
  </style>
</head>
 <body>

<a name="top">INDEX</a>
<p>(This is under construction.)
<!======================================================================>
<ol>
   <li><a href="#0"><font color="red">Before you train</font></a>
   <ul>
      <li><a href="#00">The general-procedure chart</a>
      <li><a href="#01">Modeling context-dependent phones with 
          untied states: some memory requirements</a>
      <li><a href="#02">Data preparation</a>
          <ul>
          <li><a href="#02a">When you have a very small closed vocabulary</a>
          </ul>
      <li><a href="#03">The set of base and higher order feature vectors</a>
      <ul>
         <li><a href="#031">Feature streams</a>
      </ul>
      <li><a href="#04">Force-alignment</a>
   </ul>
<!======================================================================>
   <li><a href="#2"><font color="red">Training continuous models</font></a>
   <ul>
      <li><a href="#20">Creating the CI model definition file</a>  
      <li><a href="#21">Creating the HMM topology file</a>
      <li><a href="#22">Flat initialization of CI model parameters</a>
      <li><a href="#23">Training CI models</a>
      <li><a href="#24">Creating the CD untied model definition file</a>
      <li><a href="#25">Flat initialization of CD untied model parameters</a>
      <li><a href="#26">Training CD untied models</a>
      <li><a href="#27">Building decision trees for parameter sharing</a>
      <ul>
         <li><a href="#28">Generating the linguistic questions</a>
      </ul>
      <li><a href="#29">Pruning the decision trees</a>
      <li><a href="#30">Creating the CD tied model definition file</a>
      <li><a href="#31">Initializing and training cd tied gaussian 
          mixture models </a>
  </ul>
<!=======================================================================>
  <li><a href="#3"><font color="red">Training semi-continuous models</font></a>
  <ul>
     <li><a href="#3b">Vector quantization</a>
     <li><a href="#3d">Creating the CI model definition file</a>
     <li><a href="#3e">Creating the HMM topology file</a>
     <li><a href="#3c">Flat initialization of CI model parameters</a>
     <li><a href="#3f">Training CI models</a>
     <li><a href="#3g">Creating the CD untied model definition file</a>
     <li><a href="#3h">Flat initialization of CD untied model parameters</a>
     <li><a href="#3i">Training CD untied models</a>
     <li><a href="#3j">Building decision trees for parameter sharing</a>
     <ul>
        <li><a href="#3k">Generating the linguistic questions</a>
     </ul>
     <li><a href="#3l">Pruning the decision trees</a>
     <li><a href="#3m">Creating the CD tied model definition file</a>
     <li><a href="#3n">Initializing and training cd tied models </a>
     <li><a href="#3a">Deleted interpolation</a>
  </ul>
  <li><a href="#4">SPHINX2 data and model formats</a>
  <li><a href="#4b">SPHINX3 data and model formats</a>
  <li><a href="#5">Training multilingual models</a>
  <li><a href="#6">The training lexicon</a>
  <li><a href="#7">Converting SPHINX3 format models to SPHINX2 format</a>
  <li><a href="#8">Updating or adapting existing model sets</a>
  <li><a href="#9">Using the SPHINX-III decoder with semi-continuous and
        continuous models</a>
</ol>
<hr>
<!=========================================================================>
This part of the manual describes the procedure(s) for training acoustic
models using the Sphinx3 trainer. General training procedures are described
first, followed by more detailed descriptions of the programs and scripts
used and of the analysis of their logs and other outputs.
<p>


<a name="0"></a>
<a name="00"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">BEFORE YOU TRAIN</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>THE GENERAL-PROCEDURE CHART</td>
</table>
<!------------------------------------------------------------------------->
<pre>
                         Training chart for the
                         sphinx2  trainer
                        =========================
                                OBSOLETE
               (The sphinx2 trainer is no longer used in CMU)




                         Training chart for the
                         sphinx3  trainer
                        =========================
                             type of model
                                   |
                    ----------------------------------
                    |                                |
               CONTINUOUS                      SEMI-CONTINUOUS
                    |                                |
                     |                         vector-quantization
                    |                                |
                    ----------------------------------
                                   |...make ci mdef
                                   |...flat_initialize CI models
                             training CI models
                                   |...make cd untied mdef
                                   |...initialize
                                   |
                             training CD untied models
                                   |
                                   |
                                   |
                             decision tree building
                                   |...prune trees
                                   |...tie states
                                   |...make cd tied mdef
                             training CD tied models
                                   |
                                   |
recursive            ----------------------------------
gaussian splitting.. |                                |
                 continuous models              semi-continuous models
                     |                                |
                     |                                | 
                -----------                           |
                |         |                    deleted interpolation
          decode with   ADAPT                         |
          sphinx3         |                           |---ADAPT
          decoder <-------                            |     |
                                                ----------------
                          make cd tied mdef ... | .............|
                          with decode dict and  |           convert to
                          pruned trees          |           sphinx2
                                         decode with           |
                                         sphinx3               |
                                         decoder               |     
                                                               |
                                                            decode with
                                                            sphinx2
                                                            decoder
                                                  (currently opensource
                                                   and restricted to
                                                   working with sampling
                                                   rates 8khz and 16khz.
                                                   Once the s3 trainer is
                                                   released, this will have
                                                   to change to allow
                                                   people who train with
                                                   different sampling rates
                                                   to use this decoder)

</pre>
<p>
<a href="#top">back to index</a>
<hr>


<a name="01"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">BEFORE YOU TRAIN</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>MODELING CONTEXT-DEPENDENT PHONES WITH UNTIED STATES: SOME MEMORY REQUIREMENTS</td>
</table>
<!------------------------------------------------------------------------->
<p>
Modeling context-dependent phones (e.g., triphones) with untied states requires
the largest amount of hardware resources. Take a moment to check whether you
have enough. The resources required depend on the type of model you are
going to train, the dimensionality and configuration of your feature vectors,
and the number of states in the HMMs.
<p>
<b><u>Semi-continuous models</u></b>
<p>
To train 5-state/HMM models for 10,000 triphones:
<pre>
5 states/triphone                        = 50,000 states
For a 4-stream feature set, each state
has a total of 4*256 mixture weights     = 1024 floating point numbers/state

50,000 states * 1024 floats * 4 bytes    = ~205 Mb buffer for 50,000 states
</pre>

Corresponding to each of the four feature streams, there are 256 means
and 256 variances in the codebook. ALL of these, and ALL the mixture weights
and transition matrices, are loaded into RAM, and during training
an additional buffer of equal size is allocated to store intermediate
results. These are later written out to the hard disk when the calculations
for the current training iteration are complete. Note that
there are as many transition matrices as you have phones (40-50
for the English language, depending on your dictionary). All this amounts
to allocating well over 400 Mb of RAM.
<p>
This is a bottleneck for machines with smaller memory. No matter how large
your training corpus is, you can actually train only about 10,000 triphones
at the cd-untied stage if you have ~400 Mb of RAM (A 100 hour broadcast
news corpus typically has 40,000 triphones). You could train more if
your machine is capable of handling the memory demands effectively (this
could be done, for example, by having a large amount of swap space). If
you are training on multiple machines, *each* will require this much memory.
In addition, at the end of each iteration, you have to transmit all buffers
to a single machine that performs the norm. Networking issues need to
be considered here. 
<p>

The cd-untied models are used to build trees. The number of triphones you
train at this stage directly affects the quality of the trees, which would
have to be built using fewer triphones than are actually present in the
training set if you do not have enough memory.
<p>
<b><u>Continuous models</u></b>
<p>
For 10,000 triphones:
<pre>
5 states/triphone         = 50,000 states
39 means (assuming a
39-component feature
vector) and 39
variances per state       = 79 floating points per state
                          = 15.8Mb buffer for 50,000 states
</pre>
Thus, for the same amount of memory, we can train about 12 times as many
triphones as we can with semi-continuous models. Since we can use
more triphones for training (and hence more information), the decision trees
are better, and eventually result in better recognition performance.
<p>
<a href="#top">back to index</a>
<hr>


<a name="02"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">BEFORE YOU TRAIN</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>DATA PREPARATION</td>
</table>
<!------------------------------------------------------------------------->
<p>
You will need the following files to begin training:
<ol>
<li>A set of <b>feature files</b> computed from the audio training data, one for each
recording you have in the training corpus. Each recording can be transformed
into a sequence of feature vectors using a front-end executable provided with
the SPHINX-III training package. Each front-end executable provided performs
a different analysis of the speech signal and computes a different type
of feature.
<p>
<li>A <b> control file</b> containing the list of feature-set filenames with
full paths to them. An example of the entries in this file:
<pre>
dir/subdir1/utt1
dir/subdir1/utt2
dir/subdir2/utt3
</pre>
Note that the extensions are not given. They will be provided separately 
to the trainer. It is a good idea to give unique names to all feature 
files, even if 
including the full paths seems to make each entry in the control file
unique. You will find later that this provides a lot of flexibility
for doing many things.
<p>
<li>A <b>transcript file</b> in which the transcripts corresponding to the 
feature files are listed in exactly the same order as the feature
filenames in the control file.
<p>
<li>A <b>main dictionary</b> which has all acoustic events and words in 
       the transcripts mapped onto the acoustic units you want to train. 
       Redundancy in the form of extra words is permitted. The dictionary
       must have all alternate pronunciations marked with parenthesized serial
       numbers starting from (2) for the second pronunciation. The marker
       (1) is omitted. Here's an example:
<pre>             
DIRECTING            D AY R EH K T I ng
DIRECTING(2)         D ER EH K T I ng
DIRECTING(3)         D I R EH K T I ng
</pre>
<p>
<li>A <b>filler dictionary</b>, which usually lists the
        non-speech events as "words" and maps them to user_defined phones.
        This dictionary must at least have the entries
<pre>
&lt;s&gt;     SIL
&lt;sil&gt;   SIL
&lt;/s&gt;    SIL
</pre>
The entries stand for
<pre>
&lt;s&gt;     : beginning-utterance silence
&lt;sil&gt;   : within-utterance silence
&lt;/s&gt;    : end-utterance silence
</pre>

Note that the words &lt;s&gt;, &lt;/s&gt; and &lt;sil&gt; are treated as special
words and are required to be present in the filler dictionary. At least
one of these must be mapped onto a phone called "SIL". The phone
SIL is treated in a special manner and is required to be present.
The Sphinx expects you to name the acoustic events corresponding to your
general background condition as SIL. For clean speech these events may
actually be silences, but for noisy speech these may be the most general
kind of background noise that prevails in the database. 
Other noises can then be modelled by phones defined by the user.
<p>
During training SIL replaces every phone flanked by "+" as the context
for adjacent phones. The phones flanked by "+" are only modeled as CI
phones and are not used as contexts for triphones. If you do not want
this to happen you may map your fillers to phones that are not flanked
by "+".
<p>
<li>A <b>phonelist</b>, which is a list of all acoustic units that you want to 
train models for. The SPHINX does not permit you to have units 
other than those in your dictionaries. All units in your
        two dictionaries must be listed here. In other words, your phonelist
must have exactly the same units used in your dictionaries, no more and no
less. Each phone must be listed on a separate line in the file, beginning from
the left, with no extra spaces after the phone. An example:
<pre>
AA
AE
OW
B
CH
</pre>
</ol>
Here's a quick checklist to verify your data preparation before you train:
<ol>
<li> Are all the transcript words in the dictionary/filler dictionary?
<li> Make sure that the size of the transcript matches the .ctl file.
<li> Check the boundaries defined in the .ctl file to make sure they exist,
     i.e., that you have all the frames that are listed in the control file.
<li> Verify the phonelist against the dictionary and the filler dictionary.
</ol>
<p>
<a name="02a"></a>
<b><u>When you have a very small closed vocabulary (50-60 words)</u></b>
<p>
If you have only about 50-60 words in your vocabulary, and if your entire
test data vocabulary is covered by the training data, then you are probably
better off training word models rather than phone models. To do this,
   simply define the phoneset as the set of words themselves, use a
   dictionary that maps each word to itself, and train.
Also, use fewer fillers, and if you do need to train phone models,
   make sure that each of your tied states has enough counts (at least
   5 or 10 instances of each).
<p>
<hr>

<a name="03"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">BEFORE YOU TRAIN</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>THE SET OF BASE AND HIGHER ORDER FEATURE VECTORS</td>
</table>
<!------------------------------------------------------------------------->
<p>
    The set of feature vectors you have computed using the Sphinx front-end
    executable is called the set of <b>base</b> feature vectors. This
set of base features can be extended to include what are called <b>
higher order</b> features. Some common extensions are 
<ol type="a">
<li>The set of difference vectors, where the component-wise
            difference between *some* succeeding and preceding vector(s)
            is appended as the "extension" of the current vector, giving
            an estimate of the slope or trend at the current time instant.
            These are called "delta" features. A more appropriate name
            would be "trend" features.

<li>The set of difference vectors of difference vectors. The
            component-wise difference between the succeeding and preceding
            "delta" vectors are the "extension" of the current vector. These
            are called "double delta" features

<li>The set of difference vectors, where the component-wise
            difference between the n^th succeeding and n^th preceding vector
            are the "extension" of the current vector. These are called
            "long-term delta" features, differing from the "delta" features
            in just that they capture trends over a longer window of time.

<li>The vector composed of the first elements of the current vector
            and the first elements of some of the above "extension" vectors. 
            This is called the "power" feature, and its dimensionality is 
            less than or equal to the total number of feature types you 
            consider.
</ol>
<p>
<a name="031"></a>
<!------------------------------------------------------------------------->
<b><u>Feature streams</u></b>
<!------------------------------------------------------------------------->
<p>
In semi-continuous models, it is a usual practice to keep the identities of
the base vectors and their "extension" vectors separate. Each such set is
called a "feature stream". You must specify how many feature
streams you want to use in your semi-continuous models and how you want them
arranged. 
The feature-set options currently supported by the Sphinx are:
<p>
  c/1..L-1/,d/1..L-1/,c/0/d/0/dd/0/,dd/1..L-1/ : read this as
  cepstra/second to last component,<br>
  deltacepstra/second to last component,<br>
  cepstra/first component deltacepstra/first component doubledeltacepstra/first component, <br>
  doubledeltacepstra/second to last component
<p> 
This is a 4-stream feature vector used mostly in semi-continuous models.
 There is no particular advantage to this arrangement - any permutation
  would give you the same models, with parameters written in different
  orders.
 
<p>
Here's something that's not obvious from the notation used for the
4-stream feature set:  the dimensionality of the 4-stream feature vector 
is 12cepstra+24deltas+3powerterms+12doubledeltas
<p>
The deltas are computed as the difference between the cepstra two frames
removed on either side of the current frame (12 of these), followed by
the difference between the cepstra four frames
removed on either side of the current frame (12 of these). The power stream
uses the first component of the two-frames-removed deltas, computed using C<sub>0</sub>.
<p>
(more to come....)
<hr>


<a name="2"></a>
<a name="20"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE CI MODEL DEFINITION FILE</td>
</table>
<!------------------------------------------------------------------------->
<p>
The first step is to prepare a <b>model definition</b> file for the context
independent (CI) phones. The function of a model definition file is to
define or provide a unique numerical identity to every state of every HMM
that you are going to train, and to provide an order which will be followed
in writing out the model parameters in the model parameter files. During
the training, the states are referenced only by these numbers. The model
definition file thus partly specifies your <b>model architecture</b> and
is thus usually stored in a directory named "model_architecture". You are of
course free to store it where you please, unless you are running the
training scripts provided with the SPHINX-III package.
<p>
To generate this <b> CI model definition file</b>, use the executable <b><font
color="green">mk_model_def</font></b> with the following flag settings:
<p>
<table border="1">
<tr><td valign="top"> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td valign="top"> -phonelstfn </td><td> phonelist </td></tr>
<tr><td valign="top"> -moddeffn   </td><td> name of the CI model definition file that you want to create. Full path must be provided</td></tr>
<tr><td valign="top"> -n_state_pm </td><td> number of states per HMM in the
models that you want to train. If you want to train 3 state HMMs, write "3"
here, without the double quotes</td></tr>
</table>
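<p>
As an illustration, a hypothetical invocation might look like the following
(the file names are placeholders chosen for this example, not files shipped
with the package):
<pre>
mk_model_def \
        -phonelstfn  my.phonelist \
        -moddeffn    model_architecture/my.ci.mdef \
        -n_state_pm  3
</pre>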
<p>
 Pipe the standard output into a log file <b>ci_mdef.log</b> (say).
   If you have listed only three phones in your phonelist,
   and specify that you want to build three state HMMs for each
   of these phones, then your model-definition file will look like this:
<pre>                    
# Generated by &lt;path_to_binary&gt;/mk_model_def on Thu Aug 10 14:57:15 2000
0.3
3 n_base
0 n_tri
12 n_state_map
9 n_tied_state
9 n_tied_ci_state
3 n_tied_tmat
#
# Columns definitions
#base lft  rt p attrib   tmat  ...state id's ...
SIL    -   -  - filler    0    0       1      2     N
A      -   -  -    n/a    1    3       4      5     N
B      -   -  -    n/a    2    6       7      8     N

The # lines are simply comments. The rest of the variables mean the following:

  n_base      : no. of phones (also called "base" phones) that you have
  n_tri       : no. of triphones (we will explain this later)
  n_state_map : Total no. of HMM states (emitting and non-emitting)
                The Sphinx appends an extra terminal non-emitting state
                to every HMM, hence for 3 phones, each specified by
                the user to be modeled by a 3-state HMM, this number
                will be 3phones*4states = 12
  n_tied_state: no. of states of all phones after state-sharing is done. 
                We do not share states at this stage. Hence this number is the
                same as the total number of emitting states, 3*3=9
n_tied_ci_state:no. of states for your "base" phones after state-sharing
                is done. At this stage, the number of "base" phones is
                the same as the number of "all" phones  that you are modeling.
                This number is thus again the total number of emitting
                states, 3*3=9
 n_tied_tmat   : The HMM for each CI phone has a transition probability matrix
                 associated with it. This is the total number of transition
                 matrices for the given set of models. In this case, this
                 number is 3.

Columns definitions: The following columns are defined:
       base  : name of each phone
       lft   : left-context of the phone (- if none)
       rt    : right-context of the phone (- if none)
       p     : position of a triphone (not required at this stage)
       attrib: attribute of phone. In the phone list, if the phone is "SIL", 
	       or if the phone is enclosed by "+", as in "+BANG+", the sphinx 
	       understands these phones to be non-speech events. These are 
	       also called "filler" phones, and the attribute "filler" is 
	       assigned to each such phone. The base phones have no special 
	       attributes, and hence are labelled as "n/a", standing for 
	       "no attribute"   
      tmat   : the id of the transition matrix associated with the phone
 state id's  : the ids of the HMM states associated with any phone. This list
               is terminated by an "N" which stands for a non-emitting
               state. No id is assigned to it. However, it exists, and is
               listed.
</pre>
<a name="21"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE HMM TOPOLOGY FILE</td>
</table>
<!------------------------------------------------------------------------->
<p>
The HMM topology file consists of a matrix with boolean entries; each entry
indicates whether a specific transition from state=row_number to
state=column_number is permitted in the HMMs or not. For example, a
3-state HMM with no skips permitted between states would have a topology
file with the following entries:
<pre>
4
1.0     1.0     0.0     0.0
0.0     1.0     1.0     0.0
0.0     0.0     1.0     1.0 
</pre> 

The number 4 is the total number of states in an HMM. The SPHINX
automatically appends a fourth non-emitting terminating state to the 3
state HMM. The first entry of 1.0 means that a transition from state 1 to
state 1 (itself) is permitted. Accordingly, the transition matrix estimated
for any phone would have a "transition-probability" in place of this
boolean entry. Where the entry is 0.0, the corresponding transition
probability will not be estimated (will be 0).
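<p>
For comparison, here is a hand-written sketch (based only on the description
above, not the output of any particular tool) of what a topology for a
3-state HMM that does allow single-state skips might look like. The extra
1.0 entries permit a transition from state 1 to state 3 and from state 2 to
the non-emitting terminating state:
<pre>
4
1.0     1.0     1.0     0.0
0.0     1.0     1.0     1.0
0.0     0.0     1.0     1.0
</pre>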
<p>
You can either write out the topology file manually, or
use the script make_topology.pl provided with the SPHINX package to
do this. The script needs the following arguments:
<pre>
        states_per_hmm : this is merely an integer specifying the
                         number of states per hmm
        skipstate      : "yes" or "no" depending on whether you
                         want the HMMs to have skipped state transitions
                         or not.
</pre>
<p>
 Note that the topology file
is common for all HMMs and is a single file containing the topology
definition matrix. This file also defines your model architecture and is
usually placed in the model_architecture directory. This placement is
optional but recommended. If you are running scripts from the SPHINX
training package, you will find the file created in the model_architecture
directory.
<p> 


<a name="22"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>FLAT INITIALIZATION OF CI MODEL PARAMETERS</td>
</table>
<!------------------------------------------------------------------------->
<p>
CI models consist of 4 parameter files :
<ul>
<li><b>mixture_weights</b>: the weights given to every Gaussian in the Gaussian mixture corresponding to a state
<li><b>transition_matrices</b>: the matrix of state transition probabilities
<li><b>means</b>: means of all Gaussians
<li><b>variances</b>: variances of all Gaussians
</ul>
<p>
To begin training the CI models, each of these files must have some initial
entries, i.e., they must be "initialized". The mixture_weights and
transition_matrices are initialized using the executable <b><font
color="green">mk_flat</font></b>. It needs the following arguments:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td> -moddeffn </td><td> CI model definition file </td></tr>
<tr><td> -topo </td><td>  HMM topology file </td></tr>
<tr><td> -mixwfn </td><td>  file in which you want to write the
initialized mixture weights </td></tr>
<tr><td> -tmatfn </td><td>  file in which you want to write the
initialized transition matrices </td></tr>
<tr><td> -nstream </td><td> number of independent feature streams, for
continuous models this number should be set to "1", without the double quotes
</td></tr>
<tr><td> -ndensity </td><td> number of Gaussians modeling each state. For CI
models, this number should be set to "1" </td></tr>
</table>
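<p>
A hypothetical invocation (file names are placeholders) might be:
<pre>
mk_flat \
        -moddeffn  model_architecture/my.ci.mdef \
        -topo      model_architecture/my.topology \
        -mixwfn    model_parameters/flatinitial.mixw \
        -tmatfn    model_parameters/flatinitial.tmat \
        -nstream   1 \
        -ndensity  1
</pre>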
<p>
To initialize the means and variances, global values of these parameters are
first estimated and then copied into appropriate positions in the parameter
files.

The global mean is computed using
   all the vectors you have in your feature files. This is usually
   a very large number, so the job is divided into many parts. At this
   stage you tell the Sphinx how many parts you want it to divide this
   operation into (depending on the computing facilities you have)
   and the Sphinx "accumulates" or gathers up the vectors for each part
   separately and writes them into an intermediate buffer on your machine.
The executable <b><font color="green">init_gau</font></b> is used
for this purpose. It needs the following arguments:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION  </td></tr>
<tr><td> -accumdir </td><td> directory in which you want to write the
intermediate buffers </td></tr>
<tr><td> -ctlfn </td><td> control file </td></tr>
<tr><td> -part  </td><td>  part number </td></tr>
<tr><td> -npart </td><td>  total number of parts </td></tr>
<tr><td> -cepdir </td><td>  path to feature files - this will be appended 
before all paths given in the control file </td></tr>
<tr><td> -cepext </td><td>  filename extension of feature files, eg. "mfc"
for files called a/b/c.mfc. Double quotes are not needed </td></tr>
<tr><td> -feat  </td><td>  type of feature </td></tr>
<tr><td> -ceplen </td><td>  dimensionality of base feature vectors </td></tr>
<tr><td> -agc </td><td>  automatic gain control factor (max/none) </td></tr>
<tr><td> -cmn  </td><td>  cepstral mean normalization (yes/no) </td></tr>
<tr><td> -varnorm </td><td>  variance normalization (yes/no) </td></tr>
</table>
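<p>
As a sketch, a single-part run of this first pass might look like the
following (all paths are placeholders, the -feat value is omitted, and the
feature settings must match whatever your front end produced):
<pre>
init_gau \
        -accumdir  gauden_buffers \
        -ctlfn     my.ctl \
        -part      1 \
        -npart     1 \
        -cepdir    /my/feature/files \
        -cepext    mfc \
        -ceplen    13 \
        -agc       none \
        -cmn       yes \
        -varnorm   no
</pre>
If you divide the job into several parts, run one such command per part,
changing only the -part value (and, typically, the buffer directory).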
<p>

 Once the buffers are written, the contents of the buffers are
"normalized" or  used to compute a global mean value for the feature vectors. 
This is done using the executable <b><font color="green">norm</font></b> with
the following flag settings:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION  </td></tr>
<tr><td> -accumdir </td><td> buffer directory </td></tr>
<tr><td> -meanfn </td><td> file in which you want to write the global mean </td></tr>
<tr><td> -feat </td><td> type of feature </td></tr>
<tr><td> -ceplen </td><td> dimensionality of base feature vector </td></tr>
</table>
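<p>
The corresponding hypothetical norm call (placeholder paths, -feat omitted)
would then be:
<pre>
norm \
        -accumdir  gauden_buffers \
        -meanfn    model_parameters/globalmean \
        -ceplen    13
</pre>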
<p>
The next step is to "accumulate" the vectors for computing a global variance
value. The executable <b><font color="green">init_gau</font></b>, when 
called a second time around, takes the value of the global mean and
collects a set of (vector-globalmean)<sup>2</sup> values for the
entire data set. This time around, this executable needs the following
arguments:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION  </td></tr>
<tr><td> -accumdir </td><td> directory in which you want to write the
intermediate buffers </td></tr>
<tr><td> -meanfn </td><td>  globalmean file </td></tr> 
<tr><td> -ctlfn </td><td> control file </td></tr>
<tr><td> -part  </td><td>  part number </td></tr>
<tr><td> -npart </td><td>  total number of parts </td></tr>
<tr><td> -cepdir </td><td>  path to feature files - this will be appended
before all paths given in the control file </td></tr>
<tr><td> -cepext </td><td>  filename extension of feature files, eg. "mfc"
for files called a/b/c.mfc. Double quotes are not needed </td></tr>
<tr><td> -feat  </td><td>  type of feature </td></tr>
<tr><td> -ceplen </td><td>  dimensionality of base feature vectors </td></tr>
<tr><td> -agc </td><td>  automatic gain control factor (max/none) </td></tr>
<tr><td> -cmn  </td><td>  cepstral mean normalization (yes/no) </td></tr>
<tr><td> -varnorm </td><td>  variance normalization (yes/no) </td></tr>
</table>
<p>
Again, once the buffers are written, the contents of the buffers are
"normalized" or  used to compute a global variance value for the feature 
vectors. This is again done using the executable 
<b><font color="green">norm</font></b> with
the following flag settings:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION  </td></tr>
<tr><td> -accumdir </td><td> buffer directory </td></tr>
<tr><td> -varfn </td><td> file in which you want to write the global variance
 </td></tr>
<tr><td> -feat </td><td> type of feature </td></tr>
<tr><td> -ceplen </td><td> dimensionality of base feature vector </td></tr>
</table>
<p>
Once the global mean and global variance are computed, they have to be
copied into the means and variances of every
state of each of the HMMs. The global mean is written
into appropriate state positions in a <b>means</b> file while the
global variance is written into appropriate state positions in a <b>variances</b> file. If you are using the scripts provided with the SPHINX package,
you will find these files with "flatinitial" as part of its name in
the model_parameters directory.
<p>
The flat <b>means</b> and <b>variances</b> file can be  created using  the
executable <b><font color="green">cp_parm</font></b>. In order to be able
    to use this executable you will have to create a <b>copyoperations
map</b> file which is
    a two-column file, with the left column id-ing the state *to* which 
    the global value has to be copied, and the right column id-ing the state
    *from* which it has to be copied. If there are "nphones"
       CI phones and each state has "nEstate_per_hmm" EMITTING states, there
       will be ntotal_Estates = nphones * nEstate_per_hmm lines in the
       copyoperations map file; the state id-s (on the left column) run from 0
       thru (ntotal_Estates - 1). Here is an example
       for a 3-state hmm (nEstate_per_hmm = 3) for two phones (nphones = 2)
       (ntotal_Estates = 6; so, state ids would vary from 0-5):
<pre>
0   0
1   0
2   0
3   0
4   0
5   0
</pre>
<b><font color="green">cp_parm</font></b> requires the following arguments.
<p>
<table border="1">
<tr><td> FLAG  </td><td> DESCRIPTION </td></tr>
<tr><td> -cpopsfn   </td><td>  copyoperations map file </td></tr>
<tr><td> -igaufn  </td><td> input global mean (or variance) file </td></tr>
<tr><td> -ncbout  </td><td> number of phones times the number of states per
HMM (ie, total number of states) </td></tr>
<tr><td> -ogaufn  </td><td> output initialized means (or variances) file </td></tr>
</table>
<p>
<b><font color="green">cp_parm</font></b> has to be run twice, once for
copying the means, and once for copying the variances. This completes the
initialization process for CI training.
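<p>
For example, the two runs might look like this (file names are placeholders;
-ncbout is 6 for the two-phone, 3-state example above):
<pre>
cp_parm \
        -cpopsfn  model_architecture/copyoperations.map \
        -igaufn   model_parameters/globalmean \
        -ncbout   6 \
        -ogaufn   model_parameters/flatinitial.means

cp_parm \
        -cpopsfn  model_architecture/copyoperations.map \
        -igaufn   model_parameters/globalvar \
        -ncbout   6 \
        -ogaufn   model_parameters/flatinitial.var
</pre>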
<hr>
<p>



<a name="23"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>TRAINING CONTEXT INDEPENDENT MODELS</td>
</table>
<!------------------------------------------------------------------------->
<p>
Once the flat initialization is done, you are ready to begin training the acoustic models
    for the base or "context-independent" or CI phones. This step is
    called CI-training. In CI-training, the flat-initialized models
    are re-estimated through the forward-backward re-estimation algorithm
    called the Baum-Welch algorithm. This is an iterative re-estimation
process, so you have to run many "passes" of the Baum-Welch re-estimation
over your training data. Each of these passes, or iterations, results in a
slightly better set of models for the CI phones. However, since the 
objective function maximized in each of these passes is the likelihood,
too many iterations would ultimately result in models which fit very
closely to the training data. You might not want this to happen, for many
reasons. Typically, 5-8 iterations of Baum-Welch are sufficient for 
getting good estimates of the CI models. You can automatically determine the
number of iterations that you need by looking at the total likelihood of the
training data at the end of the first iteration and deciding on a
"convergence ratio" of likelihoods. This is simply the ratio of the
total likelihood in the current iteration to that of the previous iteration.
As the models get more and more fitted to the training data in each
iteration, the training data likelihoods typically increase monotonically.
The convergence ratio is therefore a small positive number. The convergence
ratio becomes smaller and smaller as the iterations progress, since each
time the current models are a little less different from the previous ones.
Convergence ratios are data and task specific, but  typical values at which
you may stop the Baum-Welch iterations for your CI training may
range from 0.1-0.001. When the models are variance-normalized, the convergence ratios are much smaller.
<p>
The executable used to run a Baum-Welch iteration is called "bw", and takes the
following example arguments for training continuous CI models:
<table border="1", noshade>
<tr><td align="center"> FLAG     </td> <td align="center"> DESCRIPTION </td> </tr>
<tr><td valign="top"> -moddeffn</td> <td> model definition file for CI phones </td></tr>
<tr><td valign="top"> -ts2cbfn </td> <td> this flag should be set to ".cont." if
                             you are training continuous models, and to
                             ".semi." if you are <a href="#3f">training semi-continuous 
                             models</a>, without the double quotes </td></tr>
<tr><td valign="top"> -mixwfn  </td> <td> name of the file in which the 
mixture-weights from the previous iteration are stored. Full path must be 
                             provided</td></tr>
<tr><td valign="top"> -mwfloor </td> <td> Floor value for the mixture weights. Any number 
                             below the floor value is set to the floor 
                             value.</td></tr>
<tr><td valign="top"> -tmatfn  </td> <td> name of the file in which 
 the transition matrices from the previous iteration are stored. 
Full path must be  provided</td></tr>
<tr><td valign="top"> -meanfn  </td> <td>name of the file in which 
the means from the previous iteration are stored. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -varfn   </td> <td>name of the file in which 
the variances from the previous iteration are stored.
Full path must be provided</td></tr>         
<tr><td valign="top"> -dictfn  </td> <td> Dictionary </td></tr>
<tr><td valign="top"> -fdictfn </td> <td> Filler dictionary</td></tr>
<tr><td valign="top"> -ctlfn   </td> <td> control file </td></tr>
<tr><td valign="top"> -part    </td> <td> You can split the training into N equal parts by
setting a flag. If there are M utterances in your control file, then this will
enable you to run the training separately on each (M/N)<sup>th</sup> part. This
flag may be set to specify which of these parts you want to currently train 
on. As an example, if your total number of parts is 3, this flag can take
one of the values 1,2 or 3</td></tr>
<tr><td valign="top"> -npart   </td><td>  number of parts in which you have split 
                             the training </td></tr>
<tr><td valign="top"> -cepdir  </td><td> directory where your feature files are
                            stored</td></tr>
<tr><td valign="top"> -cepext  </td><td> the extension that comes after the name listed
in the control file. For example, you may have a file called a/b/c.d and
may have listed a/b/c in your control file. Then this flag must be given the
argument "d", without the double quotes or the dot before it </td></tr>
<tr><td valign="top"> -lsnfn   </td><td> name of the transcript file </td></tr>
<tr><td valign="top"> -accumdir </td><td> Intermediate results from each part of your training will be written in this directory. If you have T means to estimate, then

the size of the mean buffer from the current part of your training will
be T*4 bytes (say). There will likewise be a variance buffer, a buffer for
mixture weights, and a buffer for transition matrices</td></tr>
<tr><td valign="top"> -varfloor  </td><td> minimum variance value allowed </td></tr>
<tr><td valign="top"> -topn  </td><td> no. of gaussians to consider for computing the likelihood of each state. For example, if you have 8 gaussians/state models and topn is 4, then the 4 most
likely gaussians are used. </td></tr>
<tr><td valign="top"> -abeam </td><td> forward beamwidth</td></tr>
<tr><td valign="top"> -bbeam </td><td> backward beamwidth</td></tr>
<tr><td valign="top"> -agc   </td><td> automatic gain control</td></tr>
<tr><td valign="top"> -cmn   </td><td> cepstral mean normalization</td></tr>
<tr><td valign="top"> -varnorm  </td><td> variance normalization</td></tr>
<tr><td valign="top"> -meanreest </td><td> mean re-estimation</td></tr>
<tr><td valign="top"> -varreest </td><td>  variance re-estimation</td></tr>
<tr><td valign="top"> -2passvar </td><td> Setting this flag to "yes" lets bw 
use the previous means in the estimation of the variance. The current variance
is then estimated as E[(x - prev_mean)<sup>2</sup>]. If this flag is set to 
"no" the current estimate of the means are used to estimate variances. This 
requires the estimation of variance as E[x<sup>2</sup>] - (E[x])<sup>2</sup>, 
an unstable estimator that sometimes results in negative estimates of the 
variance due to arithmetic imprecision</td></tr>
<tr><td valign="top"> -tmatreest </td><td> re-estimate transition matrices or not</td></tr>
<tr><td valign="top"> -feat    </td><td>   feature configuration</td></tr>
<tr><td valign="top"> -ceplen  </td><td>  length of basic feature vector</td></tr>
</table>
<p>

If you have run the training in many parts, or even if you have run the
training in one part, the executable for Baum-Welch described above generates
only intermediate buffer(s). The final model parameters, namely the
means, variances, mixture-weights and transition matrices, have to be
estimated using the values stored in these buffers. This is done by the
executable called "norm", which takes the following arguments:
<table border="1", noshade>

<tr><td align="center"> FLAG </td> <td align="center"> DESCRIPTION </td></tr>
<tr><td valign="top"> -accumdir </td><td> Intermediate buffer directory</td></tr>
<tr><td valign="top"> -feat    </td><td>   feature configuration</td></tr>
<tr><td valign="top"> -mixwfn  </td> <td> name of the file in which you want 
                                          to write the mixture weights.
Full path must be provided</td></tr>
<tr><td valign="top"> -tmatfn  </td> <td> name of the file in which you want to write
                             the transition matrices. Full path must be
                             provided</td></tr>
<tr><td valign="top"> -meanfn  </td> <td>name of the file in which you want to write
                             the means. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -varfn   </td> <td>name of the file in which you want to write
                             the variances. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -ceplen  </td><td>  length of basic feature vector</td></tr>
</table>
If you have not re-estimated any of the model parameters in the bw step, then
the corresponding flag must be omitted from the argument given to the
norm executable. The executable will otherwise try to read a non-existent 
buffer from the buffer directory and will fail. Thus if you have
set -meanreest to be "no" in the argument for bw, then the flag -meanfn must
not be given in the argument for norm. This is useful mostly during adaptation.
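<p>
As an illustration, here is a minimal sketch of one such iteration: a single
bw pass over the whole corpus followed by norm. All file and directory names
below are hypothetical placeholders, and the feature settings (a 13-dimensional
base cepstrum is only assumed here) must match your own configuration; the
remaining flags are as described in the tables above.
<pre>
# one Baum-Welch pass, accumulating buffers into bwaccumdir
bw   -moddeffn ci.mdef -ts2cbfn .cont. \
     -mixwfn iter1/mixture_weights -tmatfn iter1/transition_matrices \
     -meanfn iter1/means -varfn iter1/variances \
     -dictfn train.dic -fdictfn filler.dic \
     -ctlfn train.ctl -lsnfn train.lsn \
     -cepdir feat -cepext mfc -accumdir bwaccumdir \
     -meanreest yes -varreest yes -tmatreest yes \
     -feat [feature_type] -ceplen 13

# estimate the next set of model parameters from the accumulated buffers
norm -accumdir bwaccumdir \
     -mixwfn iter2/mixture_weights -tmatfn iter2/transition_matrices \
     -meanfn iter2/means -varfn iter2/variances \
     -feat [feature_type] -ceplen 13
</pre>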
<p>
Iterations of Baum-Welch and norm finally result in CI models. The iterations
can be stopped once the likelihood on the training data converges. The
model parameters computed by norm in the final iteration are now used
to initialize the models for context-dependent phones (triphones) with
untied states. This is the next major step of the training process. We
refer to the process of training triphones HMMs with untied states as the
"CD untied training".
<hr>
<p>


<a name="24"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE CD UNTIED MODEL DEFINITION FILE</td>
</table>
<!------------------------------------------------------------------------->
<p>
The next step is the CD-untied training, in which HMMs are trained
for all context-dependent phones (usually triphones) that are seen in the 
training corpus. For the CD-untied training, we first need
to generate a model definition file for all the triphones
occurring in the training set. This is done in several steps: 
<ul> 

First, a list of all triphones possible in the vocabulary is generated
from the dictionary. To get this complete list of triphones from the
dictionary, it is first necessary to write the list of phones in the
following format:
<pre>
phone1 0 0 0 0
phone2 0 0 0 0
phone3 0 0 0 0
phone4 0 0 0 0
...
</pre>
The phonelist used for the CI training must be used to generate this, and
the order in which the phones are listed must be the same.
<p>
Next, a temporary dictionary is generated, which has all words except
the filler words (words enclosed in ++()++ ). The entry 
<pre>
SIL    SIL
</pre>
must be
added to this temporary dictionary, and the dictionary must be sorted in
alphabetical order.  The program "quick_count" provided with the SPHINX-III
package can now be used to generate the list of all possible triphones from
the temporary dictionary.  It takes the following arguments:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td valign="top"> -q </td><td> mandatory flag to tell quick_count 
to consider all word pairs while constructing triphone list </td></tr>
<tr><td> -p </td><td> formatted phonelist </td></tr>
<tr><td> -b </td><td> temporary dictionary </td></tr>
<tr><td> -o </td><td> output triphone list </td></tr>
</table>
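<p>
For example, quick_count might be invoked as follows. The file names here are
hypothetical placeholders; check your build of quick_count for the exact way
the mandatory -q flag is to be given:
<pre>
quick_count -q -p phonelist.formatted -b temp.dic -o all.triphones
</pre>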
<p>
Here is a typical output from quick_count
<pre>
AA(AA,AA)s              1
AA(AA,AE)b              1
AA(AA,AO)1              1
AA(AA,AW)e              1
</pre>
<p>
The "1" in AA(AA,AO)1 indicates that this is a word-internal triphone. This
is a carry-over from Sphinx-II. The output from quick_count now has to be
converted into the following format:

<pre>
AA AA AA s
AA AA AE b
AA AA AO i
AA AA AW e
</pre>

This can be done by simply replacing "(", ",", and ")" in the 
output of quick_count by a space and printing only the first four
columns. 
While
doing so, all instances of " 1" must be replaced by " i". To the top of
the resulting file, the list of CI phones must be appended in the following
format:
<pre>
AA - - -
AE - - -
AO - - -
AW - - -
..
..                                                         
AA AA AA s
AA AA AE b
AA AA AO i
AA AA AW e
</pre>
<br>
<em>For example, if the output of quick_count is stored in
a file named "quick_count.out", the following perl command will
generate the phone list in the desired form:</em>
<pre>
perl -nae '$F[0] =~ s/\(|\)|\,/ /g; $F[0] =~ s/1/i/g; print $F[0]; if ($F[0] =~ /\s+$/){print "i"}; print "\n"' quick_count.out
</pre>
<p>
The above list of triphones (and phones) is converted to the model
definition file that lists all possible triphones from the dictionary. The
program used for this is "mk_model_def", which takes the following arguments:
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td> -moddeffn  </td><td> model definition file with all possible triphones (alltriphones_mdef) to be written</td></tr>
<tr><td> -phonelstfn </td><td>  list of all triphones </td></tr>
<tr><td> -n_state_pm </td><td> number of states per HMM </td></tr>
</table>
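<p>
A hypothetical invocation (with placeholder file names), assuming 3-state HMMs
as in the examples in this manual, would be:
<pre>
mk_model_def -moddeffn alltriphones.mdef -phonelstfn all.triphones -n_state_pm 3
</pre>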
                                                                              
In the next step we find the number of times each of the triphones
listed in the alltriphones_mdef occurred in the training corpus.
To do this we call the program "param_cnt", which takes the following
arguments:
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td> -moddeffn  </td><td> model definition file with all possible triphones (alltriphones_mdef)</td></tr>
<tr><td> -ts2cbfn  </td><td>  takes the value ".cont." if you are building continuous models</td></tr>
<tr><td> -ctlfn  </td><td> control file corresponding to your training 
transcripts</td></tr>
<tr><td> -lsnfn  </td><td> transcript file for training </td></tr>
<tr><td> -dictfn  </td><td>  training dictionary </td></tr>
<tr><td> -fdictfn   </td><td>  filler dictionary </td></tr>
<tr><td> -paramtype  </td><td>   write  "phone" here, without the double 
quotes</td></tr>
<tr><td> -segdir </td><td>   /dev/null </td></tr>
</table>
<p>
param_cnt writes out the counts for each of the triphones onto stdout.
All other messages are sent to stderr. The stdout therefore has to
be directed into a file. If you are using csh or tcsh it would be done
in the following manner:
<pre>
(param_cnt [arguments] > triphone_count_file) >&! LOG
</pre>
Here's an example of the output of this program
<pre>
+GARBAGE+ - - - 98
+LAUGH+ - - - 29
SIL - - - 31694
AA - - - 0
AE - - - 0
...
AA AA AA s 1
AA AA AE s 0
AA AA AO s 4
</pre>
The final number in each row shows the number of times that particular
triphone (or filler phone) has occurred in the training corpus. Note that
if all possible triphones of a CI phone are listed in the all_triphones.mdef,
the CI phone itself will have 0 counts, since all instances of it would have
been mapped to a triphone.                               
<p>
This list of counted triphones is used to shortlist the triphones that
have occurred a minimum number (threshold) of times. The shortlisted
triphones appear in the same format as the file from which they have been
selected. 
The shortlisted triphone list has the same format as the triphone list used
to generate the all_triphones.mdef. The formatted list of CI phones has to
be included in this as before. So, in the earlier example, if a threshold
of 4 were used, we would obtain the shortlisted triphone list as
<pre>
AA - - -
AE - - -
AO - - -
AW - - -
..
..                                 
AA AA AO s
..
</pre>
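<p>
One rough way to produce such a shortlist from the triphone count file is
sketched below. This is only an illustrative sketch: the file names and the
threshold of 4 are placeholders, and it simply keeps every row whose second
column is "-" (the CI and filler phone rows) together with every triphone row
whose count is at least the threshold, dropping the count column. Adjust it to
match the exact format you used for the all-triphones list.
<pre>
awk '$2 == "-" || $5 >= 4 { print $1, $2, $3, $4 }' \
    triphone_count_file > shortlisted.triphones
</pre>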

The threshold is adjusted such that the total number of triphones
above the threshold is less than the maximum number of triphones that the
system can train (or that you wish to train). It is good to train as many
triphones as possible. The maximum number of triphones may however be
dependent on the memory available on your machine. The logistics related
to this are described in the beginning of this manual.
<p>
Note that thresholding is usually done to reduce the number of
triphones, in order that the resulting models will be small enough to fit
in the computer's memory. If this is not a problem, then the threshold can
be set to a smaller number. If the triphone occurs too few times, however,
(ie, if the threshold is too small), there will not be enough data to train
the HMM state distributions properly. This would lead to poorly
estimated CD untied models, which in turn may affect the decision trees
which are to be built using these models in the next major step of the
training. 
<p>

A model definition file is now created to include only these shortlisted
triphones.  This is the final model definition file to be used for the
CD untied training. The reduced triphone list is then converted to the model 
definition file using mk_model_def with the following arguments:
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td> -moddeffn  </td><td> model definition file for CD untied 
training</td></tr>
<tr><td> -phonelstfn </td><td>  list of shortlisted triphones </td></tr>
<tr><td> -n_state_pm </td><td> number of states per HMM </td></tr>
</table>
</ul>
<p>
Finally, therefore, a model definition file which
lists all CI phones and seen triphones  is constructed. This file, like
the CI model-definition file, assigns unique id's to each HMM state
and serves as a reference file for handling and identifying the CD-untied 
model parameters. Here is an example of the CD-untied model-definition file:
If you have listed five phones in your phones.list file,
<pre>
SIL
B
AE
T
AX
</pre>
<p>
   and specify that you want to build three state HMMs for each
   of these phones, and if you have one utterance listed in your
transcript file:
<p> &#60s> BAT A TAB &#60/s>
for which your dictionary and fillerdict entries are:
<pre>
Fillerdict:
&#60s>   SIL
&#60/s>  SIL
</pre>
<pre>
Dictionary:
A      AX 
BAT    B AE T
TAB    T AE B
</pre>
<p>
then your CD-untied model-definition file will look like this:
<pre>
# Generated by &lt;path_to_binary&gt;/mk_model_def on Thu Aug 10 14:57:15 2000
0.3
5 n_base
7 n_tri
48 n_state_map
36 n_tied_state
15 n_tied_ci_state
5 n_tied_tmat                                                                  
#
# Columns definitions
#base lft  rt p attrib   tmat  ...state id's ...
SIL     -   -  - filler    0    0       1      2     N
AE      -   -  -    n/a    1    3       4      5     N
AX      -   -  -    n/a    2    6       7      8     N
B       -   -  -    n/a    3    9       10     11    N
T       -   -  -    n/a    4    12      13     14    N
AE      B   T  i    n/a    1    15      16     17    N
AE      T   B  i    n/a    1    18      19     20    N
AX      T   T  s    n/a    2    21      22     23    N
B       SIL AE b    n/a    3    24      25     26    N
B       AE  SIL e   n/a    3    27      28     29    N
T       AE  AX e    n/a    4    30      31     32    N
T       AX  AE b    n/a    4    33      34     35    N

The # lines are simply comments. The rest of the variables mean the following:

  n_base      : no. of CI phones (also called "base" phones), 5 here
  n_tri       : no. of triphones , 7 in this case
  n_state_map : Total no. of HMM states (emitting and non-emitting)
                The Sphinx appends an extra terminal non-emitting state
                to every HMM, hence for 5+7 phones, each specified by
                the user to be modeled by a 3-state HMM, this number
                will be 12 phones * 4 states = 48
  n_tied_state: no. of states of all phones after state-sharing is done.
                We do not share states at this stage. Hence this number is the
                same as the total number of emitting states, 12*3=36
n_tied_ci_state:no. of states for your CI phones after state-sharing     
                is done. The CI states are not shared, now or later.
                This number is thus again the total number of emitting CI
                states, 5*3=15
 n_tied_tmat   : The total number of transition matrices is always the same
                 as the total number of CI phones being modeled. All triphones
                 for a given phone share the same transition matrix. This
                 number is thus 5.

Columns definitions: The following columns are defined:
       base  : name of each phone
       lft   : left-context of the phone (- if none)
       rt    : right-context of the phone (- if none)
       p     : position of a triphone. Four position markers are supported:
               b = word beginning triphone
               e = word ending triphone
               i = word internal triphone
               s = single word triphone 
       attrib: attribute of phone. In the phone list, if the phone is "SIL",
               or if the phone is enclosed by "+", as in "+BANG+", these
              phones are interpreted as non-speech events. These are
               also called "filler" phones, and the attribute "filler" is
               assigned to each such phone. The base phones and the
               triphones have no special attributes, and hence are 
               labelled as "n/a", standing for "no attribute"
      tmat   : the id of the transition matrix associated with the phone      
 state id's  : the ids of the HMM states associated with any phone. This list
               is terminated by an "N" which stands for a non-emitting
               state. No id is assigned to it. However, it exists, and is
               listed.
</pre>                  
<hr>
<p>



<a name="25"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>FLAT INITIALIZATION OF CD UNTIED MODEL PARAMETERS</td>
</table>
<!------------------------------------------------------------------------->
<p>
In the next step in CD untied training, after the CD untied model 
definition file has been constructed, the model parameters are first
initialized. During this process, the model parameter files corresponding
to the CD untied model-definition file are generated. Four files are
generated: means, variances, transition matrices and mixture weights. In
each of these files, the values are first copied from the corresponding
CI model parameter file. Each state of a particular CI phone contributes 
to the same state of the same CI phone in the CD-untied model parameter file,
and also to the same state of *all* the triphones of the same CI
phone in the CD-untied model parameter file. The CD-untied model definition
file is of course used as a reference for this mapping. This process, as
usual, is called "initialization".
<p>
Initialization for the CD-untied training is done by the executable called
"init_mixw".  It need the following arguments:
<table border="1">
<tr><td valign="top"> -src_moddeffn </td><td> source (CI) model definition file </td></tr>
<tr><td valign="top"> -src_ts2cbfn </td><td> .cont. </td></tr>
<tr><td valign="top"> -src_mixwfn </td><td> source (CI) mixture-weight file </td></tr>
<tr><td valign="top"> -src_meanfn </td><td> source (CI) means file </td></tr>
<tr><td valign="top"> -src_varfn </td><td> source (CI) variances file </td></tr>
<tr><td valign="top"> -src_tmatfn </td><td> source (CI) transition-matrices file  </td></tr>
<tr><td valign="top"> -dest_moddeffn </td><td> destination (CD untied) model definition file </td></tr>
<tr><td valign="top"> -dest_ts2cbfn </td><td> .cont. </td></tr>
<tr><td valign="top"> -dest_mixwfn </td><td> destination (CD untied) mixtrue weights file </td></tr>
<tr><td valign="top"> -dest_meanfn </td><td> destination (Cd untied) means file </td></tr>
<tr><td valign="top"> -dest_varfn </td><td> destination (CD untied) variances file </td></tr>
<tr><td valign="top"> -dest_tmatfn </td><td> destination (Cd untied) transition matrices file </td></tr>
<tr><td valign="top"> -feat </td><td> feature configuration </td></tr>
<tr><td valign="top"> -ceplen </td><td> dimensionality of base feature vector </td></tr>
</table>
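<p>
A hypothetical invocation (all file names are placeholders, and the feature
settings must match your own configuration) might look like:
<pre>
init_mixw -src_moddeffn ci.mdef -src_ts2cbfn .cont. \
          -src_mixwfn ci/mixture_weights -src_meanfn ci/means \
          -src_varfn ci/variances -src_tmatfn ci/transition_matrices \
          -dest_moddeffn cd_untied.mdef -dest_ts2cbfn .cont. \
          -dest_mixwfn cd_untied/mixture_weights -dest_meanfn cd_untied/means \
          -dest_varfn cd_untied/variances -dest_tmatfn cd_untied/transition_matrices \
          -feat [feature_type] -ceplen 13
</pre>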
<hr>
<p>

<a name="26"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>TRAINING CD UNTIED MODELS</td>
</table>
<!------------------------------------------------------------------------->
<p>
Once the initialization for CD-untied training is done, the next step is to
actually train the CD untied models. To do this, as in the CI training,
the Baum-Welch forward-backward algorithm is used iteratively. Each iteration
consists of generating bw buffers by running the bw executable on the
training corpus (this can be divided into many parts as explained in the
CI training), followed by running the norm executable to compute the
final parameters at the end of the iteration. The arguments required by the
bw executable at this stage are as follows.
<p>
<table border="1">
<tr><td align="center"> FLAG     </td> <td align="center"> DESCRIPTION </td> </tr>
<tr><td valign="top"> -moddeffn</td> <td> CD-untied model definition file </td></tr>
<tr><td valign="top"> -ts2cbfn </td> <td> this flag should be set to ".cont." if
                             you are training continuous models, and to
                             ".semi." if you are training semi-continuous 
                             models, without the double quotes </td></tr>
<tr><td valign="top"> -mixwfn  </td> <td> name of the file in which the 
mixture-weights from the previous iteration are stored. Full path must be 
                             provided</td></tr>
<tr><td valign="top"> -mwfloor </td> <td> Floor value for the mixture weights. Any number 
                             below the floor value is set to the floor 
                             value.</td></tr>
<tr><td valign="top"> -tmatfn  </td> <td> name of the file in which 
 the transition matrices from the previous iteration are stored. 
Full path must be  provided</td></tr>
<tr><td valign="top"> -meanfn  </td> <td>name of the file in which 
the means from the previous iteration are stored. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -varfn   </td> <td>name of the file in which 
the variances from the previous iteration are stored.
Full path must be provided</td></tr>         
<tr><td valign="top"> -dictfn  </td> <td> Dictionary </td></tr>
<tr><td valign="top"> -fdictfn </td> <td> Filler dictionary</td></tr>
<tr><td valign="top"> -ctlfn   </td> <td> control file </td></tr>
<tr><td valign="top"> -part    </td> <td> You can split the training into N equal parts by
setting a flag. If there are M utterances in your control file, then this will
enable you to run the training separately on each (M/N)<sup>th</sup> part. This
flag may be set to specify which of these parts you want to currently train 
on. As an example, if your total number of parts is 3, this flag can take
one of the values 1,2 or 3</td></tr>
<tr><td valign="top"> -npart   </td><td>  number of parts in which you have split 
                             the training </td></tr>
<tr><td valign="top"> -cepdir  </td><td> directory where your feature files are
                            stored</td></tr>
<tr><td valign="top"> -cepext  </td><td> the extension that comes after the name listed
in the control file. For example, you may have a file called a/b/c.d and
may have listed a/b/c in your control file. Then this flag must be given the
argument "d", without the double quotes or the dot before it </td></tr>
<tr><td valign="top"> -lsnfn   </td><td> name of the transcript file </td></tr>
<tr><td valign="top"> -accumdir </td><td> Intermediate results from each part of your training will be written in this directory. If you have T means to estimate, then

the size of the mean buffer from the current part of your training will
be T*4 bytes (say). There will likewise be a variance buffer, a buffer for
mixture weights, and a buffer for transition matrices</td></tr>
<tr><td valign="top"> -varfloor  </td><td> minimum variance value allowed </td></tr>
<tr><td valign="top"> -topn  </td><td> no. of gaussians to consider for likelihood computation</td></tr>
<tr><td valign="top"> -abeam </td><td> forward beamwidth</td></tr>
<tr><td valign="top"> -bbeam </td><td> backward beamwidth</td></tr>
<tr><td valign="top"> -agc   </td><td> automatic gain control</td></tr>
<tr><td valign="top"> -cmn   </td><td> cepstral mean normalization</td></tr>
<tr><td valign="top"> -varnorm  </td><td> variance normalization</td></tr>
<tr><td valign="top"> -meanreest </td><td> mean re-estimation</td></tr>
<tr><td valign="top"> -varreest </td><td>  variance re-estimation</td></tr>
<tr><td valign="top"> -2passvar </td><td> Setting this flag to "yes" lets bw 
use the previous means in the estimation of the variance. The current variance
is then estimated as E[(x - prev_mean)<sup>2</sup>]. If this flag is set to 
"no" the current estimate of the means are used to estimate variances. This 
requires the estimation of variance as E[x<sup>2</sup>] - (E[x])<sup>2</sup>, 
an unstable estimator that sometimes results in negative estimates of the 
variance due to arithmetic imprecision</td></tr>
<tr><td valign="top"> -tmatreest </td><td> re-estimate transition matrices or not</td></tr>
<tr><td valign="top"> -feat    </td><td>   feature configuration</td></tr>
<tr><td valign="top"> -ceplen  </td><td>  length of basic feature vector</td></tr>
</table>
<p>

The Baum-Welch step should be followed by the normalization step. The
executable "norm" must be used for this. The arguments required by the norm
executable are the same as those for CI training, and are listed below:
<p>
<table border="1">

<tr><td align="center"> FLAG </td> <td align="center"> DESCRIPTION </td></tr>
<tr><td valign="top"> -accumdir </td><td> Intermediate buffer directory</td></tr>
<tr><td valign="top"> -feat    </td><td>   feature configuration</td></tr>
<tr><td valign="top"> -mixwfn  </td> <td> name of the file in which you want 
                                          to write the mixture weights.
Full path must be provided</td></tr>
<tr><td valign="top"> -tmatfn  </td> <td> name of the file in which you want to write
                             the transition matrices. Full path must be
                             provided</td></tr>
<tr><td valign="top"> -meanfn  </td> <td>name of the file in which you want to write
                             the means. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -varfn   </td> <td>name of the file in which you want to write
                             the variances. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -ceplen  </td><td>  length of basic feature vector</td></tr>
</table>
<p>
The iterations of Baum-Welch and norm must be run until the likelihoods 
converge (i.e., the convergence ratio reaches a small threshold value). Typically
this happens in 6-10 iterations.
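<p>
In script form, this stage is simply a loop over bw and norm. The csh sketch
below (with placeholder names, and a fixed number of iterations instead of an
explicit convergence check) only illustrates the structure:
<pre>
foreach iter ( 1 2 3 4 5 6 7 8 )
    # accumulate buffers with the flags listed in the bw table above
    bw   -moddeffn cd_untied.mdef -ts2cbfn .cont. -accumdir bwaccumdir # + remaining bw flags
    # re-estimate means, variances, mixture weights and transition matrices
    norm -accumdir bwaccumdir # + remaining norm flags
end
</pre>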
<hr>
<p>


<a name="27"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>BUILDING DECISION TREES FOR PARAMETER SHARING</td>
</table>
<!------------------------------------------------------------------------->
<p>
Once the CD-untied models are computed, the next major step in training
continuous models is decision tree building. Decision trees are used to
decide which of the HMM states of all the triphones (seen and unseen) are
similar to each other, so that data from all these states are collected
together and used to train one global state, which is called a "senone".
Many groups of similar states are formed, and  the number of "senones"
that are finally to be trained can be user defined. A senone is also
called a tied-state and is obviously shared across the triphones which
contributed to it. The technical details of decision tree building and
state tying are explained in the technical section of this manual. It is
sufficient to understand here that for state tying, we need to build
decision trees. 
<p>
<a name="28"></a>
<u><b>Generating the linguistic questions</b></u>
<p>

The decision trees require the CD-untied models and a set of predefined
phonetic classes (or classes of the acoustic units you are modeling) which
share some common property. These classes or questions are used to
partition the data at any given node of a tree. Each question results in
one partition, and the question that results in the "best" partition (maximum
increase in likelihood due to the partition) is chosen to partition the
data at that node.  All linguistic questions are written in a single file
called the "linguistic questions" file. One decision tree is built for each
state of each phone.
<p>
For example, if you want to build a decision tree for the contexts (D B P
AE M IY AX OY) for any phone, then you could ask the question: does the
context belong to the class vowels? If you have defined the class vowels to
have the phones AE AX IY OY EY AA EH (in other words, if one of your
linguistic questions has the name "VOWELS" and has the elements AE AX IY OY
EY AA EH corresponding to that name), then the decision tree would branch
as follows:
<pre>
                     D B P AE M IY AX OY
                              |
                             
       question: does this context belong to the class VOWELS ?
                              /\
                             /  \
                            /    \
                         yes      no
                         /         \
                        /           \
                     AE IY AX OY     D B P M
                        |             |
                      question     question
                       /\             /\
                      /  \           /  \
                      
</pre>

Here is an example of a "linguistic-questions" file:
<pre>
ASPSEG      HH
SIL         SIL
VOWELS      AE AX IY OY EY AA EH
ALVSTP      D T N
DENTAL      DH TH
LABSTP      B P M
LIQUID      L R
</pre>
The column on the left specifies the name given to the class. This name
is user defined. The classes consist of a single phone or a cluster of phones
which share some common acoustic property. If your acoustic units are not
completely phonetically motivated, or if you are training models for
a language whose phonetic structure you are not completely sure about,
then the executable called "make_quests" provided with the SPHINX-III
package can be used to generate the linguistic questions. It uses the
CI models to make the questions, and needs the following arguments:
<p>
<table border="1">
<tr><td valign="top"> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td valign="top"> -moddeffn </td><td> CI model definition file </td></tr>
<tr><td valign="top"> -meanfn </td><td> CI means file </td></tr>
<tr><td valign="top"> -varfn </td><td> CI variances file </td></tr>
<tr><td valign="top"> -mixwfn </td><td> CI mixture weights file </td></tr>
<tr><td valign="top"> -npermute </td><td> A bottom-up top-down clustering
algorithm is used to group the phones into classes. Phones are clustered
using bottom-up clustering until npermute classes are obtained. The npermute
classes are exhaustively partitioned into two classes and evaluated to identify
the optimal partitioning of the entire phone set into two groups. 
An identical procedure is performed recursively on each of these groups
to generate an entire tree.
npermute is typically between 8 and 12. Smaller values of npermute result
in suboptimal clustering. Larger values become computationally prohibitive.
</td></tr>
<tr><td valign="top"> -niter </td><td> The bottom-up top-down clustering
can be iterated to give more optimal clusters. niter sets the number of
iterations to run. niter is typically set to 1 or 2. The clustering
saturates after 2 iterations.
</td></tr>
<tr><td valign="top"> -qstperstt </td><td> The algoritm clusters state
distributions belonging to each state of the CI phone HMMs to generate
questions. Thus all 1st states are clustered to generate one subset of
questions, all 2nd states are clustered for the second subset, and so on.
qstperstt determines how many questions are to be generated by clustering
any state. Typically this is set to a number between 20 and 25.
</td></tr>
<tr><td valign="top"> -tempfn </td><td>  </td></tr>
<tr><td valign="top"> -questfn </td><td> output lingustic questions file  </td></tr>
</table>
<p>
Once the linguistic questions have been generated, decision trees must be
built for each state of each CI phone present in your phonelist. Decision
trees are however not built for filler phones written as +()+ in your
phonelist. They are also not built for the SIL phone. In order to build 
decision trees, the executable "bldtree" must be used. It takes the following 
arguments:
<table border="1">
<tr><td valign="top">  FLAG  </td><td> DESCRIPTION  </td></tr>

<tr><td valign="top"> -treefn </td><td> full path to the directory in which
you want the decision trees to be written </td></tr>

<tr><td valign="top"> -moddeffn </td><td> CD-untied model definition file
  </td></tr>

<tr><td valign="top"> -mixwfn </td><td> Cd-untied mixture weights file
  </td></tr>

<tr><td valign="top"> -ts2cbfn  </td><td> .cont.  </td></tr>

<tr><td valign="top"> -meanfn  </td><td> CD-untied means file  </td></tr>

<tr><td valign="top"> -varfn  </td><td> CD-untied variances file  </td></tr>

<tr><td valign="top"> -mwfloor </td><td> Floor value of the mixture
weights.  Values below this are reset to this value. A typical value is
1e-8 </td></tr>

<tr><td valign="top"> -psetfn </td><td> linguistic questions file
</td></tr>

<tr><td valign="top"> -phone </td><td> CI phone for which you want to build
the decision tree </td></tr>

<tr><td valign="top"> -state </td><td> The HMM state for which you want to
build the decision tree. For a three state HMM, this value can be 0,1 or
2. For a 5 state HMM, this value can be 0,1,2,3 or 4, and so on </td></tr>

<tr><td valign="top"> -stwt </td><td> This flag needs a string of numbers
equal to the number of HMM-states, for example, if you were using 5-state
HMMs, then the flag could be given as "-stwt 1.0 0.3 0.1 0.01 0.001". Each
of these numbers specify the weights to be given to state distributions
during tree building, beginning with the *current* state. The second number
specifies the weight to be given to the states *immediately adjacent* to
the current state (if there are any), the third number specifies the weight
to be given to adjacent states *one removed* from the immediately adjacent
one (if there are any), and so on. A typical set of values for 5 state HMMs
is "1.0 0.3 0.1 0.01 0.001" </td></tr>

<tr><td valign="top"> -ssplitmin </td><td> Complex questions are built for
the decision tree by first building "pre-trees" using the linguistic
questions in the question file. The minimum number of bifurcations in this
tree is given by ssplitmin. This should not be less than 1. This value is
typically set to 1.  </td></tr>

<tr><td valign="top"> -ssplitmax </td><td> The maximum number of
bifurcations in the simple tree before it is used to build complex
questions. This number is typically set to 7. Larger values would be more
computationally intensive. This number should not be smaller than the value
given for ssplitmin</td></tr>

<tr><td valign="top"> -ssplitthr </td><td>
Minimum increase in likelihood to be considered for a bifurcation in the
simple tree. Typically set to a very small number greater than or equal to
0 </td></tr>

<tr><td valign="top"> -csplitmin </td><td> The minimum number of
bifurcations in the decision tree. This should not be less than 1
</td></tr>

<tr><td valign="top"> -csplitmax </td><td> The maximum number of
bifurcations in the decision tree. This should be as large as
computationally feasible. This is typically set to 2000 </td></tr>

<tr><td valign="top"> -csplitthr </td><td> Minimum increase in likelihood
to be considered for a bifurcation in the decision tree. Typically set to a
very small number greater than or equal to 0.  </td></tr>

<tr><td valign="top"> -cntthresh </td><td> Minimum number of observations
in a state for it to be considered in the decision tree building process.
</td></tr>
</table>

If, for example, you have a phonelist which contains the following phones
<pre>
+NOISE+
SIL
AA
AX
B
</pre>
and you are training 3 state HMMs, then you must build 9 decision trees,
one each for each state of the phones AA, AX and B.
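<p>
A hypothetical csh sketch of this loop is given below. The file names, the
mixture-weight floor and the state weights for 3-state HMMs are placeholders
or assumptions; use the values appropriate to your own setup, as described in
the table above.
<pre>
foreach phone ( AA AX B )
    foreach state ( 0 1 2 )
        bldtree -treefn trees \
                -moddeffn cd_untied.mdef -mixwfn cd_untied/mixture_weights \
                -meanfn cd_untied/means -varfn cd_untied/variances \
                -ts2cbfn .cont. -mwfloor 1e-8 -psetfn linguistic_questions \
                -phone $phone -state $state -stwt 1.0 0.3 0.1 \
                -ssplitmin 1 -ssplitmax 7 -ssplitthr 0 \
                -csplitmin 1 -csplitmax 2000 -csplitthr 0
    end
end
</pre>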
<hr>
<p>

<a name="29"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>PRUNING THE DECISION TREES</td>
</table>
<!------------------------------------------------------------------------->
<p>
Once the decision trees are built, they must be pruned to have as many
leaves as the number of tied states (senones) that you want to
train. Remember that the number of tied states does not include the CI
states, which are never tied. In the pruning process, the bifurcations in
the decision trees which resulted in the minimum increase in
likelihood are progressively removed and replaced by the parent node. The
selection of the branches to be pruned out is done across the entire
collection of decision trees globally. The executable to be used for
decision tree pruning is called "prunetree" and requires the following 
arguments:
<table border="1">
<tr><td vlign="top"> FLAG </td><td> DESCRIPTION </td></tr>

<tr><td vlign="top"> -itreedir </td><td> directory in which the full
decision trees are stored </td></tr>

<tr><td vlign="top"> -nseno </td><td> number of senones that you want to
train </td></tr>

<tr><td vlign="top"> -otreedir </td><td> directory to store the pruned
decision trees </td></tr>

<tr><td vlign="top"> -moddeffn </td><td> CD-untied model definition file
</td></tr>

<tr><td vlign="top"> -psetfn </td><td> lingistic questions file </td></tr>
<tr><td vlign="top"> -minocc </td><td> minimum number of observations in the
given tied state. If there are fewer observations, the branches corresponding
to the tied state get pruned out by default. This value should never be 0, 
otherwise you will end up having senones with no data to train 
(which are seen 0 times in the training set) </td></tr>
</table>
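<p>
A hypothetical invocation (the directory names, the number of senones and the
minimum occupation count are placeholders to be chosen for your own task):
<pre>
prunetree -itreedir trees -otreedir trees.pruned \
          -nseno 600 -moddeffn cd_untied.mdef \
          -psetfn linguistic_questions -minocc 10
</pre>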
<hr>
<p>

<a name="30"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE CD TIED MODEL DEFINITION FILE</td>
</table>
<!------------------------------------------------------------------------->
<p>
Once the trees are pruned, a new model definition file must be created
which
<ul start="a">
<li>contains all the triphones which are seen during training
<li>has the states corresponding to these triphones identified with
senones from the pruned trees
</ul>
In order to do this, the model definition file which contains all possible
triphones from the current training dictionary can be used [alltriphones model
definition file]. This was
built during the process of building the CD-untied model definition file.
Remember that the CD-untied model definition file contained only
a selected number of triphones, with various thresholds used for selection.
That file, therefore, cannot be used to build the CD-tied model definition
file, except in the exceptional case  where you are sure that the
CD-untied model definition file includes *all* triphones seen during training.
The executable that must be used to tie states is called "tiestate" and
needs the following arguments:
<p>
<table border="1">
<tr><td valign="top"> FLAG </td><td> DESCRIPTION </td></tr>

<tr><td valign="top"> -imoddeffn </td><td> alltriphones model definition
file </td></tr>

<tr><td valign="top"> -omoddeffn </td><td> CD-tied model definition file
</td></tr>

<tr><td valign="top"> -treedir </td><td> pruned tree directory </td></tr>

<tr><td valign="top"> -psetfn </td><td> linguistic questions file </td></tr>
</table>
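<p>
A hypothetical invocation (file names are placeholders):
<pre>
tiestate -imoddeffn alltriphones.mdef -omoddeffn cd_tied.mdef \
         -treedir trees.pruned -psetfn linguistic_questions
</pre>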
<p>
Here is an example of a CD-tied model definition file, based on the earlier
example given for the CD-untied model definition file. The alltriphones model
definition file:
<pre>
# Generated by [path]/mk_model_def on Sun Nov 26 12:42:05 2000
# triphone: (null)
# seno map: (null)
#
0.3
5 n_base
34 n_tri
156 n_state_map
117 n_tied_state
15 n_tied_ci_state
5 n_tied_tmat
#
# Columns definitions
#base lft  rt p attrib tmat      ... state id's ...
  SIL   -   - - filler    0    0    1    2    N
   AE   -   - -    n/a    1    3    4    5    N
   AX   -   - -    n/a    2    6    7    8    N
    B   -   - -    n/a    3    9   10   11    N
    T   -   - -    n/a    4   12   13   14    N
   AE   B   T i    n/a    1   15   16   17    N
   AE   T   B i    n/a    1   18   19   20    N
   AX  AX  AX s    n/a    2   21   22   23    N
   AX  AX   B s    n/a    2   24   25   26    N
   AX  AX SIL s    n/a    2   27   28   29    N
   AX  AX   T s    n/a    2   30   31   32    N
   AX   B  AX s    n/a    2   33   34   35    N
   AX   B   B s    n/a    2   36   37   38    N
   AX   B SIL s    n/a    2   39   40   41    N
   AX   B   T s    n/a    2   42   43   44    N
   AX SIL  AX s    n/a    2   45   46   47    N
   AX SIL   B s    n/a    2   48   49   50    N
   AX SIL SIL s    n/a    2   51   52   53    N
   AX SIL   T s    n/a    2   54   55   56    N
   AX   T  AX s    n/a    2   57   58   59    N
   AX   T   B s    n/a    2   60   61   62    N
   AX   T SIL s    n/a    2   63   64   65    N
   AX   T   T s    n/a    2   66   67   68    N
    B  AE  AX e    n/a    3   69   70   71    N
    B  AE   B e    n/a    3   72   73   74    N
    B  AE SIL e    n/a    3   75   76   77    N
    B  AE   T e    n/a    3   78   79   80    N
    B  AX  AE b    n/a    3   81   82   83    N
    B   B  AE b    n/a    3   84   85   86    N
    B SIL  AE b    n/a    3   87   88   89    N
    B   T  AE b    n/a    3   90   91   92    N
    T  AE  AX e    n/a    4   93   94   95    N
    T  AE   B e    n/a    4   96   97   98    N
    T  AE SIL e    n/a    4   99  100  101    N
    T  AE   T e    n/a    4  102  103  104    N
    T  AX  AE b    n/a    4  105  106  107    N
    T   B  AE b    n/a    4  108  109  110    N
    T SIL  AE b    n/a    4  111  112  113    N
    T   T  AE b    n/a    4  114  115  116    N
</pre>
is used as the base to give the following  CD-tied model definition file
with 39 tied states (senones):
<pre>
# Generated by [path]/mk_model_def on Sun Nov 26 12:42:05 2000
# triphone: (null)
# seno map: (null)
#
0.3
5 n_base
34 n_tri
156 n_state_map
54 n_tied_state
15 n_tied_ci_state
5 n_tied_tmat
#
# Columns definitions
#base lft  rt p attrib tmat      ... state id's ...
  SIL   -   - - filler    0    0    1    2    N
   AE   -   - -    n/a    1    3    4    5    N
   AX   -   - -    n/a    2    6    7    8    N
    B   -   - -    n/a    3    9   10   11    N
    T   -   - -    n/a    4   12   13   14    N
   AE   B   T i    n/a    1   15   16   17    N
   AE   T   B i    n/a    1   18   16   19    N
   AX  AX  AX s    n/a    2   20   21   22    N
   AX  AX   B s    n/a    2   23   21   22    N
   AX  AX SIL s    n/a    2   24   21   22    N
   AX  AX   T s    n/a    2   25   21   22    N
   AX   B  AX s    n/a    2   26   21   27    N
   AX   B   B s    n/a    2   23   21   27    N
   AX   B SIL s    n/a    2   24   21   27    N
   AX   B   T s    n/a    2   25   21   27    N
   AX SIL  AX s    n/a    2   26   21   28    N
   AX SIL   B s    n/a    2   23   21   28    N
   AX SIL SIL s    n/a    2   24   21   28    N
   AX SIL   T s    n/a    2   25   21   28    N
   AX   T  AX s    n/a    2   26   21   29    N
   AX   T   B s    n/a    2   23   21   29    N
   AX   T SIL s    n/a    2   24   21   29    N
   AX   T   T s    n/a    2   25   21   29    N
    B  AE  AX e    n/a    3   30   31   32    N
    B  AE   B e    n/a    3   33   31   32    N
    B  AE SIL e    n/a    3   34   31   32    N
    B  AE   T e    n/a    3   35   31   32    N
    B  AX  AE b    n/a    3   36   37   38    N
    B   B  AE b    n/a    3   36   37   39    N
    B SIL  AE b    n/a    3   36   37   40    N
    B   T  AE b    n/a    3   36   37   41    N
    T  AE  AX e    n/a    4   42   43   44    N
    T  AE   B e    n/a    4   45   43   44    N
    T  AE SIL e    n/a    4   46   43   44    N
    T  AE   T e    n/a    4   47   43   44    N
    T  AX  AE b    n/a    4   48   49   50    N
    T   B  AE b    n/a    4   48   49   51    N
    T SIL  AE b    n/a    4   48   49   52    N
    T   T  AE b    n/a    4   48   49   53    N
</pre>
<hr>
<p>

<a name="31"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">TRAINING CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>INITIALIZING AND TRAINING CD TIED GAUSSIAN MIXTURE MODELS </td>
</table>
<!------------------------------------------------------------------------->
<p>
The next step is to train the CD-tied models. In the case of
continuous models, the HMM states can be modeled by either a single
Gaussian distribution, or a mixture of Gaussian distributions. The number
of Gaussians in a mixture distribution should preferably be even, and a power
of two (for example, 2, 4, 8, 16, 32, ...). To model the HMM states by a mixture
of 8 Gaussians (say), we first have to train 1 Gaussian per state
models. Each Gaussian distribution is then split into two by perturbing its
mean slightly, and the resulting two distributions are used to initialize
the training for 2 Gaussian per state models. These are further perturbed
to initialize for 4 Gaussian per state models, and a further split is done
to initialize for the 8 Gaussians per state models.  So the CD-tied training
for models with 2<sup>N</sup> Gaussians per state is done in N+1
steps. Each of these N+1 steps consists of
<p>
<ol>
<li>initialization
<li>iterations of Baum-Welch followed by norm
<li>Gaussian splitting (not done in the N+1<sup>th</sup> stage of CD-tied training)
</ol>
<p>
The training begins with the initialization of the 1 Gaussian per state models.
During initialization, the model parameters from
the CI model parameter files are copied into appropriate positions
in the CD tied model parameter files. Four model parameter files 
are created, one each for the  means, variances, transition matrices 
and mixture weights. During initialization, each state of a particular CI phone 
contributes 
to the same state of the same CI phone in the CD-tied model parameter file,
and also to the same state of *all* the triphones of the same CI
phone in the CD-tied model parameter file. The CD-tied model definition
file is used as a reference for this mapping.
<p>
Initialization for the 1 gaussian per state models is done by the executable 
called <b><font color="green">init_mixw</font></b>.  It requires the following 
arguments:

<p>
<table border="1">
<tr><td valign="top"> -src_moddeffn </td><td> source (CI) model definition file 
</td></tr>
<tr><td valign="top"> -src_ts2cbfn </td><td> .cont. </td></tr>
<tr><td valign="top"> -src_mixwfn </td><td> source (CI) mixture-weight file </td
></tr>
<tr><td valign="top"> -src_meanfn </td><td> source (CI) means file </td></tr>
<tr><td valign="top"> -src_varfn </td><td> source (CI) variances file </td></tr>
<tr><td valign="top"> -src_tmatfn </td><td> source (CI) transition-matrices file
  </td></tr>
<tr><td valign="top"> -dest_moddeffn </td><td> destination (CD tied) model def
inition file </td></tr>
<tr><td valign="top"> -dest_ts2cbfn </td><td> .cont. </td></tr>
<tr><td valign="top"> -dest_mixwfn </td><td> destination 
(CD tied 1 Gau/state) mixture weights file </td></tr>
<tr><td valign="top"> -dest_meanfn </td><td> destination (CD tied 1 Gau/state) means file 
</td></tr>
<tr><td valign="top"> -dest_varfn </td><td> destination (CD tied 1 Gau/state) variances file </td></tr>
<tr><td valign="top"> -dest_tmatfn </td><td> destination (CD tied 1 Gau/state) transition 
matrices file </td></tr>
<tr><td valign="top"> -feat </td><td> feature configuration </td></tr>
<tr><td valign="top"> -ceplen </td><td> dimensionality of base feature vector 
</td></tr>
</table>
<p>


<p>
The executables used for Baum-Welch, norm and Gaussian splitting are
<b><font color="green">bw</font></b>,
<b><font color="green">norm</font></b> and
<b><font color="green">inc_comp</font></b>
<p>
The arguments needed by <b><font color="green">bw</font></b> are
<p>
<table border="1", noshade>
<tr><td align="center"> FLAG     </td> <td align="center"> DESCRIPTION </td> </tr>
<tr><td valign="top"> -moddeffn</td> <td> CD tied model definition file</td></tr>
<tr><td valign="top"> -ts2cbfn </td> <td> this flag should be set to ".cont." if
                             you are training continuous models, and to
                             ".semi." if you are training semi-continuous 
                             models, without the double quotes </td></tr>
<tr><td valign="top"> -mixwfn  </td> <td> name of the file in which the 
mixture-weights from the previous iteration are stored. Full path must be 
                             provided</td></tr>
<tr><td valign="top"> -mwfloor </td> <td> Floor value for the mixture weights. Any number 
                             below the floor value is set to the floor 
                             value.</td></tr>
<tr><td valign="top"> -tmatfn  </td> <td> name of the file in which 
 the transition matrices from the previous iteration are stored. 
Full path must be  provided</td></tr>
<tr><td valign="top"> -tpfloor </td> <td> Floor value for the transition probabilities. Any number 
                             below the floor value is set to the floor 
                             value.</td></tr>

<tr><td valign="top"> -meanfn  </td> <td>name of the file in which 
the means from the previous iteration are stored. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -varfn   </td> <td>name of the file in which 
the variances from the previous iteration are stored.
Full path must be provided</td></tr>         
<tr><td valign="top"> -dictfn  </td> <td> Dictionary </td></tr>
<tr><td valign="top"> -fdictfn </td> <td> Filler dictionary</td></tr>
<tr><td valign="top"> -ctlfn   </td> <td> control file </td></tr>
<tr><td valign="top"> -part    </td> <td> You can split the training into N equal parts by
setting a flag. If there are M utterances in your control file, then this will
enable you to run the training separately on each (M/N)<sup>th</sup> part. This
flag may be set to specify which of these parts you want to currently train 
on. As an example, if your total number of parts is 3, this flag can take
one of the values 1,2 or 3</td></tr>
<tr><td valign="top"> -npart   </td><td>  number of parts in which you have split 
                             the training </td></tr>
<tr><td valign="top"> -cepdir  </td><td> directory where your feature files are
                            stored</td></tr>
<tr><td valign="top"> -cepext  </td><td> the extension that comes after the name listed
in the control file. For example, you may have a file called a/b/c.d and
may have listed a/b/c in your control file. Then this flag must be given the
argument "d", without the double quotes or the dot before it </td></tr>
<tr><td valign="top"> -lsnfn   </td><td> name of the transcript file </td></tr>
<tr><td valign="top"> -accumdir </td><td> Intermediate results from each part of your training will be written in this directory. If you have T means to estimate, then

the size of the mean buffer from the current part of your training will
be T*4 bytes (say). There will likewise be a variance buffer, a buffer for
mixture weights, and a buffer for transition matrices</td></tr>
<tr><td valign="top"> -varfloor  </td><td> minimum variance value allowed </td></tr>
<tr><td valign="top"> -topn  </td><td> no. of gaussians to consider for likelihood computation</td></tr>
<tr><td valign="top"> -abeam </td><td> forward beamwidth</td></tr>
<tr><td valign="top"> -bbeam </td><td> backward beamwidth</td></tr>
<tr><td valign="top"> -agc   </td><td> automatic gain control</td></tr>
<tr><td valign="top"> -cmn   </td><td> cepstral mean normalization</td></tr>
<tr><td valign="top"> -varnorm  </td><td> variance normalization</td></tr>
<tr><td valign="top"> -meanreest </td><td> mean re-estimation</td></tr>
<tr><td valign="top"> -varreest </td><td>  variance re-estimation</td></tr>
<tr><td valign="top"> -2passvar </td><td> Setting this flag to "yes" lets bw 
use the previous means in the estimation of the variance. The current variance
is then estimated as E[(x - prev_mean)<sup>2</sup>]. If this flag is set to 
"no" the current estimate of the means are used to estimate variances. This 
requires the estimation of variance as E[x<sup>2</sup>] - (E[x])<sup>2</sup>, 
an unstable estimator that sometimes results in negative estimates of the 
variance due to arithmetic imprecision</td></tr>
<tr><td valign="top"> -tmatreest </td><td> re-estimate transition matrices or not</td></tr>
<tr><td valign="top"> -feat    </td><td>   feature configuration</td></tr>
<tr><td valign="top"> -ceplen  </td><td>  length of basic feature vector</td></tr>
</table>
<p>
The arguments needed by <b><font color="green">norm</font></b> are:
<p>
<table border="1", noshade>

<tr><td align="center"> FLAG </td> <td align="center"> DESCRIPTION </td></tr>
<tr><td valign="top"> -accumdir </td><td> Intermediate buffer directory</td></tr>
<tr><td valign="top"> -feat    </td><td>   feature configuration</td></tr>
<tr><td valign="top"> -mixwfn  </td> <td> name of the file in which you want 
                                          to write the mixture weights.
Full path must be provided</td></tr>
<tr><td valign="top"> -tmatfn  </td> <td> name of the file in which you want to write
                             the transition matrices. Full path must be
                             provided</td></tr>
<tr><td valign="top"> -meanfn  </td> <td>name of the file in which you want to write
                             the means. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -varfn   </td> <td>name of the file in which you want to write
                             the variances. Full path must be
                             provided</td></tr>         
<tr><td valign="top"> -ceplen  </td><td>  length of basic feature vector</td></tr>
</table>
<p>
The arguments needed by <b><font color="green">inc_comp</font></b> are:
<p>
<table border="1">
<tr><td> FLAG  </td><td> DESCRIPTION  </td></tr>
<tr><td> -ninc </td><td> how many Gaussians (per state) to split currently. You need not
always split to double the number of Gaussians. You can specify other numbers
here, so long as they are not more than the number of Gaussians you currently have. This is a positive integer like "2", given without the double quotes  </td></tr>
<tr><td> -ceplen </td><td> length of the base feature vector  </td></tr>
<tr><td> -dcountfn </td><td> input mixture weights file  </td></tr>
<tr><td> -inmixwfn </td><td> input mixture weights file  </td></tr>
<tr><td> -outmixwfn </td><td>output mixture weights file   </td></tr>
<tr><td> -inmeanfn </td><td> input means file  </td></tr>
<tr><td> -outmeanfn </td><td>output means file   </td></tr>
<tr><td> -invarfn </td><td> input variances file  </td></tr>
<tr><td> -outvarfn </td><td> output variances file  </td></tr>
<tr><td> -feat </td><td> type of feature </td></tr>
</table>
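<p>
For example, to go from 4 to 8 Gaussians per state, inc_comp might be called as
sketched below. The file names are placeholders, -ceplen 13 is only an assumed
base-cepstrum dimensionality, and -ninc 4 assumes that splitting all 4 existing
Gaussians doubles the mixture size:
<pre>
inc_comp -ninc 4 -ceplen 13 -feat [feature_type] \
         -dcountfn  cd_tied_4gau/mixture_weights \
         -inmixwfn  cd_tied_4gau/mixture_weights \
         -outmixwfn cd_tied_8gau/mixture_weights \
         -inmeanfn  cd_tied_4gau/means     -outmeanfn cd_tied_8gau/means \
         -invarfn   cd_tied_4gau/variances -outvarfn  cd_tied_8gau/variances
</pre>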
<p>

<a href="#top">Back to index</a>
<hr>

<a name="3"></a>
<a name="3b"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>VECTOR QUANTIZATION</td>
</table>
<p>
This is done in two steps. In the first step, the
   feature vectors are accumulated for quantizing the vector space. Not
   all feature vectors are used. Rather, a sampling of the vectors available
   is done by  the executable "agg_seg". This executable simply "aggregates"
   the vectors into a buffer. The following flag settings
   must be used with this executable:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td>  -segdmpdirs </td><td>  directory in which you want to put  the aggregate buffer </td></tr>
<tr><td> -segdmpfn </td><td> name of the buffer (file) </td></tr>
<tr><td> -segtype </td><td> all </td></tr>
<tr><td> -ctlfn </td><td> control file </td></tr>
<tr><td> -cepdir </td><td> path to feature files </td></tr>
<tr><td> -cepext </td><td> feature vector filename extension </td></tr>
<tr><td> -ceplen </td><td> dimensionality of the base feature vector </td></tr>
<tr><td> -agc </td><td> automatic gain control factor (max/none) </td></tr>
<tr><td> -cmn </td><td> cepstral mean normalization (yes/no) </td></tr>
<tr><td> -feat </td><td> type of feature. As mentioned earlier, the 4-stream feature vector is usually given as an option here. When you specify the
4-stream feature, this program will compute and aggregate vectors
  corresponding to all streams separately.</td></tr>
<tr><td> -stride </td><td> how many samples to ignore during sampling of 
vectors (pick every stride'th sample)</td></tr>
</table>
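<p>
A hypothetical invocation of agg_seg (the file and directory names, the stride
and the feature settings are placeholders to be replaced with your own):
<pre>
agg_seg -segdmpdirs vq_buffer_dir -segdmpfn aggregate.file -segtype all \
        -ctlfn train.ctl -cepdir feat -cepext mfc -ceplen 13 \
        -agc none -cmn yes -feat [4-stream_feature_type] -stride 10
</pre>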
<p>
In the second step of vector quantization, an Expectation-Maximization (EM)
algorithm is applied to segregate each aggregated stream of vectors into a
codebook of N Gaussians. Usually N is some power of 2, the commonly used
number is N=256. The number 256 can in principle be varied, but this option
is not provided in the SPHINX-II decoder. So if you intend to use the
SPHINX-II decoder, but are training models with SPHINX-III trainer, you
must use N=256. It has been observed that the quality of the models built with
256 codeword codebooks is sufficient for good recognition. Increasing the
number of codewords may cause data-insufficiency problems. In many instances,
the choice to train semi-continuous models (rather than continuous ones) arises
from insufficiency of training data. When this is indeed the case,
increasing the number of codebooks might aggravate the estimation
problems that might arise due to data insufficiency. Consider this 
fact seriously before you decide to increase N.
<p>
In SPHINX-III, the EM-step is done through a k-means algorithm carried
out by the executable <b><font color="green">kmeans_init</font></b>. 
This executable is usually used with the following flag settings:
<pre>
    -grandvar   yes
    -gthobj     single
    -stride     1
    -ntrial     1
    -minratio   0.001
    -ndensity   256
    -meanfn     full_path_to_codebookmeans.file
    -varfn      full_path_to_codebookvariances.file
    -reest      no
    -segdmpdirs directory_in_which_you_want_to_put_aggregate.file
    -segdmpfn   aggregate.file
    -ceplen     dimensionality_of_feature_vector
    -feat       type_of_feature
    -agc        automatic_gain_control_factor(max/none)
    -cmn        cepstral_mean_normalization(yes/no)
</pre>               
Once the vector quantization is done, you have to flat-initialize your
acoustic models to prepare for the first real step in training. The following
steps explain the flat-initialization process:
<p>

<a name="3d"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE CI MODEL DEFINITION FILE</td>
</table>
<p>
<a href="#20">
This procedure is the same as described for continuous models</a>.


<a name="3e"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE HMM TOPOLOGY FILE</td>
</table>
<p>
<a href="#21">
This procedure is the same as described for continuous models</a>.


<a name="3c"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>FLAT INITIALIZATION OF CI MODEL PARAMETERS</td>
</table>
<p>
In flat-initialization, all mixture weights are set to be equal for
all states, and all state transition probabilities are set to be equal. 
Unlike in continuous models, the means and variances of the codebook
Gaussians are not given
global values, since they are already estimated from the data in the
vector quantization step. To flat-initialize the mixture weights, each
component of each mixture-weight distribution of each feature stream is set
to 1/N, where N is the codebook size.
The mixture_weights and
transition_matrices are initialized using the executable <b><font
color="green">mk_flat</font></b>. It needs the following arguments:
<p>
<table border="1">
<tr><td> FLAG </td><td> DESCRIPTION </td></tr>
<tr><td> -moddeffn </td><td> CI model definition file </td></tr>
<tr><td> -topo </td><td>  HMM topology file. </td></tr>
<tr><td> -mixwfn </td><td>  file in which you want to write the
initialized mixture weights </td></tr>
<tr><td> -tmatfn </td><td>  file in which you want to write the
initialized transition matrices </td></tr>
<tr><td> -nstream </td><td> number of independent feature streams. For
semi-continuous models trained with the standard 4-stream feature set this is
usually set to "4", without the double quotes (continuous models use "1")
</td></tr>
<tr><td> -ndensity </td><td> codebook size. This number is usually set
to "256", without the double quotes</td></tr>
</table>
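<p>
As a minimal sketch (not the actual SPHINX-III file format, which mk_flat writes
for you), flat initialization of the mixture weights amounts to the following;
the array shape and counts here are hypothetical:
<pre>
import numpy as np

n_states  = 150     # hypothetical number of CI states
n_streams = 4       # the usual 4-stream semi-continuous feature set
n_density = 256     # codebook size N

# every component of every mixture-weight distribution is set to 1/N
flat_mixw = np.full((n_states, n_streams, n_density), 1.0 / n_density)
print(flat_mixw[0, 0, :4])      # [0.00390625 0.00390625 0.00390625 0.00390625]
</pre>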
<p>

<a name="3f"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>TRAINING CI MODELS</td>
</table>
<p>
<a href="#23">
This procedure is the same as described for continuous models</a>, except
<p>
<ol>
<li>For the executable <b><font color="green">bw</font></b>, the flags -tst2cbfn, -topn and -feat must be set to the following values:
<p>
<table border="1">
<tr><td> FLAG </td><td> VALUE </td></tr>
<tr><td> -tst2cbfn </td><td> .semi. </td></tr>
<tr><td> -topn </td><td> This value should be lower than or equal to the
codebook size. It decides how many components of each mixture-weight
distribution are used to estimate likelihoods during the Baum-Welch passes. It affects the speed of training: a higher value results in slower iterations </td></tr>
<tr><td> -feat </td><td> The specific feature type you are using to train
the semi-continuous models </td></tr>
</table>
<p>
<li>For the executable <b><font color="green">norm</font></b>, the flag -feat
 must be set to the following value:
<p>
<table border="1">
<tr><td> FLAG </td><td> VALUE </td></tr>
<tr><td> -feat </td><td> The specific feature type you are using to train
the semi-continuous models </td></tr>
</table>
</ol>
<p>
Also, it is important to remember here that the re-estimated means and 
variances now correspond to <em>codebook</em> means and variances. In
semi-continuous models, the codebooks are also re-estimated during training.
The vector quantization step is therefore only an <em>initialization</em>
step for the codebooks. This fact will affect the way we do model
adaptation for the semi-continuous case.
<p>
<a name="3g"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE CD UNTIED MODEL DEFINITION FILE</td>
</table>
<p>
<a href="#24">
This procedure is the same as described for continuous models</a>.

<a name="3h"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>FLAT INITIALIZATION OF CD UNTIED MODEL PARAMETERS</td>
</table>
<p>
<a href="#25">
This procedure is the same as described for continuous models</a>.

<a name="3i"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>TRAINING CD UNTIED MODELS</td>
</table>
<p>
<a href="#26">
This procedure is the same as described for continuous models</a>.

<a name="3j"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>BUILDING DECISION TREES FOR PARAMETER SHARING</td>
</table>
<p>
<a href="#27">
This procedure is the same as described for continuous models</a>.

<a name="3k"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>GENERATING THE LINGUISTIC QUESTIONS</td>
</table>
<p>
<a href="#28">
This procedure is the same as described for continuous models</a>.

<a name="3l"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>PRUNING THE DECISION TREES</td>
</table>
<p>
<a href="#29">
This procedure is the same as described for continuous models</a>.

<a name="3m"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>CREATING THE CD TIED MODEL DEFINITION FILE</td>
</table>
<p>
<a href="#30">
This procedure is the same as described for continuous models</a>.

<a name="3n"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>INITIALIZING AND TRAINING CD TIED MODELS</td>
</table>
<p>
During initialization, the model parameters from
the CI model parameter files are copied into appropriate positions
in the CD tied model parameter files. Four model parameter files 
are created, one each for the  means, variances, transition matrices 
and mixture weights. During initialization, each state of a particular CI phone contributes 
to the same state of the same CI phone in the CD-tied model parameter file,
and also to the same state of *all* the triphones of the same CI
phone in the CD-tied model parameter file. The CD-tied model definition
file is used as a reference for this mapping.
<p>
Initialization for the CD-tied training is done by the executable called
<b><font color="green">init_mixw</font></b>.  It requires the following 
arguments:

<p>
<table border="1">
<tr><td valign="top"> -src_moddeffn </td><td> source (CI) model definition file </td></tr>
<tr><td valign="top"> -src_ts2cbfn </td><td> .semi. </td></tr>
<tr><td valign="top"> -src_mixwfn </td><td> source (CI) mixture-weight file </td></tr>
<tr><td valign="top"> -src_meanfn </td><td> source (CI) means file </td></tr>
<tr><td valign="top"> -src_varfn </td><td> source (CI) variances file </td></tr>
<tr><td valign="top"> -src_tmatfn </td><td> source (CI) transition-matrices file </td></tr>
<tr><td valign="top"> -dest_moddeffn </td><td> destination (CD tied) model definition file </td></tr>
<tr><td valign="top"> -dest_ts2cbfn </td><td> .semi. </td></tr>
<tr><td valign="top"> -dest_mixwfn </td><td> destination (CD tied) mixture weights file </td></tr>
<tr><td valign="top"> -dest_meanfn </td><td> destination (CD tied) means file </td></tr>
<tr><td valign="top"> -dest_varfn </td><td> destination (CD tied) variances file </td></tr>
<tr><td valign="top"> -dest_tmatfn </td><td> destination (CD tied) transition matrices file </td></tr>
<tr><td valign="top"> -feat </td><td> feature configuration </td></tr>
<tr><td valign="top"> -ceplen </td><td> dimensionality of base feature vector </td></tr>
<p>
<a name="3a"></a>
<center><h4><font color="red">TRAINING SEMI-CONTINUOUS MODELS</font></h4>
</center>
<TABLE width="100%" bgcolor="#ffffff">
<td>DELETED INTERPOLATION</td>
</table>
<p>
Deleted interpolation is the final step in creating semi-continuous models.
The output of deleted interpolation is a set of semi-continuous models in SPHINX-3
format.  These have to be further converted to SPHINX-2 format if you want
to use the SPHINX-II decoder.
<p>
 Deleted interpolation is an iterative process to interpolate between
CD and CI mixture-weights to reduce the effects of overfitting. The data are
divided into two sets, and the data from one set are used to estimate
the optimal interpolation factor between CI and CD models trained
from the other set. Then the two data sets are switched and this
procedure is repeated using the last estimated interpolation factor
as an initialization for the current step. The switching is continued
until the interpolation factor converges.
<p>
To do this, we need *two* balanced data sets. Instead of the actual data,
however, we use the Baum-Welch buffers, since the related math is convenient.
We therefore need an *even* number of buffers that can be grouped into two
sets. Deleted interpolation cannot be performed if you train using only one buffer. At
least in the final iteration of the training, you must perform the training
in (at least) two parts. You could also do this serially
as one final iteration of training AFTER BW has converged, on a non-lsf
setup.
<p>
Note here that the norm executable used at the end of every Baum-Welch
iteration also computes models from the buffers, but it does not require an
even number of buffers. BW returns numerator terms and denominator terms
for the final estimation, and norm performs the actual division. The number
of buffers is not important, but you would need to run norm at the end
of EVERY iteration of BW, even if you did
the training in only one part. When you have multiple parts, norm sums up
the numerator terms and the denominator terms from the various buffers,
and then does the division.
<p>
The executable "delint" provided with the SPHINX-III package does the
deleted interpolation. It takes the following arguments:
<table border="1">
<tr><td valign="top">FLAG</td><td>DESCRIPTION</td></tr>
<tr><td valign="top"> -accumdirs   </td><td> directory which holds the baum-welch buffers  </td></tr>
<tr><td valign="top"> -moddeffn  </td><td> CD-tied model-definition file  </td></tr>
<tr><td valign="top"> -mixwfn  </td><td> CD-tied mixture weights files  </td></tr>
<tr><td valign="top"> -cilambda  </td><td> initial interpolation factor between the CI
models and the CD models. It is the weight given to the CI models
initially. The values range from 0 to 1. This is typically set to
0.9  </td></tr>
<tr><td valign="top"> -ceplen  </td><td> dimensionality of base feature vector  </td></tr>
<tr><td valign="top"> -maxiter  </td><td> the number of iterations of deleted interpolation
that you want to run. DI can be slow to converge, so this number is typically
between 1000 and 4000 </td></tr>
</table>

</ol>

(more to come...) 
<em>After the decision trees are built using semi-continuous models, it is
possible to train continuous models. CI semi-continuous models need to be
trained for initializing the semi-continuous untied models. CI continuous
models need to be trained for initializing the continuous tied-state
models. The feature set can be changed after the decision-tree building
stage.
</em>
<a href="#top">Back to index</a>
<hr>

<a name="4"></a>
<TABLE width="100%" bgcolor="#ffffff"><td>
SPHINX2 data and model formats</td></table>
<ol>

<li>Feature set: This is a binary file with all the elements in each of the
vectors stored sequentially.  The header is a 4-byte integer which tells us
how many floating point numbers there are in the file. This is followed by
the actual cepstral values (usually 13 cepstral values per frame, with a 10ms
skip between adjacent frames; the frame size is usually fixed at
25ms).
<pre>
              <4_byte_integer header>
              vec 1 element 1
              vec 1 element 2
		.
		.
	      vec 1 element 13
              vec 2 element 1
	      vec 2 element 2
               .
               .
	      vec 2 element 13
</pre>
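<p>
Assuming native byte order and the layout just described (a 4-byte integer count
followed by the floats), a rough Python reader for such a feature file might look
like this; the filename is hypothetical, and files written on another architecture
may need byte-swapping:
<pre>
import numpy as np

def read_sphinx2_feat(path, ceplen=13):
    with open(path, "rb") as f:
        n_floats = int(np.fromfile(f, dtype=np.int32, count=1)[0])
        data = np.fromfile(f, dtype=np.float32, count=n_floats)
    return data.reshape(-1, ceplen)     # one row of 13 cepstra per 10ms frame

# cep = read_sphinx2_feat("utt0001.mfc")     # hypothetical file
</pre>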
<li> Sphinx2 semi-continuous HMM (SCHMM) formats:
<br>The sphinx II SCHMM format is rather complicated. It has the following
 main components (each of which has sub-components):
<ul>
<li>A set of codebooks
<li>A "sendump" file that stores state (senone) distributions
<li> A "phone" and a "map" file which map senones on to states of a triphone
<li> A set of ".chmm" files that store transition matrices
</ul>
<ol>
<p>
<li>Codebooks: There are 8 codebook files. SPHINX-2 uses a four-stream feature set:
<ul>
<li>cepstral feature: [c1-c12],  (12 components)
<li> delta    feature: [delta_c1-delta_c12,longterm_delta_c1-longterm_delta_c12],(24 components)
<li> power feature:    [c0,delta_c0,doubledelta_c0],   (3 components)
<li> doubledelta feature: [doubledelta_c1-doubledelta_c12] (12 components)
</ul>
The 8 codebook files store the means and variances of all the Gaussians
 for each of these 4 features. The 8 codebooks are:
<ul>
<li>cep.256.vec    [this is the file of means for the cepstral feature]
<li> cep.256.var    [this is the file of variances for the cepstral feature]
<li> d2cep.256.vec  [this is the file of means for the delta cepstral feature]
<li> d2cep.256.var  [this is the file of variances for the delta cepstral feature]
<li> p3cep.256.vec  [this is the file of means for the power feature]
<li> p3cep.256.var  [this is the file of variances for the power feature]
<li> xcep.256.vec   [this is the file of means for the double delta feature]
<li> xcep.256.var   [this is the file of variances for the double delta feature]
</ul>
All files are binary and have the following format:
 [4 byte int][4 byte float][4 byte float][4 byte float]......
 The 4 byte integer header stores the number of floating point values to
 follow in the file. For the cep.256.var, cep.256.vec, xcep.256.var and
 xcep.256.vec this value should be 3328. For d2cep.* it should be 6400,
 and for p3cep.* it should be 768.
 The floating point numbers are the components of the mean vectors (or
 variance vectors) laid end to end. So cep.256.[vec,var] have 256 mean
 (or variance) vectors, each 13 dimensions long,
 d2cep.256.[vec,var] have 256 mean/var vectors, each 25 dimensions long,
 p3cep.256.[vec,var] have 256 vectors, each of dimension 3,
 xcep.256.[vec,var] have 256 vectors of length 13 each.
<p>
The 0th components of the cep, d2cep and xcep distributions are not used in
 likelihood computation and are part of the format for purely historical
 reasons.
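<p>
Based on the layout above (a 4-byte count followed by the vectors laid end to end),
a rough Python reader for one codebook file could be sketched as follows; byte
order is an assumption:
<pre>
import numpy as np

def read_codebook(path, n_codewords=256):
    with open(path, "rb") as f:
        n_floats = int(np.fromfile(f, dtype=np.int32, count=1)[0])
        values = np.fromfile(f, dtype=np.float32, count=n_floats)
    dim = n_floats // n_codewords    # 13 for cep/xcep, 25 for d2cep, 3 for p3cep
    return values.reshape(n_codewords, dim)

# means = read_codebook("cep.256.vec")     # shape (256, 13)
</pre>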
<p>
<li> The "sendump" file: The "sendump" file stores the mixture weights of the states associated with
 each phone.  (this file has a little ascii header, which might help you
 a little). Except for the header, this is a binary file. The mixture weights
 have all been transformed to 8 bit integer by the following operation
 intmixw = (-log(float mixw)  >> shift)
 The log base is 1.0003. The "shift" is the number of bits the smallest
 mixture weight has to be shifted right to fit in 8 bits.
 The sendump file stores,
<pre>
for each feature (4 features in all)
   for each codeword (256 in all)
     for each ci-phone (including noise phones)
       for each tied state associated with ci phone,
         probability of codeword in tied state
       end
       for each CI state associated with ci phone, ( 5 states )
         probability of codeword in CI state
       end
     end
   end
 end
</pre>
The sendump file has the following storage format (all data, except for
 the header string are binary):
<pre>
Length of header as 4 byte int (including terminating '\0')
 HEADER string (including terminating '\0')
 0 (as 4 byte int, indicates end of header strings).
 256 (codebooksize, 4 byte int)
 Num senones (Total number of tied states, 4 byte int)
 [lut[0],    (4 byte integer, lut[i] = -(i"<<"shift))
 prob_of_codeword[0]_of_feat[0]_1st_CD_sen_of_1st_ciphone (unsigned char)
 prob_of_codeword[0]_of_feat[0]_2nd_CD_sen_of_1st_ciphone (unsigned char)
 ..
 prob_of_codeword[0]_of_feat[0]_1st_CI_sen_of_1st_ciphone (unsigned char)
 prob_of_codeword[0]_of_feat[0]_2nd_CI_sen_of_1st_ciphone (unsigned char)
 ..
 prob_of_codeword[0]_of_feat[0]_1st_CD_sen_of_2nd_ciphone (unsigned char)
 prob_of_codeword[0]_of_feat[0]_2nd_CD_sen_of_2nd_ciphone (unsigned char)
 ..
 prob_of_codeword[0]_of_feat[0]_1st_CI_sen_of_2nd_ciphone (unsigned char)
 prob_of_codeword[0]_of_feat[0]_2nd_CI_sen_of_2nd_ciphone (unsigned char)
 ..
 ]
 [lut[1],    (4 byte integer)
 prob_of_codeword[1]_of_feat[0]_1st_CD_sen_of_1st_ciphone (unsigned char)
 prob_of_codeword[1]_of_feat[0]_2nd_CD_sen_of_1st_ciphone (unsigned char)
 ..
 prob_of_codeword[1]_of_feat[0]_1st_CD_sen_of_2nd_ciphone (unsigned char)
 prob_of_codeword[1]_of_feat[0]_2nd_CD_sen_of_2nd_ciphone (unsigned char)
 ..
 ]
 ... 256 times ..
 Above repeats for each of the 4 features
</pre>
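<p>
A small Python sketch that reads just the header portion described above (header
strings terminated by a zero length, then the codebook size and senone count);
byte order is an assumption, and the probability and lookup tables that follow
are not parsed here:
<pre>
import numpy as np

def read_sendump_header(path):
    with open(path, "rb") as f:
        headers = []
        while True:
            n = int(np.fromfile(f, dtype=np.int32, count=1)[0])
            if n == 0:                       # end of header strings
                break
            headers.append(f.read(n).rstrip(b"\0").decode("ascii", "replace"))
        codebook_size = int(np.fromfile(f, dtype=np.int32, count=1)[0])
        n_senones = int(np.fromfile(f, dtype=np.int32, count=1)[0])
    return headers, codebook_size, n_senones
</pre>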
<p>
<li> PHONE file: The phone file stores a list of phones and triphones used by 
the decoder. This is an ascii file.
 It has 2 sections.
 The first section lists the CI phones in the models
 and consists of lines of the format
<pre>
AA      0       0       8       8
</pre>
"AA" is the CI phone, the first "0" indicates that it is a CI phone,
 the first 8 is the index of the CI phone, and the last 8 is the
 line number in the file.
 The second 0 is there for historical reasons.
<p>
The second section lists TRIPHONES
 and consists of lines of the format
<pre>
A(B,C)P -1 0 num num2
</pre>
"A" stands for the central phone, "B" for the left context, and
 "C" for the right context phone. The "P" stands for the position of
 the triphone and can take 4 values "s","b","i", and "e", standing
 for single word, word beginning, word internal, and word ending triphone.
 The -1 indicates that it is a triphone and not a CI phone. num
 is the index of the CI phone "A", and num2 is the position of the
 triphone (or ciphone) in the list, essentially the number of the
 line in the file (beginning with 0).
<p>
<li>  map file: The "map" file stores a mapping table to show which senone each state of
 each triphone is mapped to. This is also an ascii file with lines of the form
<pre>
 AA(AA,AA)s<0>       4
 AA(AA,AA)s<1>      27
 AA(AA,AA)s<2>      69
 AA(AA,AA)s<3>      78
 AA(AA,AA)s<4>     100
</pre>
The first line indicates that the 0th state of the triphone "AA" in the
 context of "AA" and "AA" is modelled by the 4th senone associated
 with the CI phone AA. Note that the numbering is specific to the
 CI phone. So the 4th senone of "AX" would also be numbered 4 (but
 this should not cause confusion).
<p>
<li> chmm FILES: There is one *.chmm file per CI phone. Each stores the transition matrix
 associated with that particular CI phone in the following binary format.
 (Note that all triphones associated with a CI phone share its transition matrix.)
 (All numbers are 4-byte integers):

<ul>
<li> -10     (a  header to indicate this is a tmat file)
<li> 256    (no of codewords)
<li>5      (no of emitting states)
<li>6      (total no. of states, including non-emitting state)
<li> 1      (no. of initial states. In fbs8 a state sequence can only begin
          with state[0]. So there is only 1 possible initial state)
<li>0       (list of initial states. Here there is only one, namely state 0)
<li>1       (no. of terminal states. There is only one non-emitting terminal state)
<li>5       (id of terminal state. This is 5 for a 5 state HMM)
<li>14      (total no. of non-zero transitions allowed by topology)
<pre>
 [0 0 (int)log(tmat[0][0]) 0]   (source, dest, transition prob, source id)
 [0 1 (int)log(tmat[0][1]) 0]
 [1 1 (int)log(tmat[1][1]) 1]
 [1 2 (int)log(tmat[1][2]) 1]
 [2 2 (int)log(tmat[2][2]) 2]
 [2 3 (int)log(tmat[2][3]) 2]
 [3 3 (int)log(tmat[3][3]) 3]
 [3 4 (int)log(tmat[3][4]) 3]
 [4 4 (int)log(tmat[4][4]) 4]
 [4 5 (int)log(tmat[4][5]) 4]
 [0 2 (int)log(tmat[0][2]) 0]
 [1 3 (int)log(tmat[1][3]) 1]
 [2 4 (int)log(tmat[2][4]) 2]
 [3 5 (int)log(tmat[3][5]) 3]
</pre>
There are thus 65 integers in all, and so each *.chmm file should be
 65*4 = 260 bytes in size.
</ul>
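<p>
Following the description above (9 header integers, then 14 transitions of 4
integers each), a rough Python reader for a *.chmm file might look like this;
the field names are only illustrative labels, and byte order is an assumption:
<pre>
import numpy as np

def read_chmm(path):
    ints = np.fromfile(path, dtype=np.int32)     # 65 values for a 5-state HMM
    names = ["magic", "n_codewords", "n_emitting", "n_states", "n_initial",
             "initial_state", "n_terminal", "terminal_state", "n_transitions"]
    header = dict(zip(names, (int(x) for x in ints[:9])))
    # each transition record: (source, dest, int log transition prob, source id)
    transitions = [tuple(int(x) for x in row)
                   for row in ints[9:].reshape(header["n_transitions"], 4)]
    return header, transitions
</pre>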
</ol>
(more to come...)

<p>
<a href="#top">Back to index</a>
<hr>


<a name="4b"></a>
<TABLE width="100%" bgcolor="#ffffff"><td>
SPHINX3 data and model formats</td></table>
<ol>
<p>
All senone-ids in the model files are with reference to the corresponding
model-definition file for the model-set.
<p>
<b><u>The means file</u></b>
<p>
The ascii means file for 8 Gaussians/state 3-state HMMs
looks like this: 
<pre>
param 602 1 8
mgau 0
feat 0

density 0 6.957e-01 -8.067e-01 -6.660e-01 3.402e-01 -2.786e-03 -1.655e-01
2.256e-02 9.964e-02 -1.237e-01 -1.829e-01 -3.777e-02 1.532e-03 -9.610e-01
-3.883e-01 5.229e-01 2.634e-01 -3.090e-01 4.427e-02 2.638e-01 -4.245e-02
-1.914e-01 -5.521e-02 8.603e-02 3.466e-03 5.120e+00 1.625e+00 -1.103e+00
1.611e-01 5.263e-01 2.479e-01 -4.823e-01 -1.146e-01 2.710e-01 -1.997e-05
-3.078e-01 4.220e-02 2.294e-01 1.023e-02 -9.163e-02

density 1 5.216e-01 -5.267e-01 -7.818e-01 2.534e-01 6.536e-02 -1.335e-01
-1.322e-01 1.195e-01 5.900e-02 -2.095e-01 -1.349e-01 -8.872e-02 -4.965e-01
-2.829e-01 5.302e-01 2.054e-01 -2.669e-01 -2.415e-01 2.915e-01 1.406e-01
-1.572e-01 -1.501e-01 2.426e-02 1.074e-01 5.301e+00 7.020e-01 -8.537e-01
1.448e-01 3.256e-01 2.709e-01 -3.955e-01 -1.649e-01 1.899e-01 1.983e-01
-2.093e-01 -2.231e-01 1.825e-01 1.667e-01 -2.787e-02

density 2 5.844e-01 -8.953e-01 -4.268e-01 4.602e-01 -9.874e-02 -1.040e-01
-3.739e-02 1.566e-01 -2.034e-01 -8.387e-02 -3.551e-02 4.647e-03
-6.439e-01 -8.252e-02 4.776e-01 2.905e-02 -4.012e-01 1.112e-01 2.325e-01
-1.245e-01 -1.147e-01 3.390e-02 1.048e-01 -7.266e-02 4.546e+00 8.103e-01
-4.168e-01 6.453e-02 3.621e-01 1.821e-02 -4.503e-01 7.951e-02 2.659e-01
-1.085e-02 -3.121e-01 1.395e-01 1.340e-01 -5.995e-02 -7.188e-02

.....

.....

density 7 6.504e-01 -3.921e-01 -9.316e-01 1.085e-01 9.951e-02 7.447e-02
-2.423e-01 -8.710e-03 7.210e-02 -7.585e-02 -9.116e-02 -1.630e-01
-3.008e-01 -3.175e-01 1.687e-01 3.389e-01 -3.703e-02 -2.052e-01 -3.263e-03
1.517e-01 8.243e-02 -1.406e-01 -1.070e-01 4.236e-02 5.143e+00 5.469e-01
-2.331e-01 1.896e-02 8.561e-02 1.785e-01 -1.197e-01 -1.326e-01 -6.467e-02
1.787e-01 5.523e-02 -1.403e-01 -7.172e-02 6.666e-02 1.146e-01

mgau 1
feat 0

density 0 3.315e-01 -5.500e-01 -2.675e-01 1.672e-01 -1.785e-01 -1.421e-01
9.070e-02 1.192e-01 -1.153e-01 -1.702e-01 -3.114e-02 -9.050e-02 -1.247e-01
3.489e-01 7.102e-01 -2.001e-01 -1.191e-01 -6.647e-02 2.222e-01 -1.866e-01
-1.067e-01 1.052e-01 7.092e-02 -8.763e-03 5.029e+00 -1.354e+00 -2.135e+00
2.901e-01 5.646e-01 1.525e-01 -1.901e-01 4.672e-01 -3.508e-02 -2.176e-01
-2.031e-01 1.378e-01 1.029e-01 -4.655e-02 -2.512e-02

density 1 4.595e-01 -8.823e-01 -4.397e-01 4.221e-01 -2.269e-03 -6.014e-02
-7.198e-02 9.702e-02 -1.705e-01 -6.178e-02 -4.066e-02 9.789e-03
-3.188e-01 -8.284e-02 2.702e-01 6.192e-02 -2.077e-01 2.683e-02 1.220e-01
-4.606e-02 -1.107e-01 1.169e-02 8.191e-02 -2.150e-02 4.214e+00 2.322e-01
-4.732e-02 1.834e-02 8.372e-02 -7.559e-03 -1.111e-01 -3.453e-03 5.487e-02
2.355e-02 -8.777e-02 4.309e-02 3.460e-02 -1.521e-02 -3.808e-02
</pre>

This is what it means, reading left to right, top to bottom:
<p>
Parameters for 602 tied-states (or senones), 1 feature stream,
8 Gaussians per state.
<p>
Means for senone no. 0, feature-stream no. 0.
Gaussian density no. 0, followed by its 39-dimensional mean vector.
(Note that each senone is a mixture of 8 gaussians, and each
feature vector consists of 13 cepstra, 13 delta cepstra and 13
double delta cepstra)
<pre>
Gaussian density no. 1, followed by its 39-dimensional mean vector.
Gaussian density no. 2, followed by its 39-dimensional mean vector.
.....
.....
Gaussian density no. 7, followed by its 39-dimensional mean vector.

Means for senone no. 1, feature-stream no. 0.
Gaussian density no. 0, followed by its 39-dimensional mean vector.
Gaussian density no. 1, followed by its 39-dimensional mean vector.
</pre>
- and so on -
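<p>
Since the dump is plain ascii, it can be parsed with a simple token scanner.
The sketch below (illustrative only, for a complete dump; the "....." ellipses
in the excerpt above are not part of a real file) collects every density vector
into a dictionary keyed by (senone, stream, density), and works for the means
and variances files alike:
<pre>
import numpy as np

def parse_ascii_gaussians(path):
    with open(path) as f:
        tokens = iter(f.read().split())
    params, vec = {}, None
    mgau = feat = None
    for t in tokens:
        if t == "param":
            n_mgau, n_feat, n_density = (int(next(tokens)) for _ in range(3))
        elif t == "mgau":
            mgau = int(next(tokens))
        elif t == "feat":
            feat = int(next(tokens))
        elif t == "density":
            vec = params[(mgau, feat, int(next(tokens)))] = []
        else:
            vec.append(float(t))     # continuation of the current density vector
    return {k: np.array(v) for k, v in params.items()}

# means = parse_ascii_gaussians("means.ascii")     # hypothetical filename
# print(means[(0, 0, 0)].shape)                    # (39,)
</pre>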

<p>
<b><u>The variances file</u></b>
<pre>
param 602 1 8
mgau 0
feat 0

density 0 1.402e-01 5.048e-02 3.830e-02 4.165e-02 2.749e-02 2.846e-02
2.007e-02 1.408e-02 1.234e-02 1.168e-02 1.215e-02 8.772e-03 8.868e-02
6.098e-02 4.579e-02 4.383e-02 3.646e-02 3.460e-02 3.127e-02 2.336e-02
2.258e-02 2.015e-02 1.359e-02 1.367e-02 1.626e+00 4.946e-01 3.432e-01
7.133e-02 6.372e-02 4.693e-02 6.938e-02 3.608e-02 3.147e-02 4.044e-02
2.396e-02 2.788e-02 1.934e-02 2.164e-02 1.547e-02

density 1 9.619e-02 4.452e-02 6.489e-02 2.388e-02 2.337e-02 1.831e-02
1.569e-02 1.559e-02 1.082e-02 1.008e-02 6.238e-03 4.387e-03 5.294e-02
4.085e-02 3.499e-02 2.327e-02 2.085e-02 1.766e-02 1.781e-02 1.315e-02
1.367e-02 9.409e-03 7.189e-03 4.893e-03 1.880e+00 3.342e-01 3.835e-01
5.274e-02 4.430e-02 2.514e-02 2.516e-02 2.863e-02 1.982e-02 1.966e-02
1.742e-02 9.935e-03 1.154e-02 8.361e-03 8.059e-03

density 2 1.107e-01 5.627e-02 2.887e-02 2.359e-02 2.083e-02 2.143e-02
1.528e-02 1.264e-02 1.223e-02 9.553e-03 9.660e-03 9.241e-03 3.391e-02
2.344e-02 2.220e-02 1.873e-02 1.436e-02 1.458e-02 1.362e-02 1.350e-02
1.191e-02 1.036e-02 8.290e-03 5.788e-03 1.226e+00 1.287e-01 1.037e-01
3.079e-02 2.692e-02 1.870e-02 2.873e-02 1.639e-02 1.594e-02 1.453e-02
1.043e-02 1.137e-02 1.086e-02 8.870e-03 9.182e-03
</pre>

- and so on -
The format is exactly as for the means file.
<p>

<b><u>The mixture_weights file</u></b>
<p>

The ascii mixture_weights file for 8 Gaussians/state 3-state HMMs
looks like this:
<pre>
mixw 602 1 8

mixw [0 0] 7.434275e+03
8.697e+02 9.126e+02 7.792e+02 1.149e+03 9.221e+02 9.643e+02 1.037e+03 8.002e+02

mixw [1 0] 8.172642e+03
8.931e+02 9.570e+02 1.185e+03 1.012e+03 1.185e+03 9.535e+02 7.618e+02 1.225e+03
</pre>
This is what it means, reading left to right, top to bottom:
<p>
Mixture weights for 602 tied-states (or senones), 1 feature stream,
8 Gaussians per state (each mixture weight is a vector with 8 components).
<p>
Mixture weights for senone no. 0, feature-stream no. 0, and the number of
times this senone occurred in the training corpus (instead of
writing normalized values, this count is recorded directly since it
is useful in other places during training [interpolation, adaptation, 
tree building etc]).
When normalized (for example, by the decoder during decoding), the
mixture weights above would read as:

<pre>
mixw 602 1 8

mixw [0 0] 7.434275e+03
1.170e-01 1.228e-01 1.048e-01 1.546e-01 1.240e-01 1.297e-01 1.395e-01 1.076e-01

mixw [1 0] 8.172642e+03
1.093e-01 1.171e-01 1.450e-01 1.238e-01 1.450e-01 1.167e-01 9.321e-02 1.499e-01
</pre>
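<p>
The normalization amounts to dividing each mixture-weight vector by the total
count stored with it, as the following Python check shows for senone no. 0 above:
<pre>
import numpy as np

counts = np.array([8.697e+02, 9.126e+02, 7.792e+02, 1.149e+03,
                   9.221e+02, 9.643e+02, 1.037e+03, 8.002e+02])
print(counts / counts.sum())
# -> [0.117 0.123 0.105 0.155 0.124 0.130 0.139 0.108] (approximately)
</pre>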

<p>
<b><u>The transition_matrices file</u></b>
<p>
The ascii file looks like this:
<pre>
tmat 34 4
tmat [0]
 6.577e-01 3.423e-01
          6.886e-01 3.114e-01
                   7.391e-01 2.609e-01
tmat [1]
 8.344e-01 1.656e-01
          7.550e-01 2.450e-01
                   6.564e-01 3.436e-01
tmat [2]
 8.259e-01 1.741e-01
          7.598e-01 2.402e-01
                   7.107e-01 2.893e-01
tmat [3]
 4.112e-01 5.888e-01
          4.371e-01 5.629e-01
                   5.623e-01 4.377e-01    
</pre>
- and so on -

This is what it means, reading left to right, top to bottom:
<p>
Transition matrices for 34 HMMs, each with four states (3 emitting states +
1 non-emitting state)
<p>
Transition matrix for HMM no. 0 (NOTE THAT THIS IS THE HMM NUMBER, AND
NOT THE SENONE NUMBER), followed by the matrix.
<pre>
Transition matrix for HMM no. 1, followed by the matrix.
Transition matrix for HMM no. 2, followed by the matrix.
Transition matrix for HMM no. 3, followed by the matrix.
</pre>
- and so on -
<p>
<b><u>Explanation of the feature-vector components:</u></b>
<p>
The 13 dimensional cepstra, 13 dimensional delta cepstra and 
13 dimensional double-delta cepstra are arranged, in all model
files, in the following order:
1s_12c_12d_3p_12dd  (you can denote this by s3_1x39 in the
decoder flags).
The format string means: 1 feature-stream, 12 cepstra, 12 deltacepstra,
3 power and 12 doubledeltacepstra.
The power part is composed of the 0th component of the cepstral
vector, 0th component of the d-cepstral vector and 0th component
of the dd-cepstral vector.
<p>
In the quantized models, you will see the string 
24,0-11/25,12-23/26,27-38 
In this string, the slashes are delimiters. The numbers represent the
components going into each of the 3 codebooks. In the above string,
for instance, the first codebook is composed of the 24th component of 
the feature vector (s3_1x39) followed by components 0-11. The second
codebook has component 25, followed by components 12-23, and the
third codebook is composed of components 26 and 27-38. This basically
accounts for the odd order in s3_1x39. By constructing the codebooks
in this manner, we ensure that the first codebook is composed entirely
of cepstral terms, the second codebook of delta cepstral terms and the
third codebook of double delta terms.
<p>
s3_1x39 is a historical order. It can be disposed of in any new code that
you write.
Writing the feature vector components in different
orders has no effect on recognition, provided training and
test feature formats are the same.
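<p>
To make the regrouping concrete, here is an illustrative Python slicing of a
single s3_1x39 frame into the three codebook streams implied by the string
above (the vector contents are dummies):
<pre>
import numpy as np

frame = np.arange(39.0)       # stand-in for one 1s_12c_12d_3p_12dd vector

cep   = frame[[24] + list(range(0, 12))]     # c0 followed by c1..c12
dcep  = frame[[25] + list(range(12, 24))]    # delta-c0 followed by delta-c1..c12
ddcep = frame[[26] + list(range(27, 39))]    # doubledelta-c0 followed by dd-c1..c12
print(cep.size, dcep.size, ddcep.size)       # 13 13 13
</pre>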
<p>


<a name="5"></a>
<TABLE width="100%" bgcolor="#ffffff"><td>
TRAINING MULTILINGUAL MODELS</td></table>

Once you have acoustic data and the corresponding transcriptions for any
language, and a lexicon which translates words used in the transcription
into sub-word units (or just maps them into some reasonable-looking
acoustic units), you can use the SPHINX to train acoustic models for that
language. You do not need anything else.
<p>
The linguistic questions that are needed for building the decision trees
are automatically designed by the SPHINX. Given the acoustic units you
choose to model, the SPHINX can automatically determine the best
combinations of these units to compose the questions. The hybrid algorithm
that the SPHINX uses clusters state distributions of context-independent
phones to obtain questions for triphonetic contexts.  This is very useful
if you want to train models for languages whose phonetic structure you do
not know well enough to design your own phone classes (or if a phonetician
is not available to help you do it). An even greater advantage comes from
the fact that the algorithm can be effectively used in situations where the
subword units are not phonetically motivated. Hence you can comfortably
use any set of acoustic units that look reasonable to you for the
task.
<p>
If you are completely lost about the acoustic units but have enough
training data for all (or most) words used in the transcripts, then build
word models instead of subword models. You do not have to build decision
trees. Word models are usually context-independent models, so you only have
to follow through the CI training. Word models do have some limitations,
which are currently discussed in the non-technical version of this manual.
<p>
<a href="#top">Back to index</a>
<hr>
<a name="6"></a>
<TABLE width="100%" bgcolor="#ffffff"><td>
THE TRAINING LEXICON</td></table>
Inconsistencies in the training lexicon can result in bad acoustic models.
Inconsistencies stem from the usage of a phoneset with phones that are
confusable in the pattern space of our recognizer. To get an idea about the
confusability of the phones that you are using, look at the per-frame log
likelihoods of the utterances during training. A greater number of phones
in the lexicon should ordinarily result in higher log likelihoods. If you
have a baseline to compare with, and this is *not* the case, then it means
that the phoneset is more diffuse over the pattern space (more compact, if
you observe the opposite for a smaller phone set), and the corresponding
distributions are wider (sharper in the other case). Generally, as the
number of applicable distributions decreases over a given utterance, the
variances tend to become larger and larger. The distributions flatten out
since the areas under the distributions are individually conserved (to
unity) and so the overall per frame likelihoods are expected to be lower.
<p>
The solution is to fix the phoneset, and to redo the lexicon in terms of a
phoneset of smaller size covering the acoustic space in a more compact
manner. One way to do this is to collapse the lexicon into syllables and
longer units and to expand it again using a changed and smaller
phoneset. The best way to do this is still a research problem, but if you
are a native speaker of the language and have a good ear for sounds, your
intuition will probably work.  The SPHINX will, of course, be able to train
models for any new phoneset you come up with.
<p><a href="#top">Back to index</a>
<hr>
<a name="7"></a>
<TABLE width="100%" bgcolor="#ffffff"><td>
CONVERTING SPHINX3 FORMAT MODELS TO SPHINX2 FORMAT</td></table>

To convert the 5 state/HMM, 4 feature stream semi-continuous models 
trained using the Sphinx3 trainer into the Sphinx2 format (compatible
with the Sphinx2 decoder), programs in the following directories
must be compiled and used:
<pre>
-----------------------------------------------------------------------
program directory       corresponding    function 
                        executable       of executable
-----------------------------------------------------------------------
mk_s2cb                 mk_s2cb          makes s2 codebooks 
mk_s2hmm                mk_s2hmm         makes s2 mixture weights
mk_s2phone              mdef2phonemap    makes phone and map files
mk_s2seno               makesendmp       makes senone dmp files
-----------------------------------------------------------------------


Variables needed:
-----------------
s2dir  : sphinx_2_format directory
s3dir  : sphinx_3_format directory
s3mixw : s3dir/mixture_weights
s3mean : s3dir/means
s3var  : s3dir/variances
s3tmat : s3dir/transition_matrices
s3mdef : s3dir/mdef_file (MAKE SURE that this mdef file
                          includes all the phones/triphones needed for
                          the decode. It should ideally be made from
                          the decode dictionary, if the decode vocabulary
                          is fixed)


Usage:
------
mk_s2cb 
        -meanfn   s3mean
        -varfn    s3var
        -cbdir    s2dir
        -varfloor 0.00001

mk_s2hmm
        -moddeffn s3mdef 
        -mixwfn   s3mixw 
        -tmatfn   s3tmat 
        -hmmdir   s2dir

makesendmp
s2_4x $s3mdef .semi. $s3mixw 0.0000001 $s2dir/sendump
(the order is important)
cleanup: s2dir/*.ccode s2dir/*.d2code s2dir/*.p3code s2dir/*.xcode

mdef2phonemap
grep -v "^#" s3mdef | mdef2phonemap s2dir/phone s2dir/map
</pre>
Make sure that the mdef file used in the programs above includes all the
triphones needed. The programs (especially the makesendmp program) will not
work if any tied state is missing from the mdef file. This can happen if
you ignore the dictionary provided with the models and try to make a
triphone list using another dictionary. Even though you may have the same
phones, there may be enough triphones missing to leave out some leaves in
the pruned trees altogether (since they cannot be associated with any of
the new triphone states). To avoid this, use the dictionary provided. You
may extend it by including new words.
<hr>

<a name="8"></a>
<TABLE width="100%" bgcolor="#ffffff"><td>
UPDATING OR ADAPTING EXISTING MODELS SETS</td></table>
In general one is better off training speaker specific models if sufficient
data (at least 8-10 hours) are available. If you have less data for
a speaker or a domain, then the better option is to adapt any existing
models you have to the data. Exactly how you adapt would depend on the kind
of acoustic models you're using. If you're using semi-continuous
models, adaptation could be performed by interpolating speaker specific
models with speaker-independent models. For continuous HMMs you would have
to use MLLR, or one of its variants.

To adapt or update existing semicontinuous models, follow these steps:
<p>
<ol>
<li> Compute features for the new training data. The features must be
   computed in the same manner as your old training features. In fact, the
   feature computation in the two cases must be identical as far as possible.
<li> Prepare transcripts and dictionary for the new data. The dictionary must
   have the same phoneset as was used for training the models. The transcripts
   must also be prepared in the same manner. If you have new filler phones
   then the fillerdict must map them to the old filler phones.
<li> The new training transcript and the corresponding ctl file can include
   the old training data IF all you are doing is using additional data
   from the SAME domain that you might have recently acquired. If you
   are adapting to a slightly different domain or slightly different acoustic
   conditions, then use only the new data.
<li> Starting with the existing deleted-interpolated models, and using the
   same tied mdef file used for training the base models and the same
training parameters like the difference features, number of streams etc.,
run through one
   or two passes of Baum-Welch. However, this must be done without
   re-estimating the means and variances. Only the mixture-weights must be
   re-estimated. If you are running the norm after the Baum-Welch, then
   make sure that the norm executable is set to normalize only the mixture
   weights.
<li> Once the mixture weights are re-estimated, the new mixture weights must be
   interpolated with the ones you started with. The executable "mixw_interp"
   provided with the SPHINX package may be used for this. You can experiment
   with various mixing weights to select the optimal one.
This is of course the simplest update/adaptation technique. There are more
sophisticated techniques which will be explained here later.  
</ol>
<p>

<p>
<b>The <font color="green">mixw_interp</font> executable</b>:
<p>
This is used in model adaptation for interpolating between two mixture weight
files. It requires the following flags:
<table border="1">
<tr valign="top"><td>FLAG</td><td>DESCRIPTION</td></tr>
<tr valign="top"><td>-SImixwfn</td><td>The original Speaker-Independent mixture weights file</td></tr>

<tr valign="top"><td>-SDmixwfn</td><td>The Speaker Dependent mixture weight file that you have after the bw iterations for adaptation </td></tr>
<tr valign="top"><td>-tokencntfn</td><td>The token count file</td></tr>
<tr valign="top"><td>-outmixwfn</td><td>The output interpolated mixture weight parameter file name</td></tr>
<tr valign="top"><td>-SIlambda</td><td>Weight given to SI mixing weights</td></tr>
</table>
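<p>
As a sketch of what the interpolation presumably amounts to (a linear combination
of the two mixture-weight sets, with -SIlambda as the weight on the
speaker-independent side), ignoring the actual SPHINX-III file I/O:
<pre>
import numpy as np

def interpolate_mixw(si_mixw, sd_mixw, si_lambda):
    """Assumed form: si_lambda * SI + (1 - si_lambda) * SD, element-wise."""
    return si_lambda * si_mixw + (1.0 - si_lambda) * sd_mixw

# out = interpolate_mixw(si, sd, si_lambda=0.7)   # si, sd: arrays of mixture weights
</pre>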
<hr>
<a name="9"></a>
<TABLE width="100%" bgcolor="#ffffff"><td>
USING THE SPHINX-III DECODER WITH SEMI-CONTINUOUS AND CONTINUOUS MODELS</td></table>
There are two flags which are specific to the type of model being used, the
rest of the flags are independent of model type. The flags you need to change
to switch from continuous models to semi-continuous ones  are:
<p>
<ul>
<li>the -senmgaufn flag would change from ".cont." to ".semi."
<li>the -feat flag would change from the feature you are using with continuous
models to the feature you are using with
   the semicontinuous models (usually it is s3_1x39 for continuous models
and s2_4x  for semi-continuous models)
</ul>
<p>
Some of the other decoder flags and their usual settings are as follows:
<pre>
        -logbase 1.0001 \
        -bestpath     0 \
        -mdeffn $mdef \
        -senmgaufn .cont. \
        -meanfn $ACMODDIR/means \
        -varfn $ACMODDIR/variances \
        -mixwfn $ACMODDIR/mixture_weights \
        -tmatfn $ACMODDIR/transition_matrices \
        -langwt  10.5  \
        -feat s3_1x39 \
        -topn 32 \
        -beam 1e-80 \
        -nwbeam 1e-40 \
        -dictfn $dictfn \
        -fdictfn $fdictfn \
        -fillpenfn $fillpenfn \
        -lmfn $lmfile \
        -inspen 0.2 \
        -ctlfn $ctlfn \
        -ctloffset $ctloffset \
        -ctlcount  $ctlcount \                      
        -cepdir $cepdir \
        -bptblsize 400000 \
        -matchsegfn $matchfile \
       -outlatdir $outlatdir \
        -agc none \
        -varnorm yes \         
</pre>
<p>

<a name="04"></a>
<!------------------------------------------------------------------------->
<center><h4><font color="red">BEFORE YOU TRAIN</font></h4></center>
<TABLE width="100%" bgcolor="#ffffff">
<td>FORCE-ALIGNMENT</td>
</table>
<!------------------------------------------------------------------------->

Multiple pronunciations are not automatically considered in
the SPHINX. You have to mark the right pronunciations in the
transcripts and insert the interword silences. For this
<p>
a) Remove the non-silence fillers from your filler dictionary and
   put them in your regular dictionary
<p>
b) Remove *all* silence markers (&lt;s&gt;, &lt;sil&gt; and &lt;/s&gt;) from your
training transcripts
<p>
For faligning with semi-continuous models, use the binary s3align provided
with the trainer package with the following flag settings. For faligning with
continuous models, change the settings of the flags -senmgaufn (.cont.),
-topn (no. of Gaussians in the Gaussian mixture modeling each HMM state),
-feat (the correct feature set):
<pre>
        -outsentfn      <faligned transcripts filename>
        -insentfn       <transcript filename> 
        -ctlfn          <ctl file corresponding to transcript file> 
        -ctloffset      0
        -ctlcount       <no. of entries in ctl file >
        -cepdir         <feature files directory>
        -dictfn         <dictionary>
        -fdictfn        < filler dictionary>
        -mdeffn         <mdef file name>
        -senmgaufn      .semi.
        -meanfn         <model directory/means>
        -varfn          <model directory/variances>
        -mixwfn         <model directory/mixture_weights>
        -tmatfn         <model directory/transition_matrices>
        -topn           4 
        -feat           s2_4x
        -beam           1e-90
        -agc            <max or none>
        -cmn            <none or current>
        -logfn          <logfile name>



</pre>
<p>



<em> last modified: 22 Nov. 2000 </em>
</body>
</html>