/usr/share/php/tests/Horde_Feed/Horde/Feed/fixtures/lexicon/http-www.coffeecode.net-feeds-index.rss2 is in php-horde-feed 2.0.1-4.

This file is owned by root:root, with mode 0o644.
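
This file is one of the "lexicon" fixtures that the Horde_Feed test suite parses to confirm that real-world feeds load cleanly. The short sketch below shows how a fixture like this might be read with Horde_Feed; readFile() and the Zend_Feed-style title()/link() accessors are assumptions based on the library's Zend_Feed lineage, not details taken from this page, so verify them against the installed package:

<?php
// Minimal sketch (assumed API): load the fixture from its installed
// path and walk the channel's items.
require 'Horde/Autoloader/Default.php'; // assumes the Horde autoloader package

$fixture = '/usr/share/php/tests/Horde_Feed/Horde/Feed/fixtures/lexicon/'
         . 'http-www.coffeecode.net-feeds-index.rss2';

$feed = Horde_Feed::readFile($fixture); // RSS 2.0 input should yield a Horde_Feed_Rss

echo $feed->title(), "\n";              // channel <title>
foreach ($feed as $entry) {
    echo '- ', $entry->title(), "\n";   // each <item>'s <title>
    echo '  ', $entry->link(), "\n";    // and its <link>
}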

The actual contents of the file can be viewed below.

<?xml version="1.0" encoding="utf-8" ?>

<rss version="2.0" 
   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
   xmlns:admin="http://webns.net/mvcb/"
   xmlns:dc="http://purl.org/dc/elements/1.1/"
   xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
   xmlns:wfw="http://wellformedweb.org/CommentAPI/"
   xmlns:content="http://purl.org/rss/1.0/modules/content/"
   >
<channel>
    <title>Coffee|Code : Dan Scott, Caffeinated Librarian Geek</title>
    <link>http://www.coffeecode.net/</link>
    <description>Many ideas crammed into bits...</description>
    <dc:language>en</dc:language>
    <generator>Serendipity 1.3.1 - http://www.s9y.org/</generator>
    <pubDate>Sat, 12 Jul 2008 20:02:58 GMT</pubDate>

    <image>
        <url>http://www.coffeecode.net/templates/default/img/s9y_banner_small.png</url>
        <title>RSS: Coffee|Code : Dan Scott, Caffeinated Librarian Geek - Many ideas crammed into bits...</title>
        <link>http://www.coffeecode.net/</link>
        <width>100</width>
        <height>21</height>
    </image>

<item>
    <title>Academic reserves for Evergreen: request for comments</title>
    <link>http://www.coffeecode.net/archives/164-Academic-reserves-for-Evergreen-request-for-comments.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/164-Academic-reserves-for-Evergreen-request-for-comments.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=164</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=164</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
I&#039;ve posted a second revision of the &lt;a href=&quot;http://open-ils.org/dokuwiki/doku.php?id=feature:academic_reserves&quot;&gt;&quot;academic reserves&quot; requirements RFC&lt;/a&gt;. I&#039;m not looking to boil the ocean with the first iteration of academic reserves for Evergreen (that&#039;s what third-party systems like &lt;a href=&quot;http://reservesdirect.org&quot;&gt;ReservesDirect&lt;/a&gt; and Ares are for), but I am hoping that by engaging the community in a discussion we can ensure that we build something that satisfies the core set of requirements for academic institutions in the area of reserves. My lack of familiarity with what other institutions are doing, whether with more capable systems, local workarounds, or third-party reserves systems installed, makes me nervous that I&#039;m missing something obvious. So if you feel like weighing in on the discussion, please address your comments to the &lt;a href=&quot;http://open-ils.org/listserv.php&quot;&gt;Evergreen General mailing list&lt;/a&gt;, add a comment here, or send me email if you prefer to keep your comments private.
&lt;/p&gt;
&lt;p&gt;
The biggest change in the second revision of the RFC is the inclusion of a base set of requirements for electronic reserves. For physical items alone, the requirements expressed in the RFC go far beyond the capabilities of the ILS we currently use at Laurentian; getting even basic support for electronic reserves in Evergreen would be a huge win for us when we migrate.
&lt;/p&gt;
&lt;p&gt;
That said, I&#039;ll probably start working on implementing a subset of the requirements real soon now; it should be easy enough to make a course correction should something significant turn up during the second round of comments.
&lt;/p&gt; 
    </content:encoded>

    <pubDate>Sat, 12 Jul 2008 16:02:58 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/164-guid.html</guid>
    
</item>
<item>
    <title>(unofficial) bzr repositories for Evergreen branches</title>
    <link>http://www.coffeecode.net/archives/163-unofficial-bzr-repositories-for-Evergreen-branches.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/163-unofficial-bzr-repositories-for-Evergreen-branches.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=163</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=163</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;I wrote a long blog post about the distributed version control workflow that the two Laurentian students working on &lt;a href=&quot;http://open-ils.org&quot;&gt;Evergreen&lt;/a&gt; (Kevin Beswick and Craig Ricciuto) are using successfully this summer, only to lose the post to a session timeout and my own lack of caution (note to self: if writing directly in the browser text field, CTRL-A CTRL-C before hitting preview!). So the gist of the blog post was:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://bazaar-vcs.org&quot;&gt;bzr&lt;/a&gt;, with the &lt;a href=&quot;http://bazaar-vcs.org/BzrSvn&quot;&gt;bzr-svn plugin&lt;/a&gt;, works quite well for cloning and updating from a centralized Subversion repository like Evergreen&#039;s; just watch out for memory consumption issues due to memory leaks in the Python bindings for Subversion (&lt;a href=&quot;http://jelmer.vernstok.nl/blog/archives/218-bzr-svn-now-with-its-own-Subversion-Python-bindings.html&quot;&gt;fixed&lt;/a&gt; in the development version of bzr-svn)&lt;/li&gt;
&lt;li&gt;there&#039;s no compelling reason for Evergreen to move to a different version control system; it&#039;s easy to use a distributed version control workflow with the Evergreen Subversion repository as-is&lt;/li&gt;
&lt;li&gt;you can tar up a bzr branch and untar it wherever you like and &quot;bzr up&quot; will immediately happily work (which is how I worked around the severe memory constraints on this server that ended up repeatedly running into the Linux out of memory killer when I was trying to create a bzr-svn checkout from scratch)&lt;/li&gt;
&lt;li&gt;it&#039;s a hell of a lot faster to check out or branch from a bzr repository than it is from a Subversion repository, so if you&#039;re going to take this approach set up one clean bzr repository using bzr-svn and check out or branch from that using bzr, rather than repeatedly using bzr-svn to create new branches&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To enable you to get a bzr repo of Evergreen quickly, I&#039;ve set up (unofficial, of course, but updated hourly) bzr repositories of the most useful Evergreen branches as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://bzr.coffeecode.net/ILS/trunk&quot;&gt;Evergreen trunk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://bzr.coffeecode.net/ILS/acq-experiment&quot;&gt;Evergreen acq-experiment&lt;/a&gt; (acquisitions and serials branch)&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://bzr.coffeecode.net/OpenSRF/trunk&quot;&gt;OpenSRF trunk&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
Enjoy!
&lt;/p&gt; 
    </content:encoded>

    <pubDate>Sat, 12 Jul 2008 15:46:26 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/163-guid.html</guid>
    
</item>
<item>
    <title>eIFL-FOSS ILS workshop on Evergreen, day one</title>
    <link>http://www.coffeecode.net/archives/162-eIFL-FOSS-ILS-workshop-on-Evergreen,-day-one.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/162-eIFL-FOSS-ILS-workshop-on-Evergreen,-day-one.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=162</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=162</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
The following summary is taken almost directly from an email I wrote to one of the would-be participants who was, sadly, prevented from making it to Yerevan due to travel complications. I meant to clean this up earlier and post it, but have not yet found the time - so I might as well just post it as is with most names obfuscated and possibly some additional editorial comments. Those who are new to installing and configuring Evergreen might find this useful; and reading through it, I remembered a few challenges I planned to tackle &lt;img src=&quot;http://www.coffeecode.net/templates/default/img/emoticons/smile.png&quot; alt=&quot;:-)&quot; style=&quot;display: inline; vertical-align: bottom;&quot; class=&quot;emoticon&quot; /&gt;
&lt;/p&gt;
&lt;hr width=&quot;50%&quot; /&gt;
&lt;p&gt;
Shortly after I arrived on Monday, I was able to try out the
install of Evergreen 1.2.1.4 that A. and G. from the Fundamental
Science Library (FSL) had completed with only two email exchanges with me.
I was very happy to see that they had successfully completed the install!
There was only one minor problem with the structure of the &quot;organizational
unit&quot; hierarchy that I had to fix. After that, we confirmed that we were
able to import bibliographic records from Z39.50 and attached call numbers and
copies to those records. Finally, we tried searching for the records in
the catalogue and were delighted to see that everything was working as
we had hoped. That allowed me to sleep well on Monday, in preparation for the
first day of the workshop on Tuesday.
&lt;/p&gt;
&lt;p&gt;
After the introductions of the workshop participants on Tuesday, I gave the
introduction to Evergreen presentation and Henri Damien Laurent of BibLibre
demonstrated Koha. Both Henri Damien Laurent and I showed our respective
library systems running with an Armenian interface, thanks to the translation
efforts of Tigran! Then we broke into separate Koha and Evergreen groups to
work together on our respective library systems. Of the attendees of the
workshop, E. was the most
interested in migrating his library (with 40,000 volumes) to Evergreen. A.,
from one of the 29 branches of the American University of Armenia (AUA), also
attended most of the Evergreen session. Even though his institution is mostly
interested in Koha, he wanted to be able to compare the two systems. Albert&#039;s
colleague S. attended the Koha training session so they would be able to
compare their experiences later.  Our group also had R. from the Netherlands
and A., G., and A. from FSL -- apparently Tigran is considering
running Evergreen as a union catalogue, so his IT people are very interested
in learning more.
&lt;/p&gt;
&lt;p&gt;
Our first exercise was to model the organizational unit hierarchy using the
configuration bootstrap interfaces in the /cgi-bin/config.cgi. We began by
drawing the hierarchy on a whiteboard. The &quot;Yerevan Consortium&quot;
represented the Evergreen system as a whole; we added the FSL, MSU, and AUA
systems as children of the Yerevan Consortium, and then added specific branches
as children of each of these systems. While we were creating this hierarchy, I
showed the participants how the organization unit type defines the labels used
in the catalogue as well as the respective depth in the hierarchy for each type.
&lt;/p&gt;
&lt;p&gt;
We then ensured that the systems and branches in the hierarchy had the right
types, and that the types were defined with valid parent-child relationships. We
found a few types that were children of themselves, which causes a problem in
searching. There was also some confusion about the relationship of types to
organization units, resulting in the creation of types with labels like &quot;FSL&quot;
rather than &quot;Library System&quot;. After a few minutes of explanation and working
through correcting the exercises, I think the participants were better able to
understand the relationship between types and organization units.
&lt;/p&gt;
&lt;p&gt;
After we were satisfied with the structure of the organization unit hierarchy, I
ran the autogen.sh script to update the catalogue and staff client
representations of the hierarchy. Well, first I demonstrated how search in the
catalogue will quickly be broken if you do not run the autogen.sh script &lt;img src=&quot;http://www.coffeecode.net/templates/default/img/emoticons/smile.png&quot; alt=&quot;:-)&quot; style=&quot;display: inline; vertical-align: bottom;&quot; class=&quot;emoticon&quot; /&gt;
&lt;/p&gt;
&lt;p&gt;
Our next step was to register new users with the Evergreen staff client. This
helped introduce the participants to the staff client, as well as giving them
a quick introduction to some parts of Evergreen that still need to be localized
to allow regional variations on postal code formats, telephone numbers, and
forms of identification. The default Evergreen staff client still enforces
American conventions, but fortunately I have had to create patches for Evergreen
to support my own country&#039;s standards so I can assure you that it is relatively
easy to change or remove these format checks. In the future, it would be
wonderful to include a localization pack for each locale interested in using
Evergreen that supports regional variations on date formats, phone number
patterns, etc. The participants were pleased with the feedback mechanism in
the staff client that summarized all of the remaining problems with the current
patron record (missing address, invalid phone number, etc) and made it easy to
switch between screens without losing any of the data they had already entered.
&lt;/p&gt;
&lt;p&gt;
Once we had registered new users for each of our branches, we went to work
importing new bibliographic records and attaching call numbers and copies to
those records. This gave us a good opportunity to see how changing the scope
of a search in Evergreen from &quot;Everywhere&quot; down to a specific branch changes
the search results, and demonstrated how the organization type labels are
displayed in the catalogue. As an aside, I should point out that in Evergreen
1.4 (due by the end of this summer), the labels are internationalized so that
different labels can be displayed depending on the locale in which you are
using the catalogue or staff client. Good news for those of us who work in
bilingual or multilingual libraries!
&lt;/p&gt;
&lt;p&gt;
Now that we had records with copies attached and patrons registered in our
Evergreen instance, we were able to use the catalogue&#039;s &quot;My Account&quot; features
to try out features like sharable bookbags, account preferences, and the
account summary. Users also have the ability to specify their
own user names and to log in with those instead (which means that they can
simply remember their unique nickname rather than, say, a 14-digit barcode).
&lt;/p&gt;
&lt;p&gt;
The first feature that the participants discovered, of course, was the
strong password enforcement feature. When a patron is registered, the system
automatically generates a random 4-digit password; however, this is not
considered to be a safe password, so when they log in they are forced to
change it to a longer password containing both numbers and letters.
&lt;/p&gt;
&lt;p&gt;
At this point, we also discovered a data validation bug: in the staff client,
it is possible to enter a user barcode that consists of letters and numbers.
However, in the catalogue, user barcodes containing letters are considered
invalid and the system will not even attempt to log that user in; it simply
rejects the barcode. I plan to ask E. to report this bug to the Evergreen
mailing list; it would be an excellent outcome of the workshop if participants
felt comfortable reporting problems to the mailing list, and reporting this
problem in particular would help improve the quality of Evergreen.
&lt;/p&gt;
&lt;p&gt;
Things were going reasonably well, but we noticed that the system was
running into a problem if you tried to edit a bibliographic record after
you had already created or imported the record. I had rather fortunately
already experienced this problem (it is a result of different behaviour
regarding XML namespaces between different versions of LibXML2) and knew
that it had been fixed in 1.2.2.1. So rather than trying to fix the problem
with the installed version of 1.2.1.4, I decided to try upgrading our
Evergreen system to the recently released 1.2.2.1 to demonstrate to the
participants that the upgrade process was fast, reasonably well documented,
and not nearly as complicated as the install process. This was, by the way,
something Randy had urged me to do, so I blame him for the subsequent problems
we experienced (hah!).
&lt;/p&gt;
&lt;p&gt;
The first problem is that the change from 1.2.1.x to 1.2.2.x requires the
installation of a new Perl module from CPAN (JSON::XS). This is not much of a
problem in itself, as the module is very easy to install and compile; however,
given our internet connection I had to wait a long time for the CPAN
repository metadata to be downloaded. The participants were still able to use
the system while this was happening, but we ended up hitting the coffee break
still waiting for CPAN to finish. (As an aside, Irakli and I were discussing
the possibility of having the eIFL-FOSS coordinators investigate setting up
local mirrors of FOSS resources like CPAN to speed up access to frequently
used resources).
&lt;/p&gt;
&lt;p&gt;
When we returned from the coffee break, the JSON::XS install had finished but
the participants were having problems searching and using the staff client. I
checked the logs (using the &quot;grep ERR /openils/var/log/*&quot; command to start
with) and saw that our database connections were dying for some reason. On a
hunch, I checked the system logs (&quot;dmesg&quot;) and discovered that the Linux &quot;out
of memory (OOM) killer&quot; had started killing random processes to try to free up
memory. It was killing the PostgreSQL processes, the Evergreen processes -
anything! I was lucky, because I had been reading about the OOM on Linux
after hearing about a Linux user that had run into a similar
problem, and knew that the way to disable the OOM was to prevent Linux from
overcommitting memory to processes in the first place. Wondering why our
system had started running out of memory in the first place, I ran &quot;free&quot; and
saw that it had been set up with no swap space; I confirmed this by running
fdisk to see that there were no swap partitions. Here, however, I made a
mistake. I ran &quot;echo &#039;2&#039; &gt; /proc/sys/vm/overcommit_memory&quot; to prevent Linux
from overcommitting memory to new processes and to prevent the OOM killer from
killing any more random processes. But this also meant that I was immediately
unable to launch any new programs - so I could not safely shut down PostgreSQL
and Evergreen, and we had to turn the power off to the system.
&lt;/p&gt;
&lt;p&gt;
Fortunately, the system started up cleanly again (hurray for journalled
filesystems) and I was able to complete the upgrade before the rest of our
hands on session for the day was finished. A few things that are missing in the
current upgrade instructions:
&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You have to compile the new version of Evergreen. The easiest way to do
this is to copy install.conf over from your previous version of Evergreen and
run &quot;make config&quot; to ensure that all of the settings are still correct, then
run &quot;make&quot; to build the new version of Evergreen.
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Very important&lt;/strong&gt;: Before installing the new version of Evergreen, you must
prevent the database schema from being completely recreated or it will destroy
any data that is already in your system. One way of doing this is, during the
&quot;make config&quot; step, to list all of the Evergreen targets &lt;u&gt;except for&lt;/u&gt;
openils_db. I am simply incapable of remembering all of those targets, so my
dirty workaround is to open Open-ILS/src/Makefile in an editor and modify the
&quot;install: &quot; make target by removing the &quot;storage-bootstrap&quot; make target. What
we really need is an &quot;upgrade&quot; target for &quot;make config&quot; that simply installs
everything except for the database schema.
&lt;/li&gt;
&lt;li&gt;Confirm that the new version of Evergreen has been installed by running
the srfsh command &quot;request open-ils.storage open-ils.system.version&quot;.
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
For tomorrow (today, by the time you receive this), A. and G. are going to
create a swap file to enable the system to swap memory to disk if need be; the
system has 1 GB of RAM, which is enough for a small Evergreen system but when
one is compiling programs at the same time as running Evergreen, swap space
really is necessary. This was a very good lesson learned for all of us!
&lt;/p&gt;
&lt;p&gt;
E. is also interested in learning more about basic Linux
administration. His institution currently runs on an entirely Windows
infrastructure, so the requirement to learn Linux is a fairly high hurdle.
I&#039;m hoping that the eIFL-FOSS list will be a good resource for him to start
that journey. He has also asked to go over the step-by-step instructions for
installing Evergreen, so I&#039;m considering starting that in a VMWare session so
that we can run through the steps. Our major goal for tomorrow is to migrate
some data from FSL&#039;s legacy system into Evergreen. Wish us luck!
&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Editorial comment:&lt;/em&gt; The combination of Armenian and Russian MARC records refused to load into the Evergreen 1.2.2.1 system, but on the flight home I confirmed that they loaded perfectly and were searchable on my Evergreen development system. As the development version will become this summer&#039;s 1.4 &quot;internationalization&quot; release, we are in good shape.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Editorial comment 2:&lt;/em&gt; On the second day, while running in circles trying to figure out why the records were refusing to load into the 1.2.2.1 system, I decided to try the &lt;a href=&quot;irc://chat.freenode.net/#openils-evergreen&quot;&gt;#openils-evergreen&lt;/a&gt; IRC channel. Yerevan is 9 hours ahead of the Toronto/Atlanta time zone, so at noon Yerevan time I was hardly expecting any of the current core Evergreen developers to be online - yet, to our amazement, Mike Rylander responded. This was a pretty convincing demonstration to the attendees that the core developers really aren&#039;t far away or hard to contact at all.&lt;/p&gt;
    </content:encoded>

    <pubDate>Mon, 23 Jun 2008 20:34:52 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/162-guid.html</guid>
    
</item>
<item>
    <title>Get out of jail, go free, part I</title>
    <link>http://www.coffeecode.net/archives/161-Get-out-of-jail,-go-free,-part-I.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/161-Get-out-of-jail,-go-free,-part-I.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=161</wfw:comment>

    <slash:comments>2</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=161</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
As Mark Leggott mentioned in &lt;a href=&quot;http://loomware.typepad.com/loomware/2008/05/vendor-to-open.html&quot;&gt;Vendor to Open Source ILS in 1 Month #1&lt;/a&gt;, I had the pleasure of assisting the migration of the University of Prince Edward Island library system from Unicorn to Evergreen. &lt;a href=&quot;http://coffeecode.net/archives/123-Evergreen-and-the-business-case-for-choosing-an-open-source-ILS.html&quot; title=&quot;Business case for choosing an open source library system&quot;&gt;A little over a year ago&lt;/a&gt;, in discussing the business case for open source library systems, I stated that one of the problems we faced with migrations is that the license for a proprietary system often inhibits openly sharing information about how to export data from those systems in machine-usable formats. Thus, the open source library community needs to encourage the development of &quot;migration ninjas&quot;. Little did I know that I would soon join the guild of ninjas and become &lt;em&gt;deadly and silent, and unspeakably violent&lt;/em&gt;(1)(2).
&lt;/p&gt;
&lt;p&gt;
As a result, I have created a utility script that should be of assistance to SirsiDynix Unicorn or Symphony sites who are interested in exploring the possibilities offered by other library systems. The rather dryly named &quot;export_unicorn.pl&quot; script was added to the &lt;a href=&quot;http://sirsiapi.org&quot; title=&quot;Unicorn API repository&quot;&gt;Unicorn API repository&lt;/a&gt; as entry # 228 today under a GPL-2.0 license(3). As the script uses the Unicorn/Symphony API, however, I am sadly (to the best of my knowledge) not free to simply share the script with anyone. Therefore, to gain access to the script you must be an API-certified Unicorn or Symphony customer. Still, by making an export script available to SirsiDynix customers that provides the raw data in a relatively standard output format, it should ease the effort required by the migration ninjas for open source systems to massage the data into the needed input formats, and to avoid the &lt;a href=&quot;http://www.google.ca/search?q=define%3Atetsubishi&quot; title=&quot;small, sharp, often poisoned caltrops scattered to immobilize or slow down pursuers&quot;&gt;tetsu-bishi&lt;/a&gt; scattered by the proprietary systems in defence of &quot;their&quot; data(4)(5).
&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;http://www.bnlmusic.com&quot;&gt;Barenaked Ladies&lt;/a&gt;, &quot;The Ninjas&quot;. &lt;em&gt;Their website is horrible Flash and JavaScript overkill but damnit Jim, they&#039;re musicians, not webmasters; the &quot;Snacktime&quot; album is especially recommended if you have kids.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Although I have to say I&#039;m nowhere near as violent as Mike Rylander, who with his PostgreSQL-fu can carve seemingly any piece of data into the shape needed for import into Evergreen.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Thanks to Mark Leggott for insisting that I retain copyright over the scripts created during the UPEI migration and for allowing me to share those scripts in the appropriate avenues. It&#039;s another weapon (shuriken? ninja-to?) in the migration ninja arsenal.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;This data does, after all, belong to the libraries who license a library system, but at least one company reportedly has a pattern of repeatedly removing interfaces that enable easy machine-readable access to library data...&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;I find myself being thankful that Unicorn does provide an API for generating machine-readable data exports; all that it cost our library was a week of my life and the associated training fees and travel expenses&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt; 
    </content:encoded>

    <pubDate>Mon, 16 Jun 2008 16:38:26 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/161-guid.html</guid>
    
</item>
<item>
    <title>Introduction to Evergreen at eIFL-FOSS ILS workshop</title>
    <link>http://www.coffeecode.net/archives/160-Introduction-to-Evergreen-at-eIFL-FOSS-ILS-workshop.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/160-Introduction-to-Evergreen-at-eIFL-FOSS-ILS-workshop.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=160</wfw:comment>

    <slash:comments>2</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=160</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
I was in Armenia last week, leading a &lt;a href=&quot;http://www.eifl.net/cps/sections/services/eifl-foss/ils/ils-project-workshop&quot;&gt;workshop on open source library systems&lt;/a&gt; along with Henri Damien Laurent from &lt;a href=&quot;http://biblibre.com&quot;&gt;BibLibre&lt;/a&gt;. My charge was to introduce Evergreen and lead participants in two days of hands-on experience with the system; Henri took on the same task for Koha. I cannot say enough good things about our host for the workshop, the &lt;a href=&quot;http://www.sci.am&quot;&gt;Fundamental Library of the National Academy of Sciences of Armenia&lt;/a&gt; headed up by Tigran Zargaryan; nor can I offer enough compliments to Randy Metcalfe on his skills in ensuring that everything ran smoothly; nor can I express how rewarding it was to meet representatives of so many different countries and how much I enjoyed their company! I look forward to helping the pilot sites succeed with their implementations.
&lt;/p&gt;
&lt;p&gt;
So, for the short term, I&#039;ll simply link to the &quot;Introduction to Evergreen&quot; presentation that I gave at the start of the workshop in 
&lt;a href=&quot;http://www.coffeecode.net/uploads/talks/2008/Evergreen-eIFL-FOSS.odp&quot; title=&quot;Evergreen-eIFL-FOSS.odp&quot; target=&quot;_blank&quot;&gt;OpenOffice&lt;/a&gt; and &lt;a href=&quot;http://www.coffeecode.net/uploads/talks/2008/Evergreen-eIFL-FOSS.ppt&quot; title=&quot;Evergreen-eIFL-FOSS.ppt&quot; target=&quot;_blank&quot;&gt;PowerPoint&lt;/a&gt; formats (as I promised to the participants). In the next day or two I plan to post a summary of the workshop activities; some of the lessons learned; and where I think I&#039;ll focus my attention next.
&lt;/p&gt; 
    </content:encoded>

    <pubDate>Mon, 16 Jun 2008 16:04:40 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/160-guid.html</guid>
    
</item>
<item>
    <title>In which digital manifestations of myself plague the Internets</title>
    <link>http://www.coffeecode.net/archives/159-In-which-digital-manifestations-of-myself-plague-the-Internets.html</link>
            <category>Coding</category>
    
    <comments>http://www.coffeecode.net/archives/159-In-which-digital-manifestations-of-myself-plague-the-Internets.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=159</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=159</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;Over the past few months, I&#039;ve been fortunate enough to participate in a few events that have been recorded and made available on the &#039;net for your perpetual amusement. Well - amusing if you&#039;re a special sort of person. Following are the three latest such adventures, in chronological order:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://www.archive.org/details/code4lib.conf.2008.pres.CouchDBsacrilege&quot;&gt;CouchDB: delicious sacrilege&lt;/a&gt; (presentation at the Code4Lib 2008 conference from February 2008). You can find my slides for the presentation &lt;a href=&quot;http://coffeecode.net/archives/151-CouchDB-delicious-sacrilege.html&quot;&gt;here&lt;/a&gt;, but Noel Peden did such a good job of recording the video that you probably don&#039;t need them. Watching this wasn&#039;t as painful as I thought it was going to be. Oh, and what is this? It&#039;s a fairly technical introduction to &lt;a href=&quot;http://incubator.apache.org/couchdb/&quot;&gt;CouchDB&lt;/a&gt;, a RESTful, replicating, high-performance document database. It&#039;s only 20 minutes long, and it may be amusing to watch my presentation &quot;style&quot; even if you don&#039;t care about the technical bits at all. Oh, and my monitor was blank for the entire thing, so I had to look over my shoulder to see what my audience was seeing. video_out_problems--&lt;/li&gt;
&lt;li&gt;I gave a presentation on the state of acquisitions in Evergreen as of March 12, 2008 at the &lt;a href=&quot;http://www.valenj.org/newvale/ols/symposium2008/program-schedule.shtml&quot;&gt;VALE Next Generation Academic Library Symposium&lt;/a&gt;. VALE made &lt;a href=&quot;mms://video.wpunj.edu/FMG1/locally_produced/wmv_300kbit/LD-4-10-08_OLS-Symposium-D_WMV_300Kbit.wmv&quot;&gt;video (streaming WMV)&lt;/a&gt; available, as well as &lt;a href=&quot;http://www.valenj.org/newvale/ols/symposium2008/media/dscott.mp3&quot;&gt;audio (MP3)&lt;/a&gt;, and my slides are available &lt;a href=&quot;http://coffeecode.net/archives/152-Evergreen-Acquisitions-at-VALEs-Next-Generation-Academic-Library-System-Symposium.html&quot;&gt;here&lt;/a&gt;. I haven&#039;t watched this one yet: for one thing, the state of Evergreen acquisitions has come a &lt;em&gt;long&lt;/em&gt; way in the past two months. For another, during the question-and-answer session that follows my talk, I recall giving a rather garbled answer to what should have been a straightforward question about the GPL license. Of course, nothing is straightforward about licensing...&lt;/li&gt;
&lt;li&gt;Last week I was at the University of Windsor collaborating with the likes of Mike Rylander, Bill Erickson, &lt;a href=&quot;http://lisletters.fiander.info/&quot; title=&quot;David Fiander&quot;&gt;David Fiander&lt;/a&gt;, &lt;a href=&quot;http://www.google.ca/search?q=art+rhyno&quot; title=&quot;A single link can&#039;t do the man justice&quot;&gt;Art Rhyno&lt;/a&gt;, &lt;a href=&quot;http://libgrunt.blogspot.com/&quot; title=&quot;John Fink&quot;&gt;John Fink&lt;/a&gt;, &lt;a href=&quot;http://lackoftalent.org/michael/blog/&quot; title=&quot;Michael Giarlo&quot;&gt;Michael Giarlo&lt;/a&gt;, and &lt;a href=&quot;http://fawcett.blogspot.com/&quot; title=&quot;Graham Fawcett&quot;&gt;Graham Fawcett&lt;/a&gt; on Evergreen&#039;s acquisitions system... David snapped &lt;a href=&quot;http://www.flickr.com/photos/bookgeek/2515771427/&quot;&gt;this photo&lt;/a&gt; of John and I working in the ancient &quot;Technical Services&quot; room (hah!) and it was picked up by a &lt;a href=&quot;http://www.blogwindsor.com/2008/05/28/library-geeks/#more-273&quot;&gt;local Windsor blog&lt;/a&gt; with the comment &quot;librarians are the new geek&quot;. Thanks, I think...&lt;/li&gt;
&lt;/ul&gt; 
    </content:encoded>

    <pubDate>Wed, 28 May 2008 08:46:20 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/159-guid.html</guid>
    
</item>
<item>
    <title>Weeding 2.0</title>
    <link>http://www.coffeecode.net/archives/158-Weeding-2.0.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/158-Weeding-2.0.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=158</wfw:comment>

    <slash:comments>5</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=158</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
&lt;p&gt;Okay, this is definitely a lame thing to be thinking about at midnight on a Saturday, but I was just playing with the shelf browser in the &lt;a href=&quot;http://open-ils.org&quot; title=&quot;Evergreen project page&quot;&gt;Evergreen&lt;/a&gt; representation of our 780,000 bibliographic records (okay, that is definitely the wrong thing to be &lt;em&gt;doing&lt;/em&gt; at midnight on a Saturday). For some reason, I was wandering through the subject collection pertinent to librarians (pray for my soul), noticed a book that probably should have been discarded years ago, and thought &quot;Gee, I don&#039;t want to deal with this right now, but wouldn&#039;t it be nice if I could just mark this &lt;strong&gt;Weed me&lt;/strong&gt; and forget about it until Monday?&quot;&lt;/p&gt;
&lt;p&gt;Then I realized that that wouldn&#039;t be a stretch at all. In Evergreen, users have &quot;bookbags&quot; to which they can add items. These bookbags can be shared as RSS feeds and otherwise easily exported into other formats. If we were running Evergreen for real, I could create a &quot;Weed me!&quot; bookbag, add in the suspect along with a bunch of other festering tomes, and send the RSS feed to a student to perform the manual labour. Or perhaps the RSS feed gets aggregated with other weeders&#039; feeds and a weeding list gets generated on a monthly basis for efficient labour practices. You get the idea.&lt;/p&gt;
&lt;p&gt;Of course, you would really want to have more information than just the stock shelf browsing interface at hand when making weeding decisions. For example, you would need a tally of recorded uses displayed beside the item, with the ability to drill down for totals by year. If you participate in a consortial &quot;last copy standing&quot; program, you would want a quick check to see if any other institutions still hold a copy of the resource. So, an enhanced interface would be needed to provide an experience that combines the traditional weeding approach of roaming the stacks and generating reports of items matching some minimum age and minimum usage criteria.&lt;/p&gt;
&lt;p&gt;Think about it a little further though (I&#039;m sure you&#039;re thinking a lot faster than me at this point; you&#039;re probably having the luxury of reading this at the beginning of the day, coffee in hand, invigorated after an early morning run in the lingering late spring chill... or not), and there are points in our institutional workflows where we could naturally introduce weeding activities. How do we get to the point of having three editions of a given text on the shelf? If I have the 1995, 2003, and 2007 editions of a text, I can assure you that when I ordered the 2007 edition I had already checked our ILS to see if we had a copy of that edition already, and would have noticed the previous editions. At that point, I should have the ability to say &quot;Oh - get rid of the 1995 edition &lt;strong&gt;now&lt;/strong&gt; and once the 2007 edition is processed and on the shelf, cull the 2003 edition to boot.&quot; If I was designing an acquisitions module today, that&#039;s certainly something I would consider as a nice-to-have. Ahem.&lt;/p&gt;
&lt;p&gt;Weeding 2.0 may not be a sexy subject. &lt;a href=&quot;http://www.google.ca/search?q=%22weeding+2.0%22&quot;&gt;Google&lt;/a&gt; and &lt;a href=&#039;http://search.yahoo.com/search?p=&quot;weeding+2.0&quot;&#039;&gt;Yahoo&lt;/a&gt; each turn up exactly four hits, none of them related to libraries, which is remarkable in this overly-hyped everything 2.0 world. But it&#039;s something we should consider in the design and tailoring of our library systems; and while it&#039;s not going to rank in my top level of priorities for Evergreen, it will work its way in there somewhere, sometime. Hopefully before the stacks in my subject areas buckle under the weight of unused, out-of-date books.&lt;/p&gt; 
    </content:encoded>

    <pubDate>Sun, 11 May 2008 00:07:18 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/158-guid.html</guid>
    
</item>
<item>
    <title>Two! 2! Too! Tu! Tout!</title>
    <link>http://www.coffeecode.net/archives/157-Two!-2!-Too!-Tu!-Tout!.html</link>
            <category>Amber</category>
    
    <comments>http://www.coffeecode.net/archives/157-Two!-2!-Too!-Tu!-Tout!.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=157</wfw:comment>

    <slash:comments>1</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=157</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;div class=&quot;serendipity_imageComment_left&quot; style=&quot;width: 110px&quot;&gt;&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;a class=&#039;serendipity_image_link&#039; href=&#039;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_1.jpg&#039; onclick=&quot;F1 = window.open(&#039;/uploads/pics/amber/amber_bday_2008_1.jpg&#039;,&#039;Zoom&#039;,&#039;height=501,width=663,top=141,left=188,toolbar=no,menubar=no,location=no,resize=1,resizable=1,scrollbars=yes&#039;); return false;&quot;&gt;&lt;!-- s9ymdb:263 --&gt;&lt;img class=&quot;serendipity_image_left&quot; width=&quot;110&quot; height=&quot;83&quot;  src=&quot;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_1.serendipityThumb.jpg&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;serendipity_imageComment_txt&quot;&gt;Ramping up&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;This year, we hosted a small party focusing on the little ones in Amber&#039;s life: a few of her friends from day care, and a friend from up the street.&lt;/p&gt;&lt;br clear=&quot;all&quot;/&gt;

&lt;div class=&quot;serendipity_imageComment_left&quot; style=&quot;width: 110px&quot;&gt;&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;a class=&#039;serendipity_image_link&#039; href=&#039;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_2.jpg&#039; onclick=&quot;F1 = window.open(&#039;/uploads/pics/amber/amber_bday_2008_2.jpg&#039;,&#039;Zoom&#039;,&#039;height=501,width=663,top=141,left=188,toolbar=no,menubar=no,location=no,resize=1,resizable=1,scrollbars=yes&#039;); return false;&quot;&gt;&lt;!-- s9ymdb:264 --&gt;&lt;img class=&quot;serendipity_image_left&quot; width=&quot;110&quot; height=&quot;83&quot;  src=&quot;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_2.serendipityThumb.jpg&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;serendipity_imageComment_txt&quot;&gt;The amazing cat cake&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Lynn used the same carrot cake recipe as last year (nice and tasty!), but this year it came in the appearance of Amber&#039;s favourite animal.&lt;/p&gt;&lt;br clear=&quot;all&quot;/&gt;

&lt;div class=&quot;serendipity_imageComment_left&quot; style=&quot;width: 110px&quot;&gt;&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;a class=&#039;serendipity_image_link&#039; href=&#039;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_3.jpg&#039; onclick=&quot;F1 = window.open(&#039;/uploads/pics/amber/amber_bday_2008_3.jpg&#039;,&#039;Zoom&#039;,&#039;height=501,width=663,top=141,left=188,toolbar=no,menubar=no,location=no,resize=1,resizable=1,scrollbars=yes&#039;); return false;&quot;&gt;&lt;!-- s9ymdb:265 --&gt;&lt;img class=&quot;serendipity_image_left&quot; width=&quot;110&quot; height=&quot;83&quot;  src=&quot;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_3.serendipityThumb.jpg&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;serendipity_imageComment_txt&quot;&gt;She blew out the candle herself. Self! SELF!&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Blowing out the candle was a huge success.&lt;/p&gt;&lt;br clear=&quot;all&quot;/&gt;

&lt;div class=&quot;serendipity_imageComment_left&quot; style=&quot;width: 110px&quot;&gt;&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;a class=&#039;serendipity_image_link&#039; href=&#039;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_4.jpg&#039; onclick=&quot;F1 = window.open(&#039;/uploads/pics/amber/amber_bday_2008_4.jpg&#039;,&#039;Zoom&#039;,&#039;height=501,width=663,top=141,left=188,toolbar=no,menubar=no,location=no,resize=1,resizable=1,scrollbars=yes&#039;); return false;&quot;&gt;&lt;!-- s9ymdb:266 --&gt;&lt;img class=&quot;serendipity_image_left&quot; width=&quot;110&quot; height=&quot;83&quot;  src=&quot;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_4.serendipityThumb.jpg&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;serendipity_imageComment_txt&quot;&gt;Cake distribution went smoothly&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Very little cake was wasted in the making of this birthday. Most of the cake was consumed rather than applied to faces or clothes.&lt;/p&gt;&lt;br clear=&quot;all&quot;/&gt;

&lt;div class=&quot;serendipity_imageComment_left&quot; style=&quot;width: 110px&quot;&gt;&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;a class=&#039;serendipity_image_link&#039; href=&#039;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_5.jpg&#039; onclick=&quot;F1 = window.open(&#039;/uploads/pics/amber/amber_bday_2008_5.jpg&#039;,&#039;Zoom&#039;,&#039;height=501,width=663,top=141,left=188,toolbar=no,menubar=no,location=no,resize=1,resizable=1,scrollbars=yes&#039;); return false;&quot;&gt;&lt;!-- s9ymdb:267 --&gt;&lt;img class=&quot;serendipity_image_left&quot; width=&quot;110&quot; height=&quot;83&quot;  src=&quot;http://www.coffeecode.net/uploads/pics/amber/amber_bday_2008_5.serendipityThumb.jpg&quot; alt=&quot;&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;serendipity_imageComment_txt&quot;&gt;Daddy and Amber wound down with a book in the window on a rainy day&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;Thanks to everyone for their cards and calls and emails celebrating Amber&#039;s birthday!&lt;/p&gt; 
    </content:encoded>

    <pubDate>Sat, 10 May 2008 09:32:00 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/157-guid.html</guid>
    
</item>
<item>
    <title>Tuning PostgreSQL for Evergreen on a test server</title>
    <link>http://www.coffeecode.net/archives/156-Tuning-PostgreSQL-for-Evergreen-on-a-test-server.html</link>
            <category>Evergreen</category>
            <category>PostgreSQL</category>
    
    <comments>http://www.coffeecode.net/archives/156-Tuning-PostgreSQL-for-Evergreen-on-a-test-server.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=156</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=156</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;&lt;strong&gt;Update 2008-05-01&lt;/strong&gt;: Fixed a typo for sysctl: -a parameter simply shows all settings; -w parameter is needed to write the setting. Duh.&lt;/p&gt;
&lt;p&gt;
Once you have decided on and acquired your &lt;a href=&quot;http://www.coffeecode.net/archives/155-Test-server-strategies.html&quot;&gt;test hardware for Evergreen&lt;/a&gt;, you need to think about tuning your PostgreSQL database server. Once you start loading bibliographic records, you might notice that after 100,000 records or so your search response times aren&#039;t too snappy. Don&#039;t snarl at Evergreen. By default, PostgreSQL ships with very conservative settings (sized for machines with something like 256 MB of RAM!), so if you don&#039;t tune those settings you&#039;re getting a false representation of your system&#039;s capabilities.
&lt;/p&gt;
&lt;p&gt;
The &quot;right&quot; settings for PostgreSQL depend significantly on your hardware and deployment context, but in almost any circumstance you will want to bump up the settings from the delivered defaults. To give you an idea of what you need to consider, I thought I would share the settings that we&#039;re currently using on our Evergreen test server at Laurentian University. You might be able to use these as a starting point and adjust them accordingly once you&#039;ve run some representative load tests against your configuration. And it&#039;s useful documentation for me to fall back on in a few months, when all of this has escaped my grasp &lt;img src=&quot;http://www.coffeecode.net/templates/default/img/emoticons/smile.png&quot; alt=&quot;:-)&quot; style=&quot;display: inline; vertical-align: bottom;&quot; class=&quot;emoticon&quot; /&gt;
&lt;/p&gt;
&lt;h4&gt;The defaults (as shipped in Debian Etch)&lt;/h4&gt;
&lt;p&gt;The defaults in Debian Etch are quite conservative. Consider that our test server has 12GB of RAM. The default only allocates 1MB of RAM to work memory (which is critical for sorting performance) and only 8MB of RAM to shared buffers. Following are the defaults set in /etc/postgresql/8.1/main/postgresql.conf:&lt;/p&gt;
&lt;pre&gt;
# - Memory -

#shared_buffers = 1000                  # min 16 or max_connections*2, 8KB each
#temp_buffers = 1000                    # min 100, 8KB each
#max_prepared_transactions = 5          # can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
#work_mem = 1024                        # min 64, size in KB
#maintenance_work_mem = 16384           # min 1024, size in KB
#max_stack_depth = 2048                 # min 100, size in KB

# - Free Space Map -

#max_fsm_pages = 20000                  # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000               # min 100, ~70 bytes each
&lt;/pre&gt;
&lt;h4&gt;Our test server settings&lt;/h4&gt;
&lt;p&gt;Our test server has 12 GB of RAM. Assuming that the PostgreSQL defaults were set for a system with 1 GB of RAM, we should be able to multiply the memory-based settings by at least a factor of 12. We&#039;re a little bit more aggressive than that in our settings. Note, however, that this is a single-server install of Evergreen, so we&#039;re also running memcached, ejabberd, Apache, and all of the Evergreen services as well as the database - oh, and a test instance of an institutional repository, among other apps - so we&#039;re not nearly as aggressive as we would be in a dedicated PostgreSQL server configuration. Please note that I&#039;m making no claims that this is the optimal set of configuration values for PostgreSQL even on our own hardware!&lt;/p&gt;
&lt;pre&gt;
# shared_buffers: much of our performance depends on sorting, so we&#039;ll set it 100X the default
# some tuning guides suggest cranking this up to as much as 30% of your available RAM
shared_buffers = 100000 # 8K * 100000 = ~ 0.8 GB

# work_mem: how much RAM each concurrent process is allowed to claim before swapping to disk
# your workload will probably have a large number of concurrent processes
work_mem=524288 # 512 MB

# max_fsm_pages: increased because PostgreSQL demanded it
max_fsm_pages = 200000
&lt;/pre&gt;
&lt;p&gt;After you change these settings, you will need to restart PostgreSQL to make the settings take effect.&lt;/p&gt;
&lt;h4&gt;Kernel tuning&lt;/h4&gt;
&lt;p&gt;In addition to PostgreSQL complaining about max_fsm_pages not being high enough, your operating system kernel defaults for SysV shared memory might not be high enough to support the amount of RAM PostgreSQL demands as a result of your modifications. In one of our test configurations, we had cranked up work_mem to 8GB; Debian complained about an insufficient SHMMAX setting, so we were able to adjust that by running the following command as root to set the kernel SHMMAX to 8GB (8*1024^3 bytes):&lt;/p&gt;
&lt;pre&gt;
sysctl -w kernel.shmmax=8589934592
&lt;/pre&gt;
&lt;p&gt;To make this setting sticky through reboots, you can simply modify /etc/sysctl.conf to include the following line:&lt;/p&gt;
&lt;pre&gt;
# Set SHMMAX to 8GB for PostgreSQL
kernel.shmmax=8589934592
&lt;/pre&gt;
&lt;h4&gt;Other measures&lt;/h4&gt;
&lt;p&gt;
Debian Etch comes with PostgreSQL 8.1. The first version of PostgreSQL 8.1 was released in November 2005. That&#039;s a long time in computer years. Version 8.2, which was released less than a year later, &quot;adds many functionality and performance improvements&quot; (according to the &lt;a href=&quot;http://www.postgresql.org/docs/8.2/static/release-8-2.html&quot;&gt;release notes&lt;/a&gt;). If you&#039;re not getting the performance you expect from your hardware with Debian Etch, perhaps a &lt;a href=&quot;http://packages.debian.org/etch-backports/postgresql-8.2&quot;&gt;backport of PostgreSQL 8.2&lt;/a&gt; would help out.
&lt;/p&gt;
&lt;h4&gt;Further resources&lt;/h4&gt;
&lt;p&gt;This is just a shallow dip into PostgreSQL tuning for Evergreen - hopefully enough to alert you to some of the factors you need to consider if you&#039;re putting Evergreen into a serious testing environment or production environment. Here are a few places to dig deeper into the art of PostgreSQL tuning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;PostgreSQL manual, resource consumption section of server configuration: &lt;a href=&quot;http://www.postgresql.org/docs/8.1/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY&quot;&gt;version 8.1&lt;/a&gt; and &lt;a href=&quot;http://www.postgresql.org/docs/8.2/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY&quot;&gt;version 8.2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;An annotated version of the 8.0 parameters with more explicit advice is available at &lt;a href=&quot;http://www.powerpostgresql.com/Downloads/annotated_conf_80.html&quot;&gt;PowerPostgreSQL&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Some good advice is buried about halfway down &lt;a href=&quot;http://cbbrowne.com/info/postgresql.html&quot;&gt;Christopher Browne&#039;s page&lt;/a&gt; under the heading &quot;Tuning PostgreSQL&quot;, along with links to further resources&lt;/li&gt;
&lt;li&gt;The &quot;Performance Whack-A-Mole&quot; presentation at  &lt;a href=&quot;http://www.powerpostgresql.com/Docs&quot;&gt;PowerPostgreSQL&lt;/a&gt; is a great tutorial for holistic system tuning&lt;/li&gt;
&lt;/ul&gt; 
    </content:encoded>

    <pubDate>Mon, 14 Apr 2008 14:48:19 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/156-guid.html</guid>
    
</item>
<item>
    <title>Test server strategies</title>
    <link>http://www.coffeecode.net/archives/155-Test-server-strategies.html</link>
            <category>Coding</category>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/155-Test-server-strategies.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=155</wfw:comment>

    <slash:comments>11</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=155</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
Occasionally on the &lt;a href=&quot;http://open-ils.org/irc.php&quot;&gt;#OpenILS-Evergreen IRC channel&lt;/a&gt;, a question comes up about what kind of hardware a site should buy if they&#039;re getting serious about trying out Evergreen. I had exactly the same chat with Mike Rylander back in December, so I thought it might be useful to share the strategy we developed in case other organizations are interested in piggy-backing on our research. We came up with three different scenarios, depending on the funding available to the organization and how serious the organization is about testing, developing, and deploying Evergreen.
&lt;/p&gt;
&lt;p&gt;
You can also look at the scenarios as stages, as the scenarios enable
progressively more realistic testing. An organization can always
start with a single server and add more servers over time; if you can
swing a significant discount for buying in bulk, however, it might
make sense to bite the bullet early.
&lt;/p&gt;
&lt;p&gt;
Some pertinent facts about our requirements: we will eventually be loading around 5 million bibliographic records onto the system. We&#039;re an academic organization, so concurrent searching and circulation loads will be low relative to public libraries.
&lt;/p&gt;
&lt;h4&gt;Scenario 1: A single bargain-basement testing server&lt;/h4&gt;
&lt;p&gt;
In this scenario, the organization purchases a single server for the short
term, and configures it to run the entire Evergreen + OpenSRF stack:
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;database&lt;/li&gt;
&lt;li&gt;Web server&lt;/li&gt;
&lt;li&gt;Jabber messaging&lt;/li&gt;
&lt;li&gt;memcached&lt;/li&gt;
&lt;li&gt;OpenSRF applications&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
This server needs to have powerful CPUs, large amounts of RAM, and many fast (10K RPM or higher) hard drives in a
striped RAID configuration (the latter because database performance
typically gets knee-capped by disk access). A &quot;higher education&quot; quote online from a reputable big-name vendor for a rack-mounted 2U database server with 2x4-core
CPU, 16GB RAM, 6x73GB RAID 5 drives comes in at approximately $7000.
&lt;/p&gt;
&lt;p&gt;
This scenario is fine for development and testing with a limited
number of users, but if you intend to do any sort of stress testing
with this server or throw it open to the public, performance will
likely grind to a halt. &lt;strong&gt;Note:&lt;/strong&gt; This is close to the system that we&#039;re currently running at &lt;a href=&quot;http://biblio-dev.laurentian.ca&quot;&gt;http://biblio-dev.laurentian.ca&lt;/a&gt; - 12 GB of RAM, 2 dual-core CPUs - with 800K bibliographic records and pretty snappy search performance. It&#039;s certainly nothing to sneeze at.
&lt;/p&gt;
&lt;h4&gt;Scenario 2: one database server, one network server&lt;/h4&gt;
&lt;p&gt;
In this scenario, you purchase a database server and a network server.
We&#039;ll use the same specs from scenario 1 for the database server, and
a CPU + RAM-oriented server for the network server (disk access isn&#039;t
a factor for the network apps, so you just buy two small mirrored
drives). The stock higher education quote for a rack-mounted 1U
network server with 2x4-core CPU, 16GB RAM, 2x73GB RAID 1 drives is
approximately $5250.
&lt;/p&gt;
&lt;p&gt;
This scenario will support development and testing, as well as enable
you to perform relatively representative stress testing runs with a
significant number of simultaneous users.
&lt;/p&gt;
&lt;h4&gt;Scenario 3: two database servers, two or three network servers&lt;/h4&gt;
&lt;p&gt;
In this scenario, you purchase two database servers, so that you can test
database replication and split database loads between search and
reporting, and two or three network servers, so that you can test
different distributions of the caching and network apps across the
servers and determine the configuration that best meets your expected
demands. The cost of the five servers adds up to less than $30,000 - less than a single traditional proprietary UNIX server - and would be less still if you can negotiate a bulk discount.
&lt;/p&gt;
&lt;p&gt;
The third scenario supports development and testing, and will give you
practical experience with a configuration that would approximate your
production deployment of servers. When you go live, you could move one of the database servers
and all but one of the network servers over to the production cluster, and revert back to scenario one for your ongoing test and development environment.
&lt;/p&gt;
&lt;h4&gt;The Conifer approach&lt;/h4&gt;
&lt;p&gt;
We opted to go with the third scenario to build a serious test cluster for our consortium. However, the &quot;scenarios as stages&quot; approach ended up being our strategy, as our original choice of Dell servers came with RAID controllers that do not work well under Debian. After returning the servers to Dell, we were forced to press one of our backup servers into service as a scenario-one style server while waiting for our new order from HP to arrive.
&lt;/p&gt; 
    </content:encoded>

    <pubDate>Wed, 09 Apr 2008 20:39:18 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/155-guid.html</guid>
    
</item>
<item>
    <title>Inspiring confidence that my problem will be solved</title>
    <link>http://www.coffeecode.net/archives/154-Inspiring-confidence-that-my-problem-will-be-solved.html</link>
            <category>Coding</category>
    
    <comments>http://www.coffeecode.net/archives/154-Inspiring-confidence-that-my-problem-will-be-solved.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=154</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=154</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;Hmm. I think I&#039;m in trouble if the support site itself is incapable of displaying accented characters properly.&lt;/p&gt;
&lt;div class=&quot;serendipity_imageComment_center&quot;&gt;&lt;div class=&quot;serendipity_imageComment_img&quot;&gt;&lt;!-- s9ymdb:259 --&gt;&lt;img class=&quot;serendipity_image_center&quot; src=&quot;http://www.coffeecode.net/uploads/pics/inspiring_confidence.png&quot; alt=&quot;Corrupted characters in a problem report about corrupted characters. Oh dear.&quot; /&gt;&lt;/div&gt;&lt;div class=&quot;serendipity_imageComment_txt&quot;&gt;Corrupted characters in a problem report about corrupted characters. Oh dear.&lt;/div&gt;&lt;/div&gt;
&lt;p&gt;
My analysis of the problem is that the content in the middle is contained within a frame, and is actually encoded in ISO-8859-1 - but doesn&#039;t have an encoding declaration. And the containing HTML page, of course, declares that it is UTF-8. So poor Mozilla gets very confused. And our poor users continue to get corrupted characters in their reminder and overdue notices.
&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Some information removed from the screencap to protect the innocent - the client care people are actually excellent folk, and I&#039;m sure they&#039;re just as frustrated by the problem reporting system as we are.&lt;/p&gt; 
    </content:encoded>

    <pubDate>Thu, 27 Mar 2008 21:39:27 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/154-guid.html</guid>
    
</item>
<item>
    <title>Progress with Project Conifer</title>
    <link>http://www.coffeecode.net/archives/153-Progress-with-Project-Conifer.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/153-Progress-with-Project-Conifer.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=153</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=153</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;Project Conifer is the effort by McMaster University, University of Windsor, and Laurentian University to put together a consortial instance of Evergreen. &lt;a href=&quot;http://conifer.mcmaster.ca/node/15&quot;&gt;A few weeks back&lt;/a&gt;, we agreed that May 2009 would be our go-live date. So the clock is ticking quite loudly in my ears.
&lt;/p&gt;
&lt;p&gt;Today I got an &lt;a href=&quot;http://biblio-dev.laurentian.ca&quot;&gt;Evergreen test server&lt;/a&gt; up and running, loaded with the records from the consortium of Laurentian partners. I hit a few bumps on the road, but eventually successfully loaded about 800,000 bibliographic records and about 500,000 items. I also turned on the Syndetics enrichment data, so some items offer cover images, tables of contents, reviews, and author information. The response time is pretty snappy (it&#039;s running on a 4-core server with 12GB of RAM).&lt;/p&gt;
&lt;p&gt;Things that made my task harder than it probably should have been:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;yaz-marcdump generated invalid XML when I converted our MARC records from MARC21 to MARC21XML format (the conversion is sketched below this list). Maybe this problem is fixed in later versions of yaz-marcdump (I was using the stable Debian Etch version, 2.1.56, which is &lt;em&gt;crazy&lt;/em&gt; old), or I could have tried &lt;a href=&quot;http://marc4j.tigris.org/&quot;&gt;marc4j&lt;/a&gt; or &lt;a href=&quot;http://oregonstate.edu/~reeset/marcedit/html/index.html&quot;&gt;MarcEdit&lt;/a&gt; instead for better results, but I didn&#039;t, and it cascaded into problems with...&lt;/li&gt;
&lt;li&gt;Dumping all of the holdings as part of the bibliographic records threw things off when some of the records had so many holdings attached (think of a weekly periodical that a library circulates, where each issue has its own barcode) that they spilled over MARC&#039;s record length limit, resulting in multiple MARC records just to hold the holdings - which causes some problems for the basic import process. I eventually punted on trying to parse the MARC21XML for holdings and just dumped the data I needed directly from Unicorn in pipe-delimited format.&lt;/li&gt;
&lt;li&gt;Not tuning PostgreSQL &lt;em&gt;before&lt;/em&gt; starting to load data into the database was just plain stupid. The defaults for PostgreSQL are incredibly conservative, and must be modified to handle large transactions and to perform well. Here are the tweaks I made for our 12GB machine, starting with the Linux kernel memory settings:&lt;pre&gt;
# -- in /etc/sysctl.conf --
# Set SHMMAX to 8GB for PostgreSQL
kernel.shmmax=8589934592
&lt;/pre&gt;
&lt;pre&gt;
# -- in /etc/postgresql/8.1/main/postgresql.conf --
# Crank up shared_buffers and work_mem
shared_buffers = 10000
work_mem=8388608 # 8 GB, equal to our kernel.shmmax
max_fsm_pages = 200000
&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
Evergreen depends on accurate fixed fields to determine the format of an item. Unfortunately, many of our electronic resources appear not to have been coded as such... so we have some data clean-up to do.
&lt;/li&gt;
&lt;/ul&gt;
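&lt;p&gt;For the curious, the MARC21-to-MARC21XML conversion mentioned in the first item looks roughly like the following with a recent yaz-marcdump. This is a sketch only - the exact option names vary between yaz versions, and the file names here are made up:&lt;/p&gt;
&lt;pre&gt;
# convert binary MARC21 to MARC21XML, transcoding MARC-8 to UTF-8 on the way
yaz-marcdump -f MARC-8 -t UTF-8 -o marcxml records.mrc &gt; records.xml
&lt;/pre&gt;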
&lt;p&gt;
Ah well: as Jerry Pournelle used to say in his Chaos Manor column, &quot;I do these things so that you don&#039;t have to.&quot; Hopefully it makes a smoother path for others to get to Evergreen.
&lt;/p&gt; 
    </content:encoded>

    <pubDate>Wed, 26 Mar 2008 22:15:25 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/153-guid.html</guid>
    
</item>
<item>
    <title>Evergreen Acquisitions at VALE's Next Generation Academic Library System Symposium</title>
    <link>http://www.coffeecode.net/archives/152-Evergreen-Acquisitions-at-VALEs-Next-Generation-Academic-Library-System-Symposium.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/152-Evergreen-Acquisitions-at-VALEs-Next-Generation-Academic-Library-System-Symposium.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=152</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=152</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
On Wednesday, I was fortunate enough to join a distinguished panel
of speakers and a crowded music hall at &lt;a href=&quot;http://www.valenj.org/newvale/ols/symposium2008/&quot;&gt;VALE&#039;s Next Generation Academic Library System Symposium&lt;/a&gt; at &lt;a href=&quot;http://www.tcnj.edu&quot;&gt;The College of New Jersey&lt;/a&gt;. I had been invited to
present an update on the state of acquisitions support in Evergreen, as well
as to provide a brief overview of Project Conifer (the collaboration
between Laurentian University, McMaster University, and the University of
Windsor to create a consortial implementation of Evergreen).
&lt;/p&gt;
&lt;p&gt;
To summarize what I intended to be the main points of my
presentation (which may or may not have come through in real life):
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Project Conifer is an existing effort to create a shared consortial implementation of Evergreen for academic institutions; we would be delighted to have others join forces with us&lt;/li&gt;
&lt;li&gt;If acquisitions isn&#039;t as far along as we would have hoped by now, it&#039;s because
&lt;ul&gt;
&lt;li&gt;We (the Project Conifer institutions) haven&#039;t contributed enough
development resource to the effort thus far - although we are planning to
correct this problem in the near term by hiring one or more developers to
work on the requirements that we, as academic institutions, need for a
successful Evergreen experience. If you&#039;re interested in a position as an
Evergreen developer for Project Conifer,
&lt;a href=&quot;mailto:dan@coffeecode.net&quot;&gt;let&#039;s talk&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Creating an enterprise-grade acquisitions system demands much more
effort and attention to detail than creating a simplistic acquisitions
system that would be acceptable for a small library. If it took two years
to build Evergreen&#039;s circulation, cataloging, reporting, and OPAC functionality
from scratch, it&#039;s not unreasonable that it should take a year or more to
build an acquisitions system to the same standards as the rest of Evergreen&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Evergreen acquisitions has made significant progress since December 2007,
and at this pace we expect a complete set of basic functionality to be in
place by the end of April. By &quot;basic functionality&quot; I mean that the manual
acquisitions mode should be supported with a minimalist user interface. MARC
order record batch loading, EDI send/receive support, and a more polished
user interface will take some more time - probably September-ish 2008. You can see the in-development, regularly updated bare-bones interface at &lt;a href=&quot;http://acq.open-ils.org/oils/acq/base/index&quot;&gt;http://acq.open-ils.org/oils/acq/base/index&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
I have to say that Equinox is making incredible progress considering that
they&#039;re still doing the bulk of the work with the same amount of development
resource that they had before Georgia PINES went live on Evergreen, and
they started their own company, and they started bringing BC PINES on line,
and they began receiving an onslaught of requests for visits and presentations
and conference calls...  imagine what we could do with Evergreen, together,
if a few more sites or consortiums were able to devote human or
financial resources to enhancing Evergreen.
&lt;/p&gt;
&lt;p&gt;
Here are my slides in &lt;a href=&quot;http://www.coffeecode.net/uploads/talks/2008/Evergreen_acquisitions_VALE.odp&quot; title=&quot;Evergreen_acquisitions_VALE.odp&quot; target=&quot;_blank&quot;&gt;OpenOffice&lt;/a&gt; and &lt;a href=&quot;http://www.coffeecode.net/uploads/talks/2008/Evergreen_acquisitions_VALE.ppt&quot; title=&quot;Evergreen_acquisitions_VALE.ppt&quot; target=&quot;_blank&quot;&gt;PowerPoint&lt;/a&gt; format. If you&#039;re going to
look at my slides, I highly recommend reading the presenter notes that I wrote;
I&#039;ve recently realized that presenter notes are as much for the benefit of a
disconnected audience as they are useful preparation material for the presenter. In the absence of a full paper on the subject matter at hand, presenter notes should help flesh out the brevity forced by slideware.
&lt;/p&gt;
&lt;p&gt;
A huge thanks to Ed Corrado, Anne Hoang, and Kurt Wagner for making the overall experience
so enjoyable. I was honoured to be part of such a high-quality panel of
speakers.
&lt;/p&gt;
&lt;p&gt;
Oh, and as an aside - the entire symposium was videotaped, and the
presentations and question and answer sessions will be made available
from the VALE Web site. I will update this post when those become available. I
wonder if Ed got this idea from code4lib... in any case, I certainly applaud
the initiative.
&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; Umm, more polished acquisitions will likely be available in Sept. 2008, not 2007... thanks to Brad Lajeunesse for pointing out that time travel would be required to make that happen&lt;/p&gt; 
    </content:encoded>

    <pubDate>Sat, 15 Mar 2008 12:34:14 -0400</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/152-guid.html</guid>
    
</item>
<item>
    <title>CouchDB: delicious sacrilege</title>
    <link>http://www.coffeecode.net/archives/151-CouchDB-delicious-sacrilege.html</link>
            <category>Coding</category>
    
    <comments>http://www.coffeecode.net/archives/151-CouchDB-delicious-sacrilege.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=151</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=151</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
Well, the talk about CouchDB (an open-source document database similar in concept to Lotus Notes, but with a RESTful API and JSON as an interchange format) wasn&#039;t as much of a train wreck as it could have been. I learned a lot putting it together, and had some fun with the content - and even though it was a marked departure from the style of many of the other presentations, I think it was generally positively received (at least, from what I could glean from the backscroll in #code4lib and from comments).
&lt;/p&gt;
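&lt;p&gt;To give a flavour of just how simple that API is, here&#039;s a minimal sketch against a local CouchDB instance on its default port - the database and document names are, of course, made up:&lt;/p&gt;
&lt;pre&gt;
# create a database
curl -X PUT http://localhost:5984/albums

# store a JSON document with the ID &quot;sacrilege&quot;
curl -X PUT http://localhost:5984/albums/sacrilege -d &#039;{&quot;title&quot;: &quot;Delicious Sacrilege&quot;, &quot;year&quot;: 2008}&#039;

# retrieve the document again
curl http://localhost:5984/albums/sacrilege
&lt;/pre&gt;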
&lt;p&gt;
I veer towards the &quot;here&#039;s how you do stuff&quot; technical angle because that tends to be what I&#039;m interested in hearing from other people. And even though a 20 minute slot is probably the wrong venue for technical information, CouchDB is so simple in some respects that it&#039;s actually enough to get the core message across.
&lt;/p&gt;
&lt;p&gt;
Here are the slides for your amusement and enlightenment. At some point I&#039;m going to write down the  OpenOffice.org secret that lets you change the colour of hypertext links - I&#039;ve learned and forgotten that a number of times already.
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;http://www.coffeecode.net/uploads/talks/2008/couchdb.odp&quot;&gt;CouchDB: Delicious Sacrilege (OpenOffice Impress)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.coffeecode.net/uploads/talks/2008/couchdb.pdf&quot;&gt;CouchDB: Delicious Sacrilege (PDF)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt; 
    </content:encoded>

    <pubDate>Thu, 28 Feb 2008 17:23:09 -0500</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/151-guid.html</guid>
    
</item>
<item>
    <title>Evergreen workshop at code4lib 2008</title>
    <link>http://www.coffeecode.net/archives/150-Evergreen-workshop-at-code4lib-2008.html</link>
            <category>Evergreen</category>
    
    <comments>http://www.coffeecode.net/archives/150-Evergreen-workshop-at-code4lib-2008.html#comments</comments>
    <wfw:comment>http://www.coffeecode.net/wfwcomment.php?cid=150</wfw:comment>

    <slash:comments>0</slash:comments>
    <wfw:commentRss>http://www.coffeecode.net/rss.php?version=2.0&amp;type=comments&amp;cid=150</wfw:commentRss>
    

    <author>dan@coffeecode.net (Dan Scott)</author>
    <content:encoded>
    &lt;p&gt;
Yesterday morning we (Bill Erickson, Sally Murphy &lt;em&gt;aka&lt;/em&gt; &quot;Murph&quot;, and I) ran an &lt;a href=&quot;http://open-ils.org/dokuwiki/doku.php?id=advocacy:evergreen_workshop&quot;&gt;Evergreen workshop&lt;/a&gt; (rough agenda, presentation, and links to associated resources are available from that page) for the code4lib 2008 preconference session. My personal goals were:
&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Walk people through a simple Evergreen install&lt;/li&gt;
&lt;li&gt;Get a small set of bib records and holdings imported&lt;/li&gt;
&lt;li&gt;Attract some more developers to the project by demonstrating how seductively simple it is to add a new service to Evergreen at the OpenSRF layer and then expose it in the catalogue or staff client&lt;/li&gt;
&lt;li&gt;Show off some of the great features of Evergreen that haven&#039;t had nearly enough exposure (reports, &quot;fresh meat&quot; feeds, exporter interface)&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Problems&lt;/h4&gt;
&lt;p&gt;
&lt;a name=&quot;problem1&quot;&gt;Problem #1&lt;/a&gt;: I started organizing the pre-conference too late. To save time on the install section, I asked attendees to prepare by setting up a VMWare image or bootable Debian or Ubuntu partition and get a bunch of the prerequisite packages installed ahead of time. But by the time I sent my request out, the attendees only had a few days to prepare - and many of them probably hadn&#039;t worked with VMWare before, so they suddenly had another learning barrier to overcome. I wasn&#039;t too surprised when only about 25% of the room had been able to &quot;do their homework&quot;.
&lt;/p&gt;
&lt;p&gt;
Problem #2: I lost at least six hours of preparation time when, due to my own stupidity, I left my passport in a hotel in Atlanta and ended up having to drive across the border from Vancouver to Portland, Oregon. Six hours, man... that&#039;s almost a full day thrown away, which is critical when you&#039;ve left things too late (see &lt;a href=&quot;#problem1&quot;&gt;#1&lt;/a&gt;). Continuing on the negative side, all I could listen to during the drive was completely formulaic rock stations and political rhetoric worthy of 10-year-olds as I drove through Washington. If radio is a dying medium, I have a very good idea why...
&lt;/p&gt;
&lt;p&gt;
Problem #3: We ran into bizarre projector problems that, for some reason, prevented us from being able to see our laptop screens at the same time as the projected screen. This laptop worked fine with the projector at the OLA Superconference just a few weeks ago, and Bill was afflicted by the same problem - so it really put a crimp in my ability to switch from the presentation to the live install image. My neck was wrecked from constantly twisting around to peer up at the screen while trying to do some minor mousing around.
&lt;/p&gt;
&lt;p&gt;
&lt;a name=&quot;problem4&quot;&gt;Problem #4&lt;/a&gt;: I severely underestimated how long the install process would take when trying to support a whole group of people at once; you&#039;re guaranteed to have a question on almost every step. When we were preparing for the workshop, we had this idea that we would take a hard line and spend no more than one or two minutes on each step - which certainly would have saved a lot of time. But when you&#039;ve made a connection with the audience, and people have made it through the first dozen steps, it suddenly becomes a lot, lot harder to simply abandon them with the promise that you&#039;ll help them later. So we ended up spending something like 2 hours on the install (including a break) rather than the 45 minutes we had been aiming for.
&lt;/p&gt;
&lt;p&gt;
Problem #5: We were overly optimistic about how much we could get done in 2.5 hours. Even without the severe compounding of our time crunch by &lt;a href=&quot;#problem4&quot;&gt;#4&lt;/a&gt;, in retrospect it&#039;s clear we would still have been rushing through all of the other pieces. I think we knew that anyways, but we were just so excited about showing off Evergreen that we wanted to show off as much as possible.
&lt;/p&gt;
&lt;p&gt;
It&#039;s not really all that bleak though. There were successes, too.
&lt;/p&gt;
&lt;h4&gt;Successes&lt;/h4&gt;
&lt;p&gt;
Success #1: We have at least one person who successfully made it through the install phase and who successfully imported the bib records and holdings, and several others who feel they are &lt;em&gt;very&lt;/em&gt; close to finishing. I&#039;m hoping that we can spend a few minutes over the course of the conference to help them reach that finish line.
&lt;/p&gt;
&lt;p&gt;
Success #2: We have a real example of &lt;a href=&quot;http://open-ils.org/dokuwiki/doku.php?id=importing:holdings:import_via_staging_table&quot;&gt;how to import holdings&lt;/a&gt; into Evergreen now. This is something that people have been asking for on the list, and I&#039;m really happy to have been able to package up what Mike Rylander provided with a set of sample records and a sample &quot;parse holdings&quot; script that hopefully others will be able to adapt to their own needs.
&lt;/p&gt;
&lt;p&gt;
Success #3: I had feedback from a number of people who, even though they weren&#039;t trying to go through the install, still felt it was worthwhile getting an explanation of all the pieces that OpenSRF and Evergreen depend on and how they fit together. I think it was clear that the complexity involved in installing Evergreen isn&#039;t so much OpenSRF or Evergreen themselves as it is a few finicky details involving networking - largely ejabberd and Net::Domain&#039;s insistence on specific and sometimes conflicting definitions of hostnames.
&lt;/p&gt;
&lt;p&gt;
Success #4: Bill did get to quickly demonstrate &lt;a href=&quot;http://open-ils.org/dokuwiki/doku.php?id=advocacy:evergreen_workshop#customizing_evergreennew_service&quot;&gt;how to add a new OpenSRF service&lt;/a&gt; (&quot;reset my password and email it to me&quot;) and how to integrate that into the catalogue. It was rough and dirty code, but at approximately one page of Perl code and about 10 lines of JavaScript I think it was a convincing demonstration of how easy it is to extend Evergreen.
&lt;/p&gt;
&lt;p&gt;
Success #5: We have laid the groundwork for an Evergreen workshop now, and having gone through the experience once we&#039;ll be able to refine the concept for future events. One idea that we&#039;ve already kicked around is to split it into several tracks so that attendees can self-select what they&#039;re interested in and so that we can give enough time to each section. Say, two (or three) hours for an installfest; two hours for &quot;exploring the dark corners of Evergreen&quot;; and two hours on developing and extending Evergreen (OpenSRF, catalogue, staff client). Or we could have spent the entire pre-conference day on Evergreen.
&lt;/p&gt;
&lt;h4&gt;Reflection&lt;/h4&gt;
&lt;p&gt;
I think it might have been really cool if we had worked with LibraryFind and Zotero to set up an ongoing theme throughout the three pre-conference sessions. We could have collaborated on pre-requisites, so that the LibraryFind install could go on top of the same image as the Evergreen install, and then the newly installed Evergreen image could have been added as a LibraryFind source during the LibraryFind administration section. Then, during the Zotero session, Evergreen and LibraryFind could have been added as new sources for capturing citation information (by making Evergreen and LibraryFind generate COInS objects that Zotero understands or giving Zotero the ability to understand the various formats that Evergreen offers via unAPI).
&lt;/p&gt;
&lt;p&gt;
Of course, it also would have required a heck of a lot of pre-conference planning. A suggestion I would make for next year&#039;s pre-conference organizers would be to communicate as much as possible ahead of time to set expectations and help your attendees determine what your agenda should be. We could have just thrown out the entire Evergreen install section, had people get comfortable with a pre-installed VMWare image ahead of time, and focused most of the session on developing and exposing OpenSRF services, for example, if that&#039;s what our attendees wanted.
&lt;/p&gt; 
    </content:encoded>

    <pubDate>Tue, 26 Feb 2008 08:44:43 -0500</pubDate>
    <guid isPermaLink="false">http://www.coffeecode.net/archives/150-guid.html</guid>
    
</item>

</channel>
</rss>