LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   Koha Library Software, Zebra or Quagga Software? (https://www.linuxquestions.org/questions/slackware-14/koha-library-software-zebra-or-quagga-software-4175452272/)

tronayne 03-01-2013 11:38 AM

Koha Library Software, Zebra or Quagga Software?
 
I work with a group of folks indexing a 60,000+ book collection (almost none of which are 20th century publications -- they date from the 14th through the 19th centuries). Currently the group is using FoxBase on a PC which is stumbling over its own feet (well, that kind of figures, eh?) and it's time for a change. I'm thinking either a Slackware 14.x 32-bit or 64-bit platform as a server.

I'm looking at Koha Library Software (http://koha-community.org/), a library automation package, used by libraries 'round the world, which looks pretty darn good to me.

One requirement of Koha is Zebra (http://www.gnu.org/software/zebra/), which appears to be obsolete; it's been superseded by Quagga (available at SlackBuilds.org at http://slackbuilds.org/repository/14.0/network/quagga/).

Before I mess up a system installing either Zebra or Quagga, I'm wondering (1) what impact installing "a routing software suite" is going to have on my "normal" operations (Quagga installs a daemon named zebra) and (2) whether the two are compatible; i.e., does it matter if I install Quagga instead of Zebra?

And, of course, if anybody has experience with library systems that will handle upwards of 100,000 titles (Koha will) and provide a web-page interface backed by MySQL (or equivalent), I'd really appreciate knowing about it.

Thanks.

Mike_M 03-01-2013 12:47 PM

Are you sure you're looking at the correct Zebra software? It seems strange for library indexing software to require a routing suite. I don't know exactly what Koha requires (why can't they just include a URL in their INSTALL document, especially when the possibility of ambiguity exists?), but it seems far more likely this is the Zebra in question:

https://www.indexdata.com/zebra

Assuming the Zebra you've been looking at is the wrong one, both of your questions are no longer pertinent.

tronayne 03-01-2013 01:22 PM

Well, they did not supply a link, so Google turns up GNU Zebra (obsolete since 2005) and suggests Quagga (for which there is a SlackBuild at SlackBuilds.org), and I really kinda wondered what the heck routing software was needed for.

Your link, thank you, turned up what looks a whole heck of a lot more like what's needed. And, you're right, ain't no sense to either GNU Zebra (kinda makes sense, a gnu is one kind of critter, a zebra is just another kind of critter; sheesh) or Quagga (who the heck knows what that means).

And, well, Koha is a whole lot of Perl stuff (gag) with a whole lot of Perl modules being needed from CPAN (choke), and that's why I gave up on Bugzilla. We shall see after we get Zebra installed and working.

Thanks again.

Mike_M 03-01-2013 01:33 PM

No problem. It irks me when authors and developers can't be bothered to include links to the dependencies they list, even for well known ones such as Apache's httpd. It's far worse when a dependency is more obscure, as in this case.

There's nothing wrong with Perl. ;) Plus, CPAN is a fantastic resource. (OK, that may be a stretch; I'm sure people can list plenty of things wrong with Perl, but the same can be said for any language.)

kikinovak 03-01-2013 02:51 PM

Quote:

Originally Posted by tronayne (Post 4902585)
I work with a group of folks indexing a 60,000+ book collection (almost none of which are 20th century publications -- they date from the 14th through the 19th centuries). Currently the group is using FoxBase on a PC which is stumbling over its own feet (well, that kind of figures, eh?) and it's time for a change. I'm thinking either a Slackware 14.x 32-bit or 64-bit platform as a server.

Forget Koha and use PMB (http://www.pmbservices.fr/nouveau_site/pmbservices.html), probably the best free library management software out there. I used it to network a dozen public libraries around here, with about as many entries as you have.

tronayne 03-01-2013 03:28 PM

Hey, Kiki? It might be great if I could read French (their English button doesn't work).

Thanks for the thought.

meejies 03-01-2013 10:46 PM

Quote:

Originally Posted by Mike_M (Post 4902662)
It irks me when authors and developers can't be bothered to include links to the dependencies they list, even for well known ones such as Apache's httpd. It's far worse when a dependency is more obscure, as in this case.

Not that it makes things a whole lot better, but I believe that Zebra is an optional plugin for Koha, not a dependency.

To the OP: There is also Evergreen ILS, which was designed for large academic libraries, so it should handle 100k books just fine.

tronayne 03-02-2013 08:44 AM

I'm looking at Evergreen as we speak; looks interesting, thank you.

I'm going to evaluate Koha, Evergreen and (if I can find an English version of it) PMB too.

I've got roughly 2,000 book records I can export from Tellico for import into these systems (they all seem to have that capability). Might take a little fiddling but it looks doable. I have to bear in mind that the folks actually doing the work of cataloging are volunteers (including me); only one person has ever actually worked in a public library (not me), and computer-using skills are kind of all over the map (the idea of using one of these systems being that pretty much anybody can use a web browser to enter, search for and view information). The folks will get a crack at all three to see which they like better and find more useful.

The selling point for any one of these systems (or any other system that might come along) is an easy, intuitive user interface that makes sense, with a DBMS behind it. We're dealing with materials from the 14th century on and, well, no ISBN, no LOC number, no Dewey Decimal, barely a title, author and maybe publisher (you'd be amazed, believe me). We're dealing with English, French (old and new), Portuguese, Latin, Italian, German (old German) and more than a few volumes in Greek (which is all Greek to me, and I've forgotten most of the Latin I never knew). It's kind of a challenge.

I'm inclined to use Tellico for the "other" stuff -- coins, stamps, art (of all kinds), machines, scientific instruments, an observatory with telescope, you name it and the quantities are in the thousands. Tellico does an excellent job of recording collections of those sorts (the search capability is quite useful). I'm not so sure that Tellico is fully useful for the books (although I've used it for my own library, which is some 2,000+ volumes) because it's a single-user application and the library systems all support multiple terminals (Tellico is not a DBMS).

After finding the "right" Zebra, thanks to Mike_M, and reading about it, it looks to be a Good Thing for use with Koha for indexing; time will tell.

Anyway, thanks to all for the input.

Mike_M 03-02-2013 12:39 PM

Quote:

Originally Posted by meejies (Post 4902911)
Not that it makes things a whole lot better, but I believe that Zebra is an optional plugin for Koha, not a dependency.

You may be right, but the following appears near the top of the INSTALL file (emphasis mine):

Quote:

You need to have a server running Perl 5.10 or later, MySQL 5, Zebra
2.0.22 or greater and a webserver (preferably Apache2) before installing
Koha.
That certainly makes it seem like Zebra is a requirement, and that's all I have to work from, having never used the software.

lstamm 03-02-2013 10:32 PM

I work at a library that currently uses Evergreen, as a small member of the Sitka consortium. This is a complex software suite, which is not exactly easy to install and set up on Slackware (all the development is done on Debian). From what you have described, Evergreen sounds like overkill. I also don't see how a group without considerable cataloguing experience could handle the initial data entry if you can't get MARC records for your bibliographic items from somewhere.

One neat thing about Evergreen is that it allows check-outs of non-biblio items like rooms, audio-visual equipment, etc., through the booking module. You might be able to leverage this for your items without biblio info.

I have also installed and played around with Koha a bit, but this was a few years ago. It was much simpler in scope than Evergreen at the time, but it has evolved considerably since then and I haven't kept up with it. But I would guess that Koha would fit your needs better than Evergreen.

kikinovak 03-03-2013 03:53 AM

Quote:

Originally Posted by tronayne (Post 4902727)
Hey, Kiki? It might be great if I could read French (their English button doesn't work).

Thanks for the thought.

I'm sorry. I didn't realize until now their application isn't localized for anything non-French.

tronayne 03-03-2013 06:48 AM

So far, I've spent roughly 16 hours screwing with Perl and CPAN getting the "required" Perl modules installed. Perl, in my opinion, is one of the most useless things I've ever dealt with, at least from the point of view of getting required functions installed just so you can actually use the thing -- nothing but grief with Bugzilla, even more so with Koha -- about 120 Perl modules required and, in my experience, lots of failures, troubleshooting and do-it-again.

Enough bitching.

Kiki, no problem, I'm grateful for the input.

Lstamm, thank you for your input. If I can get Koha going and see what it does, Evergreen will be the next system to look at. Either or both appear to have the capability of serving queries from patrons (or, for that matter, the outside world -- but not yet). Right now, it's get everything recorded in a usable, preferably portable format. The man who built the collection, a physician, did keep records. On 3x5 cards, hand-written (in "doctor," which is nearly impossible to read) and filed those 60,000+ cards in drawers by... hmm: subject, author, title, illustrator, whim, perhaps. There is no rhyme or reason to the card files.

An example: the first book I evaluated was from George Washington's personal library at Mount Vernon. There had been a fire there, the book was scorched, and our collector had a slipcase made for it to keep it in as good a condition as possible (it was kinda stinky). The book was a Portuguese to New French dictionary published about 1780-something (he had many slipcases made for delicate volumes, serious guy this fellow). So, where do you look? Dictionary? Nope. Portuguese? Nope. French? Nope. New French? Nope. Editor? Nope. Finally found it in the W's: Washington, George. And they're all pretty much the same, some under author, quite a few under Cruikshank (George Cruikshank, English illustrator, did the illustrations for Charles Dickens and a bunch of other folks) -- quite a few volumes are filed under Cruikshank rather than the author.

Once a card is found, you know when he got it, who sold it to him, how much he paid for it plus, if possible, the author, title, publisher, printer, binder, perhaps who owned it and usually undecipherable notes.

And there are 60,000+.

Another example: a four-volume work by Luigi Bellinzoni, Usi e costumi antichi e moderni di tutti i popoli del mondo, 1884 (Customs and traditions, ancient and modern of all the peoples of the world). Still haven't found the card, but I keep looking.

Just getting this stuff into usable form and standardized is the challenge and getting a sensible system to do that with (the original FoxBase setup has fallen down) is starting to look like more of a challenge every minute.

So, anyway, if I can get Koha going, load some stuff into it, do some Internet searches with it and see how that goes, that will be step one (and if I can't get it to go, it's scrap); then step two will be Evergreen and so on down the yellow brick road. The ultimate goal is that the facility will be a research and cultural center; I think it's important to have a means of browsing the collections, and I'm just looking for a tool that will fit.

Thanks for the input.

kikinovak 03-03-2013 01:02 PM

I checked again, and PMB's user interface is localized into several languages, including English. I'll try and write a short HOWTO about this application, since it works really great.

Forget Koha, it's just a confusing mess.

tronayne 03-03-2013 01:57 PM

Quote:

Originally Posted by kikinovak (Post 4903812)
I checked again, and PMB's user interface is localized into several languages, including English. I'll try and write a short HOWTO about this application, since it works really great.

Forget Koha, it's just a confusing mess.

Yeah, I'm kind of getting there with the struggles with Perl and CPAN and errors and all that normal crap -- right now I'm trying to install Evergreen; gotta see what's on offer.

Appreciate your help and advice.

Mike_M 03-03-2013 03:27 PM

Quote:

Originally Posted by tronayne (Post 4903654)
So far, I've spent roughly 16 hours screwing with Perl and CPAN getting the "required" Perl modules installed. Perl, in my opinion, is one of the most useless things I've ever dealt with, at least from the point of view of getting required functions installed just so you can actually use the thing -- nothing but grief with Bugzilla, even more so with Koha -- about 120 Perl modules required and, in my experience, lots of failures, trouble shooting and do-it-again.

Out of curiosity, how are you trying to install the Perl modules? CPAN is generally considered one of Perl's greatest strengths. The CPAN module should be able to take care of dependencies for you, without the need for you to install every needed module one at a time.

If you're downloading the module sources and building each of them by hand, then you're making it harder than you need to. As root, use the CPAN module's shell:

Code:

# perl -MCPAN -e shell;
Answer a few questions (let it autoconfigure as much as possible). When finished with that, from the cpan prompt install whatever module you need:

Code:

cpan[1]> install <Module Name>
Replace <Module Name> with the full name of whatever module you're trying to install, for example to install the Net::Server module:

Code:

cpan[1]> install Net::Server
The CPAN module should then take care of downloading, building, and installing Net::Server along with any of its dependencies.

My apologies if this is already what you are doing, but your comments above suggest you may not be taking advantage of some features that make Perl very simple to work with.
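One more thing that can save time before firing up a long CPAN session against a list like Koha's: check which modules are already present. A small sketch (the module names here are just examples, not Koha's actual dependency list):

```shell
#!/bin/sh
# Try to load each Perl module; perl exits 0 if the module is
# installed, non-zero if it is not.
for mod in strict Data::Dumper Business::ISBN; do
    if perl -M"$mod" -e 1 2>/dev/null; then
        echo "$mod: installed"
    else
        echo "$mod: MISSING"
    fi
done
```

Anything reported MISSING can then be handed to the CPAN shell in one go instead of discovering the gaps one failure at a time.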

T3slider 03-03-2013 03:30 PM

cpan2tgz makes things even easier (and provides added accountability).

tronayne 03-03-2013 04:45 PM

My experience with Perl is limited to installing things like Bugzilla -- well, only Bugzilla, truth be known. It's inevitable that something will fail tests, not install and just be a big PITA to deal with. I don't use it for anything; I prefer going in other directions whenever possible. I started learning it about, oh, 20 years ago (or something like that) and didn't get too far; that was on Solaris boxes, and it kinda worked but failed more often than it worked, and that experience has repeated itself over and over again, frustrating the heck out of me every time I have to do something with Perl to get some application going. Even the automagic "install all the modules" thing that comes with Bugzilla falls down. I suppose that if I'd spend a few days and really dig in, some of that frustration might be alleviated but, well, I just haven't gotten to where I really want to do that.

It might be something to do with Slackware (which I kind of doubt) but the sum total of my experience has left a pretty sour taste.

And, yeah, I do know about the autoconfigure and how to use it (well, sorta) to get modules installed (thanks for the little tutorial, by the way), no need for apologies, I'm always grateful for any information -- been at this stuff for a long, long time but every day is an opportunity to learn something new and different and I'm grateful for that.

For example, one of the modules needed is Business::ISBN. It fails:
Code:

<lots and lots of these above here>
DEL(516/519): /root/.cpan/build/Locale-Currency-Format-1.30-ijdep9.yml
DEL(517/519): /root/.cpan/build/Locale-Currency-Format-1.30-ijdep9
DEL(518/519): /root/.cpan/build/Locale-PO-0.23-uz72Ku
DEL(519/519): /root/.cpan/build/Locale-PO-0.23-uz72Ku.yml

  CPAN.pm: Building B/BD/BDFOY/Business-ISBN-2.05.tar.gz

Checking if your kit is complete...
Looks good
Writing Makefile for Business::ISBN
Writing MYMETA.yml and MYMETA.json
cp lib/ISBN10.pm blib/lib/Business/ISBN10.pm
cp lib/ISBN13.pm blib/lib/Business/ISBN13.pm
cp lib/ISBN.pm blib/lib/Business/ISBN.pm
Manifying blib/man3/ISBN10.3
Manifying blib/man3/ISBN13.3
Manifying blib/man3/ISBN.3
  BDFOY/Business-ISBN-2.05.tar.gz
  /usr/bin/make -- OK
Running make test
/usr/bin/perl5.16.1 "-MTest::Manifest" "-e" "run_t_manifest(0, 'blib/lib', 'blib/arch',  )"
t/load.t ................. ok 
t/pod.t .................. ok 
t/pod_coverage.t ......... ok 
t/constants.t ............ defined(%hash) is deprecated at t/constants.t line 9.
        (Maybe you should just omit the defined()?)
t/constants.t ............ ok   
t/interface.t ............ ok 
t/albania.t .............. ok   
t/isbn10.t ............... 1/?
#  Failed test 'Bad group code [9997022576] is invalid'
#  at t/isbn10.t line 101.
#          got: '-1'
#    expected: '-2'
#
# Checking ISBNs... (this may take a bit)
t/isbn10.t ............... 38/? #
# Checking bad ISBNs... (this should be fast)
# Looks like you failed 1 test of 39.
t/isbn10.t ............... Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/39 subtests
t/isbn13.t ............... 1/?
#  Failed test 'Bad group code [978-9997022576] is invalid'
#  at t/isbn13.t line 130.
#          got: '-1'
#    expected: '-2'
#
# Checking ISBN13s... (this may take a bit)
#
# Checking bad ISBN13s... (this should be fast)
# Looks like you failed 1 test of 41.
t/isbn13.t ............... Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/41 subtests
t/valid_isbn_checksum.t .. ok 
t/xisbn10.t .............. ok   
t/png_barcode.t .......... skipped: GD is missing GD::Font->Small. Can't continue.
t/rt/27107.t ............. ok 
t/rt/28843.t ............. ok 
t/rt/29089.t ............. #
# Checking ISBN13s... (this may take a bit)
t/rt/29089.t ............. ok 
t/rt/29292.t ............. ok 

Test Summary Report
-------------------
t/isbn10.t            (Wstat: 256 Tests: 39 Failed: 1)
  Failed test:  33
  Non-zero exit status: 1
t/isbn13.t            (Wstat: 256 Tests: 41 Failed: 1)
  Failed test:  35
  Non-zero exit status: 1
Files=15, Tests=149, 12 wallclock secs ( 0.05 usr  0.02 sys +  6.84 cusr  0.10 csys =  7.01 CPU)
Result: FAIL
Failed 2/15 test programs. 2/149 subtests failed.
make: *** [test_dynamic] Error 255
  BDFOY/Business-ISBN-2.05.tar.gz
  /usr/bin/make test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
  reports BDFOY/Business-ISBN-2.05.tar.gz
Running make install
  make test had returned bad status, won't install without force

This is all too typical, and I don't have a clue what to do about it.
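For what it's worth, the CPAN shell has a standard escape hatch for a case like this, where a module fails only its own test suite (here, two ISBN group-code tests) and you've judged the failure harmless. This is generic CPAN usage, not Koha-specific advice:

```shell
# Inside the CPAN shell, re-run the install ignoring the failed tests:
#
#   cpan[1]> force install Business::ISBN
#
# or skip the test phase for that one module (use sparingly):
#
#   cpan[1]> notest install Business::ISBN
#
# The same force-install from the command line, via the cpan(1) wrapper:
cpan -f -i Business::ISBN
```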

Oh, yeah, that missing GD? Like hell it's not there!

That's why I'm so enthusiastic about Perl.

Thank you for your help and advice (I did learn something!).

lstamm 03-04-2013 12:58 AM

Quote:

An example: the first book I evaluated was from George Washington's personal library at Mount Vernon. There had been a fire there, the book was scorched, and our collector had a slipcase made for it to keep it in as good a condition as possible (it was kinda stinky). The book was a Portuguese to New French dictionary published about 1780-something (he had many slipcases made for delicate volumes, serious guy this fellow). So, where do you look? Dictionary? Nope. Portuguese? Nope. French? Nope. New French? Nope. Editor? Nope. Finally found it in the W's: Washington, George. And they're all pretty much the same, some under author, quite a few under Cruikshank (Edgar Cruikshank, English illustrator, did the illustrations for Charles Dickens and a bunch of other folks) -- quite a few volumes are found under Cruikshank rather than the author.
I think if you don't know how to properly catalogue this sort of item into standard MARC tags, then you need to find some library cataloguing help. Or give up on the idea of using either Koha or Evergreen. They are pretty strict in their cataloguing requirements. Searches won't work in either system if the items aren't properly catalogued.

You seem to have two separate problems here: one is how to organize the collection in a coherent manner, and the other is to set up a software system that will allow searches of that organized collection. I'm not a cataloguer, but, believe me, it is more complicated than it appears on the surface. And most systems set up for libraries expect the data to be entered according to pretty rigid cataloguing systems, usually some form of MARC or Dublin Core.

It might actually be easier for you to roll your own database-backed web application for the number of items you have than to install Evergreen and learn all the nuances of library cataloguing.

Richard Cranium 03-04-2013 03:16 AM

Quote:

Originally Posted by lstamm (Post 4904101)
I think if you don't know how to properly catalogue this sort of item into standard MARC tags, then you need to find some library cataloguing help.

I think the OP is talking about how the original owner cataloged his library, not how he and his group intend to do it.

tronayne 03-04-2013 06:09 AM

I am talking about the owner's, um, system -- it's a 3x5 card, hand written, with what he cared about (title, author(s), illustrator, publisher, printer, binder, subject, seller, price, acquired date, comments, some other stuff). I've thought about using Tellico (a collection manager) for porting what's already recorded plus all the rest of the collection. Tellico does internet search which, in many cases, returns "standard" information about given volumes (at least a LOC number in many cases). Tellico comes in handy for "update from all sources" and I think that will take us a long way down the road to getting these items cataloged in a standard way making them available to researchers and others (there are some pretty interesting and important volumes in the collection). I'm figuring on using a "standard" system for making that happen.

I have looked at rolling my own database, either MySQL or PostgreSQL, on the LAMP model but, essentially, that would look pretty much like the card files do now, maybe cleaned up, maybe with keywords, maybe with subjects, materials condition and other information -- faster, better, cleaner for sure (and I'm perfectly capable of database design, got paid for doing that for a long, long time). At least better search capability on multiple fields, methinks.
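For what it's worth, the card fields described earlier in the thread (title, author, illustrator, publisher, printer, binder, subject, seller, price, acquisition date, notes) map onto a single table fairly directly. A first-cut sketch of the roll-your-own route; the column names are guesses from the card description, not a finished design:

```shell
#!/bin/sh
# Write a first-cut MySQL schema for the card data to schema.sql.
# Load it with something like: mysql library_db < schema.sql
cat > schema.sql <<'EOF'
CREATE TABLE item (
    item_id     INT AUTO_INCREMENT PRIMARY KEY,
    title       VARCHAR(512),
    author      VARCHAR(255),
    illustrator VARCHAR(255),
    publisher   VARCHAR(255),
    printer     VARCHAR(255),
    binder      VARCHAR(255),
    subject     VARCHAR(255),
    seller      VARCHAR(255),
    price_paid  DECIMAL(10,2),
    acquired    DATE,
    notes       TEXT,
    KEY idx_author (author),
    KEY idx_subject (subject)
);
EOF
echo "wrote schema.sql"
```

The indexes on author and subject are just the obvious starting points for multi-field searches; a real design would grow out of how the volunteers actually look things up.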

Either Tellico or roll-your-own may be a good intermediate step: get off the cards and on to something you can actually find something with. Might be better than trying to go to a full-blown library system. I dunno, but I'm thinking it might be worth a look-see in any event.

Thanks to all for the thoughts and advice.

lstamm 03-04-2013 11:35 AM

Hi Tronayne,

You might want to check out ICA-Atom. This is archival software, but for your purposes it might work better than a specifically library-oriented software system. It certainly is a lot easier to install than either Evergreen or Koha.

tronayne 03-04-2013 01:05 PM

Quote:

Originally Posted by lstamm (Post 4904466)
You might want to check out ICA-Atom. This is archival software, but for your purposes it might work better than a specifically library-oriented software system. It certainly is a lot easier to install than either Evergreen or Koha.

Oh, yeah, that was easy to install.

Now all I have to do is get educated on what to do with it and how to do dat. Might just work, might just.

But -- these other things have become a quest: dang it, I'm gonna get the blasted things to work (even if they're never used for anything)!

Thanks for the suggestion.

tronayne 03-04-2013 01:25 PM

Quote:

Originally Posted by T3slider (Post 4903866)
cpan2tgz makes things even easier (and provides added accountability).

Seems like a good idea (and maybe I'm not doing it correctly) but...
Code:

cpan2tgz --no-install --pkgdir=${PWD} Data::Dumper
Reading '/root/.cpan/Metadata'
  Database was generated on Sun, 03 Mar 2013 12:17:03 GMT
Checksum for /root/.cpan/sources/authors/id/S/SM/SMUELLER/Data-Dumper-2.143.tar.gz ok


Processing Data::Dumper...

Running make for S/SM/SMUELLER/Data-Dumper-2.143.tar.gz

  CPAN.pm: Building S/SM/SMUELLER/Data-Dumper-2.143.tar.gz

Checking if your kit is complete...
Looks good
Writing Makefile for Data::Dumper
Writing MYMETA.yml and MYMETA.json
cp Dumper.pm blib/lib/Data/Dumper.pm
/usr/bin/perl5.16.1 /usr/share/perl5/ExtUtils/xsubpp  -typemap /usr/share/perl5/ExtUtils/typemap  Dumper.xs > Dumper.xsc && mv Dumper.xsc Dumper.c
cc -c  -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -fPIC  -DVERSION=\"2.143\" -DXS_VERSION=\"2.143\" -fPIC "-I/usr/lib64/perl5/CORE"  -DUSE_PPPORT_H Dumper.c
Running Mkbootstrap for Data::Dumper ()
chmod 644 Dumper.bs
rm -f blib/arch/auto/Data/Dumper/Dumper.so
cc  -shared -O2 -fPIC -fstack-protector Dumper.o  -o blib/arch/auto/Data/Dumper/Dumper.so        \
            \
 
chmod 755 blib/arch/auto/Data/Dumper/Dumper.so
cp Dumper.bs blib/arch/auto/Data/Dumper/Dumper.bs
chmod 644 blib/arch/auto/Data/Dumper/Dumper.bs
  SMUELLER/Data-Dumper-2.143.tar.gz
  /usr/bin/make -- OK
make ERROR [Data::Dumper]: Numerical argument out of domain

No package and the "Numerical argument out of domain" shows up with everything I try.

Probably me.

opcionh 08-26-2016 04:35 PM

Hi Tronayne, can you tell me if you finally managed to install Koha ILS on Slackware? I want to know about your experience because I need to install Koha ILS on Slackware. Thank you for your answer.

Juan Esponda
Santa Rosa - La Pampa

tronayne 08-27-2016 07:03 AM

Hi Juan,

I seem to remember that I did get it going, but it turned out that I didn't really need a library system, I needed a repository system. The collections (books, documents, coins, stamps, art works and other items, all numbering in the thousands) just did not fit a library system -- nothing will ever be loaned out and all of it would be for researchers.

I turned to DSpace which was "just the right thing" for the job and it's working just fine (so far). Had to twiddle a couple of things -- defining fields that aren't part of DSpace -- but that was no big deal.

It's been a couple of years but, yes, I did get Koha working, but it didn't fit the bill too well and it's gone from my servers, so I really can't tell you just what I did to get it going. I went through a lot of SlackBuilds and "compile from scratch" as I remember, but it did finally get going. I can't find any of my notes (they probably went into the shredder because I clean out things that are obsolete or that I won't use), so I really can't advise other than that you're going to have a lot of required packages to download and build.

Hope this helps some.

Thomas

kikinovak 08-27-2016 08:22 AM

Quote:

Originally Posted by tronayne (Post 4902585)
And, of course, if anybody has experience with library systems that will handle upwards of 100,000 titles (Koha will) and provide a web-page interface backed by MySQL (or equivalent), I'd really appreciate knowing about it.

Thanks.

Back in 2006, my job consisted of setting up a network of eleven small public libraries, with roughly 60,000 titles, using open source software. I must have tried every piece of software available under the sun (including Koha), but when I gave PMB a spin, I knew I'd found the perfect application.

It's a "classic" web application running on a LAMP server. The only minor difficulty (at the time) was the installation of the php-yaz module. Our server was running Slackware, so I had to rebuild PHP from source and tweak it to get that module to work. Other than that, expect no major difficulties.

http://www.sigb.net/?lang_sel=en_UK

Cheers,

Niki

