Monday, February 27, 2012

Where from here?

Apologies for the blog being a bit slow as of late. My hoped-for volunteering has hit a snag: the people at the possible place of volunteering are extremely busy, and we are having a hard time syncing schedules. However, in exciting news, I will be transitioning this blog from discussing my experience learning ERM in a class to my experience learning ERM as a newbie librarian. I was just hired for a new position in which I will be working extensively on better integrating ERM into all aspects of the technical library workflow through an open source project called Kuali. As such, I look forward to sharing my thoughts, my failures, and my almost-successes!

What is Kuali, you ask? Good question, says I. Allow me to explain as much as I can (from my reading and discussions with my soon-to-be new workplace). Kuali OLE is an open source integrated library management system for academic libraries that is trying to more fully incorporate the acquisitions, management, and metadata needs of electronic resources (unlike many traditional ILMSs, which are strongly oriented toward print resources). It is built upon a series of modules, which are basically processes that can affect different entities (collections, individual resources, persons). I will be working with the first two of these workflow modules: Select and Acquire. These represent the earliest parts of an entity's life with the library: marking it for purchase, or at least interest, and its initial ingest into the system, as well as rights, licensing, and other purchase information once the object has been decided on and added to the library's holdings.

Other modules include Deliver, which tracks the request of an item by a patron and what use restrictions or notices should go along with that request, then supplies the resource to the patron or the computer the patron is using. Describe handles the addition and display of the appropriate metadata for each type of entity. Finally, Manage is what it sounds like: management of the entire process from there on out: usage, repair, technical troubles, updates, etc. It's pretty ambitious to try to encapsulate all types of materials used at an academic library, and I am thrilled to be a part of it. My goals for now are learning all I can about it, test driving it, and, oh, maybe actually trying to finally, for serious this time, learn some basic programming.

Anyway, I hope to keep you all up to date on this new part of my life, which will technically start in July but for which I will be preparing long before then. And if any of you have tried to use Kuali OLE before, or work with it, let me know what you think about it. I believe that it is highly ambitious, especially in terms of its timeline to completion (necessary in order to finish before the grant funding goes bye-bye), but a worthwhile endeavor.

Tuesday, December 6, 2011

Reflections on a Semester.

Every semester, I have the same clichéd question: how did the end get here so fast? I have been working on this blog for 14 weeks now, and this is my last "class entry." But this blog is not going away, so never fear, my one or two possible readers! I have discovered that I love blogging, and I am not too shabby at it. In addition, I will be doing a volunteer internship at WiLS (Wisconsin Library Services) in their electronic resources department, so I will have plenty to talk about! Writing this blog and being in this class have made me realize that I love the many facets of ERM and want to explore them more. I look forward to sharing my future experiences with all of you.

When I began this class, I believed that it would be collections management for electronic resources: basically, we would learn how to select databases, how to analyze usage statistics for collection management decisions, how to purchase databases (so something about licensing), and maybe how to catalog or otherwise keep track of those databases. From reading job descriptions for ERM librarians, I was also hopeful we would delve into a lot of the technologies discussed therein: SFX, OpenURL, link resolving. What I came to discover, from just looking at the syllabus, and what was brought home throughout the class, was that electronic resource management affects almost every level of librarianship. These resources are so much more complex than print, in terms of purchase, maintenance, federal legislation, user services and instruction, marketing, everything, and this means it takes a whole library working together to truly "manage" them.

The class also covered things that I did not think of as ERM, like ILL and electronic reserves. When I first saw these on the syllabus, especially ILL, I was excited (because I deal a lot with ILL at work and think it is an essential service) but confused. What did ILL have to do with ERM, I wondered? But by the time we got close to that section, I realized it had everything to do with it. Not only is ILL now facilitated online through programs like ILLiad, but more and more libraries are requesting texts from databases or purely online journals. As such, the same dilemmas and issues that come with databases, concerns about licensing, technological protection measures, copyright and fair use, are tied up in ILL and in e-reserves. The breadth of issues that electronic resources touch is truly amazing, and it makes this class by far one of the most important and informative I have taken at UW-Madison.

What ties this whole class together for me is copyright law. I never would have thought that this was the core of understanding how to be an electronic resources librarian, but it really is. The technology is of course extremely important and helpful; one would have a hard time guiding patrons through all the material, or even making it available to them, without ERM systems, without easy access to changing links, without usage standards like COUNTER. But what is constantly at the back of it all, behind all the technology used to limit access, the skyrocketing price of journals, the incredibly detailed process of licensing, is copyright, and more specifically the fear copyright owners have of how easy it now is to make copies digitally.

This concept first really gelled in my mind when I was writing two blog posts, one from the 22nd of September ("Licensing is a Pit Trap Full of Spikes") and one from the week after ("Georgia on my Mind"). I think that is because these posts were my first exposure to how fair use, the Copyright Act of 1976, CONFU, the DMCA, and all the other issues surrounding electronic transmission concretely affect the library world. In the licensing post, I expressed my fear of licensing. Looking back, I now see that that fear derived from not understanding copyright and all the legal language well enough. But that post was the start of my desire to understand the rules and the court cases and the issues as well as I possibly could. Georgia gave me a bit of hope. While we still (!) do not know the outcome, it looks like the publishers were not able to take away most of Georgia's rights. Reading about this case and writing that post allowed me to believe that, sometimes, the law can be on our side, if we can back up our decisions from a place of knowledge. Because of these two posts, which caused me to sort through my feelings and fears, I devoted myself to better understanding these issues, not only throughout the rest of the class but outside of it as well. I have investigated copyright policy at Edgewood and am working with them to attempt to create a more flexible one. And I am trying to face my fear head on by not only volunteering at WiLS, but also working with our ER librarian at Edgewood to create better licenses for our library, and even helping with negotiation.

Finally, if anyone from the outside is reading this blog and thinking about taking a class on ERM: do it. At my most recent job interview, earlier this week, I was asked what I knew about scholarly communication, especially how researchers can maintain their rights in an electronic publishing world. I was able to discuss open access solutions, as well as the importance of negotiating contracts. I could bring up what rights authors have under copyright and explain what resources are available to help researchers further understand those rights and assist them in reading contracts. Since this was not a library setting but a research group, they were astounded that librarians knew so much about this and could help. This is something that goes beyond libraries and into publishing and all forms of research. It opens up not only your own understanding, but employment doors. And in these economic times, that last phrase is the best endorsement I can give a class.

Thursday, December 1, 2011

YEP (Yet Another E-book Post).

In the article "Reading Books in the Digital Age subsequent to Amazon, Google, and the long tail," Terje Hillesund notes that he found over 500 articles written through 2007 that discussed e-books. At the Wisconsin Library Association's conference this fall, the keynote talk was about e-books. And when I opened the December 2011 edition of American Libraries, one of the first articles I saw was entitled "A Guide to Buying E-books." They are on everyone's mind, and their market continues to grow. But what are they, really? Can they ever replace books or be used in a similar way? Do they even count as books, or are they a completely different beast?

Many people have attempted to answer these questions, and in the next few paragraphs I would like to make my own attempt to clarify what I think about the whole e-book thing. But I do not think there will ever be one answer to these questions, for the simple reason that reading is a subjective, personal activity, directed by individual needs, cultural background, and value systems. As such, my thoughts reflect my own background: that of a lover of reading, a person who has been in school for 19 years, a person raised by two professors, married to a PhD student, and sister to another (basically, surrounded by the intelligentsia), and also a person in love with technology and gadgets. With that caveat, let's talk e-books.

Both Hillesund's article and another article entitled "Disowning Commodities" by Ted Striphas take the view that e-books represent something fundamentally different from what we have seen before. Hillesund claims that, by separating the storage of the text from the reading of the text (storage in bits, reading in good old-fashioned letters), they have broken the tradition of the book, in which the creation, storage, distribution, and reading of knowledge are all contained within one unchangeable form. He quotes Roger Chartier, a prominent historian of reading, who says, "Our current revolution is obviously more extensive than Gutenberg's." Another intellectual, Sven Birkerts, discussed in Striphas, believes that e-books have destroyed the "deep reading" of the past, a type of meditative engrossment in the words of another. Something, all agree, has changed.

There can be no argument that the medium has changed. Physically, the Kindle is very different from a paperback. However, in my own experience with my Kindle, I have found that the act of reading has fundamentally remained the same. For pleasure reading (a far too uncommon thing in my life at the moment), reading on the Kindle does not detract from my enjoyment of the story in any way. Having Pride and Prejudice on the Kindle does not change the fact that Austen writes with an incredible sense of character and societal analysis. The only problem is that it requires you to push a button to flip from page to page, which makes it hard to get back to a section; but when reading for pleasure, this is not usually a huge issue. So when it comes to reading for fun, a Kindle and a paperback work just as well.

But what about that "deep reading," or scholarly reading? Hillesund claims that the one kind of reading people won't do electronically is the sustained, detailed reading of lengthy texts. In this I agree with him. I believe that this is truly where the book will continue to flourish, at least for the time being. Striphas believes this is due to how owning a physical book became, in the 1930s, a status symbol of the then-new middle class. While that might have something to do with it, I actually do not think my dislike of using e-texts for scholarship is a cultural phenomenon. If anything, I believe the middle class has embraced the e-reader like they did the displayed book in the 1930s, as a symbol of their learning and ability to spend. The truth of the matter is that the functionality for serious scholarly reading is just not there. For myself, the lack of an easy way to write notes or highlight text on my Kindle, or to quickly flip to a place I highlighted, seriously limits its use as a tool for research. Even if e-readers improve on this, it is still much more expensive to lay out three e-readers to compare texts than to spread out books on a table for cross-comparison. Perhaps scholarship methods will change, but until that happens, a need for printed text will remain for the scholar and the student.

So, in my opinion, e-books are in fact just a new medium, not something brand new. In containing the same content one would find in a manuscript, in a codex, on a scroll, they fall into the same category as those mediums: the book. The fact that they are stored in a different way does not change their purpose, every book's purpose, which is to share knowledge. However, they cannot, and should not, totally replace the older medium of the printed codex. Not only have they not reached a point at which they are truly useful for scholarly study, but they are simply not affordable for everyone. The digital divide still exists; people still come into their library because they have no computer at home. E-books are neat, and might possibly someday drive out the printed book. But today is not that day.

Thursday, November 17, 2011

The Perpetual Access Problem

Ever since I learned about it during my first semester at the University of Wisconsin-Madison's library school, the possible lack of perpetual access to already-paid-for electronic content has troubled me, especially in the face of many libraries switching from an ownership model to one of solely leasing content. At my current workplace, this issue has moved from the theoretical to the practical, as we recently rearranged the library and went e-only for most publications to save space. Our print journals all suffered from low usage, with patrons clearly preferring online access. As Watson's chapter in Managing the Transition from Print to Electronic Journals and Resources makes clear, patrons "are reluctant to interrupt their workflow by stopping what they are doing just to visit the library in the hope that the needed article is on the shelf" (pg. 47). This was certainly the case at my workplace, and as such, it made sense, financially, to spend the money on the resources being used.

I understand this decision. We were running out of space, we were strapped for cash, and our patrons were not being served by the print collection. But we need some way of preserving this access electronically. None of the library literature I have read advocates giving up access rights to previously subscribed content when one cancels a subscription. Instead, most of it points out that doing so would create the bizarre situation in which, though a huge amount of information is being produced, our time could appear to future scholars as a dark age, a black hole of literature. The literature, having not been preserved, would be lost.

Because of this, I examined my workplace's current solutions and judged them according to the two articles I recently read that I found most helpful on this issue: Watson's chapter and Stemper and Barribeau's article "Perpetual Access to Electronic Journals: A Survey of One Academic Research Library's Licenses" (which appeared in Library Resources and Technical Services). Sadly, I found both current solutions lacking.

The first solution is to rely on ILL for access to articles from journals we have unsubscribed from. For the time being, this seems to be working, as most of our ILL requests for canceled journals can be filled. Of course, this costs the library money, but compared to the price of many electronic resource purchases, it is a very small amount. The main issue is that it forces us to rely on the kindness of others, and on the assumption that other libraries will still maintain print collections or have licensed products that allow interlibrary loan. As Stemper and Barribeau point out, many license agreements, especially standard publisher ones before negotiation, do not allow any ILLs to be sent that are copies of their electronic material. So, while it works for now, we could run into problems later.

The other method is relying on non-negotiated licenses to grant perpetual access. While Stemper and Barribeau do mention that some licenses contain such a right naturally, they point out that the high number of licenses they found permitting perpetual access in some form (be it locally hosted, through the publisher's site for a fee, or via a third party) included that clause due to negotiation by the library. Indeed, they argue that librarians "should consider making the lack of perpetual access rights a deal breaker" and must negotiate for it, with no mercy. By not negotiating licenses, my workplace has left itself open to losing access.

So, what should my library do? Maintaining our own print archive is not an option, as, during the process of switching to e-only, we threw away a great deal of our print copies (again, to save space). Negotiating licenses might not be feasible either, as the library staff is small, and no one is comfortable with license negotiation. The ideal solution would be to negotiate for some form of perpetual access, either through the publisher (maintained via an access fee) or through a third party that works with the publisher (examples are LOCKSS, an open source shared archiving system, and Portico, a central archive that holds publishers' material and shares it with libraries). But I do not know how practical that would be here. As such, I believe the best thing for my library to do would be to join a consortium, as Watson discusses, in which each library is responsible for keeping some journals in print and agrees to share copies of print articles via ILL. I do not really like this solution; it seems inelegant to maintain the integrity of, and access to, lost electronic content via print. It would also introduce lag time for these journals, as they would have to be sent from another consortium library to ours. However, for the small academic library, with little staff time or space, it is a better plan than relying strictly on the kindness of publishers to save you.

Friday, November 11, 2011

The Technological Rabbit Hole.

Although I consider myself rather technologically literate (I can write a blog! I can make a web page! I know how to construct a basic for loop!), I find myself amazed every time I am forced to think about what makes a seemingly simple computer operation, like a search for a known article in a library catalog, work. One simply types in the metadata one knows, and a page appears listing all of the places where that article can be found in full text. It even links you to interlibrary loan services! But, as Boston and Gendon demonstrate in the book Electronic Resource Management in Libraries: Research and Practice, and Walker in The E-Resources Management Handbook, the process is far from simple. It requires numerous programs, all working in tandem, to make that list of full text appear.

For something like a known-item journal search to work, a list of what journals we have, and where those journals are located, must first be created. According to Weddle and Grog in Managing the Transition from Print to Electronic Journals and Resources, this process is handled by an A-Z journal list, usually built on a global, proprietary knowledgebase, since one librarian, or even one library, keeping track of such things would be a monumental task. The library must still create a local knowledgebase, however, detailing its own campus holdings.

In order to find a specific article using this knowledgebase, a request for that article's location must be made. Static URLs cannot be used, because they change too frequently. Instead, each request is packaged as an OpenURL, a standardized link that carries metadata about the item, and sent to a link resolver. The local knowledgebase is then queried using this metadata, and the correct results are returned.
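The mechanics are plainer than they sound: the citation metadata simply rides along as key/value pairs in the query string of the resolver's URL. Here is a minimal sketch in Python; the resolver base URL is hypothetical (every campus runs its own), and the field names follow the OpenURL 1.0 journal article format.

```python
from urllib.parse import urlencode

# Hypothetical resolver address -- every campus runs its own link resolver
RESOLVER_BASE = "https://findit.library.example.edu/resolve"

def build_openurl(citation):
    """Package citation metadata as OpenURL 1.0 key/value pairs."""
    params = {
        # Version tag identifying this as an OpenURL 1.0 request
        "url_ver": "Z39.88-2004",
        # Declares that the "referent" (the thing wanted) is a journal article
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    # Each citation field is prefixed with "rft." (short for "referent")
    params.update({"rft." + key: value for key, value in citation.items()})
    return RESOLVER_BASE + "?" + urlencode(params)

link = build_openurl({
    "jtitle": "Library Resources and Technical Services",
    "atitle": "Perpetual Access to Electronic Journals",
    "volume": "50",
    "date": "2006",
})
print(link)
```

The resolver on the receiving end unpacks those same pairs and matches them against the local knowledgebase, which is why a single stable resolver address can stand in for thousands of unstable article URLs.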

At UW Madison, when one finds a specific journal, the results page also includes a list of suggested materials. This is another piece of programming, described by Boston and Gendon. Using openURL, a program looks for other materials in the knowledge base that contain similar words or subjects, and also returns them. This is added to facilitate resource discovery, showing people resources that they might never have considered.

That is at least three pieces of complicated programming, on top of the web programming required to display the material, the database software required to store the knowledgebase, and the licensed journals required to have the materials at all. It reminds me of something I was linked to on Google+, the beginning of which is below:

_____________________________________________________________________
"You just went to the Google home page.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how browsers work, it's not quite that simple. You've just put into play HTTP, HTML, CSS, ECMAscript, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity.

Let's simplify.

You just connected your computer to www.google.com.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how networks work, it's not quite that simple. You've just put into play DNS, TCP, UDP, IP, Wifi, Ethernet, DOCSIS, OC, SONET, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity."
______________________________________________________________________


Sometimes I wonder, is all this complexity truly necessary? Do users need to be able to tag, to personalize their webpages, to conduct federated searches? What is actually being used in the suite of tools and interlocking functions that appear to create a coherent whole to the library user?

I admit, I am unsure what the right answer to this is. I tend to be wary of jumping on every new technology bandwagon that comes along, but I also understand the desire for improving user access and resource discovery in any possible way. In my own personal experience, as a user and reference librarian, some of these complex tools, such as the system for finding full text I described above, or improved content linking via both papers citing an article and papers cited by an article, are essential for resource discovery.

But in my experience, some things mentioned in articles like Boston and Gendon's and Walker's tend to just not be useful, or at least to be underused by patrons. For example, both articles mention that one new method of resource discovery is bringing up suggested articles when a patron does a known-item search. UW-Madison includes this whenever the full text of an article is sought using FindIt. However, I have never seen a patron use this service, although I have helped with many a full-text or known-item search. I admit, I tend not to point it out to them, as I am often disappointed in the results. They seem to never identify the topic correctly. I could see such a thing being useful if it were more accurate, but the current program does not seem to have the intelligence to be very useful.

Basically, I think that we need to continuously strive to build a system that truly helps a user in their resource discovery and electronic access, and this will be complicated technologically, no doubt. But at the same time, assessment of the tools we are providing, and user studies, must be conducted so that we know when a technology is being useful and when it is just adding another, unnecessary, layer.

Wednesday, November 9, 2011

Standards, their importance and problems.

Well, this week readers will get two posts to make up for the dearth last week. I was a bit busy, having three assignments due and a conference to attend. But now, as I drink tea and watch the first snow come down outside, I can take some time to talk about standards! Yes, I know, very exciting. On a serious note, however, standards, both of the technical and the metadata variety, play an increasingly vital role in librarianship. We have always been a standards-obsessed profession, what with our AACR2, MARC, and Library of Congress Subject Headings. Since modern American librarianship found its footing in the mid-1900s, librarians have focused on organizing, transferring, and helping people find information. We needed standards then to ensure that if someone went to two different libraries, they wouldn't have to learn an entirely new way to look a book up and then find it on the shelf. And we needed standards so that librarians would not have to learn brand-new ways to classify something at every library they worked for.

So, as librarians, we gravitate towards creating standards, and now that tendency, according to Pesch in "Library standards and e-resource management: A survey of current initiatives and standards efforts" and Yue in Electronic Resources Librarianship and Management of Digital Information, is more important than ever. It is vital for all librarians to realize that information, while easier to access and share than ever before, has also gotten more complicated. More departments are involved in preparing materials for display and access, including acquisitions, cataloging/metadata, licensing, IT, and the new field of electronic resources librarianship. And more information is being transferred: between librarians and vendors, individual libraries and consortia, consortia and vendors, and between libraries themselves through ILL and the web. Standards are essential for ensuring that information is comparable, compatible, and communicable.

So where do we stand in terms of standards? According to the above two articles, the library community has made great strides recently, especially in terms of vendor usage statistics (with COUNTER, which is an awesome standard that I love!), electronic resource link resolving (OpenURL), and meta/federated searching. These standards, due to their relative ease of use, are incorporated and followed by many libraries. We are making progress towards others, like defining a data dictionary to talk about important functions and duties of ERM systems.

However, there are many standards with which librarians struggle, where conflicting ideas of what should and should not be included create conflict. The problem is that if a standard feels handed down without input, or too burdensome, it will not be followed. For example, RDA, the intended successor to AACR2, is a new cataloging standard designed to more easily accommodate non-print objects (a serious weakness of current cataloging standards). However, because many catalogers felt that they had no say in its development, and because a great deal of animosity exists between those who believe AACR2 is fine, those who embrace RDA, and those who believe that RDA does not go far enough, it has been extremely slow to be adopted. It also creates a great deal of work, as all old records would need to be converted to the new standard to make resource finding optimal.

As a strange aside, I found out recently that web programmers have been moving away from XML toward a data interchange format called JSON. Beyond its lighter, faster data transfer, these developers believe that XML has been overburdened by schemas and standards, including those created by librarians (METS, MODS, etc., etc.). Librarians, in their desire to build standards, have made XML too complex for application development.
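The contrast is easy to see with a toy record (titles and numbers made up for illustration): both formats can carry the same data, but the JSON serialization does it with noticeably fewer bytes and no schema machinery. A quick Python sketch:

```python
import json
import xml.etree.ElementTree as ET

# The same minimal journal record, once in XML and once in JSON
xml_record = """<record>
  <title>American Libraries</title>
  <issn>0002-9769</issn>
  <format>electronic</format>
</record>"""

json_record = json.dumps({
    "title": "American Libraries",
    "issn": "0002-9769",
    "format": "electronic",
})

# Both parse down to the same data...
root = ET.fromstring(xml_record)
from_xml = {child.tag: child.text for child in root}
from_json = json.loads(json_record)
assert from_xml == from_json

# ...but the JSON version is more compact on the wire
print(len(xml_record), len(json_record))
```

Of course, what JSON drops is exactly what the library schemas provide: validation, namespaces, and rich structure, which is the trade-off the aside is pointing at.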

So what does this all mean? To me, the lessons to take away are that standards are necessary for librarianship in this electronic age, but they cannot be imposed. If librarians (or other industries) feel that they do not have a voice in the standard process, or if the standard itself will require a huge amount of extra work for librarians, it will not be followed, no matter how ideal and wonderful it would be if it worked. So, do I think we should still try to create standards? Yes, but I think we must make them simple (Dublin Core or COUNTER are excellent examples of this) and they must be developed in a way that the library community feels that they have a say and stake in the outcome.

Friday, October 28, 2011

Verde and ERMes, a (kind of) brief analysis

In the belief (or maybe false hope?) that someone outside of my wonderful ERM class may stumble across this blog, I decided to look at two ERM systems we read about: Verde, by Ex Libris, and ERMes, an open source system made in Wisconsin. Based on this reading and a few other sources, I will judge them by the standards and checklists that Hogarth and Bloom, in the book Electronic Resource Management in Libraries: Research and Practice, and Collins, in the article "ERM Systems: Background, Selection and Implementation," set out. I chose these two systems because they represent opposite ends of the ERMS spectrum. Verde is a proprietary system, from a major library vendor, with a very shiny advertising website. The second is a system based on Access, created by a librarian at the University of Wisconsin-La Crosse, which advertises itself via its own blog. Perhaps the following analysis will help someone make a decision on an ERMS?

Both Hogarth and Bloom and Collins list important aspects of functionality to look for in an ERMS. Since I am trying not to write an article, I will look at just three aspects these works emphasize: the ability to support communication and a well-organized workflow from department to department; the ability to interoperate with other ILS and serials tools, like A-Z lists and SFX linking (so that an entire library system does not have to be overhauled to add an ERMS); and the ability to produce usage statistics in report form, including the coveted cost-per-use report.

Verde


Verde's main page does not provide a huge amount of information about how it works, which makes sense, as its point is to sell you a product, not provide complete documentation. So I also looked at a report I found from CUNY (City University of New York) explaining their decision to use Verde. Verde's main selling point, as explained on its website, is that it has "Built-in workflow management capabilities that enable the library to define consistent and replicable processes, helping staff keep track of task assignments throughout the e-resource life cycle." It also lets staff access all of these capabilities through one main interface, which helps with management and with learning the software. According to CUNY's report, Verde automatically sends out reminder emails to help people stay on top of their duties, and in general works well at coordinating people and communication across numerous departments, an important thing for large library systems.

In terms of interoperability, Verde is obviously easy to integrate with Aleph and Voyager, as those are the ILS products also put out by Ex Libris. The Verde website claims that it can be "integrated with existing library applications such as SFX®, your library online public access catalog (OPAC), A-Z list, and more". CUNY considers this one of Verde's best points as well, claiming, "One of the strengths of Ex Libris is product interdependence and interoperability, critical factors in enabling numerous technologies to interface with one another and create a seamless experience for both back and front end users". However, CUNY does mention that it takes some programming to get tools like SFX to fully work with the system. While this was not a major concern for CUNY, it could be one for small libraries that do not have this type of technical expertise.

Finally, Verde supports SUSHI data transmission (a newer standard for retrieving vendor usage data via XML) and as such works well with COUNTER data, both good things in terms of ease of use and following library standards. Verde's main page does not mention what types of reports it runs, except to say, "Staff can easily derive cost-per-use metrics as vendor usage data is automatically retrieved and uploaded". This leaves it unclear whether reports can be generated automatically within Verde or whether they have to be done manually in some outside program, with Verde just storing the data. The statistics are built on Oracle, according to CUNY, which makes me believe that reports can be created fairly easily if one knows that database. Again, though, at a small library this might not always be the case.

Verdict:

In general, it seems that Verde is especially strong in workflow management and pretty good in interoperability, especially if you have someone with some programming knowledge. It does keep usage statistics and is up to date in following standards, but it is unclear about how it runs reports.


Ermes


ERMes has numerous things going for it outside of the categories I will be examining. For one, it's free! Secondly, it was created by librarians with librarians' needs in mind, and it comes with a lot of support from its creators.

ERMes, as far as I can tell, does not contain anything like email reminders or other communication tools to assist with workflow management. It seems meant to be used by only a few people, and therefore is not appropriate for large-scale, department-crossing ERM work. It does allow reports to be run to track renewal dates and when payments are due, which would be useful for workflow and management. I could see, since everything is on one Microsoft Access record (license info, usage stats, pricing, vendor information), that this could help communication, as everything would be easy to find and people could see all the information quickly. However, it could also be tricky to coordinate all the entry, making sure that everything is filled out to the same level across all departments. It really seems like this system is meant to be used by a small team that can organize its workflow with the aid of ERMes but does not need to rely on it.

From reading the documentation, it seems there is very little interoperability between this system and others. It does have a way to create one's own A-Z list, but it does not incorporate information from an ILS or knowledge base; everything must be added by hand into the Access database. Since it is already nicely set up in Access, this would not require a huge amount of technical expertise, but it would require a lot of time. As such, this system is not feasible for a large library system, unless that system figures out some technical wizardry to batch import records into the right fields from some preexisting source.

In terms of usage statistics, ERMes does work with COUNTER and helps one run reports with this data. However, because it does not have an easy way to bring these in (it does not support SUSHI), everything must be imported by hand, which can be time consuming. While it allows reports showing price-per-year comparisons, which is very nice, it does not auto-generate or provide a template for cost-per-use comparisons.
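That said, since pricing and COUNTER usage data end up stored in one place, a basic cost-per-use report could be scripted by hand. Here is a minimal Python sketch of the calculation, assuming a hypothetical CSV export with made-up column names (`database`, `annual_cost`, `total_requests`) rather than ERMes's actual table layout:

```python
import csv
from io import StringIO

# Hypothetical export: one row per database, with the annual cost
# (entered by hand) and the COUNTER full-text request total.
SAMPLE = """database,annual_cost,total_requests
Academic Search,5000,2500
JSTOR,15000,6000
"""

def cost_per_use(csv_text):
    """Return a dict mapping each database name to its cost per use."""
    results = {}
    for row in csv.DictReader(StringIO(csv_text)):
        uses = int(row["total_requests"])
        cost = float(row["annual_cost"])
        # Avoid dividing by zero for a database with no recorded use.
        results[row["database"]] = round(cost / uses, 2) if uses else None
    return results

print(cost_per_use(SAMPLE))
# prints {'Academic Search': 2.0, 'JSTOR': 2.5}
```

The arithmetic is trivial; the real work, as noted above, is getting the usage numbers into the system by hand in the first place.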

Verdict:

For a college that has a small number of periodicals, a small ERM staff, and simply wants to keep track of each database's information in one place (and run some nice reports based on it!), ERMes is a good solution. For example, I think this would be great for many small private colleges that do not have the need or the budget for anything too complex. The main issue is that it does not integrate with the OPAC or SFX. For libraries that are larger, or that really want everything to be integrated, this is not a good system.