Friday, October 28, 2011

Verde and ERMes, a (kind of) brief analysis

In the belief (or maybe false hope?) that someone outside of my wonderful ERM class may stumble across this blog, I decided to look at two ERM systems we read about: Verde, by Ex Libris, and ERMes, an open source system made in Wisconsin. Based on this reading and a few other sources, I will judge them by the standards and checklists that Hogarth and Bloom, in the book Electronic Resource Management in Libraries: Research and Practice, and Collins, in the article "ERM Systems: Background, Selection and Implementation," set out. I chose these two systems because they represent opposite ends of the ERM spectrum. Verde is a proprietary system, made by a major library vendor, with a very shiny advertising website. The second is a system based on Microsoft Access, created by a librarian at the University of Wisconsin-La Crosse, which advertises itself via its own blog. Perhaps the following analysis will help someone make a decision on ERMs?

Both Hogarth and Bloom and Collins list important aspects of functionality to look for in an ERMS. Since I am trying not to write an article, I will look at just three aspects these works emphasize: the ability to support communication and a well-organized workflow from department to department; the ability to interoperate with ILSes and serials tools, like A-Z lists and SFX linking (so that an entire library system does not have to be overhauled to add an ERMS); and the ability to get usage statistics in report form, including the coveted cost-per-use report.

Verde


Verde's main page does not provide a huge amount of information about how it works, which makes sense, as its point is to sell you a product, not to provide complete documentation. So I also looked at a report I found from CUNY (City University of New York) explaining their decision to use Verde. Verde's main selling point, as explained on its website, is that it has "Built-in workflow management capabilities that enable the library to define consistent and replicable processes, helping staff keep track of task assignments throughout the e-resource life cycle." Staff can also access all of these capabilities through one main interface, which helps with management and with learning the software. According to CUNY's report, Verde automatically sends out reminder emails to help people stay on top of their duties, and in general works well at coordinating people and communication across numerous departments, an important thing for large library systems.

In terms of interoperability, Verde is obviously easy to integrate with Aleph and Voyager, the ILSes also put out by Ex Libris. The Verde website claims that it can be "integrated with existing library applications such as SFX®, your library online public access catalog (OPAC), A-Z list, and more". CUNY believes this is also one of Verde's best points, claiming "One of the strengths of Ex Libris is product interdependence and interoperability, critical factors in enabling numerous technologies to interface with one another and create a seamless experience for both back and front end users". However, CUNY does mention that it takes some programming to get tools like SFX to fully work with the system. While this was not a major concern for CUNY, it could be one for small libraries that do not have this type of technical expertise.

Finally, Verde supports SUSHI data transmission (a new standard for harvesting vendor usage data via XML) and as such works well with COUNTER data, both good things in terms of ease of use and following library standards. Verde's main page does not mention what types of reports it runs, except to say "Staff can easily derive cost-per-use metrics as vendor usage data is automatically retrieved and uploaded". This leaves it unclear whether reports can be generated automatically within Verde or whether they have to be done manually in some outside program, with Verde just storing the data. The statistics are built on Oracle, according to CUNY, which makes me believe that reports can be created pretty easily if one knows that database. Again, though, at a small library this might not always be the case.
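To be clear about what is actually at stake here, the cost-per-use arithmetic itself is trivial once COUNTER usage data is in the system; the question is only whether the ERMS does it for you. Here is a minimal sketch in Python of what any such report boils down to. The journal names and numbers are invented for illustration, and this is not Verde's actual implementation, just the underlying calculation:

```python
# Cost-per-use: annual subscription cost divided by COUNTER full-text
# download counts. All figures below are hypothetical examples.
def cost_per_use(annual_cost, downloads):
    """Return cost per download, or None for zero-use titles."""
    if downloads == 0:
        return None  # flag for review rather than dividing by zero
    return round(annual_cost / downloads, 2)

# (annual cost in dollars, JR1-style full-text download count)
subscriptions = {
    "Journal A": (3000.00, 1200),
    "Journal B": (4500.00, 90),
    "Journal C": (800.00, 0),
}

for title, (cost, uses) in subscriptions.items():
    print(title, cost_per_use(cost, uses))
```

Whether a system auto-generates a table like this, or just stores the raw COUNTER data and leaves the division to a spreadsheet, makes a real difference in staff time, which is why I keep harping on it.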

Verdict:

In general, it seems that Verde is especially strong in workflow management, and pretty good in interoperability, especially if you have someone with some programming knowledge. It does keep usage statistics and is up to date in following standards, but it is unclear about how it runs reports.


ERMes


ERMes has numerous things going for it outside of the categories I will be examining. For one, it's free! Secondly, it was created by librarians with librarians' needs in mind, and it comes with a lot of support from its creators.

ERMes, as far as I can tell, does not contain anything like email reminders or other communication tools to assist with workflow management. It seems meant to be used by only a few people, and is therefore not appropriate for large-scale, department-crossing ERM work. It does allow reports to be run to track renewal dates and when payments are due, which would be useful for workflow and management. I could see, since everything is in one Microsoft Access record (license info, usage stats, pricing, vendor information), that this could help communication, as everything would be easy to find and people could see all the information quickly. However, it could also be tricky to coordinate all the entry, making sure that everything is filled out to the same level across all departments. It really seems like this system is meant for a small team that can organize their workflow with the aid of ERMes, but does not need to rely on it.

From reading the documentation, there seems to be very little interoperability between this system and others. It does have a way to create one's own A-Z list, but it does not incorporate information from an ILS or knowledge base. Everything must be added by hand into the Access database. Since it is already nicely set up in Access, this would not require a huge amount of technical expertise, but it would require a lot of time. As such, this system is not feasible for a large library system, unless that system figures out some technical wizardry to batch import records into the right fields from some preexisting source.
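That "technical wizardry" is not actually exotic, just unsupported out of the box. The general shape of a batch import is sketched below in Python, using sqlite3 as a stand-in for Access (which would need an ODBC connection instead) and with invented column names, since I am not reproducing ERMes's real table layout:

```python
# Hypothetical sketch: batch-loading e-resource records from a CSV
# export (e.g. from an ILS or knowledge base) into an ERMes-style table.
# sqlite3 stands in for Access here; the schema is invented.
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE resources (
    title TEXT, vendor TEXT, url TEXT, renewal_date TEXT)""")

# In practice this would be a file exported from a preexisting source.
sample_export = io.StringIO(
    "title,vendor,url,renewal_date\n"
    "JSTOR,ITHAKA,https://www.jstor.org,2012-07-01\n"
    "ScienceDirect,Elsevier,https://www.sciencedirect.com,2012-01-15\n"
)

rows = [(r["title"], r["vendor"], r["url"], r["renewal_date"])
        for r in csv.DictReader(sample_export)]
conn.executemany("INSERT INTO resources VALUES (?, ?, ?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM resources").fetchone()[0])
```

The hard part is not the script; it is mapping the source export's fields onto the right fields in the database, which still takes someone's time and attention.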

In terms of usage statistics, ERMes does work with COUNTER data and helps one run reports with it. However, because it has no easy way to bring these data in (it does not support SUSHI), everything must be imported by hand, which can be time consuming. While it allows reports showing price-per-year comparisons, which is very nice, it does not auto-generate or provide a template for price-per-use comparisons.

Verdict:

For a college that has a small number of periodicals, a small ERM staff, and simply wants to be able to keep track of each database's information in one place (and run some nice reports based on that!), ERMes is a good solution. For example, I think this would be great for many small private colleges that do not have the need or the budget for anything too complex. The main issue is that it does not integrate with the OPAC or SFX. For libraries that are larger, or that really want everything to be integrated, this is not a good system.

Monday, October 24, 2011

More fun with Copyright: The TEACH Act

I admit, I am conflicted about the TEACH Act, legislation signed into law by President Bush in 2002 to help clarify what copyrighted materials can be used in online education. In fact, my feelings going into class were not positive. I agreed with Thomas Lipinski in his article "The Climate of Distance Education in the 21st Century" that "this act is a complex piece of legislation" whose full implications will not be known until court decisions have been made for clarification. The phrase that audiovisual material had to be shown in a "reasonable and limited quantity" worried me the most. I interpreted "limited" to refer to the amount shown, meaning that under no circumstance could a full work be shown, no matter how reasonable such use might be in terms of education. I also worried about the view of a class as a limited-time course, divided into units that the bill calls "mediated instructional activities," with material only allowed to be viewed within a time frame similar to class time. Along with Lipinski, I am concerned that this is another example of Congress attempting to compare new modes to old modes in order to maintain profit. Indeed, Crews in "Distance Education and the TEACH Act" states that this "provision is clearly intended to protect the market for materials designed to serve the educational marketplace".

So, I was surprised when I went to class and found out that librarians view this legislation in a positive light. But when I stepped back, I realized that it was I who was perhaps being a bit harsh. I had been working on deciphering licensing terms all week, and so was in a hypercritical mode when reading about this act. I examined every word, wondering how it would help publishers and hurt libraries and educators. I had missed the statement in a briefing by the ALA that the Copyright Office even said "Fair use could apply as well to instructional transmissions not covered by the changes to section 110(2) recommended above. Thus, for example, the performance of more than a limited portion of a dramatic work in a distance education program might qualify as fair use in appropriate circumstances." This removes some of my worry about the limited portion (although it then brings us back to fair use, which, as anyone reading this blog knows, is a pretty tricky issue). But, upon re-reading many of the articles that I had viewed as too positive earlier, I can see that the TEACH Act does have its heart in the right place. It is trying to allow new technology to be used, and it is trying to help education reach the most people. It is certainly an improvement over the previous law, which only allowed distance education via broadcast video conferencing, students in one room and the teacher in another. This legislation is not just another step in slowly taking away users' rights one by one.

The section mandating that materials not be stored on a computer or be transmittable to other students "for longer than the class session" still worries me. I understand why they placed such a limit here. It ties back to the fear of piracy, of movie copies being distributed across the entire internet in a matter of seconds if not protected. As Lipinski states, "In Congress' view, copyright piracy on college campuses has reached epidemic proportions. Although little of the piracy is tied to curricular infringement, it consists of students engaging in peer-to-peer file exchanges." To Congress, this fear is real, and it must be prevented by technological limitations such as those stated above.

However, by tying the act to a mode similar to face-to-face instruction, with material only available for viewing during a limited period of time relating to the class session, Congress limits the act's ability to work with future technologies. Litman, in her book Digital Copyright, frequently mentions that one of the major problems with copyright law is that it only looks towards the present and the past, not the future. As such, it does not adapt well to change. I am concerned that, by adding the words "class session," the drafters are limiting themselves to an old model and not truly reflecting the asynchronous nature of current and future online education. I know that when I had a few classes online, due to the professor being at a conference, I stopped and started the video numerous times because life got in the way. Often, this is the situation in online education, and it is one of the great things about it not being set at a certain location for a certain time. Would someone only be given a limited time period to watch a movie, after which they could not access it anymore? What happens if they need to refer to it for a paper or while studying for a test? What happens if life indeed gets in the way of watching it all at one time, or even over the course of a day? Technology changes what can constitute a class session, and the language remains very unclear as to whether the TEACH Act reflects this.

Thursday, October 13, 2011

Experiences from the reference desk: How TPM and license conventions affect users

For three years, I have been serving in a reference librarian capacity in various academic settings, both small and large. During this time, I have seen numerous examples of licensed material, both e-books and article databases, causing patrons no end of problems. The problems stem from user restrictions, difficult-to-understand interfaces, and, in general, the expectations of the patron not meeting the reality of the source. So, when I read Professor Kristin Eschenfelder's articles "Every library's nightmare? Digital rights management, use restrictions, and licensed scholarly digital resources" and "Social construction of authorized users in the digital age" (co-written with Xiaohua Zhu), I recognized many of the problems these articles discuss. Eschenfelder and Zhu hit the problems facing patrons today right on the nose, and it was wonderful to see them treated with such seriousness, in an academic fashion.

When I talk about "frustrations," I am mainly thinking of two different problems. The first results from what Eschenfelder calls "soft restrictions": technological limitations on use, imposed by a licensed source, that can be worked around but make performing certain tasks annoying. Examples would be only allowing a patron to print an article by clicking on a special, quite hidden button, or disabling right-click and ctrl+c copying. These are things, either by choice or bad design (probably a combination of the two, in my opinion), that make the user jump through hoops to perform functions allowed in the license. They are very common, in my experience, and, as Eschenfelder rightly points out, a problem that many librarians, including me, feel powerless to solve.

The other type of frustration comes from who counts as an authorized user and from the authorization methods used to control access. Zhu and Eschenfelder explain (and this was totally new to me!) that the notion of an authorized user has gone through a great deal of change, as publishers and librarians have fought and forced language to change through the negotiation of licenses, a practice that didn't start until the 1990s. Problems began when, as resources transitioned towards being electronic, the public could no longer just come in and use material off the shelf. Publishers used this fact to restrict who could access their materials, and to charge more for more access. When electronic resources were first introduced, libraries put them only on certain password-protected computers, for which the librarians gave out the correct information. Then, as technology changed and people could connect to the library's resources remotely, campus IDs tied to campus IP addresses became the primary way to determine who was an authorized user of the material. As such, walk-ins today have to abide by these rules, limiting their use to a few special library computers or forcing them to ask the librarians for special guest passwords. Even with all of the password protection and other efforts made by librarians to limit use, some publishers are still not satisfied with campus passwords, and make users register specifically for their products. This whole process has left the public disenfranchised.

I recognized my patrons' experiences reading about these problems, especially those of the walk-in public patrons I serve. It is not just an academic thought experiment to say the issues listed above might cause frustration. In the past few weeks, I have had an extremely confused professor approach me about a science database she was trying to use. She wanted to access the full text of an article from a citation, and I know for a fact we had this journal in full text through this particular database. However, the database kept informing her that she had to create an account to access any full text. She had almost bought the article from another source in desperation, because she truly believed she did not have access. What was actually going on was an example of a particular database not trusting the campus password system and making every member create their own (free) account. If this is confusing faculty, who actually care enough to approach the reference desk, think about how many students must either accidentally pay or just leave the database and never get to use a source they should have access to.

And it is not just extra passwords. Students using JSTOR believe themselves technically incompetent and come to the desk frustrated and mad at themselves for not being able to print more than a page at a time (JSTOR requires one to go to a separate page to print). Nursing students using the database UpToDate try to copy and paste, and find they cannot right-click. They come up to me, in a bad mood, feeling defeated by their inability to perform what they consider a standard function. When I show them they have to use the edit menu, they ask, "Why is it like that?" My general answer is "The database is stupid; it's not you." But I cannot provide them with a good reason. Viewers of e-books in NetLibrary, unable to view more than one page at a time, say it is not worth their time, and move on to articles or actual books. Some will even request the physical copy through ILL, adding up to a week to their research process, due to their frustration at not being able to scan a chapter. These soft restrictions are causing serious technological frustrations. They make the resources we have less friendly, cause users to doubt themselves and the library's resources, and in general are trouble. The only good thing about them is that they increase reference questions.

At the small library where I work, the public is allowed access, but only on eight computers. This means many members of the public cannot get on when all eight are taken, which makes them quite cranky. At the large college where I also worked, public users had to get a special password from the librarians. This did not faze many, but quite a few were clearly shy about it, embarrassed that they had to ask. Two even yelled at me for limiting access in this way. I tried to be as calm about it as I could, and to help those who were embarrassed. But it felt wrong that they had to be singled out this way, their former right to come in anonymously and read a journal now stymied.

The point of these tales is this: the restrictions being placed on public use and the soft restrictions being implemented by content owners do affect patron satisfaction and do increase frustration. Eschenfelder and Zhu are right to draw attention to these concerns, which affect libraries every day but are often ignored in the literature that I read. Many patrons are not tech savvy enough to find their way around the annoyances and hidden save and print buttons when they run up against them. And the public does indeed feel the crunch.

It feels wrong to point out the problems without offering solutions. Luckily, as Eschenfelder, Desai, and Downey point out in their article "The Pre-Internet Downloading Controversy: The Evolution of Use Rights for Digital Intellectual and Cultural Works," the opinions and de facto needs of users, combined with librarians being aggressive in their licensing, have made significant changes to policy before. The ability to download search results was, in the 1980s, a topic of hot debate and often not allowed by database owners. Due to resistance in library culture, and to users simply ignoring warnings forbidding downloading, the publishers eventually gave in, and now the right to download is expected to appear in every good license.

We can follow this pattern in protesting against some of the problems that hurt our users, especially soft restrictions. We should not just shrug and tolerate extra passwords or the inability to save documents easily. Instead, we should make noise: writing articles, complaining to the publishers, and asking patrons to do the same. We need to let both the public and other librarians know these troubles are not just facts of life, but can be changed. We can also refuse to license materials that use these tricks to make their content inaccessible, and let the vendors know the reason for the refusal. In many cases, libraries are the main source of income for these resources, and by making noise with both our voices and our actions, we can push back and let publishers know these practices are unacceptable, just as libraries and users did for downloading in the 1980s.


Thursday, October 6, 2011

The Journal Crisis: Some Possible Solutions.

Most librarians know about the "journals crisis": the spiraling out of control of academic journal prices, especially for scientific journals. I always believed the main cause to be publisher greed. In some reading I have been doing this week, however, I found out that there is more involved than just this factor, which is actually good news, because while publisher greed tends to be hard to combat, some of these other factors have possible solutions. Astle and Hamaker, in their article "Journal publishing: Pricing and structural issues in the 1930s and the 1980s," and King and Tenopir, in their chapter from the book Economics and Usage of Digital Libraries: Byting the Bullet, both bring attention to factors such as publishers pushing for larger journals and more content in order to be seen as more successful (the more content you have, the more citations your journal can get) and journal fragmentation, caused again by the need for more content. All of this is driven by an academic culture that believes that the more one publishes, the more valuable one is, and which therefore provides the journals with enough content to allow for the fragmentation and size increase.

These factors all make a lot of sense. The academy, as I have personally witnessed it, pushes publication to an extreme degree. I have a husband, a sister, and parents who are involved, either as students or professors, in the world of academia, and they all feel that pressure to push out material and to break up their research so as to get as many articles out of it as possible. This pressure to publish leads to publishers getting tons of submissions and realizing that they have the material for lots of journals, so they might as well make a lot of journals. Due to the way free market economics tends to work, the big publishers rise to the top, buy up lots of journals, and have the most resources to put towards their journals. This results in their journals becoming the most prestigious and everyone trying to publish in them, allowing the publishers to make more money due to their essential nature, and the cycle repeats.

This phenomenon, of the top journals in a field being "essential" to have and owned by big commercial presses, is in my opinion what led to the big deal, defined by Kenneth Frazier as when "libraries agree to buy electronic access to all of a commercial publisher's journals for a price based on current payments to that publisher, plus some increment" (Frazier, "The Librarian's Dilemma"). There has been a lot of discussion about this topic in the literature of librarianship, some in support and some against. Those in favor, as Rick Best mentions in his article "Is the 'Big Deal' Dead?", tend to cite the benefit to small colleges, which can now afford the breadth in their journal collections that they always wanted but never could get. Others, like Ken Frazier, say that the big deal takes away the library's ability to select, burdening patrons with lots of useless material through which to wade. It also takes away any ability to discontinue journals that are not being used and reduce prices correspondingly. I think both are right, and that the big deal, and indeed the increase in journal pricing, is far more complex than "do away with it entirely or buy every big deal ever". A balance must be found between the full text many patrons now expect and prices that are unsustainable, between breadth and quality control. Quite a few suggestions have been made about how to go about this, but I want to focus on two that I believe have the most potential to grant libraries more control and also bring down prices.

First solution:

Best mentions that OhioLINK, a large consortium, made a deal that allows it to drop journals with low usage statistics from a big deal and get a corresponding reduction in price. This requires licensing expertise, and often the power of numbers a consortium represents, but I believe it is a huge step in the right direction. Libraries need to realize that it would be very easy, at least technologically, for most of these large publishers to serve different content to every academic library in the United States. I talked to my husband, a computer science PhD student at Madison, and he explained that all one would need to do would be to set up a schoolID table and a journalID table in a relational database, and then join the two to create a unique plan connecting each school to its journals. Assume something like ScienceDirect, which has about 1,000 journals, with a separate plan for each academic library in the United States (let's guess something big, like 50,000 different libraries). If each ID takes a standard 4 bytes, each row pairing a library with a journal takes 8 bytes, so even in the worst case, where every library subscribes to every journal, the table would take 1,000 journals x 50,000 libraries x 8 bytes = 400,000,000 bytes, or about 400 MB of disk space on the publisher's servers. This is tiny by server standards. It would take more licensing overhead, but these individual plans, or at least consortia plans, would provide the selection, the ability to reduce price, and the ability for libraries to control their own purchasing destiny. They would also allow libraries to keep the savings bundled packages provide over buying each journal individually.
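For anyone who wants to check the back-of-envelope figure, here it is worked out in a few lines of Python. The counts (1,000 journals, 50,000 libraries, 4-byte IDs) are the hypothetical guesses from the paragraph above, and the every-library-subscribes-to-every-journal assumption is the worst case; a real subscriptions table would be far smaller:

```python
# Back-of-envelope size of a library-to-journal subscription table.
# Hypothetical inputs: 4-byte integer IDs, 1,000 journals, 50,000
# libraries, and (worst case) every library subscribed to every journal.
BYTES_PER_ID = 4
journals = 1_000
libraries = 50_000

row_bytes = 2 * BYTES_PER_ID            # one (schoolID, journalID) pair
total_bytes = journals * libraries * row_bytes

print(total_bytes)                      # total bytes in the worst case
print(total_bytes / 1_000_000)          # same figure in megabytes
```

Even at roughly 400 MB, in other words, the storage cost of per-library plans is a rounding error for a commercial publisher; the real costs are all in licensing and negotiation.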

Second Solution:

Librarians need to get faculty on their side and together explore alternative means of publication. I was fascinated to read in Hamaker and Astle's article that in the late 1920s and 1930s librarians were dealing with similar issues of high journal costs, stemming from the prestigious publishers in Germany putting out lots of new journals and content. While the price gouging was a bit more obvious in that case (it was clear the prices were being artificially raised to help the struggling post-World War I economy) than it is now, librarians eventually rose up and brought down the price. The way they did it? They got agreements from faculty that they would no longer purchase or support these journals in their own professional organizations. They built up enough power through a coalition that they could afford to stand up to these prestigious publishers. And they did it by getting the academy on their side.

Libraries now have even more to offer scholars. In the 1930s, if the journals were cancelled, there was no alternative method of publication. Now, as Ken Frazier and Rick Best point out, there are other means of publication. Frazier, back in 2001, mentioned new open-access journals from non-profits that offer more rights to authors, are peer reviewed, and, if they can get the community behind them, are capable of undermining the big commercial publishers. Some libraries have already reached out to faculty and caused change. The University of California system partnered with faculty to protest, and indeed threatened to cancel, the Nature Publishing Group's big deal if the publisher did not lower the price. Cornell's faculty senate, according to Best, "called among other things for faculty to become familiar with the pricing policies of journals in their specialties and to cease supporting publishers who engage in exorbitant pricing by not submitting papers to, or refereeing for, the journals sold by those publishers" (354). To me, this is the direction more libraries serious about bringing commercial publishers into line need to go. The root of all of this is the academic cycle of trying to publish the most in the "important" journals. If the academy continues to support these journals, it will continue to make them indispensable, and they will continue to hold libraries hostage. Only by educating faculty about these practices and the nature of the publishers, and by introducing the idea of alternate methods of both scholarly communication and even scholarly worth, can we begin to really turn this pricing crisis around. We also need to look outside our field to others who are calling for reform of the way the academic system is run, and partner with them to get the change and message we want. This solution is not easy; it means taking on a giant, entrenched sense of worth in academia.
But I really think it is the only way we can build enough support to challenge the prestigious, monopoly-like journal publishers and create serious alternate methods of publication.