Ever since I learned about it during my first semester at the University of Wisconsin-Madison's library school, the possible lack of perpetual access to already-paid-for electronic content has troubled me, especially as many libraries switch from owning materials outright to solely leasing content. At my current workplace, this issue has moved from the theoretical to the practical: we recently rearranged the library and went e-only for most publications to save space. Our print journals all suffered from low usage, with patrons clearly preferring online access. As Watson's chapter in Managing the Transition from Print to Electronic Journals and Resources makes clear, patrons "are reluctant to interrupt their workflow by stopping what they are doing just to visit the library in the hope that the needed article is on the shelf" (p. 47). This was certainly the case at my workplace, so it made financial sense to spend the money on resources being used.
I understand this decision. We were running out of space, we were strapped for cash, and our patrons were not being served by our print collection. But we need some way of preserving this access electronically. None of the library literature I have read advocates giving up access rights to previously subscribed content when a subscription is canceled. Instead, most of it points out the bizarre prospect that, though a huge amount of information is being produced, our time could appear to future scholars as a dark age, a black hole of literature. The literature, having not been preserved, would be lost.
Because of this, I examined my workplace's current solutions and judged them against the two readings I found most helpful on this issue: Watson's chapter and Stemper and Barribeau's article "Perpetual Access to Electronic Journals: A Survey of One Academic Research Library's Licenses" (which appeared in Library Resources & Technical Services). Sadly, I found both solutions lacking.
The first solution is to rely on ILL for access to articles from journals we have unsubscribed from. For the time being, this seems to be working, as most of our ILL requests for canceled journals can be filled. Of course, this costs the library money, but compared to the price of many electronic resource purchases, it is a very small amount. The main issue is that it forces us to rely on the kindness of others, and on the assumption that other libraries will still maintain print collections or have licensed products that allow interlibrary loan. As Stemper and Barribeau point out, many license agreements, especially standard publisher licenses before negotiation, do not allow copies of electronic material to be sent via ILL at all. So, while it works for now, we could run into problems later.
The other method is relying on non-negotiated licenses to grant perpetual access. While Stemper and Barribeau do mention that some licenses contain such a right naturally, they point out that the high number of licenses they found permitting perpetual access in some form (be it locally hosted, through the publisher's site for a fee, or via a third party) included that clause because the library negotiated for it. Indeed, they argue that librarians "should consider making the lack of perpetual access rights a deal breaker" and must negotiate for it without mercy. By not negotiating licenses, my workplace has left itself open to losing access.
So, what should my library do? Maintaining its own print archive is not an option: during the switch to e-only, it threw away a great deal of its print copies (again, to save space). Negotiating licenses might not be feasible either, as the library staff is small and no one is comfortable with license negotiation. The ideal solution, I believe, is to negotiate for some form of perpetual access, whether through the publisher (and maintained via an access fee) or through a third party that works with the publisher (examples are LOCKSS, open-source shared archiving software, and Portico, a central archive that holds publishers' material and shares it with libraries), but I do not know how practical this would be. As such, I believe the best thing for my library to do would be to join a consortium, as Watson discusses, where each library is responsible for keeping some journals in print and all agree to share copies of these print articles via ILL. I do not really like this solution; it seems inelegant to maintain access to lost electronic content via print, and it would introduce lag time, as articles would have to be sent from another consortium library to ours. However, for a small academic library with little staff, time, or space, it is a better plan than relying strictly on the kindness of publishers to save you.
Thursday, November 17, 2011
Friday, November 11, 2011
The Technological Rabbit Hole.
Although I consider myself rather technologically literate (I can write a blog! I can make a web page! I know how to construct a basic for loop!), I find myself amazed every time I am forced to think about what makes a seemingly simple computer operation, like a search for a known article in a library catalog, work. You simply type in the metadata detailing what you know, and a page appears listing all of the places where you can find that article in full text. It even links you to interlibrary loan services! But, as Boston and Gendon demonstrate in the book Electronic Resource Management in Libraries: Research and Practice, and Walker in The E-Resources Management Handbook, the process is far from simple. It requires numerous programs, all working in tandem, to make that list of full text appear.
For something like the known-item journal search to work, a list of what journals we have and where those journals are located must first be created. According to Weddle and Grog in Managing the Transition from Print to Electronic Journals and Resources, this is handled by an A-Z journal list, usually drawn from a global, proprietary knowledgebase, since keeping track of such things would be a monumental task for any one librarian or even one library. The library still must create a local knowledgebase, however, detailing its own campus holdings.
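To make "local knowledgebase" concrete, here is a minimal sketch of what one entry might look like. The field names and the sample journal are my own inventions for illustration, not any actual vendor's schema:

```python
# A toy local knowledgebase: ISSN -> what we hold and where.
# Field names and sample data are invented for illustration.
local_knowledgebase = {
    "0000-0000": {
        "title": "Journal of Illustrative Examples",
        "print_holdings": "v.1 (1990) - v.15 (2004)",
        "online_coverage": {"start_year": 2000, "end_year": None},  # None = to present
        "platform_url": "https://publisher.example.com/joie",
    },
}
```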
To find a specific article using this knowledgebase, a request for that article's location must be made. Plain URLs cannot be used because they change too frequently. Instead, each request is expressed as an OpenURL, a link that carries metadata about the item to a link resolver. The resolver then queries the local knowledgebase using this metadata and returns the correct results.
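As a rough sketch of the idea (and only a sketch; no real link resolver is this simple), an OpenURL is a stable resolver address carrying key-value metadata pairs, and resolution amounts to matching that metadata against local holdings like the entry above. The resolver address, the holdings data, and the matching logic here are all assumptions for illustration:

```python
from urllib.parse import urlencode

# Metadata describing the wanted article, using OpenURL-style keys.
citation = {
    "rft.jtitle": "Journal of Illustrative Examples",
    "rft.issn": "0000-0000",
    "rft.date": "2003",
    "rft.volume": "14",
    "rft.spage": "47",
}

# The OpenURL: a stable resolver address plus the metadata, instead of a
# direct (and fragile) URL into the publisher's site.
openurl = "https://resolver.example.edu/findit?" + urlencode(citation)

# Toy holdings: ISSN -> online coverage dates and platform location.
holdings = {
    "0000-0000": {"start_year": 2000, "end_year": None,
                  "platform_url": "https://publisher.example.com/joie"},
}

def resolve(citation, holdings):
    """Toy resolution: match on ISSN, then check the year falls within coverage."""
    record = holdings.get(citation["rft.issn"])
    if record is None:
        return None  # we hold nothing; offer ILL instead
    year = int(citation["rft.date"])
    if record["start_year"] <= year and (record["end_year"] is None or year <= record["end_year"]):
        return record["platform_url"]
    return None

print(openurl)
print(resolve(citation, holdings))  # https://publisher.example.com/joie
```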
At UW-Madison, when one finds a specific journal, the results page also includes a list of suggested materials. This is another piece of programming, described by Boston and Gendon. Using the OpenURL metadata, a program looks for other materials in the knowledgebase that contain similar words or subjects and returns those too. This is added to facilitate resource discovery, showing people resources they might never have considered.
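I do not know how the real recommendation code works, but as a hedged guess at the general idea, a naive version might score other items in the knowledgebase by how many subject terms they share with the item just found. Everything here (the data and the scoring) is invented for illustration:

```python
# Toy "suggested materials": rank other records by shared subject terms.
# Real systems are far more sophisticated than simple term overlap.
records = {
    "Journal of Illustrative Examples": {"libraries", "serials", "access"},
    "Example Studies Quarterly": {"serials", "licensing"},
    "Unrelated Annals": {"geology"},
}

def suggest(found_title, records, limit=2):
    target = records[found_title]
    scored = [
        (len(target & subjects), title)          # shared-term count as the score
        for title, subjects in records.items()
        if title != found_title
    ]
    scored.sort(reverse=True)
    return [title for score, title in scored if score > 0][:limit]

print(suggest("Journal of Illustrative Examples", records))
# ['Example Studies Quarterly']
```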
That is at least three pieces of complicated programming, on top of the web programming required to display the material, the database software required to store the knowledgebase, and the licensed journals required to have the materials at all. It reminds me of something I was linked to on Google+, the beginning of which is below:
_____________________________________________________________________
"You just went to the Google home page.
Simple, isn't it?
What just actually happened?
Well, when you know a bit about how browsers work, it's not quite that simple. You've just put into play HTTP, HTML, CSS, ECMAscript, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity.
Let's simplify.
You just connected your computer to www.google.com.
Simple, isn't it?
What just actually happened?
Well, when you know a bit about how networks work, it's not quite that simple. You've just put into play DNS, TCP, UDP, IP, Wifi, Ethernet, DOCSIS, OC, SONET, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity."
______________________________________________________________________
Sometimes I wonder: is all this complexity truly necessary? Do users need to be able to tag, to personalize their webpages, to conduct federated searches? What is actually being used in the suite of tools and interlocking functions that appears, to the library user, as a coherent whole?
I admit, I am unsure what the right answer is. I tend to be wary of jumping on every new technology bandwagon that comes along, but I also understand the desire to improve user access and resource discovery in any way possible. In my own experience, as both a user and a reference librarian, some of these complex tools, such as the full-text finding system I described above, or linking to both the papers citing an article and the papers it cites, are essential for resource discovery.
But in my experience, some things mentioned in articles like Boston and Gendon's and Walker's just tend not to be useful, or are at least underused by patrons. For example, both articles mention that one new method of resource discovery is bringing up suggested articles when a patron does a known-item search. UW-Madison includes this whenever the full text of an article is sought using FindIt. However, I have never seen a patron use this service, although I have helped with many a full-text or known-item search. I admit, I tend not to point it out to them, as I am often disappointed in the results; they never seem to identify the topic correctly. I could see such a thing being useful if it were more accurate, but the current program does not seem to have the intelligence to be.
Basically, I think we need to strive continuously to build a system that truly helps users with resource discovery and electronic access, and this will no doubt be technologically complicated. But at the same time, we must assess the tools we provide and conduct user studies, so that we know when a technology is being useful and when it is just adding another, unnecessary, layer.
Wednesday, November 9, 2011
Standards: Their Importance and Problems.
Well, this week readers will get two posts to make up for the lack of one last week. I was a bit busy, with three assignments due and a conference to attend. But now, as I drink tea and watch the first snow come down outside, I can take some time to talk about standards! Yes, I know, very exciting. On a serious note, however, standards, of both the technical and the metadata variety, play an increasingly vital role in librarianship. We have always been a standards-obsessed profession, what with our AACR2, MARC, and Library of Congress Subject Headings. Since modern American librarianship found its footing in the mid-1900s, librarians have focused on organizing, transferring, and helping people find information. We needed standards then to ensure that someone visiting two different libraries would not have to learn an entirely new way to look up a book and find it on the shelf, and so that librarians would not have to learn brand-new ways to classify things at every library they worked for.
So, as librarians, we gravitate towards creating standards, and that tendency, according to Pesch in "Library standards and e-resource management: A survey of current initiatives and standards efforts" and Yue in Electronic Resources Librarianship and Management of Digital Information, is now more important than ever. It is vital for all librarians to realize that information, while easier to access and share than ever before, has also gotten more complicated. More departments are involved in preparing materials for display and access, including acquisitions, cataloging/metadata, licensing, IT, and the new field of electronic resources librarianship. And more information is being transferred: between librarians and vendors, individual libraries and consortia, consortia and vendors, and between libraries themselves through ILL and the web. Standards are essential for ensuring that information is comparable, compatible, and communicable.
So where do we stand in terms of standards? According to the two works above, the library community has made great strides recently, especially in vendor usage statistics (with COUNTER, which is an awesome standard that I love!), electronic resource link resolving (OpenURL), and meta/federated searching. These standards, thanks to their relative ease of use, are incorporated and followed by many libraries. We are making progress towards others, like a data dictionary for describing the important functions and duties of ERM systems.
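To show why COUNTER-style reports are so pleasant to work with, here is a minimal sketch that totals full-text requests per journal from a CSV laid out roughly like a COUNTER journal report (one row per journal, one column per month). The column names and figures are invented, not the actual COUNTER specification:

```python
import csv
import io

# Invented data roughly in the shape of a COUNTER journal report.
report = io.StringIO(
    "Journal,Jan,Feb,Mar\n"
    "Journal of Illustrative Examples,12,8,15\n"
    "Example Studies Quarterly,3,0,1\n"
)

totals = {}
for row in csv.DictReader(report):
    title = row.pop("Journal")
    totals[title] = sum(int(count) for count in row.values())

# A standard layout means the same few lines work on reports from any vendor.
for title, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{title}: {total}")
```

The point is less the arithmetic than the fact that, because the layout is standardized, a script like this (or a spreadsheet formula) works across every compliant vendor without per-vendor fiddling.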
However, there are many standards with which librarians struggle, where disagreement over what should and should not be included creates conflict. The problem is that if a standard feels handed down without input, or too burdensome, it will not be followed. For example, RDA, developed by the Joint Steering Committee, is a new cataloging standard designed to better accommodate non-print objects (a serious problem with current cataloging standards). However, because many catalogers felt they had no say in its development, and because a lot of animosity exists among those who believe AACR2 is fine, those who embrace RDA, and those who believe RDA does not go far enough, it has been extremely slow to be adopted. It also creates a great deal of work, as old records need to be converted to the new standard to make resource finding optimal.
As a strange aside, I recently found out that web programmers have been moving away from XML to a data interchange format called JSON. In addition to JSON's faster data transfer, these developers believe XML has been overburdened by schemas and standards, including ones created by librarians (METS, MODS, etc.). Librarians, in their desire to build standards, have helped make XML too complex for application development.
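A quick illustration of the difference those developers are pointing at: the same tiny record in XML and in JSON. This is a made-up record, not METS or MODS, but it suggests why JSON feels lighter for application development:

```python
import json
import xml.etree.ElementTree as ET

# The same invented record, once as XML and once as JSON.
xml_record = """<record>
  <title>Journal of Illustrative Examples</title>
  <issn>0000-0000</issn>
</record>"""

root = ET.fromstring(xml_record)
print(root.findtext("title"))  # parsing means navigating a document tree

json_record = '{"title": "Journal of Illustrative Examples", "issn": "0000-0000"}'
record = json.loads(json_record)
print(record["title"])  # parsing yields native dicts and lists directly
```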
So what does this all mean? To me, the lesson is that standards are necessary for librarianship in this electronic age, but they cannot be imposed. If librarians (or any other community) feel that they have no voice in the standards process, or if a standard requires a huge amount of extra work, it will not be followed, no matter how ideal and wonderful it would be if it worked. So, do I think we should still create standards? Yes, but we must make them simple (Dublin Core and COUNTER are excellent examples) and develop them in a way that gives the library community a say and a stake in the outcome.