Tuesday, December 6, 2011

Reflections on a Semester.

Every semester, I have the same clichéd question: how did the end get here so fast? I have been working on this blog for 14 weeks now, and this is my last "class entry." But this blog is not going away, so never fear, my one or two possible readers! I have discovered that I love blogging, and I am not too shabby at it. In addition, I will be doing a volunteer internship at WiLS (Wisconsin Library Services) in their Electronic Resources department, so I will have plenty to talk about! Writing this blog and taking this class have made me realize that I love the many facets of ERM and want to explore them more. I look forward to sharing my future experiences with all of you.

When I began this class, I believed that it would be collections management for electronic resources: basically, we would learn how to select databases, how to analyze usage statistics for collections management decisions, how to purchase databases (so something about licensing), and maybe how to catalog or otherwise keep track of those databases. From reading job descriptions for ERM librarians, I was also hopeful we would delve into a lot of the technologies discussed therein: SFX, OpenURL, link resolving. What I came to discover from just looking at the syllabus, and what was brought home throughout the class, was that electronic resource management affects almost every level of librarianship. These resources are so much more complex than print, in terms of purchase, maintenance, federal legislation, user services and instruction, marketing, everything, and this means it takes a whole library working together to truly "manage" them. The class also covered things I did not think of as ERM, like ILL and electronic reserves. When I first saw these on the syllabus, especially ILL, I was excited (because I deal a lot with ILL at work and think it is an essential service) but confused. What did ILL have to do with ERM, I wondered? But by the time we got close to that section, I realized it had everything to do with it. Not only is ILL now facilitated online through programs like ILLiad, but more and more libraries are requesting texts from databases or purely online journals. As such, the same dilemmas and issues that come with databases (concerns about licensing, technology protection measures, copyright, and fair use) are tied up in ILL and in e-reserves. The breadth of issues that electronic resources touch is truly amazing, and it makes this class by far one of the most important and informative I have taken at UW Madison.

What ties this whole class together for me is copyright law. I never would have thought that this was the core of understanding how to be an electronic resources librarian, but it really is. The technology is of course extremely important and helpful; one would have a hard time guiding patrons through all the material, or even making it available to them, without ERM systems, without easy access to changing links, without usage standards like COUNTER. But what sits constantly at the back of it all, behind all the technology used to limit access, the skyrocketing price of journals, the incredibly detailed process of licensing, is copyright, and more specifically the fear copyright owners have of how easy it now is to make copies digitally.

This concept first really gelled in my mind when I was writing two blog posts, one from the 22nd of September ("Licensing is a Pit Trap Full of Spikes") and one from the week after ("Georgia on my Mind"). I think this is because those posts were my first exposure to how fair use, the Copyright Act of 1976, CONFU, the DMCA, and all the other issues surrounding electronic transmission concretely affect the library world. In the licensing post, I expressed my fear of licensing. Looking back, I now see that fear derived from not understanding copyright and all the legal language well enough. But that post was the start of my desire to understand the rules, the court cases, and the issues as well as I possibly could. Georgia gave me a bit of hope. While we still (!) do not know the outcome, it looks like the publishers were not able to take away most of Georgia's rights. Reading about this case and writing that post allowed me to believe that, sometimes, the law can be on our side, if we can back up our decisions from a place of knowledge. Thanks to these two posts, which forced me to sort through my feelings and fears, I have devoted myself to better understanding these issues, not only throughout the rest of the class but outside of it as well. I have investigated copyright policy at Edgewood and am working with them to create a more flexible one. And I am trying to face my fear head on: not only by volunteering at WiLS, but also by working with our ER librarian at Edgewood to create better licenses for our library, and even helping with negotiation.

Finally, if anyone from the outside is reading this blog and thinking about taking a class on ERM, do it. At my most recent job interview, earlier this week, I was asked what I knew about scholarly communication, especially how researchers can maintain their rights in an electronic publishing world. I was able to discuss open access solutions, as well as the importance of negotiating contracts. I could bring up what rights authors have under copyright and explain what resources were available to help researchers further understand their rights and read contracts. Because this was not a library setting but a research group, they were astounded that librarians knew so much about this and could help. This is something that goes beyond libraries and into publishing and all forms of research. It opens up not only your own understanding, but employment doors. And in these economic times, that last phrase is the best endorsement I can give a class.

Thursday, December 1, 2011

YEP (Yet Another E-book Post).

In the article "Reading Books in the Digital Age subsequent to Amazon, Google, and the long tail," Terje Hillesund comments that he found over 500 articles written through 2007 that discussed e-books. At the Wisconsin Library Association's conference this fall, the keynote talk was about e-books. And when I opened up the December 2011 edition of American Libraries, one of the first articles I saw was entitled "A Guide to Buying E-books." They are on everyone's mind, and their market continues to grow. But what are they, really? Can they ever replace books or be used in a similar way? Do they even count as books, or are they a completely different beast?

Many people have attempted to answer these questions, and in the next few paragraphs, I would like to make my own attempt to clarify what I think about the whole e-book thing. But I do not believe there will ever be one answer, for the simple reason that reading is a subjective, personal activity, directed by individual needs, cultural background, and value systems. As such, my thoughts reflect my own background: that of a lover of reading, a person who has been in school for 19 years, a person raised by two professors, married to a PhD student and sister to another (basically, surrounded by the intelligentsia), and also a person in love with technology and gadgets. With that caveat, let's talk e-books.

Both Hillesund's article and another article entitled "Disowning Commodities" by Ted Striphas take the view that e-books represent something fundamentally different from what we have seen before. Hillesund claims that, by separating the storage of the text from the reading of the text (storage in bits, reading in good old-fashioned letters), they have broken the tradition of the book, in which the creation, storage, distribution, and reading of knowledge are all contained within one unchangeable form. He quotes Roger Chartier, a prominent historian of reading, who says "Our current revolution is obviously more extensive than Gutenberg's." Another intellectual, Sven Birkerts, discussed by Striphas, believes that e-books have destroyed the "deep reading" of the past, a type of meditative engrossment in the words of another. Something, all agree, has changed.

There can be no argument that the medium has changed. Physically, the Kindle is very different from a paperback. However, through my own experience with my Kindle, I have found that the act of reading has fundamentally remained the same. For pleasure reading (a far too uncommon thing in my life at the moment), reading on the Kindle does not detract from my enjoyment of the story in any way. Having Pride and Prejudice on the Kindle does not change the fact that Austen writes with an incredible sense of character and societal analysis. The only problem is that it requires you to push a button to flip from page to page, which makes it hard to get back to an earlier section; but when reading for pleasure, this is not usually a huge issue. So when it comes to reading for fun, a Kindle or a paperback works just as well.

But what about that "deep reading," or scholarly reading? Hillesund claims that the one kind of reading people will not do electronically is sustained, detailed reading of lengthy texts. In this I agree with him. I believe that this is where the book will continue to flourish, at least for the time being. Striphas believes this is due to how owning a physical book became, in the 1930s, a status symbol of the then-new middle class. While that might have something to do with it, I do not actually think my dislike of using e-text for scholarship is a cultural phenomenon. If anything, I believe the middle class has embraced the e-reader the way it did the displayed book in the 1930s: as a symbol of learning and the ability to spend. The truth of the matter is that the functionality for serious scholarly reading is just not there. For myself, the lack of an easy way to write notes or highlight text on my Kindle, or to quickly flip to a place I highlighted, seriously limits its use as a tool for research. Even if e-readers improve on this, it is still much more expensive to lay out three e-readers to compare texts than to spread out books on a table for cross-comparison. Perhaps scholarship methods will change, but until that happens, a need for printed text will remain for the scholar and the student.

So, in my opinion, e-books are in fact just a new medium, not something brand new. In containing the same content one would find in a manuscript, in a codex, or on a scroll, they fall into the same category as those mediums: the book. The fact that they are stored in a different way does not change their purpose, every book's purpose, which is to share knowledge. However, they cannot, and should not, totally replace the older medium of the printed codex. Not only have they not reached a point at which they are truly useful for scholarly study, but they are simply not affordable for everyone. The digital divide still exists; people still come into their library because they have no computer at home. E-books are neat, and might someday drive out the printed book. But today is not that day.

Thursday, November 17, 2011

The Perpetual Access Problem

Ever since I learned about it during my first semester at University of Wisconsin Madison's library school, the possible lack of perpetual access to already-paid-for electronic content has troubled me, especially as many libraries switch from an ownership model to one of solely leasing content. At my current workplace, this issue has moved from the theoretical to the practical, as we recently rearranged the library and went e-only for most publications to save space. Our print journals all suffered from low usage, with patrons clearly preferring online access. As Watson's chapter in Managing the Transition from Print to Electronic Journals and Resources makes clear, patrons "are reluctant to interrupt their workflow by stopping what they are doing just to visit the library in the hope that the needed article is on the shelf" (p. 47). This was certainly the case at my workplace, and as such, it made sense, financially, to spend the money on resources that were being used.

I understand this decision. We were running out of space, we were strapped for cash, and our patrons were not being served by the existing print. But we need some way of preserving this access electronically. No library literature that I have read advocates giving up access rights to previously subscribed content when one cancels a subscription. Instead, most point out that doing so would create a bizarre situation: though a huge amount of information is being produced, our time could appear to future scholars as a dark age, a black hole of literature. The literature, having not been preserved, would be lost.

Because of this, I examined my workplace's current solutions and judged them according to the two articles I recently read that I found most helpful for considering this issue: Watson's chapter and Stemper and Barribeau's article "Perpetual Access to Electronic Journals: A Survey of One Academic Research Library's Licenses" (which appeared in Library Resources and Technical Services). Sadly, I found both solutions lacking.

The first solution is to rely on ILL for access to articles in journals from which we have unsubscribed. For the time being, this seems to be working, as most of our ILL requests for canceled journals can be filled. Of course, this costs the library money, but compared to the price of many electronic resource purchases, it is a very small amount. The main issue is that it forces us to rely on the kindness of others, and on the assumption that other libraries will still maintain print collections or have licensed products that allow interlibrary loan. As Stemper and Barribeau point out, many license agreements, especially standard publisher ones before negotiation, do not allow ILLs to be filled with copies of their electronic material. So, while it works for now, we could run into problems later.

The other method is relying on non-negotiated licenses to grant perpetual access. While Stemper and Barribeau do mention that some licenses contain such a right naturally, they point out that the high number of licenses they found permitting perpetual access in some form (be it locally hosted, through the publisher's site for a fee, or via a third party) included that clause due to negotiation by the library. Indeed, they argue that librarians "should consider making the lack of perpetual access rights a deal breaker" and must negotiate for it, with no mercy. By not negotiating licenses, my workplace has left itself open to losing access.

So, what should my library do? Maintaining its own print archive is not an option: during the switch to e-only, it threw away a great deal of its print copies (again, to save space). Negotiating licenses might not be feasible either, as the library staff is small and no one is comfortable with license negotiation. The ideal solution, I believe, would be to negotiate for some form of perpetual access, either through the publisher (maintained via an access fee) or through a third party that works with the publisher (examples are LOCKSS, an open source shared archiving system, and Portico, a central repository that holds publishers' material and shares it with libraries). But I do not know how practical this would be. As such, I believe the best thing for my library to do would be to join a consortium, as Watson discusses, in which each library is responsible for keeping some journals in print and agrees to share copies of those print articles via ILL. I do not really like this solution; it seems inelegant to maintain the integrity of, and access to, lost electronic content via print. It would also introduce lag time for these journals, as they would have to be sent from another consortium library to ours. However, for the small academic library, with little staff, time, or space, it is a better plan than relying strictly on the kindness of publishers to save you.

Friday, November 11, 2011

The Technological Rabbit Hole.

Although I consider myself rather technologically literate (I can write a blog! I can make a web page! I know how to construct a basic for loop!), I find myself amazed every time I am forced to think about what makes a seemingly simple computer operation, like a search for a known article in a library catalog, work. One simply types in the metadata one knows, and a page appears listing all the places that article can be found in full text. It even links to interlibrary loan services! But, as Boston and Gedeon demonstrate in the book Electronic Resource Management in Libraries: Research and Practice, and Walker in The E-Resources Management Handbook, the process is far from simple. It requires numerous programs, all working in tandem, to make that list of full text appear.

For something like the known-item journal search to work, a list of what journals we have and where those journals are located must first be created. According to Weddle and Grogg in Managing the Transition from Print to Electronic Journals and Resources, this is handled by an A-Z journal list, usually built from a global, proprietary knowledge base, since keeping track of such things would be a monumental task for one librarian or even one library. The library must still create a local knowledge base, however, detailing its own campus holdings.

In order to find a specific article using this knowledge base, a request for that article's location must be made. Static URLs cannot be used, because they change too frequently. Instead, each request is encoded as an OpenURL, a standardized link that carries metadata about the item, and handed to a link resolver. The resolver queries the local knowledge base using that metadata and returns the correct results.
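To make this a bit more concrete, here is a rough sketch of what building one of these links might look like. The resolver address and metadata values are my own inventions for illustration; the key names follow the OpenURL (ANSI/NISO Z39.88-2004) journal format:

```python
from urllib.parse import urlencode

# Hypothetical link-resolver address; in practice each library
# configures its own resolver base URL.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

def build_openurl(metadata):
    """Encode citation metadata as an OpenURL query string.

    The resolver, not this URL, decides where the full text lives:
    it looks the metadata up in the local knowledge base, so the
    link keeps working even when the publisher's own URLs change.
    """
    params = {
        "url_ver": "Z39.88-2004",                      # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article format
    }
    params.update(metadata)
    return RESOLVER_BASE + "?" + urlencode(params)

url = build_openurl({
    "rft.jtitle": "Library Resources and Technical Services",
    "rft.atitle": "Perpetual Access to Electronic Journals",
    "rft.volume": "50",
    "rft.spage": "91",
})
print(url)
```

The point is that the URL names the *article*, not a location; resolving that name into a working link is the resolver's job.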

At UW Madison, when one finds a specific journal, the results page also includes a list of suggested materials. This is another piece of programming, described by Boston and Gedeon. Using OpenURL, a program looks for other materials in the knowledge base that contain similar words or subjects and returns them as well. This is added to facilitate resource discovery, showing people resources they might never have considered.

That is at least three pieces of complicated programming, on top of the web programming required to display the material, the database software required to store the knowledge base, and the licensed journals required to have the materials at all. It reminds me of something I was linked to on Google+, the beginning of which is below:

_____________________________________________________________________
"You just went to the Google home page.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how browsers work, it's not quite that simple. You've just put into play HTTP, HTML, CSS, ECMAscript, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity.

Let's simplify.

You just connected your computer to www.google.com.

Simple, isn't it?

What just actually happened?

Well, when you know a bit about how networks work, it's not quite that simple. You've just put into play DNS, TCP, UDP, IP, Wifi, Ethernet, DOCSIS, OC, SONET, and more. Those are actually such incredibly complex technologies that they'll make any engineer dizzy if they think about them too much, and such that no single company can deal with that entire complexity."
______________________________________________________________________


Sometimes I wonder, is all this complexity truly necessary? Do users need to be able to tag, to personalize their webpages, to conduct federated searches? What is actually being used in the suite of tools and interlocking functions that appear to create a coherent whole to the library user?

I admit, I am unsure what the right answer to this is. I tend to be wary of jumping on every new technology bandwagon that comes along, but I also understand the desire for improving user access and resource discovery in any possible way. In my own personal experience, as a user and reference librarian, some of these complex tools, such as the system for finding full text I described above, or improved content linking via both papers citing an article and papers cited by an article, are essential for resource discovery.

But in my experience, some things mentioned in articles like Boston and Gedeon's and Walker's tend not to be useful, or at least are underused by patrons. For example, both articles mention that one new method of resource discovery is bringing up suggested articles when a patron does a known-item search. UW Madison includes this whenever the full text of an article is sought using FindIt. However, I have never seen a patron use this service, although I have helped with many a full-text or known-item search. I admit, I tend not to point it out to them, as I am often disappointed in the results: they never seem to identify the topic correctly. I could see such a thing being useful if it were more accurate, but the current program does not seem to have the intelligence to be very useful.

Basically, I think that we need to continuously strive to build a system that truly helps a user in their resource discovery and electronic access, and this will be complicated technologically, no doubt. But at the same time, assessment of the tools we are providing, and user studies, must be conducted so that we know when a technology is being useful and when it is just adding another, unnecessary, layer.

Wednesday, November 9, 2011

Standards, their importance and problems.

Well, this week readers will get two posts to make up for the dearth of one last week. I was a bit busy, having three assignments due and a conference to attend. But now, as I drink tea and watch the first snow come down outside, I can take some time to talk about standards! Yes, I know, very exciting. On a serious note, however, standards, both of the technical and metadata variety, play an increasingly vital role in librarianship. We have always been a standards-obsessed profession, what with our AACR2, MARC, and Library of Congress Subject Headings. Since modern American librarianship found its footing in the mid-1900s, librarians have focused on organizing, transferring, and helping people find information. We needed standards then to ensure that if someone went to two different libraries, they would not have to learn an entirely new way to look a book up and then find it on the shelf. And we needed standards so that librarians would not have to learn brand-new ways to classify material at every library they worked for.

So, as librarians, we gravitate toward creating standards, and now that tendency, according to Pesch in "Library standards and e-resource management: A survey of current initiatives and standards efforts" and Yue in Electronic Resources Librarianship and Management of Digital Information, is more important than ever. It is vital for all librarians to realize that information, while easier to access and share than ever before, has also gotten more complicated. More departments are involved in preparing materials for display and access, including acquisitions, cataloging/metadata, licensing, IT, and the new field of electronic resources librarianship. And more information is being transferred: between librarians and vendors, individual librarians and consortia, consortia and vendors, and between libraries themselves through ILL and the web. Standards are essential for ensuring that information is comparable, compatible, and communicable.

So where do we stand in terms of standards? According to the above two articles, the library community has made great strides recently, especially in terms of vendor usage statistics (with COUNTER, which is an awesome standard that I love!), electronic resource link resolving (OpenURL), and meta/federated searching. These standards, due to their relative ease of use, are incorporated and followed by many libraries. We are making progress towards others, like defining a data dictionary to talk about important functions and duties of ERM systems.

However, there are many standards with which librarians struggle, where disagreement over what should and should not be included creates conflict. The problem is that if a standard feels handed down without input, or is too burdensome, it will not be followed. For example, RDA, developed with heavy Library of Congress involvement, is a new cataloging standard designed to more easily incorporate non-print objects (a serious shortcoming of current cataloging standards). However, because many catalogers felt they had no say in that decision, and because a great deal of animosity exists between those who believe AACR2 is fine, those who embrace RDA, and those who believe RDA does not go far enough, it has been extremely slow to be adopted. It also creates a great deal of work, as old records need to be converted to the new format to make resource finding optimal.

As a strange aside, I found out recently that web programmers have been moving away from XML toward a data-interchange format called JSON. In addition to its faster data transfer, these developers believe XML has been overburdened by schemas and standards, many created by librarians (METS, MODS, etc.). Librarians, in their desire to build standards, have made XML too complex for application development.
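To see the difference the developers are talking about, here is the same toy journal record expressed both ways. The field names are my own invention for illustration, not any real schema:

```python
import json
import xml.etree.ElementTree as ET

# The same (hypothetical) journal record, once as XML, once as JSON.
xml_record = """<journal>
  <title>Library Quarterly</title>
  <issn>0024-2519</issn>
  <format>electronic</format>
</journal>"""

json_record = '{"title": "Library Quarterly", "issn": "0024-2519", "format": "electronic"}'

# Both parse to the same data, but JSON maps directly onto native
# dictionaries, which is part of why web developers favor it.
root = ET.fromstring(xml_record)
from_xml = {child.tag: child.text for child in root}
from_json = json.loads(json_record)
print(from_xml == from_json)  # True
```

Neither format carries any schema here; the XML version only becomes "overburdened" once schemas like METS or MODS are layered on top, which is exactly the complaint.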

So what does this all mean? To me, the lesson to take away is that standards are necessary for librarianship in this electronic age, but they cannot be imposed. If librarians (or other industries) feel that they have no voice in the standards process, or if the standard itself requires a huge amount of extra work, it will not be followed, no matter how ideal and wonderful it would be if it worked. So, do I think we should still try to create standards? Yes, but I think we must make them simple (Dublin Core and COUNTER are excellent examples of this), and they must be developed in a way that makes the library community feel it has a say and a stake in the outcome.

Friday, October 28, 2011

Verde and ERMes, a (kind of) brief analysis

In the belief (or maybe false hope?) that someone outside of my wonderful ERM class may stumble across this blog, I decided to look at two ERM systems we read about: Verde, by Ex Libris, and ERMes, an open source system made in Wisconsin. Based on this reading and a few other sources, I will judge them by the standards and checklists set out by Hogarth and Bloom in the book Electronic Resource Management in Libraries: Research and Practice and by Collins in the article "ERM Systems: Background, Selection and Implementation." I chose these two systems because they represent opposite ends of the ERM spectrum. Verde is a proprietary system from a major library vendor, with a very shiny advertising website. ERMes is a system based on Microsoft Access, created by a librarian at the University of Wisconsin-La Crosse, which advertises itself via its own blog. Perhaps the following analysis will help someone make a decision on ERM systems?

Both Hogarth and Bloom and Collins list important aspects of functionality to look for in an ERM system. Since I am trying not to write an article, I will look at just three aspects these works emphasize: the ability to support communication and a well-organized workflow from department to department; the ability to interoperate with the ILS and serials tools, like A-Z lists and SFX linking (so that an entire library system does not have to be overhauled to add an ERM system); and the ability to generate usage statistics in report form, including the coveted cost-per-use report.

Verde


Verde's main page does not provide a huge amount of information about how it works, which makes sense, as its point is to sell you a product, not provide complete documentation. So I also looked at a report I found from CUNY (City University of New York) explaining their decision to use Verde. Verde's main selling point, as explained on its website, is that it has "Built-in workflow management capabilities that enable the library to define consistent and replicable processes, helping staff keep track of task assignments throughout the e-resource life cycle." It also lets staff access all of these capabilities through one main interface, which helps with management and with learning the software. According to CUNY's report, Verde automatically sends out reminder emails to help people stay on top of their duties, and in general works well in coordinating people and communication across numerous departments, an important thing for large library systems.

In terms of interoperability, Verde is obviously easy to integrate with Aleph and Voyager, the ILSs also put out by Ex Libris. The Verde website claims that it can be "integrated with existing library applications such as SFX®, your library online public access catalog (OPAC), A-Z list, and more." CUNY believes this is also one of Verde's best points, claiming that "One of the strengths of Ex Libris is product interdependence and interoperability, critical factors in enabling numerous technologies to interface with one another and create a seamless experience for both back and front end users." However, CUNY does mention that it takes some programming to get tools like SFX to fully work with the system. While this was not a major concern for CUNY, it could be one for small libraries without that type of technical expertise.

Finally, Verde supports SUSHI data transmission (a new standard for transmitting vendor usage data via XML) and as such works well with COUNTER data, both good things in terms of ease of use and library standards compliance. Verde's main page does not mention what types of reports it runs, except to say "Staff can easily derive cost-per-use metrics as vendor usage data is automatically retrieved and uploaded." This leaves it unclear whether reports can be generated automatically within Verde, or whether they have to be created manually in some outside program while Verde just stores the data. The statistics are built on Oracle, according to CUNY, which makes me believe that reports can be created fairly easily if one knows that database. Again, though, at a small library this might not always be the case.

Verdict:

In general, it seems that Verde is especially strong in workflow management and pretty good in interoperability, especially if you have someone with programming knowledge. It does keep usage statistics and is up to date in following standards, but it is unclear about how it runs reports.


Ermes


ERMes has numerous things going for it outside of the categories I will be examining. For one, it's free! Secondly, it was created by librarians with librarians' needs in mind, and it comes with a lot of support from its creators.

ERMes, as far as I can tell, does not contain anything like email reminders or other communication tools to assist with workflow management. It seems meant to be used by only a few people, and is therefore not appropriate for large-scale, department-crossing ERM work. It does allow reports to be run to track renewal dates and when payments are due, which would be useful for workflow and management. I could see how having everything on one Microsoft Access record (license information, usage statistics, pricing, vendor information) could help communication, as everything would be easy to find and people could see all the information quickly. However, it could also be tricky to coordinate all the data entry, making sure that everything is filled out to the same level across all departments. This system really seems meant for a small team that can organize its workflow with the aid of ERMes, but does not need to rely on it.

Through reading the documentation, there seems to be very little interoperability between this system and others. It does have a way to create one's own A-Z list, but does not incorporate information from an ILS or knowledge base. Everything must be added by hand into the Access database. Since it is already nicely set up in Access, this would not require a huge amount of technical expertise, but would require a lot of time. As such, this system is not feasible for a large library system, unless that large system figures out some technical wizardry to batch import records in the right fields from some preexisting source.
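For illustration only, here is roughly what such a batch import could look like. ERMes itself lives in Access, so this sketch uses Python's built-in sqlite3 as a stand-in, and the table name, column layout, and CSV export are all hypothetical:

```python
import csv
import io
import sqlite3

# In-memory SQLite database standing in for the ERMes Access file.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE resources (title TEXT, vendor TEXT, renewal_date TEXT, cost REAL)"
)

# Pretend this CSV came from an ILS or knowledge-base export.
export = io.StringIO(
    "title,vendor,renewal_date,cost\n"
    "Academic Search,EBSCO,2012-07-01,5000\n"
    "JSTOR Arts & Sciences I,JSTOR,2012-01-15,8500\n"
)

# One pass turns every exported row into a database record,
# instead of re-keying each field by hand.
rows = [(r["title"], r["vendor"], r["renewal_date"], float(r["cost"]))
        for r in csv.DictReader(export)]
conn.executemany("INSERT INTO resources VALUES (?, ?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM resources").fetchone()[0]
print(count)  # prints 2
```

Against a real Access database the mechanics differ (an ODBC connection rather than sqlite3), but the point stands: the wizardry needed is modest, and the real cost is mapping the export's fields onto ERMes's.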

In terms of usage statistics, ERMes does work with COUNTER and helps one run reports with this data. However, because it does not have an easy way to bring these data in (it does not support SUSHI), everything must be imported by hand, which can be time consuming. While it allows reports to be run showing price-per-year comparisons, which is very nice, it does not auto-generate or provide a template for price-per-use comparisons.
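For readers unfamiliar with the metric, the price-per-use report one would have to build by hand is just this arithmetic, shown here with invented annual cost and COUNTER-style monthly full-text request counts:

```python
# Hypothetical figures: one journal's annual subscription cost and its
# twelve monthly COUNTER full-text request counts.
annual_cost = 1200.00
monthly_requests = [40, 35, 52, 48, 61, 20, 15, 18, 55, 70, 66, 50]

# Cost per use = what we paid divided by how often patrons actually used it.
total_use = sum(monthly_requests)
cost_per_use = annual_cost / total_use

print(f"{total_use} uses, ${cost_per_use:.2f} per use")  # prints: 530 uses, $2.26 per use
```

A number like this, computed per database per year, is exactly what feeds the renewal and cancellation decisions discussed above.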

Verdict:

For a college that has a small number of periodicals, a small ERM staff, and simply wants to keep track of each database's information in one place (and run some nice reports based on that!), ERMes is a good solution. For example, I think this would be great for many small private colleges that do not have the need or the budget for anything too complex. The main issue is that it does not integrate with the OPAC or SFX. For libraries that are larger or that really want everything to be integrated, this is not a good system.

Monday, October 24, 2011

More fun with Copyright: The TEACH Act

I admit, I am conflicted about the TEACH Act, legislation signed into law by President Bush in 2002 to help clarify what copyrighted materials could be used in online education. In fact, my feelings going into class were not positive. I agreed with Thomas Lipinski in his article "The Climate of Distance Education in the 21st Century" that "this act is a complex piece of legislation" whose full implications will not be known until court decisions have been made for clarification. The phrase that audiovisual material had to be shown in a "reasonable and limited quantity" worried me the most. I interpreted "limited" to refer to the amount shown, meaning that under no circumstance could a full work be shown, no matter how reasonable such use might be in terms of education. I also worried about the view of a class as a limited-time course, divided into units that the bill calls "mediated instructional activities", with material only allowed to be viewed within a time frame similar to class time. Along with Lipinski, I am concerned that this is another example of Congress attempting to compare new modes to old modes in order to maintain profit. Indeed, Crews in "Distance Education and the TEACH Act" states that this "provision is clearly intended to protect the market for materials designed to serve the educational marketplace".

So, I was surprised when I went to class and found out that librarians view this legislation in a positive light. But when I stepped back, I realized that it was I who was perhaps being a bit harsh. I had been working on deciphering licensing terms all week, and so was in a hypercritical mode when reading about this act. I examined every word, wondering how it would help publishers and hurt libraries and educators. I had missed the statement in an ALA briefing that the Copyright Office even said "Fair use could apply as well to instructional transmissions not covered by the changes to section 110(2) recommended above. Thus, for example, the performance of more than a limited portion of a dramatic work in a distance education program might qualify as fair use in appropriate circumstances." This removes some of my worry about the limited portion (although it then brings us back to fair use, which, as anyone reading this blog knows, is a pretty tricky issue). But, upon re-reading many of the articles that I viewed as too positive earlier, I can see that the TEACH Act does have its heart in the right place. It is trying to allow new technology to be used, and it is trying to help education reach the most people. It is certainly an improvement over the previous law, which only allowed distance education via broadcast video conferencing, with students in one room and the teacher in another. This legislation is not totally focused on slowly taking away users' rights one by one.

The section mandating that materials not be stored on a computer or transmitted to other students "for longer than the class session" still worries me. I understand why they placed such a limit here. It ties back to the fear of piracy, of movie copies being distributed across the entire internet in a matter of seconds if not protected. As Lipinski states, "In Congress' view, copyright piracy on college campuses has reached epidemic proportions. Although little of the piracy is tied to curricular infringement, it consists of students engaging in peer-to-peer file exchanges." To Congress, this fear is real, and must be prevented by technological limitations such as those stated above.

However, by tying the act to a mode similar to face-to-face instruction, with material only available for viewing during a limited period relating to the class session, it limits the act's ability to work with future technologies. Litman, in her book Digital Copyright, frequently mentions that one of the major problems with copyright law is that it only looks towards the present and the past, not the future. As such, it does not adapt to change well. I am concerned that, by adding the words "class session", Congress is limiting itself to an old model and not truly reflecting the asynchronous nature of current and future online education. I know that when I had a few classes online, due to the professor being at a conference, I stopped and started the video numerous times because life got in the way. Often, this is the situation in online education, and it is one of the great things about it: it is not set at a certain location for a certain time. Would someone only be given a limited time period to watch a movie, after which they could not access it anymore? What happens if they need to refer to it for a paper or while studying for a test? What happens if life indeed gets in the way of watching it all at one time, or even over the course of a day? Technology changes what can constitute a class session, and the language remains very unclear as to whether the TEACH Act reflects this.

Thursday, October 13, 2011

Experiences from the reference desk: How TPM and license conventions affect users

For three years, I have been serving in a reference librarian capacity in various academic settings, both small and large. During this time, I have seen numerous examples of licensed material, both e-books and article databases, causing patrons no end of problems. The problems stem from user restrictions, difficult-to-understand interfaces, and, in general, the expectation of the patron not meeting the reality of the source. So, when I read Professor Kristen Eschenfelder's articles "Every library's nightmare? Digital rights management, use restrictions, and licensed scholarly digital resources" and "Social construction of authorized users in the digital age" (co-written with Xiahua Zhu), I recognized many of the problems these articles discuss. Eschenfelder and Zhu hit the problems facing patrons today right on the nose, and it was wonderful to see them treated with such seriousness, in an academic fashion.

When I talk about "frustrations" I am mainly thinking of two different problems. The first results from what Eschenfelder calls "soft restrictions": technological limitations on use imposed by a licensed source that can be worked around but that make performing certain tasks annoying. Examples would be only allowing a patron to print an article by clicking on a special, quite hidden, button, or disabling right-click and Ctrl+C copying. These are things, either by choice or by bad design (probably a combination of the two, in my opinion), that make the user jump through hoops to perform functions allowed in the license. They are very common, in my experience, and, as Eschenfelder rightly points out, a problem that many librarians, including me, feel powerless to solve.

The other type of frustration comes from who counts as an authorized user and the authorization methods used to control access. Zhu and Eschenfelder explain (and this was totally new to me!) that the definition of an authorized user has gone through a great deal of change, as publishers and librarians have fought over and forced language changes through negotiated licenses, a practice that didn't start until the 1990s. Problems began when, as resources transitioned towards being electronic, the public could no longer just come in and use material off the shelf. Publishers used this fact to restrict who could access their materials, and to charge more for more access. When electronic resources were first introduced, libraries put them only on certain password-protected computers for which the librarians gave out the correct information. Then, as technology changed and people could connect to the library's resources remotely, campus IDs tied to campus IP addresses became the primary way to determine who was an authorized user of the material. As such, walk-ins today have to abide by these rules, limited to a few special library computers or forced to ask the librarians for special guest passwords. Even with all of the password protection and other efforts made by librarians to limit use, some publishers are still not satisfied with campus passwords, and make users register specifically for their products. This whole process has left the public disenfranchised.

I recognized my patrons' experiences in reading about these problems, especially those of the walk-in public patrons I serve. It is not just an academic thought experiment to say that the issues listed above might cause frustration. In the past few weeks, I have had an extremely confused professor approach me about a science database she was trying to use. She wanted to access the full text of an article from a citation, and I know for a fact we had full-text access to this journal through this particular database. However, the database kept informing her that she had to create an account to access any full text. She had almost bought the article from another source in desperation, because she truly believed she did not have access. What was actually going on was an example of a particular database not trusting the campus password system and making every member create their own (free) account. If this is confusing faculty, who actually care enough to approach the reference desk, think about how many students must either accidentally pay or just leave the database and never get to use a source they should have access to.

And it is not just extra passwords. Students using JSTOR believe themselves technically incompetent and come to the desk frustrated and mad at themselves for not being able to print more than a page at a time (JSTOR requires one to go to a separate page to print). Nursing students using the database UpToDate try to copy and paste, and find they cannot right-click. They come up to me, in a bad mood, feeling defeated by their inability to perform what they consider a standard function. When I show them they have to use the edit function, they ask, "Why is it like that?" My general answer is "The database is stupid, it's not you." But I cannot provide them with a good reason. Viewers of e-books in NetLibrary, unable to view more than one page at a time, say that it is not worth their time, and move on to articles or actual books. Some will even request the physical copy through ILL, adding up to a week to their research process, due to their anger at not being able to scan a chapter. These soft restrictions are causing serious technological frustrations. They make our resources less friendly, cause users to doubt themselves and the library's resources, and in general are trouble. The only good thing about them is that they increase reference questions.

At the small library where I work, the public is allowed access, but only on eight computers. This means many members of the public cannot get on a computer when all eight are taken, which makes them quite cranky. At the large college where I also worked, public users had to get a special password from the librarians. This did not faze many, but quite a few were clearly shy about it, embarrassed that they had to ask. Two even yelled at me for limiting access in this way. I tried to be as calm about it as I could, and to help those who were embarrassed. But it felt wrong that they had to be singled out this way, their past ability to come in anonymously and read a journal now stymied.

The point of these tales is this: the restrictions being placed on public use and the soft restrictions being implemented by content owners do affect patron satisfaction and do increase frustration. Eschenfelder and Zhu are right to draw attention to these concerns, which affect libraries every day but which are often ignored in the literature that I read. Many patrons are not tech savvy enough to know their way around the annoyances and hidden save and print buttons when they run up against them. And the public does indeed feel the crunch.

It feels wrong to point out the problems without offering solutions. Luckily, as Eschenfelder, Desai and Downey point out in their article "The Pre-Internet Downloading Controversy: The Evolution of Use Rights for Digital Intellectual and Cultural Works," the opinions and de facto needs of users, combined with librarians being aggressive in their licensing, have made significant changes to policy before. The ability to download search results was, in the 1980s, a topic of hot debate and often not allowed by database owners. Due to resistance in library culture, and to users simply ignoring warnings forbidding downloading, the publishers eventually gave in, and now the right to download is expected to appear in every good license.

We can follow this pattern in protesting some of the problems that hurt our users, especially soft restrictions. We should not just shrug and tolerate extra passwords or the inability to save documents easily. Instead, we should make noise: writing articles, complaining to the publishers, and asking patrons to do the same. We need to let both the public and other librarians know that these troubles are not just facts of life, but can be changed. We can also refuse to license materials that use these tricks to make their content inaccessible, and let the publishers know the reason for the refusal. In many cases, libraries are the main source of income for these resources, and by making noise with both our voices and our actions, we can push back and let publishers know these practices are unacceptable, just as libraries and users did for downloading in the 1980s.


Thursday, October 6, 2011

The Journal Crisis: Some Possible Solutions.

Most librarians know about the "journals crisis": the spiraling out of control of academic journal prices, especially scientific journal prices. I always believed the main cause to be publisher greed. In some reading I have been doing this week, however, I found out that there is more involved than just that, which is actually good news, because some of these other factors have possible solutions, while publisher greed tends to be hard to combat. Astle and Hamaker in their article "Journal publishing: Pricing and structural issues in the 1930s and the 1980s" and King and Tenopir in their chapter from the book Economics and Usage of Digital Libraries: Byting the Bullet both bring attention to factors such as publishers pushing for larger journals and more content in order to be seen as more successful (the more content you have, the more citations your journal can get) and journal fragmentation, caused again by the need for more content. All of this is driven by an academic culture that believes that the more one publishes, the more valuable one is, and which therefore provides the journals with enough content to allow for the fragmentation and size increase.

These factors all make a lot of sense. The academy, as I have personally witnessed it, pushes publication to an extreme degree. I have a husband, a sister, and parents who are involved, either as students or professors, in the world of academia, and they all feel the pressure to push out materials and to break up their research so as to get as many articles out of it as possible. This pressure to publish leads to publishers receiving tons of submissions and realizing that they have the material for lots of journals, so they might as well make a lot of journals. Due to the way free market economics tends to work, the big publishers rise to the top, buy up lots of journals, and have the most resources to put towards their journals. This results in their journals becoming the most prestigious and everyone trying to publish in them, allowing the publishers to make more money due to their essential nature, and the cycle repeats.

This phenomenon, of the journals at the top of a field being "essential" to have and owned by big commercial presses, is in my opinion what led to the Big Deal, defined by Kenneth Frazier as when "libraries agree to buy electronic access to all of a commercial publisher's journals for a price based on current payments to that publisher, plus some increment" (Frazier, "The Librarian's Dilemma"). There has been a lot of discussion about this topic in the literature of librarianship, some in support and some against. Those in favor, as Rick Best mentions in his article "Is the 'Big Deal' Dead", tend to mention the benefit to small colleges, which now can afford the breadth in their journal collections that they always wanted but never could achieve. Others, like Ken Frazier, say that the Big Deal takes away the library's ability to select, thereby burdening patrons with lots of useless material through which to wade. It also takes away any ability to discontinue journals that are not being used and reduce prices correspondingly. I think both are right, and that the Big Deal, and indeed the increase in journal pricing, is far more complex than "do away with it entirely or buy every Big Deal ever". A balance must be found between the full text that many patrons now expect and prices that are unsustainable, between breadth and quality control. Quite a few suggestions have been made about how to go about this, but I want to focus on two that I believe have the most potential to grant libraries more control and also bring down prices.

First solution:

Best mentions that OhioLink, a large consortium, made a deal allowing it to drop journals with low usage statistics from a Big Deal and get a corresponding reduction in price. This requires licensing expertise and often the power of numbers a consortium represents, but I believe it is a huge step in the right direction. Libraries need to realize that, at least technologically, it would be very easy for most of these large publishers to serve different content to every academic library in the United States. I talked to my husband, a computer science PhD student at Madison, and he explained that all one would need to do would be to set up a schoolID table and a journalID table in a relational database, and then a linking table pairing schools with journals to represent each library's unique plan. Assuming something like ScienceDirect, which has 1,000 journals, and a separate plan for each academic library in the United States (let's guess something big like 50,000 different libraries), the worst case for this linking table is one row per school-journal pair, at about 8 bytes per row (two standard 4-byte IDs): 1,000 journals x 50,000 libraries x 8 bytes = 400,000,000 bytes = 400 MB. For a publisher's servers, this is tiny. It will take more licensing overhead, but these individual plans, or at least consortia plans, will provide the selection, the ability to reduce price, and the ability of libraries to control their own purchasing destiny. It will also allow libraries to keep the savings that bundled packages provide over buying each journal individually.
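That back-of-the-envelope estimate can be checked in a few lines. The journal and library counts are the deliberately rough hypothetical figures above, and the 8 bytes per row assumes two standard 4-byte integer IDs:

```python
journals = 1000        # a ScienceDirect-sized package
libraries = 50000      # a deliberately high guess at US academic libraries
bytes_per_row = 8      # one 4-byte library ID plus one 4-byte journal ID

# Worst case: every library subscribes to every journal, one row per pair.
total_bytes = journals * libraries * bytes_per_row
print(total_bytes / 1000000, "MB")  # prints: 400.0 MB
```

Even at a few hundred megabytes, the storage cost of per-library plans is negligible for a publisher; the real obstacle is the licensing overhead, not the technology.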

Second Solution:

Librarians need to get faculty on their side and together explore alternative means of publication. I was fascinated to read in Hamaker and Astle's article that in the late 1920s and 1930s librarians were dealing with similar issues of high journal costs, stemming from the prestigious publishers in Germany publishing lots of new journals and content. While the price gouging was a bit more obvious in that case (it was clear prices were being artificially raised to help the struggling post-World War I economy) than it is now, librarians eventually rose up and brought down the price. How did they do it? They got agreements from faculty that they would no longer purchase or support these journals in their own professional organizations. They built up enough power through a coalition that they could afford to stand up to these prestigious publishers. And they did it by getting the academy on their side.

Libraries now have even more to offer scholars. In the 1930s, if the journals were cancelled, there was no alternative method of publication. Now, as Ken Frazier and Rick Best point out, there are other means of publication. Frazier, back in 2001, mentioned new open-access journals run by nonprofits that offer more rights to authors, are peer reviewed, and, if they can get the community behind them, are capable of undermining the big commercial publishers. Some libraries have already reached out to faculty and caused change. The University of California system partnered with faculty to protest, and indeed threaten to cancel, the Nature Publishing Group's Big Deal if the publisher did not lower the price. Cornell's faculty senate, according to Best, "called among other things for faculty to become familiar with the pricing policies of journals in their specialties and to cease supporting publishers who engage in exorbitant pricing by not submitting papers to, or refereeing for, the journals sold by those publishers" (354). To me, this is the direction more libraries serious about bringing commercial publishers into line need to go. The root of all of this is the academic cycle of trying to publish the most in the "important" journals. If the academy continues to support these journals, it will continue to make them indispensable, and they will continue to hold libraries hostage. Only by educating faculty about these practices and the nature of the publishers, and by introducing the idea of alternate methods of both scholarly communication and even scholarly worth, can we begin to really turn this pricing crisis around. We also need to look outside our field to others who are calling for reform of the way the academic system is run, and partner with them to get the change and the message we want. This solution is not easy; it takes on a giant, entrenched sense of worth in academia.
But I really think it is the only way we can build enough support to possibly challenge the prestigious, monopoly like journal publishers and create serious alternate methods of publication.

Thursday, September 29, 2011

Georgia on my mind


Readers who have perused my other posts will already be aware of my anti-corporation leanings and my belief that copyright should include broad protections for the public's use of works, especially in matters concerning education. People who have been paying attention to copyright at all recently will also have realized that whenever a court case appears pitting a user against a producer, the producer tends to win. Even when it does not go to court, the producers are often able to throw enough money and publicity at the violation that the user is forced to settle, and in settling, agree that the publishers are correct (see the Napster case). In general, the outlook for fair use as a user right has seemed bleak, a sad fact for me and for librarians in general.

So you can imagine that I was unsurprised by what I read this week about Georgia State vs. Oxford UP, Cambridge UP and Sage (officially "Cambridge University Press, et al v. Patton et al"). Briefly, this case was brought against Georgia's electronic reserve policy, which includes a checklist (seen here) for professors to use to determine whether the work they were putting on reserve was indeed fair use. The publishers said that the use was too broad, that too much was being copied from single works, and that the checklist was biased towards a finding of fair use among professors who did not know enough about copyright to be making the decisions anyway ("Both Sides Angle for Victory in E-Reserve Case" in Publishers Weekly). Of course, I thought, publishers are getting nervous and trying to stop any free use of their work, per usual.

Indeed, it seemed from an article in the magazine Against the Grain by Stanford G. Thatcher, director of Penn State Press, written on behalf of the publishers, that this is exactly what they want to do. Thatcher bases his attack on a set of guidelines called the CONFU guidelines, which were commissioned under President Clinton to deal with electronic material and fair use, but never actually agreed upon. Nonetheless, he takes these basically as law, and argues that Georgia should be punished for not complying with them. His points against Georgia boil down to two factors: that Georgia did not emphasize that electronic reserves should only be for "supplemental" material, so that they do not take the place of course readers, and that Georgia did not mention that texts should only be used for one term, after which permission must be sought for any further use. He says that because of these policies, Georgia's e-reserves cause university presses' profits to take a hit, which disqualifies them from fair use.

But not only does he say Georgia is wrong and needs to be stopped; he proposes further changes to policy, which I believe come from a wrong interpretation of fair use, and which worry me in case they become de rigueur should the publishers be successful in court. For example, he claims that not even first use, or deciding to use an article spontaneously, would be an excuse for fair use (something specifically allowed in CONFU), as people can now use online access to quickly get permission from the publishers. Because permission can be so easily asked for, there is no reason not to ask, and then pay if necessary. He even goes as far as to suggest that students should pay for access to e-reserves, just as they pay for course readers or textbooks. He is clearly aiming to do away with fair use for electronic reserves entirely. In his reasoning here, he shows, in my opinion, a distinctive misunderstanding of fair use for education. To me, fair use in an educational setting exists so that knowledge can be built upon by students and teachers, which will hopefully allow for the creation of more original works. It does not exist because rights holders were just too hard to get in contact with earlier. Fair use for educational purposes is an intentional action, not an accident. But my personal opinions about such things tend not to be what the court decides.

Instead of Thatcher's version of electronic reserve rights, or even CONFU, I would like to see libraries follow the practices for fair use published by the Association of Research Libraries in 2004, before the Georgia case, which reflect many of the practices Georgia embraced. For example, they say "limiting e-reserves solely to supplemental readings is not necessary since potential harm to the market is considered regardless of the status of the material" and that if libraries determine that the first three factors show a use is clearly fair, the fourth factor does not weigh as heavily (the fourth factor being the market effect of the reserve). In this, these practices push against Thatcher's and even CONFU's guidelines about e-reserve material being only supplemental, and against the heavy focus on the market as the major determiner of fair use status. But knowing the Georgia case happened after this guide was published, I wanted to know whether fear of litigation had made library policies, both at Georgia and at the college where I work, reflect a stricter sense of fair use than ARL's findings.

So I took a look at the University System of Georgia's guidelines and those of one place where I have worked, Edgewood College, to see which interpretation (CONFU, Thatcher's, or ARL's) they more closely followed, with a feeling that I already knew the answer.

In this little exercise, my gloom-and-doom assumptions turned out to be totally wrong. Both policies reflected ARL's findings more than Thatcher's concepts, even Edgewood's, which tends to be risk averse. Both allow core content to be used, though Georgia does mention supplemental content as something that would weigh in favor of fair use. I should not have been as surprised by this as I was, as I know that at UW Madison, at least, electronic reserve material often composes an entire course's reading. It seems that in this, at least, the publishers have lost. E-reserves have replaced course packets, and there is no charge for students to use them, even when they are core resources. In addition, the reuse of a resource from term to term is not mentioned in either policy, though both do mention that electronic copies are deleted once the term is over. This lack of mention makes me believe that both turn a blind eye to the activity, or at least do not expressly forbid it. In this as well, then, the publishers have not succeeded in scaring libraries into submission.

Unlike most of my posts, then, I shall end this one on an optimistic, upbeat note. In this battle over electronic reserves, it looks like the publishers and their court case are not having the effect of fear and suppression of practices that they might have hoped for. Practices the ARL found in use in 2004 are still in use, and libraries still actively exercise their right of fair use. I am proud of both of the schools whose policies I examined for not backing down on their rights, and on their role of promoting learning through such services. I hope that no matter which way the court case is decided, librarians continue to exercise their fair use rights, and, if the court decides against them, fight back.

Thursday, September 22, 2011

Licensing... (is a pit trap filled with spikes).

In the small college library where I work, licensing electronic resources is not a popular topic. Our electronic resources librarian turns a shade paler whenever the process is mentioned, and all the other librarians avoid the topic entirely. Perhaps it is because I am only a part-time employee there, but what our contracts look like and how we negotiate for our current resources has never been discussed within my earshot. It is the topic untouched, disliked and feared.

Perhaps this experience is why I find words about trusting vendors and enjoying licensing, once you know how to do it properly, ringing a bit false. Harris, in her work Licensing Digital Content, especially in her chapter "Un-intimidating Negotiations", states that content owners really want the same thing libraries do: they want their material to be used and appreciated. Because of this, they should be treated with understanding and trust, not with aggression.

If this is the case, if vendors and librarians should trust each other and negotiation is not scary (Harris says it is even fun), why does my library dread this process so much? Though Harris speaks soothingly and stresses that license providers are not to be feared, I found that the actual process of licensing detailed in her book, as well as information about current vendor behavior brought up by Russell in her book Complete Copyright and in an article by Hadro for Library Journal entitled "Researchers aim to help libraries negotiate better on complex deals," belies this calming attitude. For example, Harris, in her discussion of clauses to include in an effective license, emphasizes over and over again that everything a library would want to do with content must be specified, in detail, in writing. Do you want to make sure fair use still actually exists for this content and that the publishers will not forbid it? You need to say so in the license. Do you want to make sure your patrons can print a copy for themselves? You need to specify this in the license. She also mentions at least ten times that one has to make sure that license providers actually have permission to license the desired content. This gave me the impression that many libraries have been scammed by companies, which makes me even more wary of the process and distrustful of vendors.

When I read all of this, while I was glad to have it laid out in an easy-to-follow fashion for when I have to negotiate licenses, it did not put me at ease or help me trust license holders. Indeed, it had the opposite effect. It made me feel that licensing is a process full of horrible spiky floor traps that, if you do not vigilantly watch for them, will kill your patrons' ability to use content and leave you vulnerable to being sued for large amounts of money. The thought of forgetting just one thing and then making my whole library (and especially the patrons who need the content!) suffer terrified me. Even her words meant to make negotiating un-intimidating, especially her suggestion to bargain away things merely wanted in order to guarantee the rights actually needed, made me uncomfortable. The whole process strongly reminded me of buying a car, and how much I hate that entire interaction, with its cajoling, its jockeying, its bluffing. Perhaps I am just not suited for this work, but I would worry about my ability to be canny enough, to read between the lines, and to fight for what the library needed without becoming antagonistic (which tends to be the eventual outcome when I do something like buy a car). It seems like an incredibly difficult, emotionally draining process demanding constant vigilance, and I can understand why the librarians at my college hate it.

Russell and Library Journal do not help me feel any better about the process, or trust vendors more. Russell discusses UCITA, a law on the books in only two states, Maryland and Virginia, and facing fierce opposition from libraries, lawyers, and software dealers. This law covers licenses for software and online products that are non-negotiable and must be agreed to before using the content. While this takes away the fear of the negotiation process, some of the things companies and vendors are trying to push through with it, like forbidding public criticism of their work and content, show that these companies are not operating with libraries or the public in mind. Instead, they will do whatever they can to protect their piece of the pie. If this is what at least some vendors want to see as law, it makes me less likely to look kindly on online vendors as a whole. Hadro's article in Library Journal, which discusses scholars winning an open records case against two major e-content publishers, Elsevier and Springer, also makes me believe that these big companies, at least, are not on libraries' side and would be difficult entities with which to negotiate. Elsevier and Springer argued that they should not have to release their contracts with the University of Texas because the contracts were secret and disclosure would hurt them. While the article only gives their official reason, that the contracts acted as "trade secrets," I would assume that the actual reason they do not want other libraries to know the content of the contracts is that they do not want those libraries to realize what rights and deals they might be missing. With this kind of secrecy and distrust among content owners about libraries getting rights, how can libraries be trusting in turn?

So, with all of this clear reason to dislike licensing and distrust publishers, why does Harris insist on saying that trust exists? I think it is a small lie used in an attempt to get more librarians willing to negotiate licenses and stand up for their needs, instead of accepting whatever license a company hands them to avoid the stress and drain of the negotiation process. By saying one can trust publishers and actually reach a win-win solution, she may make people more willing to give any type of conversation a go. Even if they do not get everything, they might end up with more than if they had just accepted a license at face value. I think this is a worthy goal, but as I have mentioned, paying attention to any of her cautions will make one doubt her words, which is unfortunate. Licensing is so difficult and technical, though, that I do not believe anyone could make it sound fun or even positive. Harris makes a good effort, and her reasoning for the tone is sound, but I do not know how many licensing-shy librarians it will actually convince.

Thursday, September 15, 2011

Removing the Copy from Copyright.


In my last post, I commented that the United States needs a new copyright law, but did not know what form it could take. Then I read chapter 12 of Litman's book Digital Copyright, in which she proposes a new form of copyright, based not on controlling all copies derived from a work but instead on commercial exploitation. In addition, she would make explicit the right of private parties to make electronic copies and to cite electronically using hyperlinks. Most interesting to me, she would make all playing with and building on copyrighted work legal as long as the new work links back to the original work it draws upon. Basically, all copying would be legal if it did not pose a significant threat to the profits of the copyright holder. Her reason for the switch ties into something I noted in my last post: copying has become a lot easier as of late. Due to the difficulty of copying up until the mid 20th century, "multiple reproduction was a chiefly commercial act" (Litman, pg. 178). Therefore it made sense to judge commercial exploitation by the copy. Now, however, copying is easily accomplished in private for one's own creativity or enjoyment, and often has nothing to do with exploitation. The assumption that all copying harms profits is therefore no longer true.

Now, I do not know if she is correct in saying this law better captures the spirit of copyright as intended in the Constitution (not being a legal scholar), but I find it intriguing. In this post, I want to explore what I believe its effect would be if enacted. I am going to do this by examining how this law would affect both library licensing practices, as described in Lesley Ellen Harris' Licensing Digital Content, and fair use, as exemplified by the attempted CONFU library guidelines described by Georgia K. Harper in the website "Copyright Crash Course" (located here).

In terms of licensing, I think Litman's proposed law would help libraries be less scared of litigation and therefore allow them to create simpler licenses. Reading Harris' book, it seemed to me that she was speaking to librarians who were afraid, both of the licensing process and of the content owners. After reading the first two chapters of Harris' book, I can understand that fear. The book explains that libraries often have no permanent rights to the material they purchase, that licenses must be struck for everything, even using a photograph on a website, and that lawyers are necessary so one's library and patrons do not get screwed over. It's intimidating stuff.

Under Litman's new law, copying an entire journal's content from another library and then offering it to your patrons would still be considered commercial infringement. However, posting a photograph on a library's website would not require permission. As long as the photograph on the website links back to its original location, I do not think this use would hurt the original photographer's ability to make money, and it would therefore be legal. Basically, agreements would still be needed for large-scale, expensive purchases, but the library would be freer to use other work in small ways. In addition, with a clear law that gives private individuals an explicit right to copy, libraries would not need to constantly worry about having to monitor their patrons, or about content owners trying to infringe on their patrons' privacy through electronic monitoring. As individuals, patrons would have every right to make their own copies, either electronic or print, and share them with others. Libraries would have much more weight behind their own and their patrons' use of materials, and hopefully this would make licenses less complex and librarians less fearful.

In terms of CONFU, guidelines for fair use would hopefully become clearer, and lines like "every reasonable definition for fair use is fair game for a lawsuit" (Harper, section 7), and the fear they inspire, would no longer be needed. While wholesale copying of textbooks to hand out to a class would still not be fair use, as it would hurt the profits of the textbook company, the copying of articles for a small discussion group by a student would be private use and would therefore be completely allowed (again, as far as I can tell). Using a picture in a class PowerPoint with proper citation but without permission would also be fair use. While there would still be ambiguity, I believe this would make fair use a bit broader and a bit clearer.

So the result, as far as I can see, would be an environment friendlier to fair use and less full of the fear of litigation. Of course, I fully admit I could be interpreting Litman wrong. I seem to be rather poor at coming up with correct interpretations of the law in general. For example, this week I read four different court cases about copyright and fair use, and I guessed wrong on every final ruling. Good thing I am not planning on going into law. Because of my issues with legal interpretation, I would be interested in hearing what others think. Do you think my analysis of how Litman's proposal would affect library fair use and licensing is accurate, or did I miss the point entirely?

Tuesday, September 13, 2011

Copyright versus Creativity

Greetings!

It has been a busy week for this library student, and so I have not yet completed all of the readings. As such, I will be making a second blog post once I have finished. However, I have been thinking a lot about one particular topic that has come up in Litman's book Digital Copyright and in a video we watched in class, RiP!: A Remix Manifesto: that of copyright versus creativity. I just wanted to take the time to write down my thoughts.

Litman points out in her chapter "Just Say Yes to Licensing" that copyright rules have not taken precedence in people's minds because they do not seem to make sense. People cannot fit them into their mental and moral frameworks, and end up ignoring them. It is not that they set out to be thieves; it is that they feel they have a right to use others' works in interesting and different ways, to play with things, and that the government should not have the right to control this. RiP!'s director, Brett Gaylor, shows us the end result of this disconnect: people have taken the material available to them online and created whole new art forms based on playing with and sampling this material. They are borrowing in a way that they feel is fair. Hip hop and rap artists, as well as electronica artists, rely heavily on samples of other people's work. YouTube is filled with videos mixing an image from one source with the audio of another, often to humorous effect. I listen to and view this type of work every day, without thinking about the fact that it might be breaking copyright. Even though I am a library student (and so technically should know better), I still suffer from this disconnect. I can't believe that what I am watching on YouTube, made simply for fun, not profit, could actually be illegal.

Processing all this, I realized that the main reason I find it so difficult to wrap my mind around current copyright rules is that, as a lover of literature and music, I understood long ago that there are very few completely original works. I know that copyright, as both Russell and Litman make clear, does not protect ideas; it protects manifestations, no matter how derivative they are. But that is the point. Works have always drawn, to a greater or lesser degree, on what has gone before. One of my favorite works, as a true geek, is Star Wars, a work that, while new in its approach, was directly based on the hero-myth structure described by scholar Joseph Campbell. An even more blatant example of borrowing in Star Wars is the styling of Naboo in the prequel trilogy. Compare the picture of Naboo to Waterfall City from the excellent book Dinotopia:


Production Art of Naboo
Waterfall City by James Gurney.

All of these works, Star Wars, Joseph Campbell's The Hero with a Thousand Faces, and James Gurney's Dinotopia, are copyrighted. Yet they build and borrow from one another, creating new from old. So far, it has not destroyed the world.

When digital users manipulate copyrighted content, they are simply following in the steps of what humans have done for thousands of years: see a creative work, be inspired, and from this inspiration build something new. The difference now is that material to manipulate is easier to obtain, and so more people can work with it. Copyright holders can no longer rely on people being physically unable to get their hands on material as a barrier to such creation. As Litman says, "The old balance is gone. Whatever approach we choose, we need to find a different balance" (pg. 115). Litman offers some hope: in the last copyright negotiation, over the Digital Millennium Copyright Act, user groups, like libraries and law professors, actually tried to stand up to restrictions via the very medium those restrictions were trying to control: the Internet. Sadly, many of these public advocacy groups, including libraries, were ignored. However, I believe that we, as librarians and the public, can strive to create a new balance, and that balance should be one in which creating new things out of old is celebrated, not quashed.

To top it off, here is one of my personal favorite remixes. Without this type of play, I would have never known that Crank Dat by Soulja Boy and Carol of the Bells have basically the same beat.

Thursday, September 8, 2011

Confused by Copyright? Blame Industry.

Titles referred to in this post

  • Jessica Litman (2006), Digital Copyright, chs. 1-6
  • Carrie Russell (2004), Complete Copyright: An Everyday Guide for Librarians, ch. 2
  • Mary Rasenberger & Chris Weston (2005), Overview of the Libraries and Archives Exception in the Copyright Act: Background, History, and Meaning [Section 108]

Working at a small liberal arts college library, I run into questions of copyright and fair use constantly, especially concerning electronic reserves and interlibrary loan. While I have worked at this college for over a year, I never felt that I knew the why or how of copyright and fair use; I simply picked things up by overhearing them. I did not even know whether the rules I followed, such as that using more than 10% of a text for teaching violates fair use, were even true. Only after reading the selections listed above do I feel that I even begin to understand the why of copyright and where libraries' rights actually fit. In this post, I want to explore why I believe, based on the above readings, that my knowledge, and indeed the knowledge of my fellow librarians, is so spotty concerning this issue. The answer? The law was not written for us.


Both Litman and Rasenberger & Weston provide insight into why copyright and fair use confuse so many of us (including me!). Litman, not hiding her distaste for current copyright directions and industry, explores in detail how copyright law in the 20th century became a process of industry pandering, with Congress pushing copyright decisions onto current industry stakeholders, thereby avoiding political heat. This has resulted in a law full of detailed, narrow exceptions based on complex compromises, with little thought for the future. Her clear bias does make me wonder what the other side might have to say, but after reading Russell and Rasenberger & Weston, I am inclined to agree with her assessment of both the state and the cause of the law. Now, I admit that I also tend toward anti-corporate leanings, which means I am naturally inclined to agree with Litman. However, her claims are backed up by Rasenberger and Weston in their analysis of section 108 of the current copyright law, which governs library photocopying rights. In their article, they discuss how book publishers, concerned about losing any profit to copying, held up any law on this matter until 1976, over 15 years after one was first proposed. In the end, the law, around three pages in total, is so riddled with exceptions born of publishers' insistence that it takes Rasenberger and Weston over nine pages to explain.


So, corporations are in control of our copyright, making it narrow, confusing, and not built for the public to understand. Because of this, copyright and fair use continue to confuse librarians, and continuing to leave copyright in the hands of industry will, I believe, not change this situation. Why? In my biased, liberal view, corporations have one goal in mind: survival through crushing the competition. This goal seems to be borne out not only by the simple fact that industry squabbles kept copyright at a standstill from 1909 to 1976, something just amazing to me, but by specific, more recent actions as well. One situation that stands out in my mind is Litman's discussion of how cable companies negotiated with Congress to make the new satellite companies pay more than they themselves did to broadcast copyrighted programs.


This corporate control seems to have led directly to some of the issues we now face in the so-called "digital" age. One only has to look at some current video game companies and their use of DRM (Digital Rights Management), which forces a person to be online and monitored every time they play so that the legal purchaser cannot install the game on numerous computers, to see where profit-driven corporate influence over the law has led. And because of this, librarians live in fear of angering the corporations by breaking one of the many laws that fill the books, unable even to understand sections 107 and 108 of the 1976 copyright law, the ones that apply to them most directly. As Rasenberger and Weston's quotations of the law show, it is filled with warnings against crossing any line, and with slippery words like "fair," as in "the library must first consult the copyright owner and trade sources to determine that an used or unused copy cannot be purchased at a fair price" (section 108e in H.R. REP. No. 94-1476, at 75-6). What is a fair price? Given this ambiguity in language, the frequent mention of severe penalties for breaking the law, and, in recent years, the increase in lawsuits over digital piracy, is it any surprise that libraries are not only confused but afraid of what the corporations might do if they even try to exercise their law-given rights?


Three Possible Solutions


So what can be done? While rewriting copyright law as three pages of prose that someone in grade school could understand, as Litman suggests, would be ideal, it is probably not realistic. One thing that can be done is to write to one's congressperson and let them know that copyright should be maintained with the public consumer in mind, without whom the copyright interest holders would not exist. Push for transparency in copyright discussions, and push for Congress to get involved rather than pawning the law off on people with vested interests in making money from publishing and production. Congress might not listen, but it is a goal for which to shoot.


Another thing is to be educated. A great place to start is Russell's book. Use it to educate yourself and the staff wherever you work. Through this book, for example, I found out that what my college currently considers law is in fact only from a guideline called the Classroom Copying Guidelines. It is not binding and was meant as a guideline for minimum use, not maximum. That 10%-of-a-book rule? Not true at all. If one can make the case that more of the work is essential for teaching or general education, and that the use will not be so heavy that it hurts the company's profits, one can use more! Many libraries believe that these guidelines are the word and letter of the law, but this belief comes from a lack of education and study. The truth is that fair use is far more wiggly than that, and it requires judgment calls more complex than a simple checklist. Because of this, education is essential, and Russell's book is an excellent place to start.


Finally, a strong copyright policy, and especially a fair use policy, is essential for a library to have in place. While none of the readings specifically mention this, and I am in no way a policy expert, I think such a policy would be essential to maintain consistent practice, so that patrons do not get confused by different people giving them different privileges. It should be based on the guidelines, but should allow for broader interpretation. I admit, I am not sure how such a policy could be made consistent yet flexible enough to handle all the factors that go into determining whether a use is acceptable under fair use and copyright. What do you, my possible readers, think?