Saturday, December 11, 2010

Reflection Post


For our final blog post, our assignment is to provide an assessment of this reflective experience and look back at what we’ve learned and how our blogging project has contributed to that. So here goes…

This experience of blogging my reflections on the coursework in ERM got me thinking that there may be an upside to blogging that I had previously ignored in favor of sharing the sentiments of those who claim: “Blogs: never have so many had so much to say about so little to so few.” :) I used to see blogs as online diaries, I guess, which feels extremely invasive for someone like myself who does as little Facebook updating, Twittering, and other modern social networking as possible, in an attempt to avoid advertising myself for future generations’ entertainment and study. However, this class found me looking forward to blogging as a medium by which I could synthesize the readings for class after having digested them and listened to discussions and lectures in class. Being able to post my thoughts about things here helped keep me accountable to intentional reflection on coursework.

Also, more directly related to this class, one element of my blogging in which I grew was the pictures I attach to each entry. Prior to this course and in other projects, I would simply Google images and put them up, or search in Wikimedia and post them without attribution (guilty as charged – but I’ve gone back and added attribution now!). However, I now pay much more attention to the licensing information on Wikimedia for each photo and website, and am much more interested in the concept of Creative Commons licensing. At first I was a lot more paranoid and let it almost affect my “creative thought and growth,” in constant fear of copyright trouble, as if walking through a little minefield where infringement could explode at any second.

So on the whole, my thinking about ERM has changed in the following ways:
  • Copyright is not as scary/frightening/overwhelming/ominous. Take your pick.

  • I understand people’s concern with wanting to protect their stuff through copyright, though there are instances of copyright that still surprise me.

  • Overall, I have a bigger-picture perspective on things like click-through licenses, disclaimers, and getting electronic access to journal articles.

  • I actually like looking up license and terms of use agreements on the databases I’m using at school. And know how to read them.

  • I am up to speed on the real-life issues facing libraries with ERM, such as Georgia State, and the initiatives currently in the works that seek to further advance this area of the field.

  • I see how standards are a good thing, especially in LibraryLand, because without them things get very complicated.

  • I know how to look for supplemental information regarding licensing, copyright, and e-resource management, and I’m familiar with the foundational concepts and terms used by ER librarians on a day to day basis.

  • I know what the heck is going on when I click Find It!

  • And finally, I have grasped a piece of the LIS puzzle that I didn’t truly understand before, and see where it fits in as well as how interconnected it is with the other pillar issues in the field (i.e. ownership versus access, collection development, virtual versus physical space, etc.).

I think my most important post was the Unit 11: Finding Content post. Perhaps it is because I had a whole month to think about it (again, apologies for the delay in posting), but it is also the post where I suddenly started seeing everything come together as far as concepts in the class. Copyright and licensing issues are big picture, but those all depend on what goes on at the link level with OpenURLs and link resolvers. I realized how interconnected everything we’ve been working on this semester is, and even more so, how much of an effect Unit 11’s concepts in particular have on patrons’ access to information. Also, my “What’s In A Name: Definitions” post really helped clarify so much of what we discussed in the latter half of the semester and connect it to everything from the beginning of the semester. I always think it’s a good thing to hear something explained a number of different ways, so this helped immensely with reinforcing our course content.

And in closing, here is the quote that puts it together for me, where ERM, libraries, and users intersect and help further illuminate the job we have as information professionals, perhaps even ER librarians:

“The behaviors and expectations of today’s users are changing, so make the
effort to learn about them. For example, users take for granted, and expect,
relevance ranking; they like the user experience to be intuitive, without
compelling them to learn how to use the system; and they expect to be able to
interact with the system by contributing, for example, reviews and tags that
will help them and others find the information in future” (Walker, 2010).

Farewell for now – I’ll be back again soon (*cue uproarious laughter and smirks of, “That’s what you said last time!”) with news from the job search, or even hopefully from my new job, if I find one!

Happy Holidays!

Image credit: Wikimedia by DR04

Unit 14: Perpetual Access


This week focused on perpetual access. People are more familiar with preservation of physical items, but preservation of electronic resources has the spotlight this week. Why is this important? Watson (2008) writes, "If a cohesive preservation strategy for electronic resources is not devised and implemented in the near future, there is a real danger that current research output may be lost to future generations." I thought perpetual access simply meant being able to access the content of an e-journal after the library no longer subscribes to it, but it turns out there's a bit more to it, and the definitions get a little complicated. Here's what Stemper & Barribeau (2006) write about perpetual access:
  • Perpetual access right to an electronic journal: the right to permanently access licensed materials paid for during the period of a license agreement (not to be confused with the right to copy journal content solely for preservation purposes).

  • Perpetual access right vs. archiving right: archiving right is the right to permanently retain an electronic copy of the licensed materials. "The emphasis of [perpetual access right] is retaining access, not on how such access is achieved (through the publisher's site, a locally retained copy, or a third-party site)."

So, as I understand it, perpetual access right means always being able to access the content (regardless of how dynamic it is) rather than just having the right to keep a static copy of the content you licensed. As I was working through the readings, I kept thinking about the parallel to OpenURLs versus static links. Basically it comes down to this: when a library purchases physical materials, it has back copies archived and available. However, e-content is another story, including anything that has been accessed through a subscription. Sure, a library could ILL the item, but Watson (2008) points out that more recently, ILL clauses found in licenses actually make it harder for institutions that still have the resources to loan them to other libraries.

The question then turns to how libraries and publishers are trying to make these resources accessible in both the short- and long-term because, "Just-in-time collection development is on the rise, and just-in-time delivery of content is becoming increasingly attractive to libraries" (Watson, 2008). I liked that quote because in one of my early SLIS classes, we discussed the old "just-in-case" model of collection development and its evolution to "just-in-time," to which Watson (2008) also refers. The "ownership vs. access" issues that are pillars of LIS are now not as cut and dried as they used to be (but really, were they ever?). :) Accreditation bodies, writes Watson (2008), are starting to change what even constitutes ownership in libraries.

Stemper & Barribeau (2006) write, "What type of access a lapsed subscriber might have at such an archive and what license wording is specific enough to ensure such access are the critical issues." So the major concerns affecting preservation of e-resources are:

  1. Maintaining perpetual access: Watson (2008) writes that the risk of losing perpetual access hasn't "had any significant impact on libraries' decision making in terms of the models and licenses they rely upon to provide their users with access to e-resources." Why? a) Patron pressure: they want current access, so libraries just need to ensure they provide current access. b) Availability of content: Watson (2008) predicts that someday in the near future, print copies of journals will have less content than the electronically published copies, plus more libraries are collecting electronic-only, or at least mostly electronic. And lastly, c) Financial pressures: e-only eliminates some of the big costs of space, time, and upkeep. It isn't that perpetual access would be impossible to get; Watson (2008) seems to think publishers would be open to providing it, but people on the library side are in such short supply, or under such time crunches, that negotiating for it is challenging, and libraries couldn't even supply the staff to set up perpetual access for e-resources.

  2. Data copying and migration: Rather than copying to paper in order to preserve, libraries are now looking at digital means. However, digital storage can be expensive thanks to the cost of hardware, software, technical expertise, etc. Additionally, technology keeps changing, so it might not be safe to keep digitally stored things in the same system that stored them a few years ago due to compatibility issues.

  3. Preserving content in the "information explosion": Watson (2008) refers to the information explosion as a profound increase in the creation, modification, and distribution of digital content. For the most part, this means electronic content. In other words, finding a way to preserve things like websites, blogs, wikis, and online reference works. Having an archive of past versions of these media would benefit researchers, for example, in citing these sources or in going back to verify citations against the original version of the work.

What is being done to address these concerns? "Libraries are driven by a shortage of funding and space, and patron demand for desktop access, to cancel and discard print journals in favor of online, but this same lack of funding restricts their ability to invest in or support preservation initiatives" (Watson, 2008). Despite lack of available funds, here is a list of the current initiatives for e-preservation that I've summarized from Watson (2008):

Third Parties:

  • JSTOR: (Journal Storage Project) subscription-based, fee-for-service access to back issues of journals on a collection-by-collection basis, not individual titles. Must have a subscription to keep getting this service; no perpetual access otherwise. JSTOR content supplements subscriptions to current content, “to this end, there is a rolling embargo, the length of that varies from publisher to publisher” (Watson, 2008) in order to not damage current content sales from other publishers. Interesting note: JSTOR is working with the University of California and Harvard to make a print archive of all its online holdings.
  • LOCKSS: (Lots of Copies Keep Stuff Safe) Individual libraries run LOCKSS servers for all types of electronic content, but putting content on the server requires permission in the license with the publisher. Servers are backups to each other, creating a distributed environment. The benefit is that it can serve as a sort of perpetual access for libraries’ cancelled subscription content. Started with funding from the Mellon Foundation. Libraries maintain their own LOCKSS servers, so publishers don’t have much control over content on them. Not good for e-books.
  • Portico: started with funding from the Mellon Foundation in 2005. A “dark” (unavailable) central archive until a publisher stops offering back issues; publishers put their content there and pay to have it stored. Portico manages the server for institutions (a good option for libraries without the tech support for it), giving publishers a little more control than with LOCKSS. Eileen Fenton spoke about the uncertainty of the field of electronic preservation, how even efforts like Portico don’t necessarily guarantee long-term access, and the importance of synergizing efforts from around the field to come up with solutions. Also not good for e-books.
  • Google Book Search, PubMed Central

Libraries:

  • CIC: consortial effort, “Libraries determined to go e-only to save money may wish to allocate money for print retention projects and seek membership in a consortia that has such projects” (Stemper & Barribeau, 2006).
  • Institutional repositories: material produced by researchers within the institution can be stored there (uncertainty remains about the completeness and long-term integrity of repositories, and preprints/postprints aren't really good substitutes for the published journal article).
  • IFLA

Publishers:

  • “...questions remain about how reliable publishers are as a solution to long-term preservation needs. Publishers are bought, sold, and go out of business. Their main concern is profit rather than preservation” (Watson, 2008). Libraries are turning to third parties mostly in response to this.

Governments:

  • Some countries require publishers to submit a print copy of all publications to a legal depository. Much like NIH’s mandate to deposit articles that received NIH funding to PubMed Central.

Foundations:

  • Mostly there for the money to support the initiatives.

On the whole, there are a lot of unknowns in the topic of e-preservation. How will it be funded? Who should be in charge of it? What technology will last long enough to make it worth doing now? Much like with the discussion about e-books and accessibility, each initiative has different high points, so as future initiatives continue to evolve, the aforementioned questions will continue to be addressed by the key players.

Image credit: Wikimedia by Kris De Curis.

Friday, December 10, 2010

Mobile Devices ERM Presentation

Here are the slides from my ERM class presentation about mobile devices and licensed electronic content. My presentation was November 19. Kudos to Jessi Van Der Volgen for the tip on using SlideShare.com to get my slides to appear here! Westby erm presentation

Enjoy!

Wednesday, December 8, 2010

Unit 13: Reflections on the Worklife of the ER Librarian


While working through this unit’s readings, I was interested to learn about the crossover between ER librarians and what I understand to constitute special librarians (which, for non-libraryland people, probably sounds very funny, but is – in fact – an actual type of librarian). Both seem to negotiate in the spaces between business and libraries and therefore must be skilled across multiple professional disciplines: business, law, communications, technology, etc.

"Librarians who find themselves in the role of ER manager often experience this same unsettled feeling because of rapid changes in product availability, new technologies, pricing structures, and licensing terms...Like the products themselves, the ER librarian position still evolves. It plays many roles, e.g. traditional librarianship, Web design, systems management, and accepts many responsibilities, e.g., contract attorney, business manager, technology troubleshooter" (Albitz & Shelburne, 2007).

Yikes!

This is probably where the process mapping discussed by Afifi (2008) comes into play. Process mapping is used in systems development and for managing large and complex projects, whereby business processes are mapped and continually re-evaluated to improve performance and productivity. In Afifi’s (2008) own words, "Process mapping itself is a way of breaking down a process into distinct steps with beginning and end points, somewhat similar to a flow chart.” A map shows an input and an output, starting with the big picture and getting more detailed as it continues. It is also called business process reengineering, and the primary benefit is that it helps create flexible organizations and illustrate communication breakdowns, which is especially useful for constantly changing environments like managing electronic resources from a back-end delivery perspective. "Process maps can help businesses understand and control how their companies function, what actions are involved and who is performing the necessary steps to manufacture a product or manage a service" (Afifi, 2008). But where would a topic in ERM be without the need for standards! NISO and the Digital Library Federation (DLF) have tried to help in standardizing process mapping, but according to Afifi (2008), there is no consensus yet.
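
To make the flow-chart idea concrete for myself, here is a tiny sketch in Python of what a process map might look like as data: discrete steps with owners, inputs, and outputs. The renewal workflow, the step names, and the fields are entirely my own invention for illustration, not anything from Afifi (2008).

```python
# A minimal, invented sketch of a process map: each step has an owner,
# the inputs it needs, and the outputs it hands to the next step.
from dataclasses import dataclass

@dataclass
class Step:
    name: str      # what happens in this step
    owner: str     # who performs it (where hand-offs and breakdowns show up)
    inputs: list   # what the step needs before it can start
    outputs: list  # what it passes along

renewal_process = [
    Step("Collect usage statistics", "ER librarian", ["COUNTER reports"], ["usage summary"]),
    Step("Review license terms", "ER librarian", ["current license"], ["terms checklist"]),
    Step("Negotiate renewal", "Acquisitions", ["usage summary", "terms checklist"], ["signed renewal"]),
    Step("Update knowledgebase", "Systems", ["signed renewal"], ["updated holdings"]),
]

# Walking the map end to end makes the hand-offs (and any missing inputs) explicit.
for step in renewal_process:
    print(f"{step.owner}: {step.name} | needs {step.inputs} -> produces {step.outputs}")
```

Even a toy map like this makes it easy to spot where one department's output is another's input, which is exactly where communication tends to break down.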

Afifi (2008) isn’t the only scholar in our semester readings to have touched on the importance of process mapping. In Unit 11, Weddle (2008) mentioned how important workflow is because different departments use the same tools and all share a single entry point. Also, Unit 11’s Grogg (2006) pointed out, in relation to the necessity of external linking when going from one content provider to another, that despite the opportunities it brings, process mapping is very difficult because things like external linking are hard to map and manage.

At first, I was surprised to see that the Albitz & Shelburne (2007) survey data showed most people had worked in reference or as a bibliographer prior to becoming an ER librarian, as I had assumed people in the ERM line of work would come from the Tech Services persuasion. However, in retrospect, I’ve realized that it would be more beneficial to come from hands-on experience with the resources in order to truly understand how they are used and what users need. Albitz & Shelburne (2007) highlight this when they write about ER librarians’ "...increased complexity of the position and the need to focus the incumbent's responsibilities on ER management rather than on reference or bibliographic instruction.” This, however, got me thinking about whether there should be a separate degree or specialization for this type of work. Back in LIS 450 during my first semester at SLIS, we talked a little about the divide between IT people, or computer science, and librarians. However, ER librarians really seem to be the hybrid breed here, needing a little of everything.

In the handy guide for the “serialist” provided by Griffin (2009), I found the suggestions helpful, and the guidance it provided is something I would expect from the people who hired me into a serialist position. However, I wonder just how much time someone actually has to go through all the extra research and discover materials by browsing the print reference collections that libraries keep on hand, especially at a place like UW. I think it's hard enough to really get into the system and workflow of a new employment environment, and I've heard multiple estimates of how many years (yes, years) it takes someone to actually get comfortable with the collection at their library. Thus, going into a new job learning all of this at once, I cannot imagine that this is easy. Ha. Understatement of the year. But it was nice to see that Griffin (2009), too, was an English major like myself and could make it in this area of librarianship – there is hope yet!

And just to throw it out there, I think web developers in libraries will follow a similar pattern that ER librarians have, in going from back-burner "on the side" positions to very up-front and vital to library operations. As libraries recognize the importance of a web presence, I think the jobs of “Web Librarians” will be permanent full-time positions sought after with great urgency. It will be interesting to monitor how this changes the theory behind LIS and either the further separation between or blending of libraries and IT.

Image credit: Wikimedia by Petritap

Unit 12: E-Books: Audio & Text


This week’s course content centered on e-books: audio and text. I am one of the seemingly few public library patrons lamenting the withdrawal of books-on-tape. My old Lumina sports a perfectly functioning tape deck, so all this talk of digital audio books (DABs) has me a little bitter... :)

Grudges aside, Peters (2005) kicks things off this week by giving a nice overview of the major players in DABs in libraries, which I’ve listed below with my notes from each:

OverDrive - Play on a PC with free software, transfer to a playback device and/or burn to disc, WMA files, simultaneous users, media markers for jumping between sections, allows consortial agreements, good administrative module

Playaway - Has some different language materials, wide range of content, can purchase by individual title, circulates at most libraries as an actual physical item

NetLibrary - No software needed, can be played on multiple devices, no burning, need to set up an account, has different file types and sound qualities - radio and CD - which allows for users on dial-up and other connections, can preview the book much like flipping through a few pages

TumbleBooks - Has different language materials, games and books and puzzles, need to be online - no downloads, does not offer a consortial pricing model

Audible.com - More individual subscriptions, issue: not allowing multiple people to "check out" a DAB copy simultaneously, libraries need in-house circ, no way to independently load books onto a player because content must go directly to the device, frustration of having to relearn new technologies

So to rewind a moment, the National Library Service for the Blind and Physically Handicapped started a digital talking book program in 2008, and it is a sector of the field that is emerging very quickly. Initiatives to develop DAB systems that enhance accessibility for all patrons continue to grow, including the Braille and Audio Reading Download website. Accessibility issues are very real but not often recognized as the important, driving factors they are for this segment of library user populations. Today there are a lot of different devices available, and the readings and videos for this week brought to mind some of the issues associated even with things like DABs. DABs have great potential to bridge accessibility gaps, especially for users with vision impairment or physical handicaps. However, accessibility issues are still becoming apparent as more DAB technologies find their way into libraries, such as web access to content (Audible) and the physical operation of a device (Playaway), for example.

I began to think about the importance of keeping in mind that when things like mp3 players were created, they were born out of convenience, something to play music on wherever you were (and I even owned an early model that doubled as a flash drive for my Microsoft Office files, too). I don’t believe they were created primarily as the media by which people with disability and accessibility concerns would receive information. As the industry realizes more and more just how well suited these devices are to fill a gap in library accessibility, the devices will continue to mature, as does the software that accompanies them. It is difficult to tell at this time what a future device that would “have it all” would look like. My immediate thought was to just take the best of each one and create a single device that would, in fact, have it all. But then I start to question whether, for example, there are any areas where adding a feature to one device would take away from the best advantages of the original device. For example, would adding something to a DAB gizmo or program end up compromising its accessibility in another way?

As a librarian, I imagine it is difficult to work around and with the different players, appreciating the diversity each offers but being uncertain about how best to integrate them into your library holdings, let alone a single cohesive DAB collection. Peters et al. (2007) note this: “...if an individual library subscribes to NetLibrary audiobooks and gains access to OverDrive audiobooks through some consortia or statewide initiative, that library may want to pull all this digital audiobook content together into a seamlessly whole collection and service that it presents to its users.” Also, as one of the videos we viewed for this week noted for e-books, there are challenges presented in purchase versus lease plans for DABs. “Under the purchase model, libraries purchase and own each copy of each title. On the other hand, a leasing model offers multiple concurrent users access to a title” (Peters et al., 2007). And we find ourselves again back at one of the pillar concerns facing libraries today: ownership versus access.

With regard to pricing, Peters et al. (2005) write that the authors think an unlimited simultaneous-user pricing model is best and that it should be an industry standard, along with variable-speed playback, enlarged content scope, and more focus on usability than style. I found this a little ironic considering that two years later, Peters et al. (2007) picked apart every stylistic detail of DAB content, including the narrator’s voice and background effects. This part of the DAB overview for libraries bothered me somewhat because it seems like an awfully nitpicky thing to concern oneself with when – in comparison – people who read physical books don’t have a say on industry standards regarding typeface, color of pages, size of print or pages, etc. Perhaps these stylistic details truly are a serious issue to users, but I think we need to consider the accessibility problems still in the pipeline before these other, surface-level details. I will note that I found it interesting that Peters et al. (2007) split the collection content of DABs into frontlist, backlist, and public-domain titles, because my big hang-up about the world of DABs had been that I could only ever access really old books on Playaways. However, given how fast the market is changing, that’s probably different by now.

Seeing the License and Agreement Terms highlighted in the Peters et al (2007) article put into perspective how as librarians we are faced with an ever-evolving licensing environment. Technology really does continue to change everything and whether we want to or not, it will affect us. I liked the variety of reports that are available through OverDrive with regard to DAB circulation, where you can see activity charts as well as current waiting list reports and waiting list history reports.

And to give a quick shout-out to e-books, I didn’t realize that they were not as highly favored in academic library settings. They have received such positive attention in the public library sector and general population, but I also had not considered e-books as a part of a university library collection prior to this week. I don’t know, however, if the Think Tank’s suggestion to catalog the e-books that a library acquires would necessarily solve the problems they highlighted, especially considering that the source of many of the errors was typically not on the library processing side but rather on a more fundamental level with the e-book publishers.

Image credit: Wikimedia by Free Software Foundation

Unit 11: Finding Content


During my week home, our unit focused on finding content with electronic resource management systems and discovery tools, primarily things like OpenURL and OpenURL standards, link resolvers, and digital object identifiers, or DOIs. Boston & Gedeon (2008) discuss each of these tools and the ways to use technology to link to library resources. In the old days (which, relatively speaking, really isn’t that long ago), internet resources were located at static URLs on servers, but they were always at risk of breaking because content would get moved to new servers, sites would undergo redesign, etc. So the new-age solution is to use the digital object identifier protocol and the OpenURL standard to find items electronically wherever they may be. Today’s practice, according to Boston & Gedeon (2008), is the OpenURL standard combined with the added features of link resolving technology.

One of the problems with implementing linking standards is the “appropriate copy” problem: how to determine which copy of an article is the one to which a link should direct users. One solution would be context-sensitive links from databases to an institution’s resources by way of a publisher service. Another solution is to use a link resolver, like SFX. Link resolvers, as Boston & Gedeon (2008) write, were precursors of the OpenURL, which added the standards-based linking protocol they had lacked. The benefits of using this approach are interoperability, resource discovery, and linking between resources in a personalized and user-friendly way.
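
To help myself see what actually travels in one of these links, here is a small sketch in Python of building an OpenURL 1.0 query in the key/encoded-value (KEV) style. The resolver address and the citation are made-up examples; only the ctx_ver and rft.* key names follow the Z39.88-2004 convention.

```python
# A minimal sketch of constructing an OpenURL 1.0 (KEV) link that carries
# citation metadata to an institution's link resolver. The resolver base URL
# and the citation below are invented for illustration.
from urllib.parse import urlencode

resolver_base = "https://resolver.example.edu/openurl"   # hypothetical local resolver

citation = {
    "ctx_ver": "Z39.88-2004",                       # OpenURL framework version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # the referent is a journal article
    "rft.atitle": "An example article title",
    "rft.jtitle": "Journal of Examples",
    "rft.issn": "1234-5678",
    "rft.volume": "12",
    "rft.spage": "34",
    "rft.date": "2008",
}

openurl = f"{resolver_base}?{urlencode(citation)}"
print(openurl)
# The source (e.g., a database citation) emits this URL; the resolver reads the
# metadata and decides which "appropriate copy" to offer the user.
```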

The primary concern when looking at all these technologies is interoperability; as Weddle (2008) points out, things like the link resolver will affect other things like the MARC record service. All are connected through a single point of data entry. Weddle (2008) does a nice job reviewing some of the tools available, two of which I list below with some notes for each:

  • A-Z list: the service provider is responsible for changing the global knowledgebase when resources are added/removed, and the librarian worries only about the local list; facilitates known-item searching
  • OpenURL and Link Resolver: OpenURL, via a link resolver implementation, provides crucial links among resources and is closely connected to the A-Z list; allows a patron to jump directly from browsing citations to the resource housed in another location (the source and target relationship)

Weddle (2008) then introduces the topic of federated searching, which addresses the need to search across many e-resources and helps with topic searching. Walker (2010) also discusses federated searching, or metasearching. “Federated searching represents a direct response to users’ behaviors and preferences…Through careful selection and grouping of library resources, libraries still provide value-added services—services that are not available via free Web searching,” writes Weddle (2008). And Walker (2010) agrees: “Web resources are more ‘discoverable’ when other sites link to them,” which is what CrossRef, DOIs, link resolvers, and OpenURL help make happen.
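
Just to picture how federated searching works conceptually, here is a bare-bones, purely illustrative sketch in Python: one query fanned out to several stand-in "resources," with the results merged into a single list. None of these connectors are real APIs, and real metasearch engines do far more (authentication, de-duplication, true relevance ranking).

```python
# A toy sketch of the federated search idea: broadcast one query to several
# sources, merge the results. The three "connectors" are in-memory stand-ins.
def search_catalog(query):
    return [{"title": "Example book about " + query, "source": "catalog"}]

def search_aggregator(query):
    return [{"title": "Example article on " + query, "source": "aggregator"}]

def search_repository(query):
    return [{"title": "Example preprint: " + query, "source": "repository"}]

def federated_search(query):
    results = []
    for connector in (search_catalog, search_aggregator, search_repository):
        results.extend(connector(query))              # one entry point, many targets
    return sorted(results, key=lambda r: r["title"])  # crude stand-in for relevance ranking

for hit in federated_search("perpetual access"):
    print(hit["source"], "->", hit["title"])
```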

The benefits of linking at the article level are that it makes citations immediately actionable and enhances the efficiency of online research. In other words, the author and topic are more important than the publisher in terms of identification, so linking at the article level enables the system to go across all publishers and take users to their websites and versions of the document. While it helps fix errors and serves as a business opportunity for publishers, libraries also benefit: “Namely, [DOIs] make linking reliable at any level of granularity possible. In the same way that links drive readers to publisher content, libraries may also see increased usage of acquired electronic resources as an additional benefit” (Brand, 2003).

Grogg (2006) takes a different route in her examination of finding content in ERMS, looking at reference linking. I pulled this excerpt from the article: “Internal linking may seem more attractive because, on the surface, there appears to be no need to address the appropriate copy. If a user has entrance to a content provider’s front door, then the user will most likely have access to whatever lies within the provider’s content database(s). That sort of reasoning is faulty, however, because in addition to the large aggregator issue previously described, no one provider can hope to include all literature cited in every reference list” (Grogg, 2006). To help illustrate this point, Grogg (2006) likens internal linking to a shopping mall and external linking to a downtown shopping district, where the difference is between closed and open environments. External linking (or intersystem linking) is needed when a user goes from one content provider to another.

Brand (2003) looks at concepts that reach beyond citation linking, things like

  • Forward linking: I understand this to be the same as a cited reference search in a single search interface, as found in WOK (Web of Knowledge).
  • Parameter passing: using OpenURL syntax to send a key along with the DOI that enables extra functionality to benefit publishers and users (i.e. a parameter might be info about the source entity).
  • Multiple resolution: user clicks on one link and can see a menu of publisher-supplied options

Walker (2010) is also forward-thinking in a way that I found particularly appealing. Consider for a moment this excerpt:

“The behaviors and expectations of today’s users are changing, so make the effort to learn about them. For example, users take for granted, and expect, relevance ranking; they like the user experience to be intuitive, without compelling them to learn how to use the system; and they expect to be able to interact with the system by contributing, for example, reviews and tags that will help them and others find the information in future.”

It is interesting to me from a “behind the desk” perspective because – with the exception of a handful of physicians I have assisted at the health sciences library where I work – patrons have hardly ever asked me how results are assessed in relevance ranking. But regardless of whether we think patrons should think about this, we as information professionals need to be aware of their expectations, their in-the-moment “this is what a link/library/resource should do” thinking. This might mean examining things like tagging and faceted browsing as effective search strategies, which might once have seemed like less academic methods of inquiry, I guess you could say.

Image credit: Wikimedia by Petritap

What’s In a Name: Definitions

So as I was taking notes over the past month on our readings, I couldn’t help but jot down every definition provided, as my still-“rookie” status with this stuff means I always appreciate hearing things explained a different way to reinforce understanding. Here is a list of some definitions I’ve compiled, to which I could probably add in the future:

OpenURL:
  • open source reference linking using link resolver, source AND target, simplicity and transparency, points to other related resources (i.e. availability in print or other ILL pages) (Boston & Gedeon, 2008)
  • “mechanism for transporting metadata and identifiers describing a publication, for the purpose of context-sensitive linking.” (Brand, 2003)
  • framework has both local control AND standardized transport of metadata

Link Resolver:

  • “…system for linking within an institutional context that can interpret incoming OpenURLs, take the local holdings and access privileges of that institution into account, and display links to appropriate resources.” (Brand, 2003)
  • OpenURL is sent to local link resolver, metadata in the OpenURL is compared with localized knowledgebase, then links if there’s a match. OpenURL framework+CrossRef+DOI=link resolvers are possible! (Weddle, 2008)

Digital Object Identifier (DOI):

  • Managed by the CrossRef system which uses DOIs to reliably locate individual digital items, can be considered problematic because it only takes users to publisher-supplied full text, not to the “appropriate copy” (Boston & Gedeon, 2008)
  • “DOIs link to the full text of information objects housed at publishers’ Web sites. If full text is available elsewhere—for example, in an aggregator or in an Open Access (OA) source—then a link resolver becomes necessary, which, depending on the quality of the metadata and the knowledgebase, can present any number of full-text options, not just the publisher’s full text,” a persistent, unique identifier, can be associated with multiple instances of an article (Weddle, 2008)
  • DOI: NISO standard syntax; prefix (assigned to the content owner by a DOI registration agency; all begin with 10 to distinguish DOIs from other types of Handle System identifiers) + suffix (flexible syntax, composed by the publisher according to internal content management needs, but must be unique within the prefix); an alphanumeric name for a digital content entity (i.e. book, journal article, etc.), allows content to be moved but kept updated so no broken links. The DOI stays the same, while the URL (to which the DOI is linked) can change as needed (Brand, 2003)

CrossRef:

  • service of the Publishers International Linking Association, largest DOI Registration Agency, works as “a digital switchboard connecting DOIs with corresponding URLs” (Weddle, 2008)
  • “nonprofit membership association, founded and directed by publishers. Its mission is to improve access to published scholarship through collaborative technologies…it operates a cross-publisher citation linking system, and it is an official digital object identifier (DOI) registration agency” (Brand, 2003)

Reference Linking:

  • aka citation linking, a user’s ability to move from an information object to another, can be external or internal (Grogg, 2006)

Context-sensitive Linking:

  • “…takes the user’s context, usually meaning his or her affiliations but possibly also the user’s intent for desired information objects, into account; therefore, ideally, context-sensitive linking only offers links to content the user should be able to access,” appropriate copy (Weddle, 2008)
  • “In short, a workable linking framework should take a user’s context into consideration.” (Weddle, 2008)

Faceted Browsing:

  • Drilling down a search, or limiting different facets until you get to your desired end result; takes the guessing out of the process (Walker, 2010)

And to wrap the list up, here’s a nice summary from Brand (2003) of exactly how it all fits together:

  • “The DOI and the OpenURL work together in several ways. First, the DOI directory itself, where link resolution occurs in the CrossRef platform, is OpenURL enabled. This means that it can recognize a user with access to a local resolver. When such a user clicks on a DOI, the CrossRef system redirects that DOI back to the user’s local resolver, and it allows the DOI to be used as a key to pull metadata out of the CrossRef database, metadata that is needed to create the OpenURL targeting the local resolver. As a result, the institutional user clicking on a DOI is directed to appropriate resources.”
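
And to make that DOI anatomy and the DOI-to-resolver handoff concrete for myself, here is a small Python sketch. The DOI and the local resolver address are made-up examples; the two routes (the global DOI directory versus a local, OpenURL-enabled resolver) follow the Brand (2003) description above.

```python
# A minimal sketch of DOI structure (prefix/suffix, prefix beginning with "10")
# and of handing a DOI to a local link resolver as an OpenURL.
# The DOI value and resolver address are invented for illustration.
from urllib.parse import quote

doi = "10.1000/example.123"            # hypothetical DOI
prefix, suffix = doi.split("/", 1)     # "10.1000" (assigned to the publisher), "example.123"
assert prefix.startswith("10."), "DOIs are distinguished from other Handles by the 10 prefix"

# Route 1: resolve through the global DOI directory (typically lands on the publisher's copy).
publisher_link = f"https://doi.org/{quote(doi)}"

# Route 2: hand the DOI to a local, OpenURL-enabled resolver so the user is
# sent to the "appropriate copy" for their own institution instead.
local_resolver = "https://resolver.example.edu/openurl"   # hypothetical
appropriate_copy_link = f"{local_resolver}?ctx_ver=Z39.88-2004&rft_id={quote('info:doi/' + doi)}"

print(publisher_link)
print(appropriate_copy_link)
```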

Mission Catch-up: Commence


Yep, I’m still on the map but fell off the radar for a bit following a trip home to MN for a cousin’s wedding where I got to exercise my otherwise dormant creative side (aka I sang and played piano at the ceremony). So to follow are a series of my “make-up” blog posts that I’m finally able to buckle down and get posted.

Photo caption: Don't you wish you were a part of the happy Westby family? We took this after the wedding ceremony (my sister was a bridesmaid hence the schmancy dress). Notice how the only normal-looking one was actually married into this crazy family - lucky girl. :)

Photo credit: Mama Westby

Saturday, November 6, 2010

Unit 10: Standards, Value, and the Pursuit of Electronic Articles


Echoing my own reflections from last week, this past unit provided another opportunity to consider the importance of data standards for electronic information resources. Yue (2007) bulleted out three reasons that I agree reign with greatest importance: interoperability, efficiency, and quality. Without standards, systems and information will become increasingly incompatible, resulting in increased confusion for end-users and more work/time/money for libraries attempting to somehow integrate their resources into a user-friendly knowledgebase. Pesch (2008), who provides a nice overview of the e-journal life cycle and lists groups involved in creating standards, including NISO, COUNTER, and UKSG, emphasized that while libraryland has a good start on establishing extensible working standards, there is still a long way to go.

One area in particular where I have extensive hands-on experience with end-users, and where I am realizing why standards are so important, is OpenURLs. Whenever I assist a patron with accessing an article electronically, the UW’s electronic resource system is using SFX (Ex Libris) as a link resolver to connect the source with the target. In other words, the link resolver is separating out the elements of the OpenURL – not a static link – from the source and linking it to the appropriate target. If either the source or the target is not OpenURL-compliant, the metadata will not be exchanged (read: the end-user won’t get what they need) (read: panic/chaos/rage/confusion ensues, depending on the assignment/project deadline...). With the OpenURL Framework, standards bring us a large step closer to preventing this all-around unpleasant interaction. :) Thanks to classmate Anna for her super presentation on FindIt, UW’s link resolver.
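
Here is how I picture what the link resolver is doing behind Find It, as a toy Python sketch: parse the incoming OpenURL, compare its metadata against a local knowledgebase, and either build a target link or fall back to ILL. The knowledgebase entry, URL template, and coverage rule are all invented for illustration; this is not how SFX is actually implemented.

```python
# A made-up sketch of the conceptual job of a link resolver: OpenURL in,
# knowledgebase lookup, target link (or ILL suggestion) out.
from urllib.parse import parse_qs, urlparse

knowledgebase = {
    # ISSN -> (licensed platform, coverage start year, target URL template) - invented data
    "1234-5678": ("Example Platform", 1995, "https://platform.example.com/issn/{issn}?vol={vol}&page={page}"),
}

def resolve(openurl):
    meta = {k: v[0] for k, v in parse_qs(urlparse(openurl).query).items()}
    issn = meta.get("rft.issn")
    holding = knowledgebase.get(issn)
    if not holding:
        return "No full text found - offer ILL instead"
    platform, start_year, template = holding
    if int(meta.get("rft.date", 0)) < start_year:
        return f"Outside licensed coverage on {platform} - offer ILL instead"
    return template.format(issn=issn, vol=meta.get("rft.volume", ""), page=meta.get("rft.spage", ""))

incoming = ("https://resolver.example.edu/openurl?ctx_ver=Z39.88-2004"
            "&rft.issn=1234-5678&rft.date=2008&rft.volume=12&rft.spage=34")
print(resolve(incoming))
```

The point of the sketch is the compliance issue from the paragraph above: if the source never sends those rft.* keys, or the knowledgebase has no matching entry, the metadata exchange simply fails and the patron gets nothing useful.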

One of the tangential roads my thoughts took me down this week was thinking about how much influence user behavior truly has on the research process in libraries. When user behavior changes, we as librarians try to change with it, or even be a step ahead of the change, and allow systems development to evolve with what users do or expect. On one hand, I can see where this makes sense; yes, we should make electronic resources available in a way that users expect it to be accessible in order to enhance the resources’ utility, and subsequently the user’s perception of the library as a whole (on a much broader level). However, on the other hand, I’m beginning to hear reverb from our units on copyright and how flabbergasted we were to learn that copyright policy is essentially not created by librarians, or by people being affected by it, or by elected officials, or even by representatives from the larger companies reigning over this sector of the field; rather, it is primarily made by these large companies’ copyright lawyers. So with the surprise that accompanied this realization, I find myself going through the same response to patrons’ control over library systems’ organization.

It feels awfully trusting of the efficiency of your average user’s search behaviors, especially at larger institutions with huge volumes of subscribed resources, where arguably 75+% of users have no idea just how many diverse resources are actually at their disposal. It seems a little uninformed, as though dated expectations are being manifested in new software for libraries. However, projects like the ones done by COUNTER make a case for the value of usage statistics, or measured user behavior.

Colleagues of mine gave a fantastic presentation and lab demo on the COUNTER project, where we got to see data from a few COUNTER reports and learned ways to actually use that data (i.e. putting it into charts, graphs, etc.). The COUNTER projects about which we read this week seem interesting, especially the one about impact factor. However, I’m not sure I understand the difference between the citation-based impact factor and the author/usage-based metrics. The Shepard (2010) article discussed usage bibliometrics, or usage-based measures of value, status, and impact. It isn’t the first time this concept has surfaced, but up until now I have generally accepted the information presented on this topic. I’ve never worked in a situation where this mattered, and though I will someday work someplace where something is closely tied to these issues/concepts, I have no frame of reference whatsoever in which to situate myself when challenging, pushing, or actually evaluating them. Hopefully, as the semester progresses, I will continue expanding my frame and finding methods of meaningful takeaway, like the class demos and talking to librarians with whom I work on campus about their experiences – if any – with these programs, issues, and initiatives.

As I hinted at last week, this time of the semester is when things get more stressful than ever. I continue to complete and take thorough notes on all the readings for this class because ERM is very new to me. As easy as ERM may seem from an end-user perspective, it truly is a challenging arena of the library world. I wish there was a way to better demonstrate my understanding of the concepts despite the absence of any “real-life” context in which to consider the topics, which is how I believe people best learn. But after this class is done, regardless of the grades any student receives, each of us will take away as much as we put into it. Honest effort, time, and novice insight should always register some merit in a learning environment, but even when it doesn’t, I know that I will still come away from my classes this semester in an exponentially more learned position than I was before the term started. I am really proud of the time and effort that I have invested in this course and hope you’re enjoying the journey with me.

P.S. Happy Daylight Savings!

Saturday, October 30, 2010

Unit 9 continued: Winks thinks...


Part II of my Unit 9 post…

As I worked my way through this week’s readings, I wrote in my journal about my desire to be in a work position where I could get hands-on experience with ERM systems to better understand where all these acronyms and systems fit together. And then my wish came true! One of my classmates gave a fantastic hands-on demo of the open source ERMS called ERMes, developed by a librarian at UW-Eau Claire. It was great! We first downloaded the program, then learned about the interface and how to navigate it, then actually used it on our local desktops with some dummy data prepared by the presenter.

As much as I wanted to get experience with these systems in order to better understand them, I will admit that I was skeptical of the demo and nervous about using it for fear that the ERMS would be too complicated and just confuse me more. However, it was so good to go through the motions of what exactly an ERMS entails and to understand why “manual data entry” is a huge inconvenience, especially at a place with as many resources as UW (I don’t mind a little manual data entry every now and then, but after working with only five databases’ information? No. Way.).

Last but not least, what I really enjoyed this week was how much of the information started to show connections to other units as well as across my other courses this fall. First, the connections to other units were very apparent when I read, “Ted Koppel (2006) predicts there will be a move away from aggregators to more title selection with more truth in pricing...Additionally, there will be increased transparency for content providers on pricing, and the demise of aggregator packages...There will be pay-per-view support and tracking within ERM” (Hogarth & Bloom, 2003). Whoa – didn’t we just spend an entire unit a few weeks ago discussing pricing and subscription models, where the Big Deal still ropes even more libraries into package deals right alongside even less transparency in pricing? Also, learning about the forthcoming developments in Phase II of ERMI really encouraged me that licensing will continue to become more streamlined for libraries with things like license expression and data dictionaries.

Speaking of data dictionaries, this week’s unit really connected to the database design (DBD, I’ll call it for short) course I am taking. I feel very fortunate to be taking DBD because not only am I spared feeling overwhelmed at the overload of lingo, but I also understand the pathways and relationships between user, library, resource, and specific content. It is neat to see, from the librarian end of things, why the data standards and the types of usage data allowed for on the DBD end of things are so important. Plus, I felt pretty good reading about ERDs and knowing not only what they are but also how to make one, and make it well, for its intended purpose. No matter how hectic this time of the semester can get, it always seems to coincide with moments when I start realizing how different classes begin to overlap and connect - love it.
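
For kicks, here is a toy sketch (entirely my own invention, definitely not the ERMes schema or the ERMI data model) of the kind of entity-relationship thinking I mean, using Python's built-in sqlite3: vendors, licenses, and e-resources tied together with foreign keys, so one query can answer an everyday ERM question.

```python
# A toy ERM-flavored schema: vendor 1-to-many license 1-to-many resource.
# All table names, columns, and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE vendor   (vendor_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE license  (license_id INTEGER PRIMARY KEY,
                           vendor_id INTEGER REFERENCES vendor(vendor_id),
                           start_date TEXT, end_date TEXT, perpetual_access INTEGER);
    CREATE TABLE resource (resource_id INTEGER PRIMARY KEY,
                           license_id INTEGER REFERENCES license(license_id),
                           title TEXT, url TEXT);
""")
con.execute("INSERT INTO vendor VALUES (1, 'Example Vendor')")
con.execute("INSERT INTO license VALUES (1, 1, '2010-01-01', '2010-12-31', 1)")
con.execute("INSERT INTO resource VALUES (1, 1, 'Journal of Examples', 'https://example.com')")

# Which titles come from which vendor, and do we keep access when the license lapses?
for row in con.execute("""
        SELECT r.title, v.name, l.end_date, l.perpetual_access
        FROM resource r JOIN license l USING (license_id) JOIN vendor v USING (vendor_id)"""):
    print(row)
```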

Image credit: Me! :) Happy pre-Halloween!

Unit 9: Vewwy, vewwy intewesting


Yesterday’s class session was a lot of fun! With so many things to which I want to respond, I’m splitting this week into two posts. The topic this week focused on Electronic Resource Management Systems: Vendors and Functionalities—sounds like hardly the place for the descriptor “fun,” right? What I enjoyed so much about it was a) the relevancy of the information given certain libraries’ and library types’ trends toward primarily e-resources, b) the demo we did, and c) the connections I was able to make between my semester classes.

The need for ERM systems stems from the sheer volume of electronic resources available in the packages and products to which libraries subscribe, an amount that was simply not manageable by a single ILS. ERMI, or Electronic Resource Management Initiative, is guided by the following principles (Hogarth & Bloom, 2003):

1. Print and e-resource management should be through an integrated environment.
2. Information provided should be consistent regardless of path taken.
3. Each data element should have a single point of maintenance.
4. Electronic resource management systems should be sufficiently flexible to make it possible to add new/additional fields and data elements.

In other words, in the coming years, librarians will be working toward ensuring these guidelines are interwoven with the new technologies and software available for managing their electronic resources. I found the list helpful as a sort of checklist against which I could compare different ERMS vendors’ products and consider how the different platforms and modules either enhance or obstruct these characteristics.

One of the major differences between the Hogarth & Bloom (2003) article that we read and the Collins chapter was that Collins spoke much more about the challenges of implementing an ERMS. Hogarth & Bloom (2003), on the other hand, focused on pushing the envelope in the future rather than stressing out about making them work now. It really illustrated – for me, at least – the tension kind of inherent in libraries and the systems they use to manage their resources, and with technology in general, depending on library type. Perhaps the authors are on different sides of the library field fence, but librarians (or who I think of as being on the use and implementation side of things) are concerned with making things actually work in the here and now, saying “Whoa, slow the heck down, what?” whereas program developers (or who I think of as being on the software design side of things – which sometimes includes librarians) are intent on staying one step ahead of the game, pushing forward so as not to spend too much time tweaking a possible soon-to-be-outmoded system.

I’m finding evidence of this in my research on mobile devices (more specifically Internet-capable handheld devices) and electronic resources, where there’s a ton of literature available that focuses on all the new tools and apps and opportunities for development. At the same time, however, there is hardly any literature that discusses actually implementing mobile services for licensed materials in a library. Vewy vewy interesting...

Saturday, October 23, 2010

Unit 8: Coming Soon...PasswordMan and the Digital Info Superheroes!


This week’s readings focused on technological protection measures and the implications they have on libraries, patrons, and a dabble of information ethics/policy. Having just finished the last reading about the socio-technical construction of who is allowed use in our digital information environment, it is interesting to think back to the beginning of the semester and how copyright really stole the limelight, warning us against letting it dominate information policy. Yet here, this week we see that both social values and business also very closely affect information policy, illustrating in real time the danger of letting any one entity control something that affects diverse users in varied situations.


Reflecting on what Eschenfelder (2008) discussed, libraries do need to pay more attention to the soft restrictions that are bundled up in the licenses they sign. I think after reading the Harris (2009) book, I came away thinking that the most important points in licensing materials were to make sure that the vendors or publishers 1) didn’t place any outrageous restrictions on materials, 2) didn’t place any requirements for tracking patron use, and 3) were actually able to license the material. Having not been involved in any licensing of materials whatsoever, I cannot speak directly to whether these notions hold in practice. However, I will say that going into a position of licensing materials, I would probably subscribe to the assumption, which Eschenfelder (2008) describes as all too prevalent in our current professional environment, that soft user restrictions are the status quo.


But after reading the Eschenfelder (2010) article, I started asking myself what it is that keeps libraries from picking up the new technologies for controlling their collections. I think there’s a good possibility that the new technologies would help streamline their collections in the long run: rather than interfering with legitimate use, as in the watermarking example, they would keep protections in place without obstructing the resource. However, maybe it’s a side effect of those long-held assumptions I discussed earlier, where librarians are not picking up the new technologies because they aren’t part of the expected use restrictions, and therefore seem to imply greater restriction and control of access to the resources. Rather than seeing them as better for their end users, organizers of cultural institutions in the U.S. may be viewing the new technologies as the Big Bad Wolf, a terror in disguise, when really, the old stuff is what’s making their collections difficult to use.


With that being said, I wouldn’t exactly place money on this argument, not yet at least. Why? A) I haven’t thought it through enough, and B) I myself am a victim of fearing the newer tools for controlling use of and access to resources. If I were a librarian controlling access to my materials, I would stick with what I know best because it’s what’s safe and what works (or what we think still works), so – as I told my dad last winter when he suggested I trade in my dearly beloved ’91 Lumina, Sporty, for a newer vehicle – why fix it if it ain’t broke?


(And, as a total aside, I’m glad I didn’t get a newer car last year, seeing as Mother Nature just totaled Sporty a month ago with a nasty September hailstorm…yep.)


Taking a turn in my stream of consciousness... Millman (2003) writes, “Authorization may be considered an implementation of policy, and an important goal of authorization systems is flexibility, accuracy, and efficiency in implementing policies.” Now, I’m not sure exactly which policies they’re talking about here, but I think it’s the policy that says who can access what, and to what extent. I experience this each week in the difference between what I’m authorized to do and view in the library as a student vs. what I can access at different staff levels across my different jobs. Authorization is a complex process, especially with people – like me – who have multiple roles within a single institution, where the Role-Based Access Control system that Millman (2003) describes is used.
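
Just to make the role-based idea concrete for myself, here is a bare-bones Python sketch: permissions hang off roles, a user holds one or more roles, and an authorization check is simply the union over those roles. The roles and permissions are invented examples, not Millman's (2003) actual model.

```python
# A minimal sketch of role-based access control (RBAC) with invented roles.
ROLE_PERMISSIONS = {
    "student":       {"search_catalog", "view_licensed_fulltext"},
    "library_staff": {"search_catalog", "view_licensed_fulltext", "edit_patron_records"},
    "er_librarian":  {"search_catalog", "view_licensed_fulltext", "edit_knowledgebase"},
}

def is_authorized(user_roles, action):
    # A user with multiple roles (like working two library jobs while enrolled)
    # gets the union of whatever each role allows.
    allowed = set().union(*(ROLE_PERMISSIONS.get(r, set()) for r in user_roles))
    return action in allowed

me = ["student", "library_staff"]
print(is_authorized(me, "edit_patron_records"))   # True, via the staff role
print(is_authorized(me, "edit_knowledgebase"))    # False - would need er_librarian
```

The appeal of the approach is that the policy lives in one place (the role-to-permission table), so adding or dropping a role for a person doesn't require rewriting the rules themselves.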


As far as authorization and authentication go, my immediate thought was that I already do as much authenticating these days as anyone ever should have to; UW especially seems to have greater security concerns than other services like Gmail. Logging into my three different email accounts, two blogging accounts, a social network profile, online accounts for banking and shopping, a distance learning portal for my online classes, and multiple work terminals (each with their own passwords), I don’t know how we as human beings do it, remembering which identity we chose for which program/website/account/etc. I foresee a new age of digital information superheroes, with PasswordMan (see picture above) leading their forces in the fight against cybercrime.


Image credit: Wikimedia by Thibault fr

Wednesday, October 20, 2010

Unit 7: “It’s the way you ride the trail that counts.”


Tomas Lipinski came to speak in our class about the ins and outs of the TEACH Act. Regardless of how unlibrarian-ish of me it may seem, I had never heard of the TEACH Act (*gasp!*). Yes, it's true. Somehow, freshman woes and other plagues of high school seemed to have been demanding my attention at that point in history, so cut me some slack here...

The TEACH Act, signed into law by President Bush in 2002, brought some pretty weighty changes to distance education as we know it. After reading about it in Lipinski’s article and in an ARL issue brief, however, I can’t say we have it entirely figured out yet. It was interesting to weigh the pros and cons of the TEACH Act, since it seemingly provides a lot more leniency for the use of copyrighted materials in distance education. However, Lipinski pointed out some drawbacks that I would otherwise have missed. Not only does TEACH introduce potential licensing-provision problems, but it also makes it difficult for users to negotiate multiple copyrights on a single work (Lipinski gives the example of a CD of Copland, where the copyright – or lack thereof – in the music itself differs from the copyright in the sound recording).


Right away, I thought the TEACH Act didn’t sound so bad. It seemed to make available many more resources than had otherwise been accessible to distance students. As someone who has taken online/distance courses (not really from a distance, however, as I’m still here in MadTown), I can definitely speak to the importance of having materials that provide an experience equivalent to the face-to-face instructional setting, available at the learner’s convenience within the time allotted to the course. The only streaming I have had in courses has been voice lectures and voiceover tutorials created specifically by instructors. Speaking of which, the whole issue of MIA (mediated instructional activities) materials still has me hung up; I don't fully understand what it means. Going through the readings, I felt like the moment I understood what TEACH meant, it was explained further in a way that just re-confused me.

I agree with Lipinski that the “compliances” required by TEACH are troublesome on multiple levels, most significantly (from my experience and perspective) because user rights become more dependent on institutional compliance. Yet this, too, could be debated. Have user rights ever been independent of the institution providing the resources for one’s education? As a student using materials, it’s easy to overlook the licensing details, the remote-access clearances, and the copyright stipulations. But at the end of the day, we as users are quite dependent on our institutions. I think the problem comes into play when that dependency shifts from the institution to an entity beyond it, one demanding compliance with rules and requirements the institution didn’t create. That’s the kicker.

With the test coming up this Friday, I’m going to be spending a lot of time with this and our other readings from the semester, trying to weave together some common themes beyond “copyright.” The TEACH Act is one I’m still trying to put my finger on, as the Lipinski article wasn’t exactly a “bit of light reading.” Although he did a nice job explaining what it meant, the Act itself is just very wordy and circular, in my opinion. At the end of the day, I’m still going to ride the trail more traveled and wave my white flag of Fair Use until they take me in for good. For now, that seems good enough: to at least act with good intentions and be willing to a) change or b) argue your case if someone thinks otherwise.

Saturday, October 9, 2010

Unit 6: Puttin' on my attitude shoes


This week's unit focused on pricing models and consortial arrangements. The further I get into this class, the more I realize that even with as much as we're reading and discussing, we're just skimming the surface of electronic resource management. I almost wish there were some way to include a practicum option with the course, to get some hands-on shadowing time with librarians who confront and work with the issues our class is reading about.

For example, one reading this week was a 1988 article called "Journal Publishing: Pricing and Structural Issues in the 1930s and 1980s." As I read the historical rundown of libraries' journey with serials and the unique issues they faced back then, I wasn't sure how to respond to the recommendations the authors offered at the end of the article. In four of their six recommendations, the authors argued that, in order to change the direction serials were headed, librarians had the responsibility to – in short – step it up. While the other two recommendations focused on library organizations and the academic and research library community, librarians as individuals were called on to do a better job of staying informed and up to speed on the publishing industry.

If only Astle and Hamaker could see us now! It is truly amazing to read something written a year after I was born (yeah, do the math…I'm a young'in) and realize how recently librarians were being called out for not knowing enough about the publishing industry and for not taking more initiative with serials management. When I think of the issues and obstacles surrounding journals that librarians have overcome and continue to work through today, I am very impressed. An entire generation of librarians, within my lifetime alone, has come SO far in figuring out how to navigate the publishing environment and has made unbelievable leaps and bounds on issues that challenge the way "business is done" in libraries.

So what does that mean for me, an up-and-coming librarian aspiring to work in medical librarianship, where the cost of journals and subscriptions is of prime importance? For one thing, I think this generation of library school students has big shoes to fill. :) Second, I think it is vital that we have face-to-face working relationships, almost like apprenticeships, with the librarians who helped usher libraries to where we stand today. I am a firm believer that even the most streamlined and sophisticated educational program cannot trump real-life experience, and while newcomers to the field may offer fresh perspective, there will always be significant value in understanding and building on what came before.

Lastly, one thing I'm still wondering about is whether or not consortia are the "way of the future" for managing electronic journals. I am not well versed in what we have here at UW, but we had the opportunity to listen to a few guest speakers in class yesterday from Wisconsin Library Service (WiLS), which negotiates electronic resources through cooperative purchasing licenses and offers other consortial services. As Clement (2008) points out, the literature shows both intellectual and financial benefits to consortial arrangements for electronic resources. However, does each library in a consortium end up with the same resources, and if so, will that make for more uniform collections across agreements? Lack of collection diversity was one of the drawbacks cited in reference to the Big Deals offered by publishers; scholars like Best (2009) believed Big Deals risked creating standardized collections on campuses around the country. Could consortia risk doing the same?

Image credit: Wikimedia by Albi V R.

Thursday, October 7, 2010

Unit 5: Oh my ears and whiskers! How late it's getting!


This week's post comes a bit belated, but here it is nonetheless! A little update from KateLibraryLand: last weekend I had the opportunity both to attend and to present twice at the Midwest Chapter of the Medical Library Association conference here in Madison, WI. I presented a group research paper (about the American Family Children's Hospital's Family Resource Center) and an individual poster (about building complementary and alternative medicine collections in libraries). It was a fantastic time, and I met a lot of wonderful colleagues. It is exciting to have these experiences that will help me continue building my career in the direction of medical librarianship.

And now, back to our regularly scheduled programming. :)

Last week's class focused on e-reserves and the joy of balancing the demands of academia against copyright and the protection of intellectual property. I have to admit that, thinking back on my college career that concluded only a year and a handful of months ago, I used to think I clearly remembered having e-reserves. However, I now struggle to recall exactly how they looked, how they were set up, or to what extent they were used. As an English major, I did the majority of my coursework with books, and any research in electronic journals was for term papers, not in e-reserve fashion (again, I think...).

While one of my first courses at UW required students to purchase a course packet, the remaining courses have all used e-reserves. As a user, I do prefer a course packet, because $15-$20 beats $200 in books for a semester. This semester, however, I've gotten a chance to really compare the pros and cons, from a user perspective, of having e-reserves set up versus having to find each article individually. While the convenience of e-reserves is hard to beat, having to go into the system for individual articles does have the added benefit of improving one's library search skills.

Our ERM discussion, however, was not about user opinions of e-reserves; rather, it focused on the technical side: deciding what is OK to post, how long it can stay up, and under what circumstances. The director of UW's College Library, who served as the university's e-reserves guru for a number of years, visited class as a guest speaker. I enjoyed hearing her insight into what is a much more complicated process than users realize. It seems like a simple situation: the university library purchases a subscription to a journal, the journal article becomes available, and librarians collocate articles and post links to them for e-reserves. Easy, right? Not so much, because for that progression to flow smoothly, e-reserves need to have been considered as early as the drafting of the license for the resource in the first place, not to mention the safeguards that have to be put in place and the limits, re-use, and other issues that come up along the way.

In class discussions, I must admit, these things get overly complicated and I struggle to find actual footing for an "opinion" on this issue. However, hearing our guest speaker talk things through with us really put e-reserves in a different light. It doesn't have to be an intricate, complicated web (though it most definitely seems to be one), but rather a careful process of consideration, balance, and well-founded decisions made in the best interest of users and the creators/owners of information.

We also discussed the Georgia State case, in which GSU had an ultra-lenient e-reserves policy and publishers responded with legal action, after which GSU changed its policy and proceeded to defend the new policy as if it were the one originally cited for copyright and intellectual property violations. I think it calls attention to the responsibility of universities to remain fair players in the scholarly communication cycle. I've been looking for a response from Lesley Harris on the GSU case, but I haven't found one yet (I will continue looking and, if I find one, will post it here).

To be continued (in a much more prompt fashion)...

Saturday, September 25, 2010

Handy web resource for Copyright Q's

In the Harris (2009) reading, the blog Copyright Questions and Answers was referenced, so I thought I would look into it and see what it had to offer. It turns out that the blog's contents - still live at the link above but no longer updated - along with tons of new resources are now at the weblog CopyrightLaws.com. Like I said, lots of great resources and a solid reference when faced with copyright concerns. Check it out!

Unit 4: Hmm...


This week's readings focused on contemporary licensing best practices. To say there are two schools of thought on licensing best practices would not be entirely accurate, but I do want to touch on two issues that seem to sit at either end of the theoretical licensing spectrum.

UCITA, or the Uniform Computer Information Transactions Act, is legislation that would standardize all electronic resource licenses. In other words, regardless of whether you are a business, a non-profit, a school, or a library, you would use a standard, cookie-cutter license to get access to the resources you need. Sure, standardizing contracts might seem like a good idea because of the time, cost, and effort it would save; let's face it, who honestly enjoys juggling legal language and drafting a contract that feels more like a round of tug-of-war? However, libraries are vehemently against UCITA because 1) libraries and businesses are different institutions with very different needs and users, and 2) the standard license would be based on a business exchange model, which they see as bringing security risks and higher costs to the library.

Although UCITA has been passed in only two states (Virginia and Maryland), there are already efforts to propose alternative solutions to licensing hassles (besides the 38 amendments made to it in 2002). One such effort, at the other end of the spectrum, is SERU, or Shared Electronic Resource Understanding. SERU is not a licensing standard or a contract, but rather a shared set of understandings on which two parties can base an agreement without a formal license. Hahn (2007) writes that SERU is "an expression of community accord" and seeks to enhance the "trust-building cycle." Wait, "trust-building cycle"? "Expression of community"? Um, are we still talking about licensing here??

I’ll be honest: this week's readings had me scratching my head and asking, "Huh?" There is an evident dissonance in how licensing is viewed. Hahn (2007) says in the statement about SERU that licensing is an "inherently antagonistic negotiating process" and that "It has become apparent that there is no single agreement a publisher can offer that all libraries can accept" (and vice versa). Yet Harris (2009) insists that negotiating licenses should be about "what makes sense to both parties and…a compromise to satisfy both parties – a win-win situation," and assures readers that they are never forced into a contract but always have alternatives. So…which one is it? A fight until victory determines a triumphant winner and a desolate loser, or an easy-going conversation about wants and needs resulting in a proactive solution that pleases both parties? Or is it neither?

Which brings me back to UCITA and SERU. Just the fact that I think of them as opposites makes me think neither one offers a better answer for licensing e-content, yet I don’t think a hybrid of the two would work either. Perhaps a good license is one that takes elements of mutual agreement (like SERU) and follows a standard outline, not standardized content (like UCITA). As Harris (2009) writes in her chapter of questions and answers on licensing, a perfect license is one that fulfills both parties' needs and wants, and no two licenses are ever really going to be exactly the same. Like snowflakes. Except I don't think 2nd grade classrooms take joy in cutting licenses out and pasting them up as seasonal classroom décor.

But how funny would that be, right? :)

Image credit: Wikimedia, cartoon of Hall Caine by Harry Furniss (1854-1925)

Saturday, September 18, 2010

Unit 3: Brain Freeze!


This past week's readings focused on copyright and licensing history, and I found that while these texts don't exactly keep you on the edge of your seat, they kept me more alert than most of what I have read in grad school, primarily because of the fear they strike in me. For example, Lesley Ellen Harris' Licensing Digital Content introduces the "basics" of drafting and negotiating licenses for electronic resources. While it's all well and good to read about the process, I can't help but ask myself, "Wait...will this be something I'm expected to know how to do in my first job out of library school?" Chances are no, I won't. (*whew!) BUT regardless, given that electronic resources are here to stay and the environment surrounding their licensing and use keeps changing, I very much want to learn all I can now so I have some grounding to build on when I am eventually called upon to participate in these library activities.

"Copyright makes sense as an incentive if its purpose is to encourage the dissemination of works, in order to promote public access to them" (Litman, 175). The idea that copyright is actually seen as a way to expand access continued to appear throughout the second half of Litman's book, and it still surprises me. It feels too easy, too simplified. Given the heated discussions and issues surrounding how to address copyright law in the digital age, I just don't buy it. It may make sense as an incentive for creators to have ownership of their works, and that they will be compensated and credited for any usage of it. But at the same time, a dissonance still seems present. Copyright promotes access, yet underlying principles of copyright practice attempt to exercise greater control over use.

Intellectual property is something I've heard about during my studies at SLIS. I attended a guest lecture (fall 2009) on intellectual property and copyright, but I remember being entirely confused and struggling to write a decent response paper to prove to my professor that I was indeed in attendance and listening to the presentation. I hope that this semester I'm able to gain a better understanding of intellectual property, and Litman's book was a good introduction to the "big picture" into which intellectual property fits. Moreover, while it was an interesting book and I learned a lot about the details of copyright and licensing history (*cue fireworks), my main takeaway came from a connection that has been made in three of my classes this week: information versus knowledge. I promise not to get too flighty here, but this information-knowledge continuum was brought up too many times not to stop and reflect on its intersections, tensions, and implications for copyright and, subsequently, for the people who will be dealing with it in years to come: library school students (aka future rockstars).

Going back to this two-faced purpose of copyright (access vs. policing use), I think a parallel can be drawn. Not necessarily access-is-to-information as knowledge-is-to-policing-use, because that doesn't really make sense. Nor is it that access with policed usage leads to knowledge from controlled information (though I bet there's an interesting op-ed topic laced into that analogy). Rather: why is it that this excruciatingly lengthy, expensive, and relatively unfair battle over the logistics of information and its ownership is what determines how people are allowed - not able, but allowed - to access what is necessary for education, individual thought, and eventual knowledge? It seems backwards that the means (information) justifies the end ("knowledge," or whatever creative growth and thought comes from what we are allowed to see/use/remember/expand upon/etc.). Especially since, all along, the primary stakeholders have pursued a very end-justified agenda, with the ultimate goal of maintaining their market position and power.

And then this is the part where I try to tie my musings back to "real life." *distant cricket chirps...... *muffled cough...... *looooong awkward silence.....

To be continued. :)

Image credit: Wikimedia by Lotus Head from Johannesburg, Gauteng, South Africa.

Thursday, September 9, 2010

Unit 2: Reflections on an introduction to the real deal of copyright


Just as a disclaimer, I am entering this class with little to no formal experience with anything to do with electronic resource management, licensing, or copyright, except that I know I cannot avoid purchasing books for class by scanning an entire copy borrowed from the library (tempting as it is). Generationally, I think the most debated copyright issues for kids like me (still in the ripe-hood of our youth) have been the Napster and music-purchasing fiasco in college (the gist of which I understood, but with which I had no real beef); having what my peers termed “black market” DVDs pawned off on me during a band trip to New York (they were recordings of the first LOTR movie taken with a home recording device in the movie theater—which I did not end up purchasing, needless to say); and utter disappointment at not being able to watch the opening ceremonies of the most recent Olympics on YouTube the day after they aired, thanks to a “This content has been removed by the licensed owner” message (or the like) from the broadcasting companies. Yes, I could be losing sleep over worse...

So basically, it’s not like this has come up every day, unless you count the itty-bitty yet very ominous-looking messages that precede movies. I’m very new to this stuff. Our first class readings are from Jessica Litman’s book Digital Copyright (2001). Litman, who I believe is a copyright lawyer herself, does a great job of giving readers the lay of the land with regard to copyright. I won’t give you the play-by-play, but I will spout out a few reflections on what I’ve read thus far.

One of the first things I realized is this: I am not alone! According to Litman, most of us "common" individuals today are pretty clueless about copyright law, let alone how it could potentially affect us legally and financially. I almost fell off my chair when I read that businesses and restaurants are actually supposed to purchase licenses to play musical recordings in their stores (flashback to my own days as a scooping specialist at a ’50s-style ice cream shop, where we played CDs – probably without a license – because the language and content of modern radio DJs provided a less-than-appropriate ambiance).

I oftentimes hear talk that my generation (kids in their 20s, for those of you who don’t know me) neither knows nor cares about things like privacy, because we are not taught to value it but rather to make our lives a very open and accessible book, thanks to the advancement of the Internet at large and, more specifically, social networking sites. I began wondering if this also explains why I feel in the dark about all this copyright stuff: is it because I’ve grown up in an Internet world in which information is readily accessible and I am both publisher and consumer, where I am not forced to recognize the middle man or worry about copyright law beyond how I get the music that's on my iPod?

Litman provides some wiggle room for those of us on the sidelines (aka the general public, left out of copyright negotiations yet still subject to liability and legal repercussions). She says that many of your average Joes and Janes don’t know the ins and outs of copyright because copyright law is written by copyright lawyers, for the copyright lawyers of copyright-concerned institutions and businesses. It is not exactly tailored to the everyday life of individual consumers, yet it can be enforced as if it were, which confuses and baffles people. Like me.

Tune in again for more exciting adventures as I learn about, grapple with, and hopefully gain a better understanding of the complex web of copyright, licensing, and electronic resource management.

Monday, September 6, 2010

Greetings, Earthlings


Me again. Yep, still here at SLIS working on finishing my degree. AND have no fear, the revelry of a librarian's life continues as I embark upon my final semester and plan to use this blog as a platform for posting about one class in particular: Electronic Resource Management and Licensing.

*Cue wild cheers, applause, and general enthusiasm, complete with several cries of "Yes!" "I can't believe it!" and "This is the best day of my life!"

Enjoy, kids... :)

Image credit: Wikimedia by Isunsetimes.

Friday, May 28, 2010

So...did she drop out?


To my many adoring fans of Kate@SLIS: have no fear - I am, in fact, STILL in Madison working on my MLS. However, the academic rigors of the recently completed semester kept me from regularly updating (or rather, updating at all!) my blog of library adventures. Unfortunately, this has led some (johnsjer) to believe that the life of a librarian is dull, uneventful, and altogether boring...which isn't entirely true...though by the looks of my blog, I have no evidence to prove otherwise.

So in an effort to add a little cayenne pepper to the mix, I'll be posting regular updates on what I'm doing in AND outside of the classroom, as well as some of my experiences at the desk and behind the scenes of the libraries where I work.

Sounds utterly tantalizing, doesn't it? Trust me, not everyone can lead as glamorous a life as we librarians... :)