Saturday, December 11, 2010

Reflection Post


For our final blog post, our assignment is to provide an assessment of this reflective experience and look back at what we’ve learned and how our blogging project has contributed to that. So here goes…

This experience of blogging my reflections on the coursework in ERM got me thinking that there may be an upside to blogging that I have so far ignored in favor of sharing the sentiments of those who claim: “Blogs: never have so many had so much to say about so little to so few.” :) I used to see blogs as online diaries, I guess, which feels extremely invasive for someone like myself who does as little Facebook updating, Twittering, and whatever other modern social networking as possible, in an attempt to avoid advertising myself for future generations’ entertainment and study. However, this class found me looking forward to blogging as a medium by which I could synthesize the readings after having digested them and listened to discussions and lectures in class. Being able to post my thoughts about things here helped keep me accountable to intentional reflection on coursework.

Also, more directly related to this class, an interesting area of growth in my blogging was the pictures I attach to each entry. Prior to this course and in other projects, I would simply Google images and put them up, or search Wikimedia and post them without attribution (guilty as charged – but I’ve gone back and added attribution now!). However, I am now paying much more attention to the licensing information Wikimedia provides for each photo and website, and am much more interested in the concept of Creative Commons licensing. At first I was a lot more paranoid and almost let it affect my “creative thought and growth,” constantly afraid of stepping into copyright trouble, like tiptoeing through a little minefield where infringement could explode at any second.

So on the whole, my thinking about ERM has changed in the following ways:
  • Copyright is not as scary/frightening/overwhelming/ominous. Take your pick.

  • I understand people’s concern with wanting to protect their stuff through copyright, though there are instances of copyright that still surprise me.

  • Overall, I have a bigger picture perspective on things like click through licenses, disclaimers, and getting electronic access to journal articles.

  • I actually like looking up license and terms of use agreements on the databases I’m using at school. And I know how to read them.

  • I am up to speed on the real-life issues facing libraries with ERM, such as Georgia State, and the initiatives currently in the works that seek to further advance this area of the field.

  • I see how standards are a good thing, especially in LibraryLand, because without them things get very complicated.

  • I know how to look for supplemental information regarding licensing, copyright, and e-resource management, and I’m familiar with the foundational concepts and terms used by ER librarians on a day to day basis.

  • I know what the heck is going on when I click Find It!

  • And finally, I have grasped a piece of the LIS puzzle that I didn’t truly understand before, and see where it fits in as well as how interconnected it is with the other pillar issues in the field (i.e. ownership versus access, collection development, virtual versus physical space, etc.).

I think my most important post was Unit 11: Finding Content. Perhaps it is because I had a whole month to think about it (again, apologies for the delay in posting), but it is also the post where I suddenly started seeing everything in the class come together. Copyright and licensing issues are big picture, but those are all dependent on what goes on at the link level with OpenURLs and link resolvers. I realized how interconnected everything we’ve worked on this semester is, and even more so, how much of an effect Unit 11’s concepts in particular have on patrons’ access to information. Also, my “What’s In A Name: Definitions” post really helped clarify so much of what we discussed in the latter half of the semester and connect it to everything from the beginning of the semester. I always think it’s a good thing to hear something explained a number of different ways, so this helped immensely with reinforcing our course content.

And in closing, here is the quote that puts it together for me, where ERM, libraries, and users intersect and help further illuminate the job we have as information professionals, perhaps even ER librarians:

“The behaviors and expectations of today’s users are changing, so make the
effort to learn about them. For example, users take for granted, and expect,
relevance ranking; they like the user experience to be intuitive, without
compelling them to learn how to use the system; and they expect to be able to
interact with the system by contributing, for example, reviews and tags that
will help them and others find the information in future” (Walker, 2010).

Farewell for now – I’ll be back again soon (*cue uproarious laughter and smirks of, “That’s what you said last time!”) with news from the job search, or even hopefully from my new job, if I find one!

Happy Holidays!

Image credit: Wikimedia by DR04

Unit 14: Perpetual Access


This week focused on perpetual access. People are more familiar with the preservation of physical items, but preservation of electronic resources has the spotlight this week. Why is this important? Watson (2008) writes, "If a cohesive preservation strategy for electronic resources is not devised and implemented in the near future, there is a real danger that current research output may be lost to future generations." I thought perpetual access meant simply being able to access the content of an e-journal after the library no longer subscribes to it, but it turns out there's a bit more to it, and the definitions get a little complicated. Here's what Stemper & Barribeau (2006) write about perpetual access:
  • Perpetual access right to an electronic journal: the right to permanently access licensed materials paid for during the period of a license agreement (not to be confused with the right to copy journal content solely for preservation purposes).

  • Perpetual access right vs. archiving right: archiving right is the right to permanently retain an electronic copy of the licensed materials. "The emphasis of [perpetual access right] is retaining access, not on how such access is achieved (through the publisher's site, a locally retained copy, or a third-party site)."

So, as I understand it, perpetual access right means always being able to access the content (regardless of how dynamic it is) rather than just having the right to keep a static copy of the content you licensed. As I was working through the readings, I kept thinking about the parallel with OpenURLs versus static links. Basically it comes down to this: when a library purchases physical materials, it has back copies archived and available. However, e-content is another story, including anything that has been accessed through a subscription. Sure, a library could ILL the item, but Watson (2008) points out that, more recently, ILL clauses found in licenses actually make it harder for institutions that still have the resources to loan them to other libraries.

The question then turns to how libraries and publishers are trying to make these resources accessible in both the short- and long-term because, "Just-in-time collection development is on the rise, and just-in-time delivery of content is becoming increasingly attractive to libraries" (Watson, 2008). I liked that quote because in one of my early SLIS classes, we discussed the old "just-in-case" model of collection development and its evolution to "just-in-time," to which Watson (2008) also refers. The entire "ownership vs. access" issue that is a pillar of LIS is now not as cut and dried as it used to be (but really, was it ever?). :) Accreditation bodies, writes Watson (2008), are starting to change what even constitutes ownership in libraries.

Stemper & Barribeau (2006) write, "What type of access a lapsed subscriber might have at such an archive and what license wording is specific enough to ensure such access are the critical issues." So the major concerns affecting preservation of e-resources are:

  1. Maintaining perpetual access: Watson (2008) writes that the risk of losing perpetual access hasn't "had any significant impact on libraries' decision making in terms of the models and licenses they rely upon to provide their users with access to e-resources." Why? a) Patron pressure: patrons want current access, so libraries focus on ensuring they provide it. b) Availability of content: Watson (2008) predicts that someday soon, print copies of journals will have less content than the electronically published copies, plus more libraries are collecting electronic-only, or at least mostly electronic. And lastly, c) Financial pressures: e-only eliminates some of the big costs of space, time, and upkeep. It isn't that perpetual access would be impossible to get; Watson (2008) seems to think publishers would be open to providing it. Rather, people on the library side are in such short supply, and under such time crunches, that negotiating for perpetual access, let alone supplying the staff to set it up, is a real challenge.

  2. Data copying and migration: Rather than copying to paper in order to preserve, libraries are now looking at digital means. However, digital storage can be expensive thanks to the cost of hardware, software, technical expertise, etc. Additionally, technology keeps changing, so it might not be safe to keep digitally stored materials in the same system that stored them a few years ago, due to compatibility issues.

  3. Preserving content in the "information explosion": Watson (2008) refers to the information explosion as a profound increase in the creation, modification, and distribution of digital content. For the most part, this means electronic content. In other words, finding a way to preserve things like websites, blogs, wikis, and online reference works. Having an archive of past versions of these media would benefit researchers, for example, in citing these sources or in going back to verify citations against the original version of the work.

What is being done to address these concerns? "Libraries are driven by a shortage of funding and space, and patron demand for desktop access, to cancel and discard print journals in favor of online, but this same lack of funding restricts their ability to invest in or support preservation initiatives" (Watson, 2008). Despite lack of available funds, here is a list of the current initiatives for e-preservation that I've summarized from Watson (2008):

Third Parties:

  • JSTOR: (Journal Storage Project) subscription-based, fee-for-service access to back issues of journals on a collection-by-collection basis, not individual titles. Must keep a subscription to keep getting this service; no perpetual access otherwise. JSTOR content supplements subscriptions to current content; “to this end, there is a rolling embargo, the length of that varies from publisher to publisher” (Watson, 2008) in order not to damage current content sales from other publishers. Interesting note: JSTOR is working with the University of California and Harvard to make a print archive of all its online holdings.
  • LOCKSS: (Lots of Copies Keep Stuff Safe) Individual libraries run LOCKSS servers for all types of electronic content, but putting content on a server needs to be permitted in the license with the publisher. Servers are backups to each other, creating a distributed environment. The benefit is that it can act as a sort of perpetual access for libraries’ cancelled subscription content. Started with funding from the Mellon Foundation. Libraries maintain their own LOCKSS servers, so publishers don’t have much control over content on them. Not good for e-books.
  • Portico: started with funding from the Mellon Foundation in 2005. A “dark” (unavailable) central archive until a publisher stops offering back issues. Publishers put their content there and pay to have it stored. Portico manages the server for institutions (a good option for libraries without tech support for it), which gives publishers a little more control than with LOCKSS. Eileen Fenton spoke about the uncertainty of the field of electronic preservation, how even efforts like Portico don’t necessarily guarantee long-term access, and the importance of synergizing efforts from around the field to come up with solutions. Also not good for e-books.
  • Google Book Search, PubMed Central

Libraries:

  • CIC: consortial effort, “Libraries determined to go e-only to save money may wish to allocate money for print retention projects and seek membership in a consortia that has such projects” (Stemper & Barribeau, 2006).
  • Institutional repositories: material produced by researchers within the institution can be stored (uncertain about completeness and long-term integrity of repositories; preprints/postprints are not really good substitutes for the published journal article).
  • IFLA

Publishers:

  • “...questions remain about how reliable publishers are as a solution to long-term preservation needs. Publishers are bought, sold, and go out of business. Their main concern is profit rather than preservation” (Watson, 2008). Going to third parties mostly in response to this.

Governments:

  • Some countries require publishers to submit a print copy of all publications to a legal depository. Much like NIH’s mandate to deposit articles that received NIH funding to PubMed Central.

Foundations:

  • Mostly there for the money to support the initiatives.

On the whole, there are a lot of unknowns in the topic of e-preservation. How will it be funded? Who should be in charge of it? What technology will last long enough to make it worth doing now? Much like with the discussion about e-books and accessibility, each initiative has different high points, so as future initiatives continue to evolve, the aforementioned questions will continue to be addressed by the key players.

Image credit: Wikimedia by Kris De Curis.

Friday, December 10, 2010

Mobile Devices ERM Presentation

Here are the slides from my ERM class presentation about mobile devices and licensed electronic content. My presentation was November 19. Kudos to Jessi Van Der Volgen for the tip on using SlideShare.com to get my slides to appear here! Westby erm presentation

Enjoy!

Wednesday, December 8, 2010

Unit 13: Reflections on the Worklife of the ER Librarian


While working through this unit’s readings, I was interested to learn about the crossover between ER librarians and what I understand to constitute special librarians (which, for non-libraryland people, probably sounds very funny, but is – in fact – an actual type of librarian). Both seem to negotiate the spaces between business and libraries and therefore must be skilled across multiple professional disciplines: business, law, communications, technology, etc.

"Librarians who find themselves in the role of ER manager often experience this same unsettled feeling because of rapid changes in product availability, new technologies, pricing structures, and licensing terms...Like the products themselves, the ER librarian position still evolves. It plays many roles, e.g. traditional librarianship, Web design, systems management, and accepts many responsibilities, e.g., contract attorney, business manager, technology troubleshooter" (Albitz & Shelburne, 2007).

Yikes!

This is probably where the process mapping discussed by Afifi (2008) comes into play. Process mapping is used in systems development and for managing large and complex projects, whereby business processes are mapped and continually re-evaluated to improve performance and productivity. In Afifi’s (2008) own words, "Process mapping itself is a way of breaking down a process into distinct steps with beginning and end points, somewhat similar to a flow chart.” A map shows inputs and outputs, starting with the big picture and getting more detailed as it continues. It is also called business process reengineering, and the primary benefit is that it helps create flexible organizations and illustrate communication breakdowns, which is especially useful in constantly changing environments like managing electronic resources from a back-end delivery perspective. "Process maps can help businesses understand and control how their companies function, what actions are involved and who is performing the necessary steps to manufacture a product or manage a service" (Afifi, 2008). But where would a topic in ERM be without the need for standards! NISO and the Digital Library Federation (DLF) have tried to help in standardizing process mapping, but according to Afifi (2008), there is no consensus yet.
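
To make the idea concrete for myself, here is a minimal sketch in Python of how a process map might be represented and checked; the step names, the handoff artifacts, and the breakdown check are all my own toy invention, not anything from Afifi (2008):

    # Toy process map for e-resource acquisition: each step has inputs and outputs.
    # All step and artifact names are invented for illustration.
    steps = [
        {"name": "Select resource",   "inputs": ["patron request"], "outputs": ["trial decision"]},
        {"name": "Negotiate license", "inputs": ["trial decision"], "outputs": ["signed license"]},
        {"name": "Activate access",   "inputs": ["signed license"], "outputs": ["live URL"]},
        # "title list" is deliberately produced by nobody, to show the check firing:
        {"name": "Catalog resource",  "inputs": ["live URL", "title list"], "outputs": ["MARC record"]},
    ]

    produced = {"patron request"}  # inputs that come from outside the process
    for step in steps:
        for needed in step["inputs"]:
            if needed not in produced:  # a communication breakdown: no one hands this off
                print("Breakdown before", step["name"], "- missing:", needed)
        produced.update(step["outputs"])

Even this toy version shows the appeal: walking the map mechanically surfaces the handoff that nobody owns, which is exactly the kind of communication breakdown Afifi describes process mapping catching.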

Afifi (2008) isn’t the only scholar in our semester readings to touch upon the importance of process mapping. In Unit 11, Weddle (2008) mentioned how important workflow is because different departments use the tools and all share that single entry point. Also, Unit 11’s Grogg (2006) pointed out, in relation to the external linking necessary when going from one content provider to another, that process mapping is very difficult despite the opportunities that come with it, because things like external linking are hard to map and manage.

At first, I was surprised that the Albitz & Shelburne (2007) survey data showed most people had worked in reference or as a bibliographer prior to becoming an ER librarian, since I had assumed people in the ERM line of work would come from the Tech Services persuasion. However, in retrospect, I’ve realized that hands-on experience with the resources would be more beneficial for truly understanding how they are utilized and what users need. Albitz & Shelburne (2007) highlight this when they write about ER librarians’ "...increased complexity of the position and the need to focus the incumbent's responsibilities on ER management rather than on reference or bibliographic instruction.” This, however, got me thinking about whether there should be a separate degree or specialization for this type of work. Back in LIS 450 during my first semester at SLIS, we talked a little about the divide between IT people, or computer science, and librarians. However, ER librarians really seem to be the hybrid breed here, needing a little of everything.

In the handy guide for the “serialist” provided by Griffin (2009), I found the suggestions helpful and the guidance to be something I would expect from the people who hired me into a serialist position. However, I wonder just how much time someone actually has to go through all the extra research and discovery of materials by browsing the print reference collections that libraries keep on hand, especially at a place like UW. I think it's hard enough to really get into the system and workflow of a new employment environment, and I've heard multiple estimates of how many years (yes, years) it takes someone to actually get comfortable with the collection at their library. Thus, going into a new job learning all of this at once cannot be easy. Ha. Understatement of the year. But it was nice to see that Griffin (2009), too, was an English major like myself and could make it in this area of librarianship – there is hope yet!

And just to throw it out there, I think web developers in libraries will follow a similar pattern that ER librarians have, in going from back-burner "on the side" positions to very up-front and vital to library operations. As libraries recognize the importance of a web presence, I think the jobs of “Web Librarians” will be permanent full-time positions sought after with great urgency. It will be interesting to monitor how this changes the theory behind LIS and either the further separation between or blending of libraries and IT.

Image credit: Wikimedia by Petritap

Unit 12: E-Books: Audio & Text


This week’s course content centered on e-books: audio and text. I am one of the seemingly few public library patrons lamenting the withdrawal of books-on-tape. My old Lumina sports a perfectly functioning tape deck, so all this talk of digital audio books (DABs) has me a little bitter... :)

Grudges aside, Peters (2005) kicks things off this week by giving a nice overview of the major players in DABs in libraries, which I’ve listed below with my notes from each:

OverDrive - Play on PC with free software, transfer to a playback device and/or burn to disc, WMA files, simultaneous users, media markers for jumping between sections, allows consortial agreements, good administrative module

Playaway - Has some different language materials, wide range of content, can purchase by individual title, circulates at most libraries as an actual physical item

NetLibrary - No software needed, can be played on multiple devices, no burning, need to set up an account, have different file types and sound qualities - radio and CD - which allows for users on dial-up and other connections, can preview the book very much like flipping through a few pages

TumbleBooks - Has different language materials, games and books and puzzles, need to be online - no downloads, does not offer a consortial pricing model

Audible.com - More geared to individual subscriptions; issues: doesn't allow multiple people to "check out" a DAB copy simultaneously, libraries need to handle circulation in-house, no way to independently load books onto a player because downloads must go directly to the device, and the frustration of having to relearn new technologies

So to rewind a moment, the National Library Service for the Blind and Physically Handicapped started a digital talking book program in 2008, and it is a sector of the field that is emerging very quickly. Initiatives to develop DAB systems that enhance accessibility for all patrons continue to grow, including the Braille and Audio Reading Download website. Accessibility issues are very real but not often recognized as important and driving factors for this segment of library user populations. Today, you can see that there are a lot of different devices available, and the readings and videos for this week brought to my mind some of the issues associated even with things like DABs. DABs have great potential to bridge accessibility gaps for certain segments of library user populations, especially those with vision impairment or physical handicaps. However, there are still accessibility issues becoming apparent as more DAB technologies find their way into libraries, such as web access to content (Audible) and the physical operation of a device (Playaway), for example.

I began to think about how things like mp3 devices were born out of convenience, something to play music on wherever you were (and I even owned an early model that doubled as a flashdrive for my Microsoft Office files, too). I don’t believe they were created primarily as the media by which people with disability and accessibility concerns would receive information. As the industry increasingly realizes just how well these devices could fill a gap in library accessibility, the devices have continued to mature, as has the software that accompanies them. It is difficult to tell at this time what a future device that would “have it all” might look like. My immediate thought was to just take the best of each one and create a single device that would, in fact, have it all. But then I start to question whether there are areas where adding a feature to one device would take away from the best advantages of the original. For example, would adding something to a DAB gizmo or program end up compromising its accessibility in another way?

As a librarian, I imagine it is difficult to work around and with the different players, appreciating the diversity each offers but uncertain about how best to integrate them into your library holdings, let alone a single cohesive DAB collection. Peters et al (2007) note this: “...if an individual library subscribes to NetLibrary audiobooks and gains access to OverDrive audiobooks through some consortia or statewide initiative, that library may want to pull all this digital audiobook content together into a seamlessly whole collection and service that it presents to its users.” Also, as one of the videos we viewed for this week noted for e-books, there are challenges presented in purchase versus lease plans for DABs. “Under the purchase model, libraries purchase and own each copy of each title. On the other hand, a leasing model offers multiple concurrent users access to a title” (Peters et al, 2007). And we find ourselves again back at one of the pillar concerns facing libraries today: ownership versus access.

With regard to pricing, Peters et al (2005) write that the authors think an unlimited simultaneous user pricing model is best and should be industry standard, along with variable speed playback, enlarged content scope, and more focus on usability than style. I found this a little ironic considering that two years later, Peters et al (2007) picked apart every stylistic detail of DAB content, including the narrator’s voice and background effects. This part of the DAB overview for libraries sort of bothered me because it seems like an awfully nitpicky thing to concern oneself with when – in comparison – people who read physical books don’t have a say on industry standards regarding typeface, color of pages, size of print or pages, etc. Perhaps these stylistic details truly are a serious issue to users, but I think we need to deal with the accessibility problems still in the pipeline before these other surface-level details. I will note that I found it interesting that Peters et al (2007) split the collection content of DABs into frontlist, backlist, and public-domain titles, because my big hang-up with the world of DABs had been thinking that I could only ever access really old books on Playaways. However, given how fast the market is changing, that’s probably different by now.

Seeing the License and Agreement Terms highlighted in the Peters et al (2007) article put into perspective how as librarians we are faced with an ever-evolving licensing environment. Technology really does continue to change everything and whether we want to or not, it will affect us. I liked the variety of reports that are available through OverDrive with regard to DAB circulation, where you can see activity charts as well as current waiting list reports and waiting list history reports.

And to give a quick shout-out to e-books, I didn’t realize that they were not as highly favored in academic library settings. They have received such positive attention in the public library sector and general population, but I also had not considered e-books as a part of a university library collection prior to this week. I don’t know, however, whether the Think Tank’s suggestion to catalog the e-books that a library acquires would necessarily solve the problems they highlighted, especially since the sources of many of the errors were typically not on the library processing side but rather at a more fundamental level with the e-book publishers.

Image credit: Wikimedia by Free Software Foundation

Unit 11: Finding Content


During my week home, our unit focused on finding content with electronic resource management systems and discovery tools, primarily things like OpenURL and OpenURL standards, link resolvers, and digital object identifiers, or DOIs. Boston & Gedeon (2008) discuss each of these tools and the ways to use technology to link to library resources. In the old days (which, relatively speaking, really isn’t that long ago), internet resources were located at static URLs on servers, but those links were always at risk of breaking because content would get moved to new servers, sites would undergo redesigns, etc. So the new-age solution is to use the digital object identifier protocol and the OpenURL standard to find items electronically wherever they may be. Today’s practice, according to Boston & Gedeon (2008), is the OpenURL standard combined with the added features of link resolving technology.

One of the problems with implementing linking standards is the “appropriate copy” problem: how to determine which copy of an article is the one to which a link should direct users. One solution would be context-sensitive links from databases to an institution’s resources by way of a publisher service. Another solution is to use a link resolver, like SFX. As Boston & Gedeon (2008) write, early link resolvers were precursors of the OpenURL and didn’t yet provide a standards-based linking protocol. The benefits of this approach are interoperability, resource discovery, and linking between resources in a personalized and user-friendly way.
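
Just to cement the mechanics for myself, here is a minimal sketch in Python of building an OpenURL 1.0 query string in key/encoded-value (KEV) form; the resolver address and the citation values are my own invented example, not something from the readings:

    # Build a toy OpenURL for a journal article citation.
    from urllib.parse import urlencode

    resolver = "https://resolver.example.edu/openurl"  # hypothetical local link resolver
    citation = {
        "ctx_ver": "Z39.88-2004",                       # OpenURL 1.0 context version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # metadata format: journal article
        "rft.atitle": "Perpetual access to electronic journals",
        "rft.jtitle": "Library Resources & Technical Services",
        "rft.issn": "0024-2527",
        "rft.date": "2006",
    }
    print(resolver + "?" + urlencode(citation))

The nice part is that any resolver that speaks OpenURL should be able to unpack that metadata and check it against its own holdings, which is what makes the link portable between institutions.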

The primary concern when looking at all these technologies is interoperability; as Weddle (2008) points out, something like the link resolver will affect other things like the MARC record service. All are connected through a single point of data entry. Weddle (2008) does a nice job reviewing some of the tools available, two of which I list below with some notes for each:

  • A-Z list: the service provider is responsible for changing the global knowledgebase when resources are added/removed, and the librarian worries only about the local list; facilitates known-item searching
  • OpenURL and link resolver: an OpenURL implementation via link resolver provides crucial links among resources and is closely connected to the A-Z list; allows a patron to go from browsing citations directly to the resource housed in another location (the source and target relationship)

Weddle (2008) then introduces the topic of federated searching, which addresses the need to search across many e-resources and helps with topic searching. Walker (2010) also discusses federated searching, or metasearching. “Federated searching represents a direct response to users’ behaviors and preferences…Through careful selection and grouping of library resources, libraries still provide value-added services—services that are not available via free Web searching,” writes Weddle (2008). And Walker (2010) agrees: “Web resources are more ‘discoverable’ when other sites link to them,” which is exactly what CrossRef, DOIs, link resolvers, and OpenURL help with.

The benefits of linking at the article level are that it makes citations immediately actionable and enhances the efficiency of online research. In other words, the author and topic are more important than the publisher in terms of identification, so linking at the article level enables the system to work across all publishers, taking users to their websites and versions of the document. While it helps fix errors and serves as a business opportunity for publishers, libraries also benefit: “Namely, [DOIs] make linking reliable at any level of granularity possible. In the same way that links drive readers to publisher content, libraries may also see increased usage of acquired electronic resources as an additional benefit” (Brand, 2003).
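
To see why DOIs make citations immediately actionable, here is a tiny sketch (my own, using a hypothetical DOI and the public DOI proxy):

    # Turn a DOI into a clickable, persistent link.
    def doi_to_url(doi):
        # The proxy resolves the DOI to whatever URL the publisher has registered,
        # so the citation link keeps working even if the content moves.
        return "https://doi.org/" + doi

    print(doi_to_url("10.1000/example.123"))  # hypothetical DOI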

Grogg (2006) takes a different route in examining finding content in ERMS, looking at reference linking. I pulled this excerpt from the article: “Internal linking may seem more attractive because, on the surface, there appears to be no need to address the appropriate copy. If a user has entrance to a content provider’s front door, then the user will most likely have access to whatever lies within the provider’s content database(s). That sort of reasoning is faulty, however, because in addition to the large aggregator issue previously described, no one provider can hope to include all literature cited in every reference list” (Grogg, 2006). To help illustrate this point, Grogg (2006) likens internal linking to a shopping mall and external linking to a downtown shopping district, where the difference is between closed and open environments. External linking (or intersystem linking) is needed when a user goes from one content provider to another.

Brand (2003) looks at concepts that reach beyond citation linking, things like

  • Forward linking: I understand this to be the same as a cited reference search in a single search interface, as found in Web of Knowledge (WOK).
  • Parameter passing: using OpenURL syntax to send a key along with the DOI that enables extra functionality to benefit publishers and users (i.e. a parameter might be info about the source entity); see the sketch after this list.
  • Multiple resolution: user clicks on one link and can see a menu of publisher-supplied options
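
As promised above, here is a toy sketch of parameter passing; the source ID value and the idea of tacking it onto a DOI proxy link are my own illustration, not drawn from Brand (2003):

    # Send an extra key along with the DOI so the resolver can do more with it.
    from urllib.parse import urlencode

    doi = "10.1000/example.123"        # hypothetical DOI
    extras = {"sid": "ebsco:medline"}  # hypothetical source ID: where the user came from
    print("https://doi.org/" + doi + "?" + urlencode(extras))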

Walker (2010) also is forward thinking in a way that I found particularly appealing. Consider for a moment this excerpt:

“The behaviors and expectations of today’s users are changing, so make the effort to learn about them. For example, users take for granted, and expect, relevance ranking; they like the user experience to be intuitive, without compelling them to learn how to use the system; and they expect to be able to interact with the system by contributing, for example, reviews and tags that will help them and others find the information in future.”

It is interesting to me from a “behind the desk” perspective because – with the exception of a handful of physicians I have assisted at the health sciences library where I work – patrons have hardly ever asked me how results are assessed in relevance ranking. But regardless of whether we think patrons should think about this, we as information professionals need to be aware of their expectations, their in-the-moment “This is what a link/library/resource should do” thinking. This might mean examining things like tagging and faceted browsing as effective search strategies, methods of inquiry that might before have seemed less academic, I guess you could say.

Image credit: Wikimedia by Petritap

What’s In a Name: Definitions

So as I was taking notes over the past month on our readings, I couldn’t help but jot down every definition provided, as my still-“rookie” status with this stuff means I always appreciate hearing things explained a different way to reinforce understanding. Here is a list of some definitions I’ve compiled, to which I could probably add in the future:

OpenURL:
  • an open framework for reference linking using a link resolver, source AND target, simplicity and transparency, points to other related resources (i.e. availability in print or other ILL pages) (Boston & Gedeon, 2008)
  • “mechanism for transporting metadata and identifiers describing a publication, for the purpose of context-sensitive linking.” (Brand, 2003)
  • framework has both local control AND standardized transport of metadata

Link Resolver:

  • “…system for linking within an institutional context that can interpret incoming OpenURLs, take the local holdings and access privileges of that institution into account, and display links to appropriate resources.” (Brand, 2003)
  • OpenURL is sent to local link resolver, metadata in the OpenURL is compared with localized knowledgebase, then links if there’s a match. OpenURL framework+CrossRef+DOI=link resolvers are possible! (Weddle, 2008)
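
Since that last bullet describes exactly what a link resolver does at its core, here is a minimal sketch of just the matching step (my own toy code, not from Weddle 2008; the knowledgebase contents and URL are invented):

    # Compare incoming OpenURL metadata against a local holdings knowledgebase.
    from urllib.parse import parse_qs

    knowledgebase = {"0024-2527": "https://fulltext.example.com/lrts"}  # ISSN -> licensed source

    def resolve(openurl_query):
        meta = {k: v[0] for k, v in parse_qs(openurl_query).items()}
        target = knowledgebase.get(meta.get("rft.issn"))
        if target is None:
            return None  # no local match: offer ILL or print holdings instead
        return target

    print(resolve("rft.issn=0024-2527&rft.date=2006"))

The key design point is that the link is built from the library's own holdings data, which is what makes the result context-sensitive rather than a one-size-fits-all publisher link.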

Digital Object Identifier (DOI):

  • Managed by the CrossRef system which uses DOIs to reliably locate individual digital items, can be considered problematic because it only takes users to publisher-supplied full text, not to the “appropriate copy” (Boston & Gedeon, 2008)
  • “DOIs link to the full text of information objects housed at publishers’ Web sites. If full text is available elsewhere—for example, in an aggregator or in an Open Access (OA) source—then a link resolver becomes necessary, which, depending on the quality of the metadata and the knowledgebase, can present any number of full-text options, not just the publisher’s full text,” a persistent, unique identifier, can be associated with multiple instances of an article (Weddle, 2008)
  • DOI: NISO standard syntax; prefix (assigned to the content owner by a DOI registration agency; all begin with 10 to distinguish DOIs from other types of Handle System identifiers) + suffix (flexible syntax, composed by the publisher according to internal content management needs, but must be unique within the prefix); an alphanumeric name for a digital content entity (i.e. book, journal article, etc.) that allows content to be moved but kept updated so there are no broken links. The DOI stays the same, while the URL (to which the DOI is linked) can change as needed (Brand, 2003)
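
The prefix/suffix syntax above practically begs for a quick demonstration, so here is a toy sketch (the sample DOI is invented):

    # Split a DOI into its registrant prefix and publisher-assigned suffix.
    def split_doi(doi):
        prefix, _, suffix = doi.partition("/")
        if not prefix.startswith("10."):
            raise ValueError("DOI prefixes always begin with 10.")
        return prefix, suffix

    print(split_doi("10.1000/example.123"))  # -> ('10.1000', 'example.123')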

CrossRef:

  • service of the Publishers International Linking Association, largest DOI Registration Agency, works as “a digital switchboard connecting DOIs with corresponding URLs” (Weddle, 2008)
  • “nonprofit membership association, founded and directed by publishers. Its mission is to improve access to published scholarship through collaborative technologies…it operates a cross-publisher citation linking system, and it is an official digital object identifier (DOI) registration agency” (Brand, 2003)

Reference Linking:

  • aka citation linking, a user’s ability to move from one information object to another, can be external or internal (Grogg, 2006)

Context-sensitive Linking:

  • “…takes the user’s context, usually meaning his or her affiliations but possibly also the user’s intent for desired information objects, into account; therefore, ideally, context-sensitive linking only offers links to content the user should be able to access,” appropriate copy (Weddle, 2008)
  • “In short, a workable linking framework should take a user’s context into consideration.” (Weddle, 2008)

Faceted Browsing:

  • Drilling down a search, or limiting by different facets until you get to your desired end result; takes the guessing out of the process (Walker, 2010)
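
And a toy sketch of what that drilling-down looks like in code (records and facet values invented for illustration):

    # Narrow a result set one facet at a time, like clicking filters in a catalog.
    records = [
        {"title": "A", "format": "e-journal", "year": 2008},
        {"title": "B", "format": "e-book",    "year": 2006},
        {"title": "C", "format": "e-journal", "year": 2008},
    ]

    def drill_down(results, facet, value):
        return [r for r in results if r[facet] == value]

    step1 = drill_down(records, "format", "e-journal")
    step2 = drill_down(step1, "year", 2008)
    print([r["title"] for r in step2])  # -> ['A', 'C']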

And to wrap the list up, here’s a nice summary from Brand (2003) of exactly how it all fits together:

  • “The DOI and the OpenURL work together in several ways. First, the DOI directory itself, where link resolution occurs in the CrossRef platform, is OpenURL enabled. This means that it can recognize a user with access to a local resolver. When such a user clicks on a DOI, the CrossRef system redirects that DOI back to the user’s local resolver, and it allows the DOI to be used as a key to pull metadata out of the CrossRef database, metadata that is needed to create the OpenURL targeting the local resolver. As a result, the institutional user clicking on a DOI is directed to appropriate resources.”
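
And since Brand's description is basically a routing rule, here is one last toy sketch of that flow as I understand it; the resolver address, the DOI, and the way the user's resolver is detected are all hypothetical simplifications:

    # Route a DOI click depending on whether the user has a local link resolver.
    def follow_doi(doi, local_resolver=None):
        if local_resolver:
            return local_resolver + "?id=doi:" + doi  # hand the DOI to the local resolver
        return "https://doi.org/" + doi               # default: publisher-registered URL

    print(follow_doi("10.1000/example.123", "https://resolver.example.edu/openurl"))
    print(follow_doi("10.1000/example.123"))

In the real system, the CrossRef/DOI directory does the detecting and redirecting, but the branch above is the gist: with a local resolver, the user lands on the appropriate copy; without one, the DOI falls back to the publisher's site.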