Wednesday, December 8, 2010

What’s In a Name: Definitions

So as I was taking notes over the past month on our readings, I couldn’t help but jot down every definition provided; what I consider to be my still-“rookie status” with this stuff means I always appreciate hearing things explained a different way to reinforce understanding. Here is a list of some definitions I’ve compiled, to which I could probably add in the future:

OpenURL:
  • open, standards-based reference linking using a link resolver; involves both a source AND a target; simplicity and transparency; points to other related resources (e.g., availability in print or interlibrary loan (ILL) pages) (Boston & Gedeon, 2008)
  • “mechanism for transporting metadata and identifiers describing a publication, for the purpose of context-sensitive linking.” (Brand, 2003)
  • the framework offers both local control AND standardized transport of metadata (a rough sketch of such a metadata-carrying link follows below)
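To make that “standardized transport of metadata” idea concrete for myself, here’s a rough sketch in Python of what a KEV-style OpenURL (per the Z39.88-2004 standard) might look like for a journal article. The resolver address and citation values are entirely made up:

```python
from urllib.parse import urlencode

# Hypothetical base URL for an institution's link resolver.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

# Dummy citation metadata, expressed as OpenURL 1.0 (Z39.88-2004)
# key/encoded-value pairs describing a journal article (the "referent").
citation = {
    "url_ver": "Z39.88-2004",                       # OpenURL version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # referent format: journal item
    "rft.genre": "article",
    "rft.atitle": "An Example Article",
    "rft.jtitle": "Journal of Examples",
    "rft.issn": "1234-5678",
    "rft.volume": "29",
    "rft.spage": "62",
    "rft.date": "2010",
    "rfr_id": "info:sid/example.blog:demo",         # the referrer (source) that built the link
}

# The source supplies the metadata; the resolver decides the target.
openurl = RESOLVER_BASE + "?" + urlencode(citation)
print(openurl)
```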

Link Resolver:

  • “…system for linking within an institutional context that can interpret incoming OpenURLs, take the local holdings and access privileges of that institution into account, and display links to appropriate resources.” (Brand, 2003)
  • an OpenURL is sent to the local link resolver, the metadata in the OpenURL is compared against a localized knowledgebase, and links are presented if there’s a match (see the toy resolver sketch below). OpenURL framework + CrossRef + DOI = link resolvers are possible! (Weddle, 2008)
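And here’s my own toy sketch of the resolver side of that workflow: parse the incoming OpenURL, compare its metadata to a (completely made-up) knowledgebase of local holdings, and offer a fallback like ILL when there’s no match. A real product like SFX is of course far more involved:

```python
from urllib.parse import urlparse, parse_qs

# Made-up knowledgebase: ISSN -> full-text platforms this library licenses.
KNOWLEDGEBASE = {
    "1234-5678": ["https://aggregator.example.com/journal/1234-5678"],
}

def resolve(openurl: str) -> list[str]:
    """Compare the OpenURL's metadata against local holdings and return
    links for any match, falling back to ILL when nothing is licensed."""
    fields = parse_qs(urlparse(openurl).query)
    issn = fields.get("rft.issn", [""])[0]
    links = list(KNOWLEDGEBASE.get(issn, []))
    if not links:
        links.append("https://ill.example.edu/request")  # "appropriate" fallback
    return links

print(resolve("https://resolver.example.edu/openurl?rft.issn=1234-5678"))
```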

Digital Object Identifier (DOI):

  • managed via the CrossRef system, which uses DOIs to reliably locate individual digital items; can be considered problematic because it only takes users to publisher-supplied full text, not to the “appropriate copy” (Boston & Gedeon, 2008)
  • “DOIs link to the full text of information objects housed at publishers’ Web sites. If full text is available elsewhere—for example, in an aggregator or in an Open Access (OA) source—then a link resolver becomes necessary, which, depending on the quality of the metadata and the knowledgebase, can present any number of full-text options, not just the publisher’s full text”; a persistent, unique identifier that can be associated with multiple instances of an article (Weddle, 2008)
  • DOI: NISO standard syntax; prefix (assigned to the content owner by a DOI registration agency; all begin with “10.” to distinguish DOIs from other Handle System identifiers) + suffix (flexible syntax, composed by the publisher according to internal content-management needs, but must be unique within the prefix); an alphanumeric name for a digital content entity (e.g., a book, journal article, etc.) that allows content to be moved yet kept updated, so there are no broken links. The DOI stays the same, while the URL to which the DOI is linked can change as needed (Brand, 2003) (see the parsing sketch below)
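Since the prefix/suffix syntax finally clicked for me, here’s a small sketch of pulling a DOI apart and handing it to the public DOI proxy. 10.1000/182 is the example DOI from the DOI Handbook; everything else is my own illustration:

```python
def split_doi(doi: str) -> tuple[str, str]:
    """Split a DOI into its prefix (assigned by a registration agency,
    always beginning with '10.') and its publisher-composed suffix."""
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10."):
        raise ValueError("not a DOI")
    return prefix, suffix

doi = "10.1000/182"  # the DOI Handbook's own DOI
prefix, suffix = split_doi(doi)

# The DOI itself never changes; the proxy looks up whatever URL is
# currently registered for it, so links survive content moving around.
print(prefix, suffix, "http://dx.doi.org/" + doi)
```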

CrossRef:

  • a service of the Publishers International Linking Association and the largest DOI registration agency; works as “a digital switchboard connecting DOIs with corresponding URLs” (Weddle, 2008)
  • “nonprofit membership association, founded and directed by publishers. Its mission is to improve access to published scholarship through collaborative technologies…it operates a cross-publisher citation linking system, and it is an official digital object identifier (DOI) registration agency” (Brand, 2003)

Reference Linking:

  • aka citation linking; a user’s ability to move from one information object to another; can be external or internal (Grogg, 2006)

Context-sensitive Linking:

  • “…takes the user’s context, usually meaning his or her affiliations but possibly also the user’s intent for desired information objects, into account; therefore, ideally, context-sensitive linking only offers links to content the user should be able to access”; this is the “appropriate copy” idea (Weddle, 2008)
  • “In short, a workable linking framework should take a user’s context into consideration.” (Weddle, 2008)

Faceted Browsing:

  • Drilling down within a search, or limiting by different facets until you reach your desired end result; takes the guessing out of the process (Walker, 2010)

And to wrap the list up, here’s a nice summary from Brand (2003) of exactly how it all fits together (I’ve tried to restate it as a little code sketch after the quote):

  • “The DOI and the OpenURL work together in several ways. First, the DOI directory itself, where link resolution occurs in the CrossRef platform, is OpenURL enabled. This means that it can recognize a user with access to a local resolver. When such a user clicks on a DOI, the CrossRef system redirects that DOI back to the user’s local resolver, and it allows the DOI to be used as a key to pull metadata out of the CrossRef database, metadata that is needed to create the OpenURL targeting the local resolver. As a result, the institutional user clicking on a DOI is directed to appropriate resources.”
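My attempt to restate that paragraph as a toy sketch; the data stores and lookups here are hypothetical stand-ins for CrossRef’s actual machinery:

```python
from urllib.parse import urlencode

# Hypothetical stand-ins for the CrossRef metadata database and
# the publisher URLs registered against each DOI.
CROSSREF_METADATA = {
    "10.1000/182": {"rft.atitle": "An Example Article", "rft.issn": "1234-5678"},
}
REGISTERED_URLS = {
    "10.1000/182": "https://publisher.example.com/article/182",
}

def handle_doi_click(doi, local_resolver=None):
    """If the user is recognized as having a local resolver, use the DOI
    as a key to pull metadata and build an OpenURL aimed at that resolver;
    otherwise send the user to the publisher URL registered for the DOI."""
    if local_resolver:
        metadata = CROSSREF_METADATA[doi]
        return local_resolver + "?" + urlencode(metadata)
    return REGISTERED_URLS[doi]

# Institutional user -> redirected to the appropriate copy via their resolver.
print(handle_doi_click("10.1000/182", "https://resolver.example.edu/openurl"))
# Unaffiliated user -> the publisher-supplied copy.
print(handle_doi_click("10.1000/182"))
```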

Mission Catch-up: Commence


Yep, I’m still on the map but fell off the radar for a bit following a trip home to MN for a cousin’s wedding, where I got to exercise my otherwise dormant creative side (aka I sang and played piano at the ceremony). So what follows is a series of “make-up” blog posts that I’m finally able to buckle down and get posted.

Photo caption: Don't you wish you were a part of the happy Westby family? We took this after the wedding ceremony (my sister was a bridesmaid hence the schmancy dress). Notice how the only normal-looking one was actually married into this crazy family - lucky girl. :)

Photo credit: Mama Westby

Saturday, November 6, 2010

Unit 10: Standards, Value, and the Pursuit of Electronic Articles


Echoing my own reflections from last week, this past unit provided another opportunity to consider the importance of data standards for electronic information resources. Yue (2007) bulleted out three reasons that I agree carry the greatest importance: interoperability, efficiency, and quality. Without standards, systems and information will become increasingly incompatible, resulting in more confusion for end-users and more work/time/money for libraries attempting to integrate their resources into a user-friendly knowledgebase. Pesch (2008) provides a nice overview of the e-journal life cycle, lists groups involved in creating standards (including NISO, COUNTER, and UKSG), and emphasizes that while libraryland has a good start on establishing extensible working standards, there is still a long way to go.

One area in particular where I have extensive hands-on experience with end-users, and where I am realizing why standards are so important, is OpenURLs. Whenever I assist a patron with accessing an article electronically, the UW’s electronic resource system uses SFX (Ex Libris) as a link resolver to connect the source with the target. In other words, the link resolver is separating out the elements of the OpenURL – not a static link – from the source and linking them to the appropriate target. If either the source or the target is not OpenURL-compliant, the metadata will not be exchanged (read: the end-user won’t get what they need) (read: panic/chaos/rage/confusion ensues, depending on the assignment/project deadline...). With the OpenURL Framework, standards are a large step closer to preventing this all-around unpleasant interaction. :) Thanks to classmate Anna for her super presentation on FindIt, UW’s link resolver.
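To spell out for myself what OpenURL compliance buys us, here’s a hedged little sketch contrasting a static link with a metadata-bearing OpenURL; the required fields are just my own toy minimum, not anything mandated by the standard:

```python
from urllib.parse import urlparse, parse_qs

REQUIRED_KEYS = {"rft.genre", "rft.issn"}  # my toy resolver's minimum metadata

def elements(url: str) -> dict:
    """Separate out the citation elements carried in a link's query string."""
    return {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}

def can_resolve(url: str) -> bool:
    """A static link carries no citation metadata, so there is nothing for
    the resolver to match against a target; an OpenURL-style link does."""
    return REQUIRED_KEYS.issubset(elements(url))

static_link = "https://vendor.example.com/article?id=42"
openurl = ("https://resolver.example.edu/openurl?"
           "url_ver=Z39.88-2004&rft.genre=article&rft.issn=1234-5678")

print(can_resolve(static_link))  # False: the end-user won't get what they need
print(can_resolve(openurl))      # True: metadata can be matched to a target
```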

One of the tangential roads my thoughts took me down this week was thinking about how much influence user behavior truly has on the research process in libraries. When user behavior changes, we as librarians try to change with it, or even stay a step ahead of the change, and let systems development evolve with what users do or expect. On one hand, I can see where this makes sense; yes, we should make electronic resources available in the way users expect them to be accessible, in order to enhance the resources’ utility and subsequently the user’s perception of the library as a whole (on a much broader level). On the other hand, I’m beginning to hear reverb from our units on copyright, and how flabbergasted we were to learn that copyright policy is essentially not created by librarians, or by the people affected by it, or by elected officials, or even by representatives from the larger companies reigning over this sector of the field; rather, it is primarily made by these large companies’ copyright lawyers. With the surprise that accompanied that realization, I find myself having the same response to patrons’ control over how library systems are organized.

It feels awfully trusting of the efficiency of your average user’s search behaviors, especially at larger institutions with huge volumes of subscribed resources, where arguably 75+% of users have no idea just how many diverse resources are actually at their disposal. It seems a little uninformed, as though dated expectations are being manifested in new software for libraries. However, projects like the ones done by COUNTER make a case for the value of usage statistics, or measured user behavior.

Colleagues of mine gave a fantastic presentation and lab demo on the COUNTER project, where we got to see data from a few COUNTER reports and learned ways to actually use that data (i.e., putting it into charts, graphs, etc.). The COUNTER projects about which we read this week seem interesting, especially the one about impact factor. However, I’m not sure I understand the difference between the citation-based impact factor and usage-based metrics. The Shepard (2010) article discussed usage bibliometrics, or usage-based measures of value, status, and impact. It isn’t the first time this concept has surfaced, but up until now I have generally accepted the information presented on this topic. I’ve never worked in a situation where this mattered, and though I may someday work someplace where these issues/concepts matter closely, I have no frame of reference whatsoever in which to situate myself when challenging, pushing, or actually evaluating them. Hopefully, as the semester progresses, I will keep expanding my frame and find methods of meaningful takeaway, like the class demos and talking to the librarians with whom I work on campus about their experiences – if any – with these programs, issues, and initiatives.

As I hinted at last week, this time of the semester is when things get more stressful than ever. I continue to complete and take thorough notes on all the readings for this class because ERM is very new to me. As easy as ERM may seem from an end-user perspective, it truly is a challenging arena of the library world. I wish there were a way to better demonstrate my understanding of the concepts despite the absence of any “real-life” context in which to consider the topics, which is how I believe people learn best. But after this class is done, regardless of the grades any student receives, each of us will take away as much as we put into it. Honest effort, time, and novice insight should always register some merit in a learning environment, but even when they don’t, I know that I will still come away from my classes this semester in an exponentially more learned position than I was before the term started. I am really proud of the time and effort that I have invested in this course and hope you’re enjoying the journey with me.

P.S. Happy Daylight Savings!

Saturday, October 30, 2010

Unit 9 continued: Winks thinks...


Part II of my Unit 9 post…

As I worked my way through this week’s readings, I wrote in my journal about my desire to be in a work position where I could get hands-on experience with ERMSs and a better understanding of where all these acronyms and systems fit together. And then my wish came true! One of my classmates gave a fantastic hands-on demo of ERMes, an open-source ERMS developed by a librarian at UW-Eau Claire. It was great! We first downloaded the program, then learned about the interface and how to navigate it, then actually used it on our local desktops with some dummy data prepared by the presenter.

As much as I wanted to get experience with these systems in order to better understand them, I will admit that I was skeptical of the demo and nervous about using it, for fear that the ERMS would be too complicated and just confuse me more. However, it was so good to go through the motions of what exactly an ERMS entails and to understand why “manual data entry” is a huge inconvenience, especially at a place with as many resources as UW (I don’t mind a little manual data entry every now and then, but after working with only five databases’ information? No. Way.).

Last but not least, what I really enjoyed this week was how much of the information started to evidence connections to other units as well as across my other courses this fall. First, the connections to other units were very apparent when I read, “Ted Koppel (2006) predicts there will be a move away from aggregators to more title selection with more truth in pricing...Additionally, there will be increased transparency for content providers on pricing, and the demise of aggregator packages...There will be pay-per-view support and tracking within ERM” (Hogarth & Bloom, 2003). Whoa – didn’t we just spend an entire unit a few weeks ago discussing pricing and subscription models, where the Big Deal ropes ever more libraries into package deals right alongside even less transparency in pricing? Also, learning about the forthcoming developments in Phase II of ERMI really encouraged me that licensing will continue to become more streamlined for libraries, with things like license expression and data dictionaries.

Speaking of data dictionaries, this week’s unit really connected to the database design (DBD, I’ll call it for short) course I am taking. I feel very fortunate to be taking DBD because not only am I spared feeling overwhelmed by the overload of lingo, but I also understand the pathways and relationships between user, library, resource, and specific content. It is neat to see, from the librarian end of things, why the data standards and the types of usage data provided for on the DBD end of things are so important. Plus, I felt pretty good reading about ERDs and knowing not only what one is but also how to make one, and make it well, for the highest efficiency for its intended purpose. No matter how hectic this time of the semester can get, it always seems to coincide with moments when I start realizing how different classes begin to overlap and connect - love it.

Image credit: Me! :) Happy pre-Halloween!

Unit 9: Vewwy, vewwy intewesting


Yesterday’s class session was a lot of fun! With so many things to which I want to respond, I’m splitting this week into two posts. The topic this week focused on Electronic Resource Management Systems: Vendors and Functionalities—sounds like hardly the place for the descriptor “fun,” right? What I enjoyed so much about it was a) the relevance of the information, given certain libraries’ and library types’ trends toward primarily e-resources, b) the demo we did, and c) the connections I was able to make between my semester classes.

The need for ERM systems stems from the sheer volume of electronic resources available in the packages and products to which libraries subscribe, an amount that is simply not manageable within a traditional ILS. ERMI, the Electronic Resource Management Initiative, is guided by the following principles (Hogarth & Bloom, 2003):

1. Print and e-resource management should be through an integrated environment.
2. Information provided should be consistent regardless of path taken.
3. Each data element should have a single point of maintenance.
4. Electronic resource management systems should be sufficiently flexible to make it possible to add new/additional fields and data elements.

In other words, in the coming years, librarians will be working toward ensuring these guidelines are interwoven with the new technologies and software available for managing their electronic resources. I found the list helpful as a sort of checklist against which I could compare different ERMS vendors’ products and consider how the different platforms and modules either enhance or obstruct these characteristics. Principle 3, in particular, clicked for me once I pictured it as a data model (see the sketch below).
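Here’s a toy sketch, entirely my own invention rather than anything from ERMI, of what a “single point of maintenance” might look like in an ERMS data model: each resource references one shared license record instead of carrying its own copy of the terms.

```python
from dataclasses import dataclass

@dataclass
class License:
    """One license record: the single point of maintenance (principle 3)."""
    licensor: str
    ill_allowed: bool

@dataclass
class EResource:
    title: str
    license: License  # a reference, not a copy, so updates happen in one place

shared = License(licensor="Example Vendor", ill_allowed=True)
resources = [
    EResource("Journal of Examples", shared),
    EResource("Example Abstracts Database", shared),
]

# Renegotiated terms get edited once, and every resource sees the change.
shared.ill_allowed = False
print(all(not r.license.ill_allowed for r in resources))  # True
```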

One of the major differences between the Hogarth & Bloom (2003) article that we read and the Collins chapter was that Collins spoke much more about the challenges of implementing an ERMS. Hogarth & Bloom (2003), on the other hand, focused on pushing the envelope in the future rather than stressing out about making these systems work now. It really illustrated – for me, at least – the tension inherent in libraries and the systems they use to manage their resources, and in technology in general, depending on library type. Perhaps the authors are on different sides of the library-field fence, but librarians (or who I think of as being on the use and implementation side of things) are concerned with making things actually work in the here and now, saying “Whoa, slow the heck down, what?”, whereas program developers (or who I think of as being on the software design side of things – which sometimes includes librarians) are intent on staying one step ahead of the game, pushing forward so as not to spend too much time tweaking a possibly soon-to-be-outmoded system.

I’m finding evidence of this in my research on mobile devices (more specifically Internet-capable handheld devices) and electronic resources, where there’s a ton of literature available that focuses on all the new tools and apps and opportunities for development. At the same time, however, there is hardly any literature that discusses actually implementing mobile services for licensed materials in a library. Vewy vewy interesting...

Saturday, October 23, 2010

Unit 8: Coming Soon...PasswordMan and the Digital Info Superheroes!


This week’s readings focused on technological protection measures and the implications they have for libraries and patrons, with a dash of information ethics/policy. Having just finished the last reading, about the socio-technical construction of who is allowed use in our digital information environment, it is interesting to think back to the beginning of the semester and how copyright really stole the limelight, warning us against letting it dominate information policy. Yet here, this week, we see that social values and business also very closely affect information policy, illustrating in real time the danger of letting any one entity control something that affects diverse users in varied situations.


Reflecting on what Eschenfelder (2008) discussed, libraries do need to pay more attention to the soft restrictions that get bundled into the licenses they sign. After reading the Harris (2009) book, I came away thinking that the most important points in licensing materials were to make sure that the vendors or publishers 1) didn’t place any outrageous restrictions on materials, 2) didn’t include any requirements for tracking patron use, and 3) were actually able to license the material. Having not been involved in any licensing of materials whatsoever, I cannot speak directly to whether these notions hold in practice. However, I will say that going into a position of licensing materials, I would probably subscribe to the assumption, which Eschenfelder (2008) describes as all too prevalent in our current professional environment, that soft use restrictions are the status quo.


But after reading the Eschenfelder (2010) article, I started asking myself: what is it that keeps libraries from picking up new technologies for controlling their collections? I think there’s a good possibility that the new technologies would help streamline their collections in the long run; unlike the example of watermarking interfering with legitimate use, they could keep protections in place without obstructing the resource. Maybe it’s a side effect of those long-held assumptions I discussed earlier: librarians are not picking up the new technologies because these aren’t part of the expected use restrictions, and they therefore imply greater restriction and control of access to the resources. Rather than seeing something better for their end users, organizers of cultural institutions in the U.S. may be viewing the new technologies as the Big Bad Wolf, a terror in disguise, when really, the old stuff is what’s making their collections difficult to use.


With that being said, I wouldn’t exactly place money on this argument, not yet at least. Why? A) I haven’t thought it through enough, and B) I myself am a victim of fearing the newer tools for controlling use of and access to resources. If I were a librarian controlling access to my materials, I would stick with what I know best because it’s what’s safe and what works (or what we think still works); as I told my dad last winter when he suggested I trade in my dearly beloved ’91 Lumina, Sporty, for a newer vehicle – why fix it if it ain’t broke?


(And, as a total aside, I’m glad I didn’t get a newer car last year, seeing as Mother Nature just totaled Sporty a month ago with a nasty September hailstorm…yep.)


Taking a turn in my stream of consciousness... Millman (2003) writes, “Authorization may be considered an implementation of policy, and an important goal of authorization systems is flexibility, accuracy, and efficiency in implementing policies.” Now, I’m not sure exactly which policies are meant here, but I take it to mean the policies that say who can access what, and to what extent. I experience this each week in the difference between what I’m authorized to do and view in the library as a student vs. what I can access at different staff levels across my different jobs. Authorization is a complex process, especially for people – like me – who have multiple roles within a single institution, which is where the Role-Based Access Control system that Millman (2003) describes comes in.
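Here’s how I picture the RBAC idea as a tiny sketch; the roles and permissions are my own made-up examples, not Millman’s:

```python
# In RBAC, permissions hang off roles, not off individual people.
ROLE_PERMISSIONS = {
    "student": {"search_catalog", "view_licensed_fulltext"},
    "staff":   {"search_catalog", "view_licensed_fulltext", "edit_holdings"},
}

def authorize(roles: set[str], action: str) -> bool:
    """A person with multiple roles (like me!) gets the union of every
    permission granted to any of their roles."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in roles)

me = {"student", "staff"}
print(authorize(me, "edit_holdings"))           # True, thanks to the staff role
print(authorize({"student"}, "edit_holdings"))  # False
```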


As far as authorization and authentication go, my immediate thought was that I do as much authenticating these days as anyone ever should have to, especially at UW, which seems to have greater security concerns than services like Gmail. Logging into my three different email accounts, two blogging accounts, a social network profile, online accounts for banking and shopping, a distance learning portal for my online classes, and multiple work terminals (each with their own passwords), I don’t know how we as human beings do it – remember which identity we chose for which program/website/account/etc. I foresee a new age of digital information superheroes, with PasswordMan (see picture above) leading their forces in the fight against cybercrime.


Image credit: Wikimedia by Thibault fr

Wednesday, October 20, 2010

Unit 7: “It’s the way you ride the trail that counts.”


Tomas Lipinski came to speak in our class about the ins and outs of the TEACH Act. Regardless of how unlibrarian-ish of me it may seem, I had never heard of the TEACH Act (*gasp!). Yes, it’s true. Somehow, freshman woes and other plagues of high school seem to have been demanding my attention at that moment in history, so cut me some slack here...

The TEACH Act, signed into law by President Bush in 2002, brought some pretty weighty changes to distance education as we know it. After reading about it in Lipinski’s article and in an ARL issue brief, however, I can’t say we have it entirely figured out yet. It was interesting to compare the pros and cons of the TEACH Act, seeing as it seemingly provides a lot more leniency for the use of copyrighted materials in distance education. However, Lipinski pointed out some drawbacks that I would otherwise have missed. Not only does TEACH introduce potential licensing provision problems, but it also makes it difficult for users to negotiate between multiple copyrights on works (Lipinski gives the example of a CD of Copland, where the copyright – or lack thereof – on the music differs from the copyright on the sound recording).


Right away, I thought the TEACH Act didn’t sound so bad. It seemed to make available many more resources than had otherwise been accessible to distance students. As one who has taken online/distance courses (not really from a distance, however, as I’m still here in MadTown), I can definitely speak to the importance of having materials that provide the equivalent experience of the face-to-face instructional setting, for use at the learner’s convenience within the time appropriated to the course. The only streaming I have had in courses was voice lectures and voiceover tutorials created specifically by instructors. Speaking of which, the whole issue of MIA (mediated instructional activities) materials still has me hung up; I don’t understand exactly what it means. Going through the readings, I felt like the moment I understood what TEACH meant, it was explained further in a way that just re-confused me.

I agree with Lipinski that the “compliances” required by TEACH are troublesome on multiple levels, most significantly (from my experience and perspective) in that user rights would become more dependent on institutional compliance. Yet this, too, could be debated. Have user rights ever been independent of the institution providing the resources for one’s education? As a student using materials, it’s easy to overlook the licensing details and the remote access clearance and the copyright stipulations. But at the end of the day, we as users are quite dependent on our institution. I think the problem comes into play when that dependency shifts from the institution to an entity beyond the institution that demands compliance with non-institutionally created rules and requirements. That’s the kicker.

With the test coming up this Friday, I’m going to be spending a lot of time with this and our other readings from the semester, trying to weave some common themes between them other than “copyright.” The TEACH Act is one that I’m still trying to put my finger on, as the Lipinski article wasn’t exactly a “bit of light reading.” Although he did a nice job explaining what the Act means, the Act itself is just very wordy and circular, in my opinion. At the end of the day, I’m still going to ride the trail more traveled and wave my white flag of Fair Use until they take me in for good. For now, it seems good enough to at least act with good intentions and to be willing to a) change or b) argue your case if someone thinks otherwise.