Saturday, November 6, 2010

Unit 10: Standards, Value, and the Pursuit of Electronic Articles


Echoing my own reflections from last week, this past unit provided another opportunity to consider the importance of data standards for electronic information resources. Yue (2007) bulleted out three reasons that I agree carry the greatest importance: interoperability, efficiency, and quality. Without standards, systems and information will become increasingly incompatible, resulting in more confusion for end-users and more work, time, and money spent by libraries attempting to somehow integrate their resources into a user-friendly knowledgebase. Pesch (2008), who provides a nice overview of the e-journal life cycle and lists the groups involved in creating standards, including NISO, COUNTER, and UKSG, emphasized that while libraryland has a good start on establishing extensible working standards, there is still a long way to go.

One area in particular where I have extensive hands-on experience with end-users, and where I am realizing why standards are so important, is OpenURLs. Whenever I assist a patron with accessing an article electronically, the UW’s electronic resource system uses SFX (Ex Libris) as a link resolver to connect the source with the target. In other words, the link resolver separates out the elements of the OpenURL (not a static link) from the source and links them to the appropriate target. If either the source or the target is not OpenURL-compliant, the metadata will not be exchanged (read: the end-user won’t get what they need) (read: panic/chaos/rage/confusion ensues, depending on the assignment/project deadline...). With the OpenURL Framework, standards bring us a large step closer to preventing this all-around unpleasant interaction. :) Thanks to classmate Anna for her super presentation on FindIt, UW’s link resolver.
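(For my own notes, here is a rough sketch in Python of the piece of this I can actually picture: pulling the citation elements out of the OpenURL itself. The resolver hostname and all the citation values are invented for illustration; this is a simplified OpenURL-style link, not anything from UW’s actual SFX setup.)

# A minimal illustration of what a link resolver does with the "source"
# side of the transaction: it parses the citation metadata out of the
# OpenURL's query string so it can look up an appropriate full-text target.
# Hypothetical example only; the host and citation values are made up.
from urllib.parse import urlparse, parse_qs

openurl = ("http://sfx.example.edu/resolver?"
           "genre=article&issn=1234-5678&volume=12&issue=3"
           "&spage=45&date=2010&atitle=Sample+Article+Title")

# Separate out the key/value metadata elements carried by the OpenURL.
query = parse_qs(urlparse(openurl).query)
citation = {key: values[0] for key, values in query.items()}

# A real resolver would now check its knowledge base for a target that
# covers this ISSN, volume, and date, and build a link to the article there.
print(citation["issn"], citation["volume"], citation["spage"])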

One of the tangential roads my thoughts traveled this week was how much influence user behavior truly has on the research process in libraries. When user behavior changes, we as librarians try to change with it, or even stay a step ahead of the change, and let systems development evolve with what users do or expect. On one hand, I can see where this makes sense; yes, we should make electronic resources available in the way users expect them to be accessible, in order to enhance the resources’ utility and, on a much broader level, the user’s perception of the library as a whole. On the other hand, I’m beginning to hear reverb from our units on copyright and how flabbergasted we were to learn that copyright policy is essentially not created by librarians, or by the people affected by it, or by elected officials, or even by representatives from the larger companies reigning over this sector of the field; rather, it is primarily made by those large companies’ copyright lawyers. With the same surprise that accompanied that realization, I find myself reacting to how much control patrons’ behavior has over the way library systems are organized.

It feels awfully trusting of the efficiency of the average user’s search behaviors, especially at larger institutions with huge volumes of subscribed resources, where arguably 75+% of users have no idea just how many and how diverse the resources at their disposal actually are. It seems a little uninformed, as though dated expectations are being manifested in new software for libraries. However, projects like the ones done by COUNTER make a case for the value of usage statistics, or measured user behavior.

Colleagues of mine gave a fantastic presentation and lab demo on the COUNTER project, where we got to see data from a few COUNTER reports and learned ways to actually use that data (e.g., putting it into charts, graphs, etc.). The COUNTER projects about which we read this week seem interesting, especially the one about impact factor. However, I’m not sure I understand the difference between the citation-based impact factor and author- or usage-based metrics. The Shepard (2010) article discussed usage bibliometrics, or usage-based measures of value, status, and impact. This isn’t the first time the concept has surfaced, but up until now I have generally accepted the information presented on the topic. I’ve never worked in a situation where this mattered, and though I will someday work somewhere with something closely tied to these issues and concepts, I currently have no frame of reference in which to situate myself for challenging, pushing, or actually evaluating them. Hopefully, as the semester progresses, I will continue expanding my frame and find methods of meaningful takeaway, like the class demos and talking to the librarians with whom I work on campus about their experiences – if any – with these programs, issues, and initiatives.
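(Just to make the lab demo concrete for myself, here is a little Python sketch of how one might total up a COUNTER-style report before charting it. The file name and month columns are hypothetical, a simplified JR1-like layout I invented rather than the official COUNTER format, and in the actual demo we worked in a spreadsheet rather than code.)

# Tally total full-text requests per month from a simplified, made-up
# JR1-style CSV (one row per journal, one column per month).
import csv
from collections import defaultdict

monthly_totals = defaultdict(int)

with open("jr1_2010_sample.csv", newline="") as f:  # hypothetical file
    for row in csv.DictReader(f):
        for month in ("Jan", "Feb", "Mar", "Apr", "May", "Jun"):
            monthly_totals[month] += int(row.get(month) or 0)

# Print a quick text "chart" of the monthly totals; in the demo we pasted
# numbers like these into a spreadsheet graph instead.
for month, total in monthly_totals.items():
    bar = "#" * (total // 100)
    print("{:>4}: {} ({})".format(month, bar, total))

Even a crude tally like this makes it clear why a standard report layout matters: without COUNTER, every vendor’s usage file would need its own one-off processing.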

As I hinted at last week, this is the time of the semester when things get more stressful than ever. I continue to complete and take thorough notes on all the readings for this class because ERM is very new to me. As easy as ERM may seem from an end-user perspective, it truly is a challenging arena of the library world. I wish there were a way to better demonstrate my understanding of the concepts despite the absence of any “real-life” context in which to consider the topics, since that kind of context is how I believe people learn best. But after this class is done, regardless of the grades any of us receive, each of us will take away as much as we put into it. Honest effort, time, and novice insight should always register some merit in a learning environment, but even when they don’t, I know that I will still come away from my classes this semester in an exponentially more learned position than I was in before the term started. I am really proud of the time and effort I have invested in this course and hope you’re enjoying the journey with me.

P.S. Happy Daylight Savings!
