Gathering Statistics: Measuring the Value of Technical Services

ALCTS recently held an e-forum on “Turning Statistics Into Assessment: How Technical Services Measure the Value of Their Services.” If you missed the discussion, the thread should be available on the e-forum archive page. I enjoyed it because I have been wondering myself how we can measure the value of technical services. Is gathering all kinds of statistics on the number and format of items cataloged helpful? What statistics should be gathered? And how does one get from numbers, to assessment, to a statement about the value of services?

It turns out that many people gather statistics such as:

- number of items cataloged (that is, added to the catalog), broken down by format
- number of items ordered and items withdrawn
- money spent by category, such as purchased MARC records or LC class number
- brief and full bibliographic, holdings, and item records created
- authority records created and/or updated
- OCLC records upgraded and/or added
- electronic resources tracked, such as the number of items loaded, purchased, or licensed
- items repaired or sent to the bindery
- usage statistics
- metadata records created and/or edited
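Keeping counts like these does not require anything elaborate. As a rough sketch (the category names and numbers below are hypothetical, not drawn from the e-forum), a small script or spreadsheet that appends a monthly tally to a running file is enough to build the kind of dataset discussed later in the thread:

```python
import csv
from collections import Counter

# Hypothetical monthly tally of technical services activity.
# The category names are illustrative, not any kind of standard.
monthly_stats = Counter()

def record(category, count=1):
    """Add to the running tally for one reporting category."""
    monthly_stats[category] += count

record("items_cataloged_print", 42)
record("items_cataloged_ebook", 17)
record("authority_records_created", 5)
record("items_sent_to_bindery", 3)

# Append this month's counts to a running CSV so the numbers accumulate
# over time instead of living only in one annual report.
with open("ts_stats.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for category, count in sorted(monthly_stats.items()):
        writer.writerow(["2012-03", category, count])
```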

These statistics are shared in a variety of ways, from the annual report to reports prepared for administration. They also seem to serve several purposes: some were used for staff evaluations, and some were even used to help hire contract employees for grant projects.

One issue with what is gathered is what is missing, and several people asked about exactly that. One of the more interesting threads was about determining turnaround time for the complete processing of an item. The difficulty is how to define a complete process, or, in the words of one participant, what counts as “start to finish.” Someone mentioned that they can get turnaround time from their vendor. One solution for recording local turnaround time was to place a physical workslip with the item and date it at the beginning and end of the process.
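The workslip idea lends itself to a very small calculation: once the start and end dates are keyed into a file (or pulled from ILS timestamps), turnaround time falls out directly. A minimal sketch, assuming a hypothetical CSV with one row per item:

```python
import csv
from datetime import date
from statistics import median

# Hypothetical file: one row per item, with the dates written on the
# workslip at the start and end of processing (YYYY-MM-DD).
turnaround_days = []
with open("workslips.csv", newline="") as f:
    for row in csv.DictReader(f):
        start = date.fromisoformat(row["received"])
        finish = date.fromisoformat(row["shelf_ready"])
        turnaround_days.append((finish - start).days)

if turnaround_days:
    print(f"items measured: {len(turnaround_days)}")
    print(f"median turnaround: {median(turnaround_days)} days")
    print(f"slowest item: {max(turnaround_days)} days")
```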

Another concern was standardization, and the issue raised was multifaceted. First, there was the question of how to standardize statistics within an organization so that those statistics make sense. Many now rely on their ILS to track a variety of actions within the system, which is much more reliable than keeping a paper trail. However, people are gathering all kinds of statistics in very different ways; what is gathered seems to be custom tailored to the institution. One institution even used a vendor solution called V-Insight. That leads to the other facet of the question: if statistics reflect local practices, how is it possible to compare the work of technical services across institutions? For electronic resources, at least, there is some help: the COUNTER standard for usage statistics, tools such as CORAL for managing electronic resources, and SUSHI, a protocol for harvesting COUNTER reports.
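For usage data in particular, SUSHI exists so that COUNTER reports can be pulled programmatically rather than downloaded by hand from each vendor. The sketch below assumes a provider that exposes the RESTful COUNTER_SUSHI interface; the base URL, identifiers, and dates are placeholders, and many platforms only offer the original SOAP-based service, so treat this as a shape of the idea rather than a recipe:

```python
import json
import urllib.parse
import urllib.request

# Placeholder values -- each provider publishes its own SUSHI base URL
# and issues its own customer/requestor identifiers.
BASE_URL = "https://sushi.example.org/counter/r5"
params = {
    "customer_id": "YOUR_CUSTOMER_ID",
    "requestor_id": "YOUR_REQUESTOR_ID",
    "begin_date": "2012-01",
    "end_date": "2012-12",
}

# Request the title-level report; other report types use other paths.
url = f"{BASE_URL}/reports/tr?{urllib.parse.urlencode(params)}"
with urllib.request.urlopen(url) as response:
    report = json.load(response)

# The report header identifies the institution and reporting period, which
# is what makes harvested files from different vendors comparable at all.
print(report.get("Report_Header", {}).get("Report_Name"))
```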

The value of these statistics seems to rest on several factors. One person summed this up as “a reason and a comparison.” First, you need a reason for collecting the statistics that you collect; in other words, don’t just collect everything, but focus on what your institution needs to know. It is also essential to keep these statistics over time. Then, from what is essentially a dataset for your technical services, you can determine a baseline in relation to the goals set by the unit, department, or institution. These goals can even be those of a consortium or an association like the Association of Research Libraries. Another person summarized this as coupling the “why” with “how the data are used.” Are you using data to evaluate processes and performance? Are the data needed for assessment or to create benchmarks? Are the data needed to spot trends? One person recommended this resource: Megan Oakleaf, The Value of Academic Libraries: A Comprehensive Research Review and Report (Chicago: Association of College and Research Libraries, 2010), available at http://www.ala.org/ala/mgrps/divs/acrl/issues/value/val_report.pdf.
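Put concretely, once counts have been kept for a while, a baseline is just a summary over the accumulated file, and a trend is the same summary computed period by period. A small sketch against the hypothetical ts_stats.csv from the earlier example, with the comparison target standing in for whatever goal the department or consortium sets:

```python
import csv
from collections import defaultdict
from statistics import mean

# Read the running tally (month, category, count) kept over time.
by_month = defaultdict(int)
with open("ts_stats.csv", newline="") as f:
    for month, category, count in csv.reader(f):
        if category == "items_cataloged_print":
            by_month[month] += int(count)

months = sorted(by_month)
baseline = mean(by_month[m] for m in months)

# Compare each month against the baseline -- the same comparison works
# against a departmental goal or a consortial benchmark instead.
for m in months:
    delta = by_month[m] - baseline
    print(f"{m}: {by_month[m]:5d}  ({delta:+.1f} vs. baseline {baseline:.1f})")
```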

The one thing I found sorely missing from the entire thread was metadata. One institution reported that they gather statistics in CONTENTdm for metadata records created and edited; other than that, it seemed that no one gathered statistics on metadata. I have been trying to determine how to measure the value of metadata services. How do you measure the value of creating and editing data dictionaries, transformations, crosswalks, data cleanup, or writing guidelines for digital repositories? It seems that this type of work cannot be “quantified” in the way described in this e-forum. Or perhaps it can, and institutions are already doing it. Is it based on documented deliverables, as MIT’s page on metadata services might suggest? Is it the analysis of how effective search and retrieval is within a system? If you are using statistics to measure the value of metadata services, leave a comment!
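If the search-and-retrieval angle is measurable at all, one possibility (and this is only a guess at how it might be done, not something reported in the e-forum) is a classic precision/recall check over a handful of sample queries, run before and after a metadata cleanup project. Everything below is illustrative: the query set, the relevance judgments, and the search_repository function all stand in for local pieces you would have to build.

```python
# Purely illustrative precision/recall check over sample queries.
def search_repository(query):
    """Placeholder for the repository's actual search interface;
    returns canned results here so the sketch runs as written."""
    return ["rec-101", "rec-204", "rec-999"]

sample_queries = {
    # query -> set of record IDs judged relevant by a cataloger
    "civil war diaries": {"rec-101", "rec-204", "rec-311"},
}

for query, relevant in sample_queries.items():
    retrieved = set(search_repository(query))
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    print(f"{query}: precision={precision:.2f} recall={recall:.2f}")
```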
