What is quality and how can we measure it?

Andrew Booth

Abstract

Pressures to demonstrate the quality of library services are felt by all libraries. Customers are becoming more discriminating and there is an imperative for managers to respond to customer demand. This article begins by piecing together three complementary perspectives that combine into a composite picture of quality. It briefly outlines some of the limitations of each approach before examining recent initiatives to capture library quality. The article concludes by encouraging an evidence-based approach to service quality measurement.

Introduction

Libraries everywhere face increasing pressure to demonstrate the quality of their services. Academic libraries encounter increasing competition; research libraries must further their institution’s objectives while maximising use of resources; and health service libraries face no lesser challenge as they strive to secure service level agreements, support professional accreditation and contribute to clinical governance and other quality initiatives. Meanwhile, library customers are becoming more discriminating in demanding quality services. Such services enhance profitability, improve productivity and contribute to competitive advantage. Managers must supplement consideration of efficiency and economy with “behavioural” values, such as perceived quality, customer satisfaction, perceived value and customer loyalty. Most information professionals prefer not merely to address “must dos”. Where quality goes beyond lip-service and reflects an organisational commitment to progressive change, we find the characteristics of the “learning organisation”. Working within such an organisation is professionally fulfilling as we identify best practices and learn from one another, continually improving current library practice.

What is quality?

Quality is an elusive concept. Definitions range from the vague (e.g. ‘the totality of characteristics of an entity that bear on its ability to satisfy stated and implied need’ (1)) to the Martini advert (“doing the right thing, at the right time, in the right way, for the right person - and having the best possible results” (2)). In defining the concept we either let it escape butterfly-like from our grasp or transfix it upon a pin. In either case we lose sight of what really constitutes quality. Alternatively, and equally unsatisfactorily, we seek refuge in the professional’s mantra “I know it when I see it”. Imagine that you are trying to capture the essence of a famous statue - perhaps the Venus de Milo - equipped only with a Polaroid camera. To convey the full three-dimensional effect you take a number of still photographs each from a different angle or aspect. The composite image, while bearing little resemblance to the reality, is infinitely preferable to a single two-dimensional view. So it is with the quality of our library services - “the value of library services is clearly a multidimensional construct that will not easily be captured by single or simplistic measures” (3). In the absence of some universal all-embracing view we piece together a composite from three different but complementary perspectives:

  • “top-down” - organisational audit and accreditation where reference standards are determined and then applied to our service.
  • “sideways” - peer review and benchmarking where some consensus of acceptable standard or performance is used to locate the relative position of our own service.
  • “bottom-up” - customer satisfaction and user feedback where performance of a service is considered against user expectations (4).

Each of the above, though useful, has limitations:

  • “top-down” approaches generally fail to take into account local limitations or circumstances. They emphasise structures and processes rather than actual impact (e.g. two libraries may have the same number of staff and books, yet one could provide an excellent service and the other a substandard one) and focus on documentation (“have we got a written procedure?” rather than “do staff handle this well and consistently?”).
  • “sideways” approaches focus on what is done, not on what ought to be done. In an increasingly evidence-based culture, libraries cannot argue that their standards are no better or worse than those of similar services when recourse to some external standard is required. Averaging approaches have the inherent disadvantage that, wherever the average is located, roughly half of individuals (or services) will always fall below it!
  • “bottom-up” approaches command considerable emphasis in this era of consumer empowerment. However, perceptions of quality are often driven by user expectations, realistic or otherwise, and are not necessarily bounded within the cost envelope in which library managers operate. Neither is it enough to ask users how much they value library services, as they may be insufficiently informed about what is available or possible. Cynics claim “keep expectations low and you have fewer disappointed customers”. Few would consider this a genuine commitment to quality!

Even if we triangulate our top-down, bottom-up and sideways approaches, and can demonstrate high performance across each set of criteria, this is no guarantee that we are delivering a quality service. Most quality assessment methods neglect “soft attributes” of library services (knowledge, courtesy, friendliness, politeness, empathy, promptness, accuracy, individualised attention, ability to convey trust and confidence (5)).

Librarians also need to understand the difference between the expected and perceived value and quality of their services. Perceived quality is the consumers’ judgement about a service’s overall excellence or superiority (6). Service quality compares the desired service and the perceived service (7). Customer satisfaction, on the other hand, compares the predicted service (the level of service customers believe is likely to occur) and the perceived service. Quality is therefore fundamentally subjective - in the eyes of the beholder.
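To make this distinction concrete, consider a hypothetical user who rates a literature search service on a nine-point scale: desired service 9, predicted service 6, perceived service 7 (figures invented purely for illustration):

    Service quality gap = perceived - desired   = 7 - 9 = -2  (falls short of desires)
    Satisfaction gap    = perceived - predicted = 7 - 6 = +1  (exceeds predictions)

The same encounter can therefore leave a customer satisfied while still registering a service quality shortfall.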

How can we measure quality?

Librarians have struggled for many years with the challenge of demonstrating the quality of the service they provide (8). Such evaluations may occur at an individual, service or organisation level. At an individual level staff development review, or performance appraisal, assesses and addresses the performance of a particular member of library staff. At a service level “evaluation” covers everything from informal opinions about whether a service is working well to carefully structured programmes of evaluation. Organisationally, formal evaluation is in the ascendancy as management strategies, such as total quality management (TQM) and continuous quality improvement (CQI), are adopted - strategies that rely on gathering and using data for measuring service quality (9).

The librarian’s professional judgement used to be the primary arbiter of service quality. To the observer, the competence of the librarian was a surrogate measure for the quality of the service. Two associated trends now challenge this position: increasing demand for accountability and the greater transparency of library skills, embodied in end-user searching and use of the Internet. These trends place a premium on objective approaches, externalised from the service and yet remaining true to a library’s service-oriented principles. The following brief summary characterises some common approaches.

SERVQUAL

Recent years have seen service quality assessment influenced by the SERVQUAL conceptual model, which identifies five dimensions consistently ranked by customers as most important for service quality. These dimensions are tangibles, reliability, responsiveness, assurance and empathy. Reliability is consistently the most important contributor to service quality, a finding that translates to libraries, where SERVQUAL has been used to measure reference, interlibrary loan and reservation services (10).

SERVQUAL identifies potential gaps, both internal and external, between expectations and perceptions of service delivery. It helps service providers to understand both customer expectations and perceptions of specific services, as well as quality improvements over time. Doing so may yield further insights, such as which service elements require improvement and where staff training is needed.
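A minimal sketch of this gap scoring, assuming the usual perception-minus-expectation arithmetic (the dimension groupings, scale and figures below are invented for illustration and do not reproduce the validated 22-item instrument):

    # Minimal sketch of SERVQUAL-style gap scoring (all data hypothetical).
    # Each respondent scores expectation (E) and perception (P) for each item,
    # e.g. on a 1-7 scale; the gap score is P - E, averaged per dimension.
    from statistics import mean

    # dimension -> list of (expectation, perception) pairs for its items
    responses = {
        "tangibles":      [(6, 5), (5, 5)],
        "reliability":    [(7, 5), (7, 6)],
        "responsiveness": [(6, 6), (6, 7)],
        "assurance":      [(6, 5), (7, 6)],
        "empathy":        [(5, 5), (6, 4)],
    }

    for dimension, pairs in responses.items():
        gap = mean(p - e for e, p in pairs)  # negative: perceptions fall short
        print(f"{dimension:<15} gap score {gap:+.2f}")

Negative gaps flag dimensions where perceptions fall short of expectations; tracking the same scores over time shows whether improvements actually register with users.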

Introduced in 1988, SERVQUAL has been used across a wide range of service industries, including health care and banking. It has been used in public, special and academic libraries (10). U.S. research libraries have been particularly influenced by this model and recent years have seen its introduction within health libraries. Martin (11) describes the use of SERVQUAL by ten NHS library services across Somerset, Devon and Cornwall. The project not only provided an overall picture of the quality of library services but also enabled the ten libraries to measure their own service quality and to benchmark themselves against others. This customer-based approach counters the “big is beautiful” emphasis of traditional collection-based criteria of quality. The SERVQUAL instrument, modified for library settings, provides an outcome measure with which managers can spotlight service quality. Within the health library community there is an ongoing need to understand which aspects of service quality are most important.

LibQUAL+™

Following the development of the 22-item SERVQUAL instrument, concern was expressed that not all of the issues it measures are relevant to libraries. A 22-item instrument, LibQUAL+™, was developed from 56 items identified through interviews with students and academics. This instrument has been shown psychometrically to be reliable and valid. Like SERVQUAL, LibQUAL+™ focuses on users’ perceptions and expectations. It includes such dimensions as empathy, place, collections, reliability and access (12). In 2002 the Association of Academic Health Sciences Libraries (AAHSL) in the United States piloted the LibQUAL+™ instrument. Thirty-six libraries in the consortium participated in a project, funded by the National Library of Medicine, which identified five unique “health library” questions inadequately covered by the LibQUAL+™ survey. These questions related to “providing health information when and where I need it”, “employees teaching me how to access or manage information”, an “environment that facilitates group study and problem solving”, “access to information resources that support patient care” and “having comprehensive electronic resources” (13). Quantifiable data obtained from LibQUAL+™, or indeed from any validated tool, are not an end in themselves. Library staff should discuss user perceptions and expectations, using their experience to interpret service quality data and to suggest how perceived shortfalls might be addressed.
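LibQUAL+™ responses are commonly analysed against both a minimum acceptable and a desired service level, yielding an “adequacy” gap (perceived minus minimum) and a “superiority” gap (perceived minus desired). A sketch following that convention appears below; the items are paraphrased from the AAHSL questions above and all figures are invented:

    # Sketch of LibQUAL+(TM)-style gap analysis (items paraphrased, figures
    # invented). Each item carries minimum, desired and perceived ratings,
    # e.g. on a 1-9 scale.
    items = {
        "health information when and where needed": (6.5, 8.2, 7.0),
        "staff teaching information skills":        (5.8, 7.9, 5.5),
        "resources supporting patient care":        (7.1, 8.6, 6.8),
    }

    for item, (minimum, desired, perceived) in items.items():
        adequacy = perceived - minimum     # negative: below tolerable level
        superiority = perceived - desired  # rarely positive in practice
        flag = "  <- below minimum" if adequacy < 0 else ""
        print(f"{item:<42} adequacy {adequacy:+.1f}, "
              f"superiority {superiority:+.1f}{flag}")

Items falling below users’ minimum levels are the natural starting point for the staff discussions advocated above.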

Performance indicators

Hewlett describes how performance indicators (PIs) can be used in health libraries to quantify how well a library service is performing (14). PIs are comparative values, often ratios or percentages, which indicate the quality or level of services and can be used to compare similar services, or the same service across time. PIs thus extend performance measures, which merely show how much has been done.
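To illustrate the ratio character of PIs, the short sketch below computes two invented indicators; both the figures and the indicator definitions are hypothetical rather than drawn from Hewlett’s paper:

    # Two illustrative performance indicators (figures and definitions
    # hypothetical). Because PIs are ratios or percentages they can be
    # compared across services or across time, unlike raw counts.
    ill_supplied = 412        # interlibrary loan requests satisfied this year
    ill_received = 450        # interlibrary loan requests received this year
    active_users = 780        # users recording at least one transaction
    registered_users = 1200   # total registered users

    ill_fill_rate = 100 * ill_supplied / ill_received           # quality of supply
    market_penetration = 100 * active_users / registered_users  # reach of service

    print(f"ILL fill rate:      {ill_fill_rate:.1f}%")
    print(f"Market penetration: {market_penetration:.1f}%")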

Benchmarking

Any of the above three methods can be used as the vehicle for benchmarking. Benchmarking compares productivity, quality and practices in your own organisation with those of a chosen similar organisation. Benchmarking can be:

  • internal, where agreed good practices may be identified within the organisation,
  • competitive, where performance is compared with organisations or services in the same field,
  • functional, where performance of a particular function (say interlibrary loans) is compared across sectors,
  • and generic, where performance is compared with organisations or services regardless of the field.

Benchmarking requires careful selection of the measures to be used, so that they represent measurements that are central to the success or failure of the service. Results are analysed and benchmarking partners identified, thereby facilitating identification of “best practice”.
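A minimal sketch of how a single agreed measure might be compared across benchmarking partners; the service names and figures are invented:

    # Sketch of a simple benchmarking comparison (names and figures invented).
    # One agreed measure - here an ILL fill rate (%) - is computed for each
    # service so that relative position and possible partners become visible.
    services = {
        "Library A": 91.6,
        "Library B": 84.2,
        "Our library": 87.3,
        "Library D": 94.8,
    }

    ranked = sorted(services.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (name, fill_rate) in enumerate(ranked, start=1):
        marker = "  <- potential benchmarking partner" if rank == 1 else ""
        print(f"{rank}. {name:<12} {fill_rate:.1f}%{marker}")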

Accreditation

Accreditation has enjoyed popularity within the health sector over many years (15), particularly with the development of the HeLICON accreditation checklist and the accompanying toolkit. Although accreditation is time consuming and shares the paperwork burden of many quality assurance processes, it carries many benefits (16). These include the customer focus shared by SERVQUAL and LibQUAL+™, a heightened profile for the service, motivation and team building for library staff and, above all, the quest for ongoing improvement.

What you can count versus what counts

With users, providers and commissioners becoming increasingly aware of the importance of service quality, and with approaches to measuring this previously elusive concept becoming increasingly sophisticated, service quality assessment has been transformed in many health libraries. This move, from measuring what you can count to measuring what counts, parallels the evolution of clinical quality from audit to evidence-based practice. Use of standard instruments such as SERVQUAL and LibQUAL+™, within the health sector and across sectors, is establishing an increasingly important evidence base. Crude, idiosyncratic questionnaires for evaluating local library services seem destined for extinction. A remaining challenge is for accreditation toolkits and checklists to become based on evidence-based criteria rather than on professional consensus alone. Approaches to quality from wider health care, such as practice guidelines, integrated care pathways and variance of care analysis, also hold exciting potential for our services. Of course, the challenge for all of us is how to invest the time and energy needed to measure and evaluate service quality without doing so at the expense of the quality of the library services themselves!

References

1. International Organization for Standardization, Technical Committee ISO/TC 176 (1994). ISO 8402: Quality Management and Quality Assurance - Vocabulary. 2nd ed. Geneva: ISO.

2. Health Care Quality: What is quality? http://www.consumer.gov/qualityhealth/quality.htm (Accessed 14 November 2003)

3. De Jaeger K (2001). Impact and outcome: searching for the most elusive indicators of academic library performance. Presented at Meaningful Measures for Emerging Realities: an IFLA satellite preconference to the 4th Northumbria International Conference on Performance Measurement in Libraries & Information Services. http://www.arl.org/stats/north/PM4_notebook.pdf (Accessed 14 November 2003)

4. Zeithaml VA, Parasuraman A and Berry LL (1990). Delivering Quality Service: Balancing Customer Perceptions and Expectations. New York: The Free Press.

5. Snoj B and Petermanec Z (2001). Let users judge the quality of faculty library services. New Library World 102(9): 314-324.

6. Rowley J (1998). Quality measurement in the public sector: some perspectives from the service quality literature. Total Quality Management 9(2/3): 321-335.

7. Oliver RL (1996). Satisfaction: A Behavioral Perspective on the Consumer. New York, NY: McGraw-Hill.

8. Broady-Preston J and Preston H (1999). Demonstrating quality in academic libraries. New Library World 100(3): 124-129.

9. Marshall JG (1995). Using evaluation research methods to improve quality. Health Libraries Review 12(3): 159-172.

10. Nitecki DA (1997). SERVQUAL: measuring service quality in academic libraries. ARL Bimonthly Report 191 (April). http://www.arl.org/newsltr/191/servqual.html (Accessed 14 November 2003)

11. Martin S (2003). Using SERVQUAL in health libraries across Somerset, Devon and Cornwall. Health Information & Libraries Journal 20(1): 15-21.

12. Kyrillidou M and Hipps K (2001). Symposium on measuring library service quality. ARL Bimonthly Report 215: 9-11. http://www.arl.org/newsltr/215/octsymp.html (Accessed 14 November 2003)

13. Lee TP (2003). Exploring outcomes assessment: the AAHSL experience. http://www.libqual.org/documents/admin/Exploring.ppt (Accessed 14 November 2003)

14. Hewlett J (1998). Performance indicators in NHS libraries. Health Libraries Review 15(4): 245-253.

15. Fowler C (1998). Accreditation for health care libraries in the United Kingdom. Health Libraries Review 15: 296-299.

16. Sharp S (1999). Evidence based accreditation: the experience of preparing for and undergoing LINC accreditation. Library Association Health Libraries Group Newsletter 16(4): 6-9.

Andrew Booth

Director of Information Resources

ScHARR, University of Sheffield,

Regent Court, 30 Regent Street, SHEFFIELD, S1 4DA

Tel: 0114 222 0705

A.Booth@sheffield.ac.uk