
SERVICE QUALITY IN ACADEMIC LIBRARIES: AN ANALYSIS OF LibQUAL+™ SCORES AND INSTITUTIONAL CHARACTERISTICS

by

KATHLEEN F. MILLER
B.A. State University of New York at Albany, 1979
M.L.S. State University of New York at Albany, 1981

A dissertation submitted in partial fulfillment of the requirements for the Degree of Doctor of Education in the Department of Educational Research, Technology, and Leadership in the College of Education at the University of Central Florida

Orlando, Florida
Spring Term 2008

Major Professor: William Bozeman

© 2008 Kathleen F. Miller

ABSTRACT

This exploratory study considered the problem of assessing quality in academic libraries. The research question that framed the investigation asked whether service quality scores from the LibQUAL+™ instrument were related to the following college or university characteristics: institutional type, enrollment level, or the level of investment made in libraries. Data regarding Carnegie classification, FTE enrollment, and library expenditures were collected for 159 college and university libraries that participated in LibQUAL+™ during 2006. Descriptive statistics, bivariate correlations, and regression analyses were calculated and the Bonferroni adjustment was applied to significance levels to compensate for errors caused by repeated calculations using the same data.

Several statistically significant relationships were found; notably, negative correlations were found between each of the LibQUAL+™ scores and total library expenditures. The study suggested that higher expectations among library users in large, research libraries led to slightly lower LibQUAL+™ scores. Implications for practice included that survey results should only be used as one component of an assessment strategy, and practitioners might consider the potential role of library marketing or public relations efforts to influence user expectations. Recommendations were made for future research including replicating some aspects of this study with a more representative sample, analyzing respondent comments as well as score data, and exploring whether

there are reliable differences in results for different types of institutions or among groups of respondents (students and faculty, or faculty by discipline).

To my parents, with love and gratitude

ACKNOWLEDGMENTS

No dissertation is successfully completed without the assistance, patience, and support of instructors, advisors, and colleagues. I am particularly grateful to Dr. William Bozeman, who graciously stepped in as my advisor to supervise the final months of my work at UCF and the preparation of this dissertation. I also wish to express my gratitude to the members of my dissertation committee: Mr. Barry Baker, Dr. George Pawlas, and Dr. Levester Tubbs. Each of them has generously offered his time, expertise, encouragement, and advice to me throughout this project. Finally, I am indebted to Dr. Jess House, for his advice and encouragement throughout my doctoral studies and for helping me shape some vague ideas about library service quality into a successful research proposal.

TABLE OF CONTENTS

LIST OF FIGURES ... ix
LIST OF TABLES ... x
CHAPTER ONE: INTRODUCTION ... 1
  Background ... 1
  Customer Satisfaction and Service Quality ... 3
  Measuring Library Quality ... 6
  LibQUAL+™ ... 8
  Research Questions ... 12
  Methodology ... 15
  Significance of the Study ... 18
  Summary ... 19
CHAPTER TWO: REVIEW OF THE LITERATURE ... 21
  Customer Satisfaction and Service Quality ... 21
  The Service-Based Economy ... 23
  SERVQUAL ... 26
  Library Quality Assessment ... 29
  LibQUAL+™ ... 32
  Acting on LibQUAL+™ Data ... 36
  Validity and Reliability ... 39
  Conceptual Framework ... 41
  Significance of the Study ... 42
  Summary ... 43
CHAPTER THREE: METHODOLOGY ... 44
  Definitions ... 44
  Variables ... 45
  Sample and Population ... 45
  Limitations and Delimitations ... 49
  Data Acquisition ... 50
  Statistical Analysis ... 51
  Summary ... 52
CHAPTER FOUR: DATA ANALYSIS ... 53
  Problem and Approach ... 53
  Purpose and Design of the Study ... 56
  Methodology ... 57
  Results ... 58
  Summary ... 77
CHAPTER FIVE: CONCLUSIONS AND DISCUSSION OF FINDINGS ... 79
  Statement of the Problem ... 79
  Research Questions ... 82
  Methodology ... 83
  Summary of Findings ... 85
  Conclusions, Implications, and Recommendations ... 89
  Summary ... 96
APPENDIX A: LibQUAL+™ SURVEY INSTRUMENT ... 98
APPENDIX B: SCATTERPLOTS: CORRELATIONS OF LibQUAL+™ SCORES AND LIBRARY EXPENDITURES ... 101
LIST OF REFERENCES ... 106

LIST OF FIGURES

1. Expectancy Disconfirmation Theory ... 23
2. LibQUAL+™ Service Quality Assessment Factors ... 42
3. Information Control Dimension Scores and Carnegie Basic Classification ... 66
4. Library as Place Dimension Scores and Library Expenditures ... 70
B1. Service Affect Dimension and Total Library Expenditures ... 102
B2. Information Control Dimension Scores and Total Library Expenditures ... 103
B3. Library as Place Dimension Scores and Total Library Expenditures ... 104
B4. Overall Scores and Total Library Expenditures ... 105

LIST OF TABLES

1. LibQUAL+™ Dimensions and their Component Items ... 10
2. Data Sources and Analytical Tools that Addressed the Research Questions ... 17
3. SERVQUAL Dimensions and their Components ... 28
4. Refinement of LibQUAL+™ Dimensions ... 35
5. 2006 LibQUAL+™ Participants by Library Type ... 47
6. 2006 LibQUAL+™ Participants by Country ... 48
7. LibQUAL+™ Dimensions and Corresponding Survey Questions ... 55
8. Descriptive Statistics for LibQUAL+™ Scores (n = 159) ... 59
9. Descriptive Statistics for Scale Institutional Characteristics (n = 159) ... 61
10. Definitions of the Carnegie Basic Classifications ... 63
11. Population and Sample Enrollment and Distribution of Carnegie Classifications ... 64
12. ANOVA for Carnegie Basic Classification and Information Control Scores ... 67
13. Coefficients for Carnegie Basic Classification and Information Control Scores ... 68
14. ANOVA for Library as Place Dimension Scores and Library Expenditures ... 71
15. Coefficients for Library as Place Dimension Scores and Library Expenditures ... 72
16. Regression ANOVA for Service Affect Dimension Scores and FTE Enrollment ... 75
17. Regression Coefficients for Service Affect Dimension Scores and FTE Enrollment ... 75
18. Correlations between Library Expenditures and LibQUAL+™ Scores ... 76
19. Summary of Statistically Significant Correlations ... 78

CHAPTER ONE: INTRODUCTION

This dissertation is a report of an exploratory study of service quality scores obtained in 159 college and university libraries, and the relationships of those scores with the following characteristics: institutional type, institutional size, or the level of investment made in libraries. This first chapter will introduce the background of the study, identify the problems that the research questions were intended to address, describe the study's methodology, and outline its professional significance.

Background

Libraries exist to collect the record of human experience and to provide intellectual and physical access to that record. For academic libraries in particular, there is a responsibility to preserve scholarly communications as well as the primary resources upon which scholarship often depends. During the past two decades, myriad challenges and opportunities for libraries have been presented as a result of the rapid development and deployment of information technologies. This environment has spurred librarians to reconsider and redefine collections, services, organizational structure, the skill sets required of library staff, and the attributes of library facilities. A task force of the University of California Libraries recognized this state of change in libraries.

The continuing proliferation of formats, tools, services, and technologies has upended how we arrange, retrieve, and present our holdings. Our users expect

simplicity and immediate reward, and Amazon, Google, and iTunes are the standards against which we are judged (University of California Libraries, 2005, p. 7).

Library decision makers must therefore determine how to meet new and evolving expectations for library services and materials. Clearly, libraries are operating from vastly different assumptions about the ways in which they might best carry out their responsibilities than they did a few short years ago.

While library practice is changing, it remains based in a commitment to service. Collections of books and other information resources without accompanying access tools, instruction, or other library services are mere warehouses, not libraries. Librarians in all types of libraries work to ensure that their organizations provide high quality service in support of the goals of the library's parent institution. It would be rare indeed to discover an academic library, for example, that did not consider service quality an important aspect of carrying out its mission to support teaching, learning, and research in the college or university in which it operates. But how do library administrators know whether their libraries are meeting the new expectations of users or providing high quality service?

Customer Satisfaction and Service Quality

In the for-profit sector, customer satisfaction measurement and management has long been a common practice, and contemporary service quality assessment has its roots in customer satisfaction measurement. During the past 40 years, the concept of customer satisfaction has changed a number of times. From the corporate image studies of the 1960s to the total quality approach in Western economies in the late 1980s (which had been embraced in Japan more than 40 years earlier), several approaches to customer satisfaction led to the contemporary conceptual model of service quality (Crosby, 1993, pp. 389-392).

The first phase of customer satisfaction measurement took the form of corporate image studies in the 1960s. Customer satisfaction and perception of quality were often included indirectly in image surveys as questions about company characteristics such as progressiveness or involvement in the community. The second phase saw the birth of product quality studies beginning in the late 1960s. The primary measurement was the adequacy–importance model that created an index of satisfaction to explain customer attitudes. The index was created by "summing (across attributes) measures of satisfaction with product performance multiplied by measures of feature importance" (Crosby, 1993, p. 390).

Beginning in the 1970s, a new phase was evidenced by some early customer satisfaction studies that were implemented in regulated industries, notably by AT&T.
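The adequacy–importance index that Crosby describes is simply a weighted sum: satisfaction with each attribute, multiplied by that attribute's importance, summed across attributes. A minimal sketch follows; the attribute names, ratings, and weights are hypothetical illustrations, not values from the studies cited above.

```python
# Adequacy-importance index: sum, across product attributes, of
# satisfaction-with-performance multiplied by feature importance.
# All attribute names and numbers here are hypothetical.

def satisfaction_index(ratings):
    """ratings maps attribute -> (satisfaction, importance)."""
    return sum(sat * imp for sat, imp in ratings.values())

customer = {
    "durability": (4, 0.5),  # satisfaction rated 1-5; importance as a weight
    "price":      (3, 0.3),
    "styling":    (5, 0.2),
}

print(round(satisfaction_index(customer), 2))  # 4*0.5 + 3*0.3 + 5*0.2 = 3.9
```

A higher index indicates more favorable attitudes overall; the later gap-based approaches discussed below replaced this single index with explicit comparisons of expectations against perceptions.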

Without market-based performance indicators, monopolies sought to justify rate increases by garnering favorable customer satisfaction measures. The 1980s marked the next major evolution in thinking about customer satisfaction. The increased competition in the American automobile market from foreign companies gave rise to syndicated automotive studies, such as the J. D. Power & Associates studies (Crosby, 1993, p. 391).

The current focus of customer satisfaction measurement can be traced most directly to the 1980s, when the total quality movement captured the attention of businesses in Western economies and businesses recognized the need for a model that addressed the fundamental shift to a service-based, rather than product-based, economy. There was no longer a specific, tangible product to assess, and businesses turned to customer perceptions of whether their expectations were being met or exceeded (Crosby, 1993, p. 392).

The Gaps Model of Service Quality

The marketing research group of Parasuraman, Zeithaml, and Berry (1985) developed an approach to customer satisfaction measurement in the 1980s called the Gaps Model of Service Quality. The Gaps Model assessed customer satisfaction by identifying the differences, or gaps, between customer expectations and customer perceptions of service (Parasuraman et al., 1985; Parasuraman, Berry, & Zeithaml, 1991). In this model, customer expectations are established by the customer, who defines the

minimum acceptable and the desired levels of service. The customer then describes his or her perception of the level of service he or she received, and the gap is thereby defined by the difference between the perceived level of service and the desired level of service.

Hernon and Nitecki (2001) noted that service quality definitions vary across the literature and are based on four underlying perspectives:

1. Excellence, which is often externally defined.
2. Value, which incorporates multiple attributes and is focused on benefit to the recipient.
3. Conformance to specifications, which enables precise measurement, but customers may not know or care about internal specifications.
4. Meeting or exceeding expectations, which is all-encompassing and applies to all service industries (p. 690).

Most marketing and library science researchers, however, have focused on the fourth perspective (Hernon & Nitecki, 2001), and the Gaps Model of Service Quality uses that perspective as a framework to identify the gaps created when performance either exceeds or falls short of meeting customer expectations. In fact, the Gaps Model expands the fourth perspective to five, with the addition of "gaps that may hinder an organization from providing high quality service" (Hernon, 2002, p. 225).

In the Gaps Model, customer expectations are viewed as subjective and based on the extent to which customers believe a particular attribute is essential for an excellent service provider. Customer perceptions are judgments about service performance.
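The gap arithmetic at the heart of this model reduces to two simple differences over three ratings of the same item: perceived minus minimum acceptable (the adequacy gap) and perceived minus desired (the superiority gap). A minimal sketch, using hypothetical ratings on the 1-to-9 scale that gap-based surveys of this kind collect:

```python
# Gap scores for a single survey item rated on a 1-9 scale.
# minimum   = minimum acceptable service level
# desired   = desired service level
# perceived = perceived service performance
# The ratings below are hypothetical illustrations.

def gap_scores(minimum, desired, perceived):
    adequacy = perceived - minimum     # positive: service exceeds the minimum acceptable level
    superiority = perceived - desired  # negative: service falls short of the desired level
    return adequacy, superiority

adequacy, superiority = gap_scores(minimum=6, desired=8, perceived=7)
print(adequacy)     # 1: perceptions exceed minimum expectations
print(superiority)  # -1: perceptions fall short of the desired level
```

A superiority gap of exactly zero would mean perceptions match desires; scores far from zero in either direction signal a mismatch between what users want and what the service delivers.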

Furthermore, expectations are not viewed as static; they are expected to change and evolve over time. Hernon (2002) wrote that

the confirmation/disconfirmation process, which influences the Gaps Model, suggests that expectations provide a frame of reference against which customers' experiences can be measured . . . customers form their expectations prior to purchasing or using a product or service. These expectations become a basis against which to compare actual performance (p. 225).

The measurement of service quality using the Gaps Model, therefore, focuses on the interaction between customers and service providers and the difference, or gap, between expectations about service provision and perceptions about how the service was actually provided (Parasuraman et al., 1985; Parasuraman et al., 1991). The difference between the minimum acceptable and the perceived levels of service is the adequacy gap; larger adequacy gaps indicate better performance. The difference between the desired and perceived levels of service is the superiority gap; ideally, these scores would be identical, so a perfect score is zero. As the superiority gap score gets further from zero, either positive or negative, it indicates poorer performance.

Measuring Library Quality

The recent emphasis on assessment in higher education has affected every facet of post-secondary institutions. Administrators in college and university libraries are no exception; they need assessment tools that provide data for continuous improvement,

documentation of assessment, and evidence of the thoughtful use of assessment data for accreditation organizations.

The traditional measure of academic library quality has been collection size. In fact, many institutions still organize special events to commemorate the acquisition of a library's millionth volume. Rather than requiring a census of its collections, however, the Middle States Commission on Higher Education now requires the institution to demonstrate the "availability and accessibility of adequate learning resources, such as library and information technology support services, staffed by professionals who are qualified by education, training, and experience to support relevant academic activities" ("Characteristics of excellence," 2006, p. 43). Colleges and universities are therefore required to determine adequacy without prescriptive measures such as volume counts or numbers of professional staff. The other regional associations have similarly broad statements, leaving librarians and institutional effectiveness staff to figure out a new approach (Gratch-Lindauer, 2002, p. 15). This shift in the assessment of libraries has been described as a "move beyond the rearview mirror approach" (Crowe, 2003, ¶ 5) of simply reporting what libraries acquired or how many users walked through the front gates in a given year.

This emphasis on assessment for accountability has motivated librarians to seek out more meaningful measures of quality. Rather than focusing solely on inputs such as collection size or staffing level, the first new library measures were output measures that sought to describe what libraries produced with their inputs. That is, in the 1990s

librarians began to report outputs such as the number of items borrowed or the number of reference questions answered (Kyrillidou, 2002, pp. 43-44). Those measures alone, however, still fell short of addressing whether library services were sufficient. As colleges and universities created student learning outcomes beginning in the late 1990s, librarians also created measures that were based on outcomes, or the extent to which student and faculty contact with libraries affected them and contributed to the mission of the university (Hernon, 2002; Kyrillidou, 2002). New instruments and protocols, however, were needed for libraries to meet demands for accountability, measure service quality, and generate data for effective library management.

LibQUAL+™

Service-based industries in the private sector began using an instrument called SERVQUAL for assessing customer perceptions of service quality in the 1980s. SERVQUAL was developed by Parasuraman et al. (1985) and grounded in their Gaps Model of Service Quality. In 1995, 1997, and 1999, the Texas A&M University Libraries, seeking a useful model for assessment, used a modified SERVQUAL instrument. Their experience revealed the need for an adapted tool that would use the Gaps Theory underlying SERVQUAL and better address the particular requirements of libraries (Thompson, 2007). In 1999 the Association of Research Libraries (ARL) partnered with Texas A&M University to develop, test, and refine the adapted

instrument. As a result of their collaboration, LibQUAL+™ was "initiated in 2000 as an experimental project for benchmarking perceptions of library service quality across 13 libraries" (Kyrillidou, 2006, p. 4). During 2006 the LibQUAL+™ survey was administered in 298 institutions.

This study analyzed data collected from the two administrations of LibQUAL+™ during 2006. A description of the instrument will facilitate an understanding of the investigation. With each administration, the LibQUAL+™ instrument was improved, and it is currently composed of 22 questions and a comment box (see the complete instrument in Appendix A). As shown in Table 1, the results for each library include three dimension scores derived from responses to the 22 questions. There is also an overall, weighted score.

Table 1

LibQUAL+™ Dimensions and their Component Items

Dimension: Service Affect
- Employees who instill confidence in users
- Giving users individual attention
- Employees who are consistently courteous
- Readiness to respond to users' questions
- Employees who have the knowledge to answer user questions
- Employees who deal with users in a caring fashion
- Employees who understand the needs of their users
- Willingness to help users
- Dependability in handling users' service problems

Dimension: Information Control
- Making electronic resources accessible from my home or office
- A library Web site enabling me to locate information on my own
- The printed library materials I need for my work
- The electronic information resources I need
- Modern equipment that lets me easily access needed information
- Easy-to-use access tools that allow me to find things on my own
- Making information easily accessible for independent use
- Print and/or electronic journal collections I require for my work

Dimension: Library as Place
- Library space that inspires study and learning
- Quiet space for individual activities
- A comfortable and inviting location
- A getaway for study, learning or research
- Community space for group learning and group study

The three dimensions measured by LibQUAL+™ are service affect, information control, and library as place. The perceptions of customers about library staff competency and helpfulness are derived from nine questions that compose the service affect dimension score. The information control dimension is derived from eight questions and focuses on whether the library's collections are adequate to meet customer needs and

whether the collections are organized in a manner that enables self-reliance for library users. Finally, the library as place dimension is derived from five questions that address user perceptions regarding the facility's functionality and adequacy for academic activities. All of the scores are scaled from 1 to 9, with 9 being the highest rating, so that scores can be compared (Thompson, Cook, & Kyrillidou, 2006b).

Reliability and Validity

A number of studies have examined the LibQUAL+™ instrument for score reliability (Cook, Heath, Thompson, & Thompson, 2001a; Cook, Heath, Thompson, & Thompson, 2001b; Thompson, Cook, & Thompson, 2002) and validity (Thompson, Cook, & Kyrillidou, 2006a). In a key study by Heath, Cook, Kyrillidou, and Thompson (2002), validity coefficients replicated closely across different types of post-secondary libraries, leading them to conclude that "LibQUAL+™ scores may be valid in reasonably diverse library settings" [italics original] (p. 38). This study explored that conclusion as it relates to institutional size, institutional type, and level of investment by the institution in its library.

Since 2000 LibQUAL+™ has been administered in every state except Alaska and South Dakota (M. Davis, personal communication, May 16, 2007), and

. . . in various language variations in Canada, Australia, Egypt, England, France, Ireland, Scotland, Sweden, the Netherlands, and the United Arab Emirates. The 2005 cycle saw administration in several South African universities. And the summer of 2005 brought training in Greece (Thompson, Cook, & Kyrillidou, 2005, p. 517).

The instrument has consistently tested as psychometrically valid, and the protocol has "a universality that crosses language and cultural boundaries at the settings where LibQUAL+™ has been implemented to date" (Thompson et al., 2005, p. 517).

Research Questions

In this section, the research questions that framed the investigation are enumerated and the underlying assumptions are explained. For this exploratory study of 2006 LibQUAL+™ scores, the overarching research question was whether, and to what extent, LibQUAL+™ scores were related to the following college or university characteristics: institutional type, institutional size, or the level of investment made in libraries. Institutional type was represented by Carnegie basic classification, institutional size was represented by 12-month FTE enrollment, and investment in libraries was represented by annual library expenditures. An analysis of LibQUAL+™ scores and these institutional characteristics was performed with data from 159 American colleges or universities that participated in the 2006 administration of LibQUAL+™.

LibQUAL+™ results include scores for minimum, perceived, and desired levels of service for each of the 22 items included in the survey. The scores are combined to

produce an adequacy gap and a superiority gap for each question and for each of the three dimensions. The adequacy gap is the difference between the minimum and perceived scores, and the superiority gap is the difference between the desired and perceived scores. Large adequacy gap scores indicate that respondents perceive services to exceed their minimum expectations. A large superiority gap score, however, may indicate the library is expending resources to provide a level of service beyond the level that its users desire. In addition, superiority gap scores below zero indicate the library is not meeting its customers' desired service level.

The following questions were designed to result in data that addressed the research question.

1. What were the 2006 LibQUAL+™ scores for American college and university libraries?

The central tendency of the LibQUAL+™ data, in terms of means and confidence intervals, and the shape of the distribution, in terms of the normality of kurtosis and skewness, were anticipated to indicate that the sample was representative of the population.

2. What were the characteristics of the American college and university libraries that administered LibQUAL+™ in 2006?

A description of the independent variables at the sample institutions was anticipated to indicate a normal distribution and central tendency for data regarding Carnegie classifications, enrollment, and library expenditures.

3. To what extent, if any, were scores for the information control dimension related to institutional type as expressed by the Carnegie basic classification?

Libraries in research universities, unlike their counterparts in primarily undergraduate institutions, are intended to support significant graduate programs and research activity. In such libraries students and faculty will find rich, well-organized collections. In contrast, libraries that support solely undergraduate work have collections that support the curriculum but are not likely to have the resources required to support faculty research. Information control dime

