
Many Are Called But Few Are Chosen: Modeling the Selection Process for the Innovations in American Government Awards

Sandford Borins
Professor of Strategic Management
University of Toronto, Canada
[email protected]

Richard M. Walker
Professor of Public Management and Policy
City University of Hong Kong, Hong Kong
[email protected]

2012
Ash Center for Democratic Governance and Innovation
Harvard Kennedy School


Contents

Acknowledgements
Abstract
Introduction
Innovation Awards
Hypotheses
Methodology
Results and Discussion
Conclusions
Appendix: Innovations in American Government Awards Program
Endnotes
References

List of Tables

Table 1: Innovation Types, 2010 and 1990–1993
Table 2: Logit Regression Results Predicting IAGA Semifinalists

Acknowledgements

Research assistants Kaylee Chretien and Elizabeth Lyons did the coding and the data analysis, respectively. Beth Herst copyedited the manuscript. Research funding was provided by the Ash Center for Democratic Governance and Innovation at Harvard Kennedy School.

Abstract

The adoption of new services and practices is widespread in public organizations as they respond to demands in the external environment and to internal aspirations. In order to recognize these activities and disseminate good practices, awards programs have proliferated around the globe. Given the limited empirical analysis of the characteristics of innovation award winners, this article examines the 2010 Innovations in American Government Awards (IAGA) program. Using a quasi-experimental methodology, a sample of 234 applications, of which approximately half were selected as semifinalists and half were not, was subjected to multivariate logit analysis. The analysis reveals that the selection criteria of the IAGA played varying roles in explaining progress to the semifinalist round, and it identifies some confounding effects. The implications of these findings for the future conduct of awards and for ongoing research in this area are discussed.

Introduction

Twenty-seven thousand applications over 25 years, 2,300 semifinalists, 500 finalists, and 200 publicly celebrated winners. With its distinguished panels of judges, wide range of applicants, generous resources devoted to publicizing winning innovations, and of course the renown of its supporting institution, the Harvard University John F. Kennedy School of Government’s Innovations in American Government Awards (IAGA) program is undoubtedly the highest-profile such program in the United States, and probably globally.1 Its importance is not limited to the prestige that accompanies a win. Practitioners both nationally and internationally look to the competition’s highest-ranked innovations for models to emulate, while academics, attracted by the extensive database that 25 years of annual competitions has generated, take them as representative subjects of study.
Given this two-pronged influence, it is fair to say that these awards matter, shaping both the practice and the study of public-sector innovation.

Awards competitions seek to disseminate and promote good practice. There has been extensive growth in the number of such competitions in fields of management ranging from quality to e-government, in addition to innovation (Hartley and Downe 2007; Rashman and Radnor 2005; Wu, Ma, and Yang 2012). Studies in the public and private management literature have examined

the consequences of awards (Kapucu, Volkov, and Wang 2011; Przasnyski and Tai 2002; Radnor 2009) and provided evidence on their internal logic and their design (Löffler 2001; Wilson and Collier 2000). Thus, while awards and their study are extensive, a number of questions remain unanswered about the selection methodology for competitive awards programs such as IAGA. How representative are the highest-ranking applications? Does the selection process consistently favor certain types of applicants? If it does, are practitioners seeking to replicate not the most significant and effective new approaches, but simply (in the case of this example) what the Harvard Kennedy School judges deemed important? Similarly, researchers would then be studying not a representative sample of the best new practices, but a cherry-picked selection reflecting the Harvard Kennedy School’s priorities.

This paper analyzes a natural experiment to explore the factors that determine the selection of semifinalists. A random sample of 234 initial applications to the IAGA program, of which approximately half were selected as semifinalists and half were not, is analyzed statistically to determine the factors that explain selection. Potential selection factors include the four stated criteria for the award, the extent to which applicants created a narrative about their innovation in addition to describing its operations and impact, and other characteristics of the application.
The narrative component became salient because the initial application form was changed in 2010 explicitly to encourage applicants to “tell their story.” The other characteristics of the applications include such factors as the size of the jurisdiction; how long after the inception of the innovation the application was made; whether this was a repeat application and, if so, the results of the previous application; and features of the application itself, such as its policy area and the management techniques it incorporates. These characteristics both incorporate hypotheses from the innovation literature and probe for biases, in the sense that they would indicate whether the selection of semifinalists was influenced by factors other than the stated criteria for the award. Ultimately, the analysis can be seen as creating a statistical model of the selection process.

This exploration grows out of Sandford Borins’ (1998) book Innovating with Integrity, which studied a sample of 217 of the best (that is, semifinalist, finalist, or winning) applications to the Innovations in State and Local Government Awards2 (IAGA’s precursor) between 1990 and 1993 to explore characteristics of the structure and process of innovation in government. Borins’ set of coding criteria has been adapted for use here. This study can

thus be viewed as an instance of the much valued but rarely undertaken process of replication of previous research (King, Keohane, and Verba 1994, 26–7). This study is not exact replication, but rather a lagged (almost twenty years after the initial data was gathered) and modified replication, in the sense that it expands the database to include original applications that were not selected as semifinalists and expands the focus to include narrative. Nonetheless, it is informed by the previous work and seeks to carry it forward in new directions.

In the following section, the literature on innovation awards is reviewed (a description of the IAGA program and its application form is provided in the appendix). The subsequent section presents eight hypotheses that highlight factors associated with awards programs. Methods, data, and measures are then outlined. Statistical results explore the role that the four IAGA criteria, storytelling, and other factors all play in explaining the selection of semifinalists. The implications of these findings for the future conduct of award programs and ongoing research in this area are discussed in the conclusion.

Innovation Awards

A stream of the literature on public-sector innovation studies programs applying to innovation awards. This focus differentiates it from research on the adoption and implementation of innovation and the characteristics of innovative public organizations (Berry 1994; Damanpour and Schneider 2006; Moon and Bretschneider 2002). Some of the research in this stream has involved case studies of award-winning programs, such as Barzelay’s (1992) on Minnesota’s STEP program, Donahue’s (1999) on several award-winning innovations in the U.S. government, and Bardach’s (1998) on innovative programs involving inter-organizational cooperation.
Such studies have described the history and mechanics of these programs to show what makes them effective and to draw out their implications for other managers. More recent studies have taken this line of research in new directions. Donahue (2008) used award-winning innovations undertaken in the U.S. Department of Labor in the first term of the Clinton Administration to show how the senior leadership of that department created a culture supportive of innovation. Bardach (2008) revisited programs exemplifying effective intergovernmental cooperation that he had written about a decade earlier to see how they had fared and to develop a process model of the trajectory of such programs.

There have also been attempts to use the most successful applications to innovation awards to create and analyze databases. Behn’s (1988) hypothesis of “groping along” as a methodology for launching innovations was tested by Golden (1990) using a sample of 17 successful applications to the Innovations in State and Local Government Awards and by Levin and Sanger (1992, 1994) using a larger sample of 29. Borins (1998) used a much larger sample of 217 semifinalist, finalist, or winning applications to the State and Local Government Innovations Awards between 1990 and 1993, another sample of 104 finalist or winning applications to the Innovations in American Government Awards between 1994 and 1998 (Borins 2000), and samples of applications to Canadian and Commonwealth innovation awards in 1998 and 2000 (Borins 2001). Because the IAGA semifinalist questionnaires and site visit reports for finalists are so comprehensive, Borins was able to analyze a wide range of issues, including the characteristics of the innovations themselves; process characteristics such as where in the organization innovations are initiated, sources of opposition, and how opposition was overcome; financial and organizational structure; and results and replication. The questionnaires used by the Canadian and Commonwealth innovation awards were not as comprehensive as that of the IAGA, so Borins sent out a comparable ex post questionnaire to applicants.
More recently, quantitative analyses of applications to innovation awards in Brazil (Farah and Spink 2008), China (Wu, Ma, and Yang 2012), and Canada (Bernier, Hafsi, and Deschamps 2011) have also been undertaken.

Innovation research based on the best applications to public-sector innovation awards has been criticized for the methodological problem of selection on the dependent variable (Kelman 2008), just as has research undertaken in business schools about successful firms or successful national industrial strategies (King, Keohane, and Verba 1994, 133–5). If a researcher is attempting to determine what distinguishes government agencies that innovate from those that do not, that criticism is relevant. On the other hand, if a researcher is attempting to characterize the initiatives that innovators have undertaken, it is not. Still, there are several directions in which research using data from innovation awards can and should go, beyond a narrow focus on the most highly rated applications. Thus, applications that are selected by award programs can be compared to those that are not, which is the focus of this paper.
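The comparison just described, modeling each application's selection as a semifinalist as a function of its characteristics, amounts to a binary logit regression. The following is a minimal, hypothetical sketch: the data are simulated rather than the paper's actual coded applications, the four criteria scores and their "true" weights are invented for illustration, and the estimator is a plain gradient-ascent maximizer of the logit log-likelihood.

```python
import math
import random

# Hypothetical sketch, not the paper's actual data or estimation code.
# We simulate 234 applications (the paper's sample size) scored on the four
# IAGA criteria and fit P(semifinalist) = 1 / (1 + exp(-x.b)) by gradient
# ascent on the logit log-likelihood.

def fit_logit(X, y, lr=1.0, iters=1000):
    """Maximum-likelihood logistic regression via batch gradient ascent."""
    n, k = len(X), len(X[0])
    beta = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            z = sum(b * x for b, x in zip(beta, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(k):
                grad[j] += (yi - p) * xi[j]  # score of the log-likelihood
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

random.seed(0)
X, y = [], []
for _ in range(234):
    # Columns: intercept, novelty, effectiveness, significance, transferability
    # (standardized scores in [-1, 1]; the "true" weights below are invented).
    nov, eff, sig, tra = (random.uniform(-1, 1) for _ in range(4))
    z = 1.5 * nov + 1.0 * eff + 0.5 * sig + 0.0 * tra
    y.append(1 if random.random() < 1.0 / (1.0 + math.exp(-z)) else 0)
    X.append([1.0, nov, eff, sig, tra])

beta = fit_logit(X, y)
print("estimated coefficients:", [round(b, 2) for b in beta])
```

With real data one would instead use a standard implementation (for example, `Logit` in statsmodels) to obtain standard errors and significance tests; the point here is only the shape of the model behind Table 2.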

Hypotheses

Well-functioning awards programs provide judges with clear criteria that are complete and that comprise mutually exclusive, internally homogeneous categories. When judges make decisions, it is expected that they abide by the award selection criteria and do not bring in spurious variables that may reflect personal agendas. The IAGA’s four criteria are each defined for both applicants and judges: novelty, the degree to which an innovation demonstrates a leap in creativity; effectiveness, the degree to which it achieves tangible results; significance, the degree to which it addresses a problem of widespread public concern; and transfer/transferability, the degree to which the program or aspects of it have been successfully transferred to other government entities or show promise of being successfully transferred. Judges are expected to attempt to determine the extent to which any program meets these criteria. If researchers can operationalize the criteria in a way that reflects the judges’ thinking, the criteria should be significant: programs that go further in meeting any one criterion should be more likely to be selected as IAGA semifinalists.

H1: Innovations that demonstrate better performance in terms of the criteria for the program are more likely to be selected as semifinalists.

Detailed analytical work by Borins (2011) has examined the comprehensiveness of the narratives contained in a small subsample of IAGA finalists in 2008 and 2009. This research concluded that judges are more likely to be persuaded by applications that provide more comprehensive narratives than by those that do not.
IAGA’s initial application form was changed in 2010 to include question 2, which explicitly invites applicants to “tell their story.” In addition, there is now a considerable literature arguing that public managers and politicians are more persuasive if they incorporate stories, whether personal or organizational, into their presentations (Denning 2005; Lakoff 2008; Westen 2007).

H2: Innovations that provide more comprehensive narratives are more likely to be selected as semifinalists.

Awards are expected to be given on the basis of the selection criteria, and consequently the policy area in which an innovation is located should not influence selection as an IAGA semifinalist. If there were no bias among policy areas, then we would expect the coefficients of all policy areas to be zero and

insignificant (all but one, strictly speaking, because the policy areas are mutually exclusive and collectively exhaustive, so one area must be excluded from the regression equation to prevent singularity). Indeed, IAGA staff’s guidance to judges that roughly the same proportion of the applications in each area should advance to the semifinalist stage would reinforce the expectation that the process is not biased with respect to any policy area.

H3: Policy areas will not influence the likelihood of being selected as an innovation semifinalist.

The research evidence on the impact of organizational size on innovation is becoming more robust, particularly in relation to process innovation (Walker 2011). Larger organizations are associated with access to more complex and diverse facilities, more professional and skilled workers, more slack resources, and higher technical potential and knowledge (Hage and Aiken 1970; Damanpour, Walker, and Avellaneda 2009; Rogers 1995). Berry (1994) argues that even holding these variables constant, it is likely that larger organizations will innovate more than smaller ones. These arguments have been shown to have relevance beyond the public sector (Camison-Zornoza et al. 2004; Damanpour 1991, 2010). Given the growing body of evidence on size, it can be hypothesized that IAGA judges would view larger governments, whether the U.S. government, the larger states, or the larger cities, as more innovative than smaller governments. This fourth hypothesis confounds our presumption of a well-functioning awards program because it suggests that a spurious variable will cloud the judges’ choices.

H4: Larger governments are more likely to be selected as innovation semifinalists.

Hypothesis five looks at diversity among governments not by size but by level of government. Level of government is not the same as size, because the state and local government categories themselves incorporate great diversity.
Our hypothesis is that there is nothing intrinsic about the level of government that would lead any of the three levels to be more likely to be selected than the others (when controlling for size).

H5: Innovations from any level of government are equally likely to be selected as semifinalists.
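Hypotheses H3 and H5 enter the regression as sets of mutually exclusive indicator variables. A minimal sketch of that coding follows; the category names are placeholders, not the IAGA's actual policy areas. Because the categories are exhaustive, the full set of indicators would be perfectly collinear with the intercept, which is why one baseline category is dropped.

```python
# Hypothetical sketch of the dummy coding behind H3/H5. The category names
# are placeholders, not the IAGA's actual policy areas. Since every row's
# indicators sum to 1 (mutually exclusive, collectively exhaustive), keeping
# all of them alongside an intercept makes the design matrix singular, so one
# baseline category is excluded and the remaining coefficients are read as
# contrasts against it.

POLICY_AREAS = ["health", "education", "environment", "criminal_justice"]

def dummy_code(area, baseline="health"):
    """Return 0/1 indicators for every area except the dropped baseline."""
    if area not in POLICY_AREAS:
        raise ValueError(f"unknown policy area: {area}")
    return [1 if area == a else 0 for a in POLICY_AREAS if a != baseline]

print(dummy_code("education"))  # -> [1, 0, 0]
print(dummy_code("health"))     # baseline category -> [0, 0, 0]
```

Under H3, all of these coefficients should be statistically indistinguishable from zero; the same construction applies to the three levels of government under H5.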

Prior experience has been shown to be an important variable influencing the adoption of an innovation (Boyne et al. 2005; Rogers 1995). Prior experience is also likely to affect the probability of being selected as a semifinalist, and this argument points toward some bias inherent in awards programs. The IAGA selection process tells judges only whether a program applied previously, not the result of the previous application (Marchand 2011). Nevertheless, because there is some carryover in the composition of judging panels from one year to the next, at least some judges will be aware of which applications were selected previously. Second, it would be expected that if a program was selected previously, the people who prepare its new application have some experience with, and possibly feedback from, the process that will enable them to prepare a stronger application. Thus we expect that applications that were previously selected as semifinalists or finalists are more likely to be selected as semifinalists this time.

H6: Prior experience as a semifinalist or finalist will be positively associated with selection as a semifinalist.

The age of a program is likely to be a further factor influencing its chances of being selected as an IAGA semifinalist. This relationship is anticipated to be nonlinear: programs that apply when they have just begun operations are at a disadvantage because they will not have demonstrated results, and thus show weakness on the criterion of effectiveness. Conversely, programs that were initiated a considerable time before they apply will be considered by judges to be “yesterday’s news,” and thus show weakness on the criterion of novelty.
We operationalize this by hypothesizing that the best time to apply is between two and four years after a program has been initiated.

H7: Both recently initiated and longstanding innovations are less likely to be selected as semifinalists.

Innovations encompass services and organizational and technological processes. While knowledge on innovation has been driven by a technological imperative, public service innovation research points toward the importance of service and process innovations as well as the mutual reinforcement derived from the adoptio
