Differential Item Functioning Of The UWES-17 In South Africa


Original Research

Authors: Leanne Goliath-Yarde¹, Gert Roodt¹

Affiliation: ¹Department of Industrial Psychology and People Management, University of Johannesburg, South Africa

Correspondence to: Gert Roodt, groodt@uj.ac.za, PO Box 524, Auckland Park 2006, South Africa

Dates: Received: 26 Feb. 2010; Accepted: 05 Sept. 2011; Published: 15 Nov. 2011

How to cite this article: Goliath-Yarde, L., & Roodt, G. (2011). Differential item functioning of the UWES-17 in South Africa. SA Journal of Industrial Psychology/SA Tydskrif vir Bedryfsielkunde, 37(1), Art. #897, 11 pages.

Note: The research this article reports is the product of a collaborative research project between the University of Johannesburg and the Vrije University, Amsterdam. The authors acknowledge the financial support from the South African Netherlands research Program on Alternatives in Development (SANPAD) for conducting this research. The conclusions the authors reach and the opinions they express do not necessarily reflect the views of SANPAD.

Orientation: South Africa's unique cultural diversity provides a constant challenge about the fair and unbiased use of psychological measures in respect of their cross-cultural application.

Research purpose: This study assesses the Differential Item Functioning (DIF) of the Utrecht Work Engagement Scale (UWES-17) for different South African cultural groups in a South African company.

Motivation for the study: Organisations are using the UWES-17 more and more in South Africa to assess work engagement. Therefore, research evidence from psychologists or assessment practitioners on its DIF across different cultural groups is necessary.

Research design, approach and method: The researchers conducted a Secondary Data Analysis (SDA) on the UWES-17 sample (n = 2429) that they obtained from a cross-sectional survey undertaken in a South African Information and Communication Technology (ICT) sector company (n = 24 134).
Quantitative item data on the UWES-17 scale enabled the authors to address the research question.

Main findings: The researchers found uniform and/or non-uniform DIF on five of the vigour items, four of the dedication items and two of the absorption items. This also showed possible Differential Test Functioning (DTF) on the vigour and dedication dimensions.

Practical/managerial implications: Based on the DIF, the researchers suggested that organisations should not use the UWES-17 comparatively for different cultural groups or for employment decisions in South Africa.

Contribution/value add: The study provides evidence on DIF and possible DTF for the UWES-17. However, it also raises questions about possible interaction effects that need further investigation.

Introduction

People often call South Africa the 'rainbow nation'. This illustrates the country's unique cultural diversity. This very uniqueness provides a platform for the constant challenge of testing whether psychological measures (like the UWES-17) are unbiased for different cultural or race groups. We call psychological measures that are not unbiased 'culturally biased'.

One of the criteria for deciding whether a psychological instrument is valid is whether it is appropriate as a measure for a person, group or organisation from another context, culture or society. We cannot assume that it is appropriate without investigating possible test bias and without considering whether it needs adapting and re-norming (Foxcroft & Roodt, 2009). The Employment Equity Act (No. 55 of 1998) and the Amended Employment Equity Act of South Africa (Republic of South Africa, 1998) oblige all test developers and users to consider the effect of psychometric assessments on different groups. The Employment Equity Act No. 55 of 1998, Chapter II par.
8, states that:

Psychological testing and similar assessments are prohibited, unless the test is scientifically valid and reliable, can be applied fairly to all employees, and is not biased against any employee or group. (Employment Equity Act No. 55 of 1998, Chapter II par. 8)

© 2011. The Authors. Licensee: AOSIS OpenJournals. This work is licensed under the Creative Commons Attribution License.

Therefore, measuring instruments have to meet these specific requirements so that one can use them for different cultural and race groups in South Africa. Schaufeli and Bakker (2003) developed the UWES-17 in the Netherlands to measure work engagement. Work engagement emerged as the opposite or positive antipode of 'burnout', a construct that Maslach and Leiter (1997) described.

One cannot assume that we can generalise the results one obtains on the UWES in one culture to other cultural groups. Van de Vijver and Leung (1997) stated that, before one compares scores for cultural groups, one should test items for possible bias. Therefore, this study will investigate DIF more closely. DIF is a difference in item scores between two groups that match on the concept of interest (Zumbo, 1999).

Schaufeli and Bakker (2010) stated that different (mostly European) studies reported that the factor structure of the UWES-17 remained invariant across different national samples. Although organisations use the UWES-17 widely in South Africa, only two studies reported validation results: the Storm and Rothmann (2003) and the Barkhuizen and Rothmann (2006) studies. Both studies referred to problematic items in the instrument that one needs to examine carefully for South African samples. Storm and Rothmann (2003) found evidence that suggests that item content may need improving. This implies that the wording of certain items needs modifying to make them more appropriate for a specific context. These findings show potential item bias or differential item functioning in respect of the UWES-17. No specific studies reported on the DIF of the UWES-17 in the South African context.

Given the diverse cultural landscape of South Africa, it is necessary for organisations to measure work engagement levels scientifically. Furthermore, because South African research frequently uses or cites the scale, it is necessary for the instrument to measure work engagement in different South African cultural groups validly and reliably. The main research question is whether the UWES-17 shows DIF between cultural groups in South Africa. Therefore, the main objective of this study is to assess the DIF on all three of the UWES-17 sub-scales for different cultural groups in a large ICT sector company.
More specifically, the sub-objectives of the study are to:

- test the vigour sub-scale for DIF in different cultural groups
- test the dedication sub-scale for DIF in different cultural groups
- test the absorption sub-scale for DIF in different cultural groups.

The structure of the rest of the article follows. Firstly, it presents a review of the literature on current research undertaken on the UWES-17. A description of the research method and procedures the researchers used follows, as well as a description of the results. Finally, it discusses the findings, their implications for managers, the study's limitations and recommendations for future research.

Literature review

Measuring work engagement

In one of the first discussions of work engagement, Kahn (1990) defined personal engagement as the 'harnessing of organisation members' selves to their work roles; in engagement, people employ and express themselves physically, cognitively and emotionally during role performances' (p. 694).

Maslach and Leiter (1997) assumed that work engagement is the opposite or positive antipode of burnout. One assesses it using the opposite pattern of scores on the three dimensions of burnout (energy, involvement and efficacy) as the Maslach Burnout Inventory measures them. However, Schaufeli and Bakker (2004) argued that burnout and engagement are independent states and that one should assess them independently. Schaufeli, Salanova, González-Romá and Bakker (2002, p. 74) defined work engagement as a 'positive, fulfilling, work-related state of mind that is characterised by vigour, dedication and absorption'. It refers to a persistent and pervasive affective-cognitive state that does not focus on any particular object, event, person or behaviour.

High levels of energy and mental resilience whilst working, the willingness to invest effort in one's work and persistence in the face of difficulties characterise vigour.
A sense of significance, enthusiasm, inspiration, pride and challenge from one's work characterises dedication. Being fully concentrated and deeply engrossed in one's work, whereby time passes quickly and one has difficulty in detaching from work, characterises absorption (Schaufeli et al., 2002).

In their research, Maslach and Leiter (1997) felt that the opposite scores of the Maslach Burnout Inventory measure work engagement adequately. Schaufeli, Martínez, Marques Pinto, Salanova and Bakker (2002) argued that burnout and engagement were opposite concepts and that one should measure them independently using different instruments. Therefore, they operationalised work engagement with the UWES-17, a self-reporting instrument that includes the three dimensions of vigour, dedication and absorption. Vigour (VI) consists of six items like 'At my work, I feel bursting with energy'. Dedication (DE) consists of five items like 'My job inspires me'. Absorption (AB) consists of six items like 'I feel happy when I am working intensely'. Using specific words or idiomatic expressions may result in different interpretations or meanings attaching to these words or phrases when one applies them cross-culturally.

Studies in the Netherlands and Portugal (Schaufeli, Martínez et al., 2002) further demonstrated the psychometric soundness of the UWES-17 in a cross-national study on the burnout and engagement of university students, where the instrument was translated from Dutch to Portuguese. Schaufeli and Bakker (2010) reported that the factor structure of the UWES-17 remained invariant across different national samples, as studies by Llorens, Bakker, Schaufeli and Salanova (2006); Schaufeli, Bakker and Salanova (2006); Schaufeli, Martínez, Marques-Pinto, Salanova and Bakker (2002); and Xanthopoulou, Bakker, Demerouti and Kantas (in press) reported. However, the question arises of whether this conclusion is still valid when one includes other traditionally non-European cultures in the comparisons.
Finding answers to this question requires further investigation.
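For concreteness, UWES-17 sub-scale scores are typically computed as means of the item responses. The sketch below uses the commonly published item-to-sub-scale grouping (six vigour, five dedication and six absorption items); treat the mapping and the example data as illustrative assumptions rather than as content reported in this article.

```python
# Commonly published UWES-17 item groupings (1-based item numbers).
# This mapping is an assumption for illustration, not taken from this article.
SUBSCALES = {
    "vigour": [1, 4, 8, 12, 15, 17],
    "dedication": [2, 5, 7, 10, 13],
    "absorption": [3, 6, 9, 11, 14, 16],
}

def subscale_means(responses):
    """responses: list of 17 Likert answers, item 1 first.
    Returns the mean response per sub-scale."""
    if len(responses) != 17:
        raise ValueError("expected 17 item responses")
    return {
        name: sum(responses[i - 1] for i in items) / len(items)
        for name, items in SUBSCALES.items()
    }

# A respondent who answers every item with 3 scores 3.0 on each sub-scale.
means = subscale_means([3] * 17)
print(means["vigour"])  # → 3.0
```

Computing sub-scale means rather than totals keeps the three dimensions comparable even though they contain different numbers of items.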

The key concepts of item bias and differential item functioning

Metric or measurement equivalence is a key concern in cross-cultural comparative research. One cannot determine equivalence using unequal comparisons (Foxcroft & Roodt, 2009), because the essence of what one is measuring will be questioned. Constructs are equivalent for different cultural groups when one obtains the same or similar scores when using the same or different language versions of the items or measures. If not, the items are culturally biased or the measures are not equivalent. Therefore, cultural bias indicates the presence of factors that challenge the validity of cross-cultural comparisons. One can ascribe the presence of these factors to a host of reasons, all grouped under the generic term of cultural bias. Van de Vijver and Leung (1997) presented three types of bias: construct, method and item bias. They describe sources for each of these.

'Construct bias' will occur when the construct one measures is not identical in different cultural groups. This can happen when:

- there is an incomplete overlap of definitions of the construct in different cultures
- there is poor sampling of a domain in the instrument (where a small number of items in a questionnaire represents broad constructs)
- many studies have been exported from Western to non-Western countries and some of the issues have little or no relevance to non-Western cultures.

'Method bias' will occur because of particular characteristics of the instrument, its administration and aspects that the methods sections of research reports describe. It usually affects scores at the level of the whole instrument.
Typical sources are:

- differential response styles
- differential familiarity with the stimulus measures, often because of the different backgrounds of the respondents
- differences in environmental conditions during administration
- communication problems between the examiner and examinee or interviewer and interviewee.

'Item bias' refers to differential item functioning. Incidental differences in the appropriateness of the item content, poor item translation and inadequate item formulation usually produce it. The consequences for equivalence are least clear in the case of item bias (Berry et al., 2002). If a single item or a few items show evidence of bias, one can eliminate them to improve the equivalence of scores. However, evidence of item bias might also indicate that an instrument does not measure identical traits. Therefore, it is clear that bias (construct, method and item) will lower the level of equivalence.

Kanjee and Foxcroft (in Foxcroft & Roodt, 2009) stated that item bias implies an unfair advantage or disadvantage to one or more groups. By way of comparison, Wa Kivilu (2010, p. 309) states that test (or item) bias represents score differences that nuisance abilities cause. These abilities are not the focus of measurement. Item Response Theory (IRT), according to Kanjee and Foxcroft (2009), is a test theory used to develop and assemble test items, detect bias (own emphasis) in measuring instruments and analyse test data. They state that, by applying IRT, it is possible to analyse the relationship between the characteristics of the individual and responses to the individual items. An Item Characteristic Curve (ICC) can represent this relationship graphically.

DIF is another term related to item bias. According to Kristjansson, Aylesworth, McDowell and Zumbo (2005, p. 936), DIF occurs if there is a difference in an item score between two groups that are matched on the concept of interest.
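The ICC idea can be made concrete with the two-parameter logistic model that IRT commonly uses. The sketch below is purely illustrative (the function and the parameter values are hypothetical, not taken from this study): curves with the same slope but shifted difficulty stay parallel, so one group finds the item harder at every ability level, whereas curves with different slopes cross, so the group ordering reverses across the ability range.

```python
import math

def icc(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic ICC: probability of endorsing an item,
    given ability theta, discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Parallel ICCs (same discrimination, shifted difficulty): the focal
# group has a lower endorsement probability at every ability level.
for theta in (-2.0, 0.0, 2.0):
    p_ref = icc(theta, a=1.0, b=0.0)    # reference group
    p_foc = icc(theta, a=1.0, b=0.5)    # focal group, higher difficulty
    assert p_ref > p_foc                 # same ordering at all levels

# Crossing ICCs (different discrimination): the ordering of the two
# groups reverses between low and high ability, i.e. the curves cross.
ref_easier_low = icc(-1.0, a=0.8, b=0.0) > icc(-1.0, a=1.6, b=0.0)
ref_easier_high = icc(1.0, a=0.8, b=0.0) > icc(1.0, a=1.6, b=0.0)
assert ref_easier_low != ref_easier_high
```

Plotting pairs of such curves for two matched groups is exactly the graphical comparison the ICC-based literature cited here relies on.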
Therefore, an item is unbiased if all people with the same underlying ability one is measuring have the same probability of answering the item correctly, irrespective of their group membership.

Kristjansson et al. (2005) also distinguish between uniform DIF and non-uniform DIF. For dichotomous (binary) items, uniform DIF occurs when the item is more difficult at all ability levels for one group than for the other. Wa Kivilu (2010), by way of contrast, states that there is uniform DIF when there is dependence on group membership but no interaction between score category and group membership (indicated by two parallel ICCs). According to Kristjansson et al. (2005), non-uniform DIF occurs when there is an interaction between ability level and group, so that the item is more difficult, for example, for one group at lower levels of ability but more difficult for the other group at higher levels of ability. This, according to Wa Kivilu (2010), indicates an interaction term that two non-parallel ICCs characterise. Differential Test Functioning (DTF) will occur if several items in the test, or a dimension of the test, show DIF.

One can trace possible reasons for DIF according to Kanjee and Foxcroft (2009). They include using language, concepts or examples that are unfamiliar, inappropriate or ambiguous, or test speediness and unfamiliar item formats. One can use different procedures for detecting DIF in ordinal items (using Likert-type scales). Some of these are the Mantel, the Generalized Mantel-Haenszel, the logistic discriminant function analysis or an unconstrained cumulative logits approach to ordinal logistic regression (Kristjansson et al., 2005). The researchers used a variation of the ordinal logistic regression approach in this study (see Kristjansson et al., 2005).

South African validation studies on the UWES-17

Storm and Rothmann (2003) validated the UWES-17 in the South African Police Service.
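The logistic-regression family of DIF procedures mentioned above reduces to comparing nested models with likelihood-ratio tests: a base model of the item on the matching total score, a second model adding a group term (uniform DIF), and a third adding a score-by-group interaction (non-uniform DIF). The sketch below shows only that decision logic, with hypothetical log-likelihood values and a two-group comparison (so each added term costs one degree of freedom); it is not the authors' actual procedure or code.

```python
import math

def chi2_sf(x: float, df: int) -> float:
    """Chi-square survival function; closed forms for df = 1 or 2."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if df == 2:
        return math.exp(-x / 2.0)
    raise ValueError("only df = 1 or 2 supported in this sketch")

def classify_dif(ll_base, ll_group, ll_interaction, alpha=0.05):
    """Likelihood-ratio tests over three nested (ordinal) regressions:
      model 1: item ~ total score
      model 2: item ~ total score + group
      model 3: item ~ total score + group + score x group
    A significant group term suggests uniform DIF; a significant
    interaction term suggests non-uniform DIF (two groups, df = 1)."""
    uniform = chi2_sf(2.0 * (ll_group - ll_base), df=1) < alpha
    non_uniform = chi2_sf(2.0 * (ll_interaction - ll_group), df=1) < alpha
    if non_uniform:
        return "non-uniform DIF"
    if uniform:
        return "uniform DIF"
    return "no DIF detected"

# Hypothetical log-likelihoods: the group term improves fit markedly,
# the interaction term does not.
print(classify_dif(ll_base=-510.0, ll_group=-500.0, ll_interaction=-499.6))
# → uniform DIF
```

In practice the three models would be fitted with an ordinal (cumulative logits) regression routine and the log-likelihoods read off the fitted models; effect-size criteria are often added so that trivially small but significant differences are not flagged.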
They investigated the internal consistencies of the three engagement scales of the UWES, as well as the construct equivalence and bias for different race groups in their sample. They found the scales had acceptable levels of internal consistency:

- Vigour: α = 0.78
- Dedication: α = 0.89
- Absorption: α = 0.78.
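Coefficients like these are Cronbach alphas. As a reminder of the computation such a coefficient summarises, here is a minimal sketch using made-up Likert responses (not the study's data):

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = list(zip(*rows))                       # one column per item
    item_var_sum = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Made-up responses from five respondents to a three-item sub-scale.
scores = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 2],
]
print(round(cronbach_alpha(scores), 2))  # → 0.96
```

The closer the items track one another, the smaller the summed item variances are relative to the variance of the totals, and the closer alpha gets to 1.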

Using exploratory factor analysis and bias analysis, they found the results acceptable for different race groups, with no uniform or non-uniform bias. This study confirmed that the UWES-17 was acceptable for comparing the work engagement of different race groups. Their study also revealed 'problematic items'. These were:

- items 4 and 14 in the three-factor model (item 4: 'I feel strong and vigorous in my job' and item 14: 'I get carried away by my work')
- items 3, 11, 15 and 16 in the one-factor model (item 3: 'Time flies when I am working', item 11: 'I am immersed in my work', item 15: 'I am very resilient, mentally, in my job' and item 16: 'It is difficult to detach myself from my job').

Storm and Rothmann (2003) suggested that the items might be problematic because they are ambiguous, or sample or country specific, and that some of the problems in the items may relate to difficult words that the participants struggled to understand and/or interpret. Examples are 'vigorous', 'immersed' and 'resilient'. Idiomatic expressions like 'I get carried away by my work', 'At my work, I feel bursting with energy' or 'Time flies when I am working' (own emphasis) may also be culture- or country-specific and may present similar problems to other cultural groups.

Storm and Rothmann (2003) suggested that we need further research in other occupations in South Africa to establish norms for engagement levels. They encouraged using larger samples because larger samples might yield increased reliability.
They suggested that study findings would be consistent in similar groups and recommended modifying the content of the problematic items.

Barkhuizen and Rothmann (2006) conducted another study. The objectives of their study were to assess the psychometric properties of the UWES-17 for academic staff in South African higher education institutions and to investigate differences in the work engagement of different demographic groups. In this study, different demographic groups referred to position in the organisation, qualification, age and gender. The study confirmed that the UWES-17 shows high internal consistency. However, they identified problems with items 9 and 12 (item 9: 'I feel happy when I am engrossed in my work' and item 12: 'In my job, I can continue working for very long periods of time'). These two items showed high standardised residuals and the findings suggested that these items may require deletion or content modification.

In both these studies (Barkhuizen & Rothmann, 2006; Storm & Rothmann, 2003) some of the items on the vigour and absorption scales were problematic. This indicated that item bias of the UWES-17 in South Africa was possible. However, these studies reported no ICCs. Therefore, the main objective of the current study is to assess the DIF in all three of the UWES-17 sub-scales for different cultural groups in a large ICT sector company. In order to achieve this aim, the researchers used the research design that follows.

Research design

Research approach

The researchers used an existing data set. The basis of the SDA was a dataset of the UWES-17 that the researchers obtained from an empirical quantitative research survey undertaken in a South African ICT sector company, as part of a larger research project.
They focused specifically on the responses to the UWES-17 scale items in order to address the research objective.

Research method

Research participants and sampling

The primary data the researchers used for this study came from a South African ICT sector company that has a workforce of 24 134 full-time employees in positions up to middle management level. The
