Are Cognitive g and Academic Achievement g One and the Same g?


Intelligence 40 (2012) 123–138
Contents lists available at SciVerse ScienceDirect

Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock–Johnson and Kaufman tests

Scott Barry Kaufman (a,*), Matthew R. Reynolds (b), Xin Liu (c), Alan S. Kaufman (d), Kevin S. McGrew (e,f)

a Psychology Department, New York University, USA
b The University of Kansas, USA
c NCS Pearson, Clinical Assessment, Bloomington, MN, USA
d Yale Child Study Center, Yale University School of Medicine, New Haven, CT, USA
e Woodcock-Muñoz Foundation, USA
f University of Minnesota, Minneapolis, MN, USA

* Corresponding author at: Department of Psychology, New York University, 6 Washington Place, Room 158, New York, NY 10003, USA. E-mail address: scott.barry.kaufman@nyu.edu (S.B. Kaufman).

Article history: Received 13 January 2011; received in revised form 14 January 2012; accepted 14 January 2012; available online 7 February 2012.

Keywords: Intelligence; General cognitive ability; Academic achievement; Cattell–Horn–Carroll (CHC) theory; Kaufman Assessment Battery for Children — Second Edition (KABC-II); Kaufman Test of Educational Achievement — Second Edition (KTEA-II) Comprehensive Form; Woodcock–Johnson Third Edition (WJ III); General intelligence (g)

Abstract

We examined the degree to which the conventional notion of g associated with IQ tests and general cognitive ability tests (COG-g) relates to the general ability that underlies tests of reading, math, and writing achievement (ACH-g). Two large, nationally representative data sets and two independent, individually administered sets of test batteries were analyzed using confirmatory factor analysis procedures: (a) the Kaufman-II sample (N = 2520), organized into six age groups between 4–5 and 16–19 years, tested on both the Kaufman Assessment Battery for Children — Second Edition (KABC-II) and the Kaufman Test of Educational Achievement — Second Edition (KTEA-II) Comprehensive Form; and (b) the WJ III sample (N = 4969), organized into four age groups between 5–6 and 14–19 years, tested on both the Cognitive and Achievement batteries of the Woodcock–Johnson — Third Edition (WJ III). Second-order latent factor models were used to model the test scores. Multi-group confirmatory factor analysis was used to investigate factor loading invariance across the age groups. In general, invariance was tenable, which allowed for valid comparisons of second-order COG-g and ACH-g factor variances, covariances, and correlations across age. Although COG-g and ACH-g were not isomorphic, they correlated substantially, with an overall mean correlation coefficient of .83, and with the correlations generally increasing with age (ranging from .77 to .94). The nature of the relation between COG-g and ACH-g was explored, and the best measures of COG-g were examined.

© 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.intell.2012.01.009

1. Introduction

One of the central purposes of intelligence testing, dating back to Alfred Binet, is to predict educational achievement (Binet & Simon, 1916). Research has shown a moderate to strong relation between general cognitive ability (g) and school grades, ranging from 0.40 to 0.70 (Mackintosh, 1998). Jensen (1998) noted that the median validity coefficient of IQ for educational variables is about .50, with the spread of
the validity coefficients varying considerably depending on the variability of the group (the coefficient being higher for those nearer to the lower end of the educational ladder).

Even though IQ–achievement correlations tend to be moderate to high, typically about 50 to 75% of the variance in academic achievement is unaccounted for by measures of cognitive ability alone (Rohde & Thompson, 2007). While some of the unaccounted-for variance is measurement error, there are certainly many factors besides g that systematically play a role in determining school grades, including domain-specific aptitudes (e.g., Gustafsson & Balke, 1993), other student characteristics (e.g., social-emotional functioning, behavior, motivation, grit, affect, metacognition, specific

psychomotor skills), classroom practices, design and delivery of curriculum and instruction, school demographics, climate, politics and practices, home and community environments, and, indirectly, state and school district organization and governance (Wang, Haertel, & Walberg, 1993). Indeed, in his seminal review of the link between cognitive ability and academic achievement, Ceci (1991) showed that the relation between IQ and academic achievement is substantially influenced by the context of the learning environment. School learning is clearly the result of the interaction of a complex set of proximal and distal student and environmental characteristics.

The correlation between IQ and academic achievement is typically higher for standardized tests of achievement than for grades in school (Mackintosh, 1998), because school performance is more strongly tied to the curriculum, student effort, teacher competency, and other "irrelevant" variables. Research has shown that IQ and achievement test scores have yielded correlation coefficients that usually range from the mid-.60s to mid-.70s (Naglieri & Bornstein, 2003) and sometimes reach the mid-.80s (The Psychological Corporation, 2003, Table 5.15), based on a variety of individually administered IQ and achievement tests.

Although such standardized achievement tests certainly do not guarantee that all students will be on equal footing in terms of their learning context, the tests do minimize potentially confounding variables such as idiosyncratic teacher grading styles and teacher perceptions. Another benefit of using standardized achievement tests to assess the relation between intelligence and academic achievement is that factor analysis can be applied to multiple tests, allowing for an assessment of common variance across the tests and minimizing error variance,¹ which can contribute to a less-than-accurate correlation with g (e.g., Watkins, Lei, & Canivez, 2007). Lastly, individually administered tests eliminate confounds related to group-administered tests in that a skilled examiner may minimize examinee-related variance related to motivation, comprehension of instructions, effort, attention, and so forth, especially for younger school-aged children who are not used to standardized test situations (Kaufman, 1979).

In a recent study, multiple measures of g were used to predict group-administered standardized national public examination results across 25 academic subjects (Deary, Strand, Smith, & Fernandes, 2007). The correlation between a latent g factor measured at age 11 and a latent general educational achievement factor measured at age 16 was 0.81. A longitudinal coefficient as substantial as .81 is remarkably high and suggests that the latent g and academic achievement constructs might approach identity when assessed concurrently. Other studies have found that the average IQ of a nation is highly correlated with the academic achievement of that nation (Lynn & Meisenberg, 2010; Rindermann, 2007). Looking at differences in IQ across 86 countries, Lynn and Meisenberg (2010) found a correlation of .92 between a nation's measured IQ and the educational attainment of its school students in math, science, and reading comprehension. Correcting for attenuation, they found a correlation of 1.0.
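The correction for attenuation used in such analyses follows Spearman's classic formula: the observed correlation is divided by the square root of the product of the two measures' reliabilities. The short Python sketch below reproduces that arithmetic; the reliability values are placeholders chosen only to show how an observed correlation of .92 can disattenuate to approximately 1.0, not figures reported by Lynn and Meisenberg (2010).

```python
import math

def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the correlation
    between true scores from an observed correlation and the
    reliabilities of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities, for illustration only (not values from the paper).
observed_r = 0.92
rel_national_iq = 0.90
rel_achievement = 0.94

print(round(disattenuate(observed_r, rel_national_iq, rel_achievement), 2))  # -> 1.0
```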
All of these results are consistent with Spearman (1904), who suggested over 100 years ago that the general factor extracted from a performance battery based on school grades would be almost perfectly correlated with general intelligence.

A related issue is the extent to which the relation between cognitive-g (COG-g) and academic achievement-g (ACH-g) varies as a function of age. Studies that have assessed the relation at various ages have reported a declining association with age, which has been attributed to dropout effects, increasing restriction of range, variability in educational complexity, and the increasing role of dispositional factors (e.g., motivation and persistence) (Jensen, 1998). None of these studies, however, (a) estimated the correlations between a latent g factor from individually administered tests of cognitive ability and a latent general factor extracted from standardized achievement tests, or (b) tested for factorial invariance across the different age groups. Gustafsson and Balke (1993) investigated the relationship between a latent cognitive ability factor and a latent school achievement factor comprising course grades in 17 different subjects. They found that COG-g explained a substantial amount (40%) of the variance in ACH-g. Similarly, among a sample of German students, Rindermann and Neubauer (2004) found a correlation of .63 between COG-g and an ACH-g consisting of school grades. In both of these studies, standardized tests of ACH-g were not administered.

Among recent studies that have included standardized measures of academic achievement (Deary et al., 2007; Spinks et al., 2007), the measures of academic achievement were group administered, and the correlations between a latent general cognitive ability factor and latent educational achievement scores were assessed within longitudinal designs, not via cross-sectional methodology. Although these studies provided important insights into the possible causal relations linking COG-g to ACH-g, they did not directly measure the degree to which the two types of g are the same or different, which is best examined when cognitive and achievement tests are administered concurrently. Also, they did not assess whether the COG-g and ACH-g correlation differs as a function of chronological age.²
To cast light on these issues, the current study aimed: (a) to assess the relation between a latent g factor extracted from a battery of individually administered cognitive ability tests (reflecting COG-g) and a latent academic achievement factor extracted from a battery of academic achievement tests (reflecting ACH-g) using large, nationally representative samples; (b) to test the equivalence of second-order COG-g and ACH-g latent factor correlations across a wide range of age groups; (c) to cross-validate these findings with a second large battery of cognitive and achievement tests, normed on an independent, nationally representative sample; and (d) to interpret all results from the perspective of Cattell–Horn–Carroll (CHC) theory (see Carroll, 1993; Horn & Noll, 1997; McGrew, 2005, 2009).

¹ If latent variable structural equation model-based factor methods are used, the relations between cognitive and achievement latent variables can be estimated in a way that is purified, or purged, of measurement error.
² It is possible to investigate developmental age effects if more complex longitudinal test–retest designs are used that include developmental time-lag components (see McArdle & Woodcock, 1997, for an example).

The CHC model represents a merger of the Horn–Cattell Gf–Gc theory (Horn & Cattell, 1966; Horn & Noll, 1997) and Carroll's (1993) three-tiered hierarchical organization of human abilities. Historically, both theories placed a key emphasis on an array of 8–10 cognitive Broad Abilities. The merged, contemporary CHC theory identifies 10 such Broad Abilities, for example, Crystallized Knowledge (Gc), Fluid Reasoning (Gf), Short-term Retrieval (Gsm), and Processing Speed (Gs). CHC theory is particularly pertinent as a theoretical foundation for the present study because the 10 Broad Abilities include eight that are readily identifiable as cognitive and two that fit naturally into the academic achievement domain: Grw (Reading & Writing) and Gq (Quantitative Knowledge). Furthermore, CHC theory is the main theoretical basis for nearly all current individually administered tests of cognitive ability (Kaufman, DeYoung, Gray, Brown, & Mackintosh, 2009), including both sets of instruments used to address the COG-g and ACH-g relation in the present study.

2. Method

2.1. Participants

Kaufman-II. The "Kaufman sample" included the conorming population of the Kaufman Test of Educational Achievement — Second Edition, Comprehensive Form (KTEA-II; Kaufman & Kaufman, 2004b) and the Kaufman Assessment Battery for Children — Second Edition (KABC-II; Kaufman & Kaufman, 2004a). This sample comprised a total of 2520 students included in the KABC-II norm sample and also in either the age-norm or the grade-norm sample of the KTEA-II. About half the sample was tested on KTEA-II Form A (n = 1227) and the other half on KTEA-II Form B (n = 1293). Both KTEA-II and KABC-II norm samples were stratified to be close to population percentages for gender, ethnicity, parental education, and geographic region, according to US population data from the Current Population Survey, March 2001. Analyses were conducted on six age groups: 4–5 (n = 295), 6 (n = 198), 7–9 (n = 565), 10–12 (n = 577), 13–15 (n = 511), and 16–19 (n = 374). Each age sample matched Census figures with reasonable accuracy. The total sample comprised 1257 (49.9%) females and 1263 (50.1%) males; 1569 (62.2%) Caucasians, 375 (14.9%) African Americans, 445 (17.7%) Hispanics, and 131 (5.2%) "Others" (e.g., American Indians, Alaska Natives, Asian Americans, and Pacific Islanders); 362 (14.4%) had parents who completed less than 12 years of formal schooling, 818 (32.5%) had parents who graduated high school, 759 (30.1%) had parents who completed 1–3 years of college, and 581 (23.0%) had parents who graduated college; and 350 (13.9%) lived in the Northeast, 662 (26.3%) lived in the North Central region, 875 (34.7%) lived in the South, and 633 (25.1%) lived in the West.

WJ III. The "WJ III sample" comprised N = 4969 individuals tested on the Woodcock–Johnson III (WJ III; Woodcock, 2001). The sample was drawn from the nationally representative WJ III standardization sample (see McGrew & Woodcock, 2001). It was constructed using a three-stage stratified sampling plan that controlled for 10 individual (e.g., race, gender, educational level, occupational status) and community (e.g., community size, community SES) variables as per the United States Census projection for the year 2000. Analyses were conducted on four age groups: 5 to 6 (n = 639), 7 to 8 (n = 720), 9 to 13 (n = 1995), and 14 to 19 (n = 1615). Within each age group, two randomly divided subsamples were used so that the analysis would consist of models calibrated in one sample and then cross-validated in another sample at each respective age grouping.
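A random split of each age group into calibration and cross-validation halves, as described above, could be sketched as follows. This is a minimal illustration, not the authors' procedure or data; the DataFrame name (wj3) and the age_group column are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2012)

def split_half(group: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Randomly divide one age group into a calibration half and a
    cross-validation half of (approximately) equal size."""
    shuffled = group.sample(frac=1.0, random_state=int(rng.integers(1_000_000)))
    mid = len(shuffled) // 2
    return shuffled.iloc[:mid], shuffled.iloc[mid:]

# Hypothetical usage with a DataFrame of WJ III records:
# calibration, validation = {}, {}
# for age, grp in wj3.groupby("age_group"):
#     calibration[age], validation[age] = split_half(grp)
```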
2.2. Measures

Kaufman-II. The cognitive and achievement tests used in the Kaufman sample were from the KABC-II and KTEA-II test batteries. Descriptions of the KTEA-II and KABC-II subtests are presented in the test manuals (Kaufman & Kaufman, 2004a, Table 1.2; Kaufman & Kaufman, 2004b, Table 1.1) and are available in a number of assessment texts (Kaufman, Lichtenberger, Fletcher-Janzen, & Kaufman, 2005; Lichtenberger & Breaux, 2010). Estimates of reliability and evidence of validity for all KTEA-II and KABC-II scores are reported by Kaufman and Kaufman (2004a, 2004b), Kaufman et al. (2005), and Lichtenberger and Breaux (2010); the pros and cons of the instruments, as evaluated by independent reviewers, are also summarized by Kaufman et al. (2005), Lichtenberger and Breaux (2010), and Reynolds, Keith, Fine, Fisher, and Low (2007).

The KTEA-II is an individually administered measure of academic achievement for individuals ages 4.5 through 25 years. It includes 14 subtests, nine of which measure (a) mathematics (Math Computation, Math Concepts & Applications), (b) reading (Letter & Word Recognition, Reading Comprehension, Nonsense Word Decoding), (c) reading fluency (Word Recognition Fluency, Decoding Fluency), and (d) written language (Written Expression, Spelling). In terms of the CHC taxonomy (McGrew, 2005, 2009), the reading, writing, and spelling tests are associated with Grw, and the math tests with the Gq factor. The other five KTEA-II subtests did not fit into the reading–writing (Grw) or math (Gq) domains and were best classified as measuring Gc (Listening Comprehension, Oral Expression, Associational Fluency) and Glr (Naming Facility/RAN; Flanagan, Ortiz, & Alfonso, 2007; Flanagan, Ortiz, & Alfonso, in press; Kaufman et al., 2005). These KTEA-II subtests were, therefore, included in this study as measures of COG-g, although not all of these subtests were administered to all age groups.

The KABC-II is a popular individually administered measure of intelligence. The scoring structure of the KABC-II includes five CHC broad ability composites: Gc, Glr, Gf, Gsm, and Visual Processing (Gv). A total of 18 KABC-II subtests were included in this study, although not all of the tests were available for each age group. The KABC-II and KTEA-II subtests were organized into CHC broad ability factors, which, in turn, were regressed onto cognitive and achievement second-order g factors (see Table 1).

WJ III. The cognitive and achievement measures used in the WJ III sample were from the WJ III Cognitive, Achievement, and Diagnostic Supplement test batteries (Woodcock, McGrew, Mather, & Schrank, 2003). A total of 40 tests were used. The development, standardization, and psychometric properties of the WJ III battery have generally been evaluated favorably by independent reviewers (Bradley-Johnson, Morgan, & Nutkins, 2004; Cizek, 2003; Sandoval, 2003; Sares, 2005; Thompson, 2005). CHC theory (McGrew, 2005, 2009) was used to organize the tests by CHC broad ability factors. These classifications are shown in Table 2.

Table 1
Kaufman organization of subtests into broad abilities.

COG-g
Gc: Verbal Knowledge, Expressive Vocabulary, Riddles, Oral Expression, Listening Comprehension, Written Expression, Reading Comprehension, Gestalt Closure, Associational Fluency
Gv: Triangles, Block Counting, Rover, Pattern Reasoning, Gestalt Closure, Face Recognition, Conceptual Thinking
Gf: Story Completion, Pattern Reasoning, Hand Movements, Conceptual Thinking
Glr: Atlantis, Atlantis Delayed, Rebus, Rebus Delayed, Naming Facility
Gsm: Word Order, Number Recall, Hand Movements

ACH-g
Grw: Letter–Word Recognition, Written Expression, Nonsense Word Decoding, Spelling, Reading Comprehension, Decoding Fluency, Word Recognition Fluency
Gq: Math Concepts & Applications, Math Computation

Note. Subtests listed under more than one factor were cross-loaded.

Table 2
WJ III organization of tests into broad abilities.

COG-g
Gc: Verbal Comprehension, General Information, Oral Comprehension, Story Recall, Memory for Sentences, Rapid Naming, Retrieval Fluency
Gv: Spatial Relations, Block Rotation, Visual Closure, Cross-Out, Planning
Gf: Analysis-Synthesis, Concept Formation, Numerical Reasoning
Glr: Memory for Names, Memory for Names Delayed, Picture Recognition, Visual–Auditory Memory, Visual–Auditory Memory Delayed
Gsm: Memory for Words, Memory for Sentences, Auditory Working Memory, Numbers Reversed
Ga: Auditory Attention, Sound Patterns, Incomplete Words, Sound Blending
Gs: Decision Speed, Visual Matching, Cross-Out, Rapid Naming, Writing Fluency, Math Fluency

ACH-g
Grw: Spelling, Writing Samples, Letter–Word Identification, Passage Comprehension, Word Attack, Reading Fluency, Editing, Writing Fluency
Gq: Applied Problems, Math Calculation, Math Fluency, Numerical Reasoning

Note. Tests listed under more than one factor were cross-loaded.
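As a minimal illustration of the kind of second-order structure summarized in Tables 1 and 2 (broad CHC factors regressed on correlated COG-g and ACH-g factors), the fragment below uses lavaan-style syntax with the Python semopy package. This is a sketch only, not the authors' code or software: the indicator names are hypothetical column names, the indicator lists are abbreviated, and the cross-loadings, correlated residuals, and age-specific adjustments described in the text are omitted.

```python
import pandas as pd
import semopy

# Abbreviated two-g, second-order model in lavaan-style syntax.
MODEL_DESC = """
Gc  =~ verbal_knowledge + riddles + expressive_vocabulary
Gv  =~ triangles + block_counting + rover
Gf  =~ story_completion + pattern_reasoning
Gsm =~ word_order + number_recall
Grw =~ letter_word_recognition + spelling + reading_comprehension
Gq  =~ math_concepts_applications + math_computation
COG_g =~ Gc + Gv + Gf + Gsm
ACH_g =~ Grw + Gq
COG_g ~~ ACH_g
"""

def fit_two_g_model(scores: pd.DataFrame) -> pd.DataFrame:
    """Fit the sketch model to a DataFrame of subtest scores and return
    global fit statistics (chi-square, CFI, RMSEA, and so on)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(scores)
    return semopy.calc_stats(model)
```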

The variables used were the same across all age groups except in the youngest sample (ages 5–6), where the Writing Fluency and Reading Fluency tests and Editing were not applicable.

2.3. Analytic strategy

Preliminary models. Second-order CFA models were developed for each age group and each test battery. Initial models were based on information from the test manuals, CHC theory, and prior research. The goal was to develop models that were acceptable from theoretical and statistical standpoints. Once acceptable models were estimated in each age group, multi-group confirmatory factor models were estimated. General specifications for the Kaufman-II and WJ III models are presented next, with more detailed information presented in the Results section.

Kaufman-II. To estimate the correlation between KABC-II COG-g and KTEA-II ACH-g across ages, a model with correlated second-order COG-g and ACH-g factors was proposed in each age group; this model was based on theoretical, empirical, and clinical considerations (Kaufman et al., 2005; Lichtenberger & Breaux, 2010; Reynolds et al., 2007). Because not all subtests were administered at all ages, the number of factor indicators varied by age. Factor-to-indicator specifications are shown in Table 1.

The same 18 cognitive and 10 achievement subtests were administered to participants across the 10–12, 13–15, and 16–19 age groups. The initial model for these age groups included seven first-order CHC common factors (Gf, Gc, Glr, Gsm, Gv, Grw, Gq), with these first-order factors regressed on two correlated second-order latent factors. The Grw and Gq factors were regressed on a second-order ACH-g factor, and the Gf, Gc, Glr, Gsm, and Gv factors were regressed on a second-order COG-g factor.

Participants in the 7–9 age group were administered the same 18 cognitive subtests as the older age groups; however, they were given two fewer KTEA-II achievement subtests (see Fig. 1). These fewer subtests, however, did not influence the number of factors (see Figs. 1 and 2).

Although there were few differences in factor indicators in the 7–9, 10–12, 13–15, and 16–19 age groups, there were several differences in the youngest two age groups. In the 6-year-old age group, Word Recognition Fluency, Decoding Fluency, Spelling, Nonsense Word Decoding, and Reading Comprehension (all indicators of the Grw factor) are not age appropriate and were not administered. Grw was thus indicated by two subtests at age 6 (viz., Letter & Word Recognition and Written Expression). In addition to these KTEA-II subtests, Atlantis Delayed and Rebus Delayed were not administered. Two subtests not administered to the older age groups, Conceptual Thinking and Gestalt Closure, were administered to children at age 6 (and also ages 4–5). Conceptual Thinking was specified to load on both the Gf and Gv factors for 6-year-olds, and Gestalt Closure was specified to load on both Gv and Gc.

Last, the age 4–5 model included several departures from the models in the other age groups. First, there was not a Gf factor; Gv and Gf were not differentiated for this age group in the KABC-II (see Kaufman & Kaufman, 2004a). Gv was indicated by Gestalt Closure (cross-loaded on Gc), Conceptual Thinking, Face Recognition, and Triangles. Three KTEA-II achievement subtests were administered at these ages: Math Concepts & Applications, Written Expression, and Letter & Word Recognition. Gq was indicated by the single subtest, Math Concepts & Applications. To identify this factor, the residual variance was fixed to 12.97 (Keith, 2006), which was calculated by subtracting the internal reliability estimate (.93) from 1.0 and then multiplying this value (.07) by the subtest's variance (185.29).
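The fixed residual for this single-indicator factor follows directly from the reliability and variance just reported. The short computation below is a sketch of that standard identification strategy (fixing the error variance to (1 − reliability) × observed variance); it simply reproduces the arithmetic.

```python
def fixed_residual_variance(reliability: float, variance: float) -> float:
    """Error variance implied for a single-indicator latent factor:
    (1 - reliability) * observed variance (Keith, 2006)."""
    return (1.0 - reliability) * variance

# Math Concepts & Applications at ages 4-5: reliability .93, variance 185.29
print(round(fixed_residual_variance(0.93, 185.29), 2))  # -> 12.97
```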
There were some additional modifications to the specifications outlined above. In addition to second-order factor correlations, some first-order factor unique variances between the cognitive and achievement factors were correlated. These correlations were freed because previous research has supported the influence of broad ability CHC factors on achievement above and beyond the influence of g (e.g., Benson, 2008; Keith, 1999). In addition, some KABC-II subtests were found to cross-load on two factors in previous research (Reynolds et al., 2007). We allowed these subtests (Hand Movements on Gf and Gsm; Pattern Reasoning on Gf and Gv; Gestalt Closure, when administered, on Gc and Gv) to cross-load on two cognitive factors across the age groups. Both Written Expression and Reading Comprehension cross-loaded on Gc. Delayed subtest residuals were correlated freely with their respective tests from the initial measurement (e.g., Atlantis with Atlantis Delayed). Lastly, three additional measurement residual correlations were estimated freely because they represented common content or method: Word Recognition Fluency and Decoding Fluency; Decoding Fluency and Nonsense Word Decoding; and Associational Fluency and Naming Facility. Additional age-group specific adjustments are discussed in the Results section (model calibration sample; see Fig. 2).

WJ III. Initial models were developed using Subsample 1 (model calibration sample). The same model specifications were then utilized when Subsample 2 (model cross-validation sample) data were analyzed. The general COG-g model was specified to include seven first-order CHC latent factors (Gf, Gc, Glr, Ga, Gv, Gsm, Gs), which were, in turn, regressed on a second-order COG-g factor. The general ACH-g model used across ages specified two broad first-order latent achievement factors (Gq, Grw), with those factors regressed on a second-order ACH-g factor. The COG-g and ACH-g second-order factors were correlated. Some correlated residuals and cross-loadings were also included across all age groups. Delayed recall test residuals were correlated with each other and with their corresponding residual from the initial measurement. In addition, the three achievement fluency test residuals were correlated when those tests were administered. Five cognitive tests had cross-loadings, which were allowed across ages: Retrieval Fluency loaded on Gc and Gs, Numerical Reasoning on Gf and Gq, Memory for Sentences on Gc and Gsm, Rapid Naming on Gc and Gs, and Cross-Out on Gv and Gs. In addition, the Writing (Grw) and Math (Gq) Fluency tests were cross-loaded on the Gs factor. The organization of tests by broad abilities is shown in Table 2.

There were some age-group specific correlations between the cognitive and achievement first-order unique variances. Moreover, there were some age-group specific measurement residual correlations, as explained in the Results section.

Multi-group models. The primary purpose of this research was to test the equivalence of g factor covariances (correlations) and variances across age levels.

Fig. 1. Kaufman COG-g/ACH-g second-order model for children aged 7–9. Note. Second-order factor model for the Kaufman-II data, with two correlated second-order g factors. The correlation between the Glr–Grw uniquenesses was not included in the figure.

Fig. 2. Woodcock–Johnson III COG-g/ACH-g second-order model for children aged 9–13. Note. Second-order factor model for the Woodcock–Johnson III data, with two correlated second-order g factors. Correlations between the Grw–Gc, Gf–Gq, Gs–Gq, and Gs–Grw uniquenesses were not included in the figure.

Valid comparisons of factor variances and covariances required that factors had the same meaning across age. This assumption was tested empirically via tests of factorial invariance. Higher-order models were used in this research; thus, age-invariance of both first- and second-order factor loadings was evaluated in a set of nested, multi-group models across age groups. Invariant first- and second-order loadings allowed for valid quantitative comparisons of factor variances and covariances across the age groups (Gregorich, 2006).

Substantive questions were addressed using multi-group analysis in which like factor variances and covariances (correlations), in addition to invariant first- and second-order factor loadings, were constrained across age groups. Before factor covariances were compared, the factor variances were tested for equivalence across age. If equality of factor variances was tenable, these equality constraints were maintained and the equivalence of COG-g/ACH-g factor covariances (correlations, if factor variances were equal) across age was tested.

One issue that arose while testing for factorial invariance was that some subtests were not administered at all ages; consequently, some factor indicators varied across age groups. If a subtest indicator of a common factor was missing in one age group, it was specified as a latent variable with zero variance (i.e., missing data). This specification, along with the common indicators within each factor that were available across ages, allowed us to proceed with invariance tests in the multi-group models despite some differences in factor indicators across age (see Keith, Low, Reynolds, Patel, & Ridley, 2010; Keith & Reynolds, 2012; McArdle, 1994; Reynolds et al., 2007).

2.4. Model fit

Values for model chi-square (χ²), the root mean square error of approximation (RMSEA; Steiger & Lind, 1980), and the comparative fit index (CFI; Bentler, 1990) were reported and used for evaluation of individual models. The standardized root mean square residual (SRMR; Hu & Bentler, 1999) was reported when available.

There is no definitive method for evaluating the fit of tests of factorial invariance. To compare the fit of hypothesized models while testing for invariance, we used the likelihood ratio test (Bentler & Bonett, 1980) and ΔCFI. In a simulation study, Cheung and Rensvold (2002) found that a ΔCFI of .01 or greater indicated meaningful change, and that the index was not overly sensitive to small errors of approximation when applied to first-order factor models. Some may consider ΔCFI a more liberal criterion. Alternatively, the likelihood ratio test is often considered to be overpowered at detecting unimportant and tiny differences in fit when there are a large number of constraints and a large sample size. The more liberal criterion was given more weight for measurement invariance models due to the complexity of the models, the number of constraints, and the large sample size. The likelihood ratio test was used when a test of one or very few specific parameters related to structural-level substantive hypotheses was required. In addition, RMSEA values and SRMR (when available) were reported for all models. Steiger's (1998) multi-group correction (RMSEA × √number of groups) was applied to RMSEA values. Some general guidelines for changes in these index values for demonstrating factor loading invariance have been provided in previous research: ΔCFI < .01; ΔR
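As a rough illustration of how these indices and comparisons are computed from standard formulas, the sketch below calculates RMSEA (with Steiger's multiple-group correction), CFI, ΔCFI, and the likelihood ratio (Δχ²) test for two nested multi-group models. This is not the authors' code, and the chi-square, degrees-of-freedom, and sample-size values are placeholders, not results from the paper.

```python
import math
from scipy.stats import chi2 as chi2_dist

def rmsea(chi2: float, df: int, n: int, groups: int = 1) -> float:
    """RMSEA, multiplied by the square root of the number of groups
    (Steiger's 1998 multiple-group correction)."""
    value = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return value * math.sqrt(groups)

def cfi(chi2_m: float, df_m: int, chi2_0: float, df_0: int) -> float:
    """Comparative fit index relative to the baseline (null) model."""
    d_model = max(chi2_m - df_m, 0.0)
    d_null = max(chi2_0 - df_0, d_model)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

def lr_test(chi2_free: float, df_free: int,
            chi2_constrained: float, df_constrained: int):
    """Likelihood ratio (chi-square difference) test for nested models."""
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    return d_chi2, d_df, chi2_dist.sf(d_chi2, d_df)

# Placeholder fit values for a freely estimated vs. loading-constrained model.
free = {"chi2": 850.0, "df": 400}
constrained = {"chi2": 905.0, "df": 440}
null = {"chi2": 12000.0, "df": 496}
n, groups = 2520, 6

delta_cfi = (cfi(free["chi2"], free["df"], null["chi2"], null["df"])
             - cfi(constrained["chi2"], constrained["df"], null["chi2"], null["df"]))
print(round(rmsea(constrained["chi2"], constrained["df"], n, groups), 3))
print(round(delta_cfi, 4),
      lr_test(free["chi2"], free["df"], constrained["chi2"], constrained["df"]))
```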

