Quantitative Data Analysis - OUHK


Seminar on Quantitative Data Analysis: SPSS and AMOS
Miss Brenda Lee
2:00 p.m. – 6:00 p.m., 24th July, 2015
The Open University of Hong Kong
SBAS (Hong Kong) Ltd. - All Rights Reserved.

Agenda
- MANOVA, Repeated Measures ANOVA, Linear Mixed Models – Demo and Q&A
- EFA – Demo and Q&A
- CFA and SEM – Demo and Q&A

What is MANOVA
- MANOVA is short for Multivariate ANalysis Of Variance.
- It involves one or more independent variables and two or more dependent variables.
- It tests for population group differences on several dependent measures simultaneously (a set, or vector, of means).

What is MANOVA (cont'd)
[Figure: Two Groups Compared on Three Outcome Measures]

MANOVA Assumptions
- Large samples or multivariate normality.
- Homogeneity of the within-group variance-covariance matrices (Box's M test).
- Residuals (errors) follow a multivariate normal distribution in the population.
- Linear model (additivity, independence between the error and model effect, independence of the errors).

What to Look for in MANOVA
- Multivariate statistical tests.
- Post hoc tests on marginal means (univariate only).
- Type 1 through Type 4 sums of squares.
- Specify Multiple Random Effect models, if necessary.
- Residuals, predicted values and influence measures.

General Linear Model in SPSS
- General Linear Model – Factors and covariates are assumed to have linear relationships to the dependent variable(s).
- GLM Multivariate procedure – Models the values of multiple dependent scale variables, based on their relationships to categorical and scale predictors.
- GLM Repeated Measures procedure – Models the values of multiple dependent scale variables measured at multiple time periods, based on their relationships to categorical and scale predictors and the time periods at which they were measured.

MANOVA Results
- Multivariate Tests
  - Pillai's trace is a positive-valued statistic. Increasing values of the statistic indicate effects that contribute more to the model.
  - Wilks' lambda is a positive-valued statistic that ranges from 0 to 1. Decreasing values of the statistic indicate effects that contribute more to the model.

MANOVA Results (cont'd)
- Hotelling's trace is the sum of the eigenvalues of the test matrix. It is a positive-valued statistic for which increasing values indicate effects that contribute more to the model.
- Roy's largest root is the largest eigenvalue of the test matrix. Thus, it is a positive-valued statistic for which increasing values indicate effects that contribute more to the model.
- There is evidence that Pillai's trace is more robust than the other statistics to violations of model assumptions.

Post Hoc Tests
- LSD: The LSD or least significant difference method simply applies standard t tests to all possible pairs of group means. No adjustment is made based on the number of tests performed. The argument is that since an overall difference in group means has already been established at the selected criterion level (say .05), no additional control is necessary. This is the most liberal of the post hoc tests.
- SNK, REGWF, REGWQ & Duncan: The SNK (Student-Newman-Keuls), REGWF (Ryan-Einot-Gabriel-Welsch F), REGWQ (Ryan-Einot-Gabriel-Welsch Q, based on the studentized range statistic) and Duncan methods involve sequential testing. After ordering the group means from lowest to highest, the two most extreme means are tested for a significant difference using a critical value adjusted for the fact that these are the extremes from a larger set of means. If these means are found not to be significantly different, the testing stops; if they are different, then the testing continues with the next most extreme set, and so on. All are more conservative than the LSD. REGWF and REGWQ improve on the traditionally used SNK in that they adjust for the slightly elevated false-positive rate (Type I error) that SNK has when the set of means tested is much smaller than the full set.
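The four multivariate statistics described above are all simple functions of the eigenvalues of the test matrix (E⁻¹H, the error-inverse times the hypothesis SSCP matrix). A minimal numpy sketch, using hypothetical eigenvalues rather than output from a real analysis, makes the relationships concrete:

```python
import numpy as np

# Hypothetical eigenvalues of the test matrix E^-1 H -- illustrative
# values only, not taken from any real dataset.
eigvals = np.array([0.8, 0.3, 0.1])

pillai    = np.sum(eigvals / (1 + eigvals))   # Pillai's trace
wilks     = np.prod(1 / (1 + eigvals))        # Wilks' lambda, in (0, 1]
hotelling = np.sum(eigvals)                   # Hotelling's trace
roy       = np.max(eigvals)                   # Roy's largest root

print(round(pillai, 4), round(wilks, 4), hotelling, roy)
```

Note how the directions match the slide: larger Pillai, Hotelling and Roy values mean larger effects, while Wilks' lambda moves toward 0 as effects grow.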

Post Hoc Tests (cont'd)
- Bonferroni & Sidak: The Bonferroni (also called the Dunn procedure) and Sidak (also called Dunn-Sidak) perform each test at a stringent significance level to ensure that the family-wise (applying to the set of tests) false-positive rate does not exceed the specified value. They are based on inequalities relating the probability of a false-positive result on each individual test to the probability of one or more false positives for a set of independent tests. For example, the Bonferroni is based on an additive inequality, so the criterion level for each pairwise test is obtained by dividing the original criterion level (say .05) by the number of pairwise comparisons made. Thus with five means, and therefore ten pairwise comparisons, each Bonferroni test will be performed at the .05/10 or .005 level.
- Tukey (b): The Tukey (b) test is a compromise test, combining the Tukey (see next test) and the SNK criteria, producing a test result that falls between the two.

Post Hoc Tests (cont'd)
- Tukey: Tukey's HSD (Honestly Significant Difference; also called Tukey HSD, WSD, or Tukey (a) test) controls the false-positive rate family-wise. This means that if you are testing at the .05 level, then when performing all pairwise comparisons, the probability of obtaining one or more false positives is .05. It is more conservative than the Duncan and SNK. If all pairwise comparisons are of interest, which is usually the case, Tukey's test is more powerful than the Bonferroni and Sidak.
- Scheffe: Scheffe's method also controls the family-wise error rate. It adjusts not only for the pairwise comparisons, but also for any possible comparison the researcher might ask. As such it is the most conservative of the available methods (the false-positive rate is lowest), but it has less statistical power.
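The five-means example above can be checked with a few lines of arithmetic. The Bonferroni level follows the slide directly; the Sidak level shown alongside it uses the standard multiplicative form of the inequality, included here for comparison:

```python
from itertools import combinations

k = 5                                            # number of group means (slide example)
n_pairs = len(list(combinations(range(k), 2)))   # 10 pairwise comparisons
alpha = 0.05

bonferroni_level = alpha / n_pairs               # additive inequality: .05 / 10 = .005
sidak_level = 1 - (1 - alpha) ** (1 / n_pairs)   # multiplicative inequality

print(n_pairs, bonferroni_level, round(sidak_level, 6))
```

The Sidak level (about .00512) is very slightly less stringent than the Bonferroni level, which is why Sidak is marginally more powerful when the tests are independent.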

Specialized Post Hoc Tests
- Hochberg's GT2 & Gabriel: Unequal Ns. Most post hoc procedures mentioned above (excepting LSD, Bonferroni & Sidak) were derived assuming equal group sample sizes in addition to homogeneity of variance and normality of error. When the subgroup sizes are unequal, SPSS substitutes a single value (the harmonic mean) for the sample size. Hochberg's GT2 and Gabriel's post hoc test explicitly allow for unequal sample sizes.
- Waller-Duncan: The Waller-Duncan takes an approach (Bayesian) that adjusts the criterion value based on the size of the overall F statistic in order to be sensitive to the types of group differences associated with the F (for example, large or small). Also, you can specify the ratio of Type I (false positive) to Type II (false negative) error in the test. This feature allows for adjustments if there are differential costs to the two types of errors.

Unequal Variances and Unequal Ns and Selection of Post Hoc Tests
- Tamhane T2, Dunnett's T3, Games-Howell, Dunnett's C: Each of these post hoc tests adjusts for unequal variances and sample sizes in the groups. Simulation studies (summarized in Toothaker, 1991) suggest that although Games-Howell can be too liberal when the group variances are equal and sample sizes are unequal, it is more powerful than the others.
- An approach some analysts take is to run both a liberal (say LSD) and a conservative (Scheffe or Tukey HSD) post hoc test. Group differences that show up under both criteria are considered solid findings, while those found different only under the liberal criterion are viewed as tentative results.

Repeated Measures ANOVA
- Tests for significant differences in means when the same observation appears in multiple levels of a factor.

Linear Mixed Models
- The procedure expands the general linear model so that the error terms and random effects are permitted to exhibit correlated and non-constant variability. The linear mixed model, therefore, provides the flexibility to model not only the mean of a response variable, but its covariance structure as well.
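The repeated-measures idea, the same subjects measured at every level of a factor, can be sketched by hand with the classic one-way within-subjects decomposition: subject-to-subject differences are removed from the error term before the condition effect is tested. The data below are hypothetical, invented purely for illustration:

```python
import numpy as np

# Hypothetical scores: rows = subjects, columns = levels of the
# within-subjects factor (e.g. three time points).
y = np.array([[5., 6., 8.],
              [4., 5., 7.],
              [6., 7., 9.],
              [5., 5., 8.]])
n, k = y.shape
grand = y.mean()

# Partition total variability: between conditions, between subjects,
# and the subject-by-condition residual used as the error term.
ss_cond  = n * np.sum((y.mean(axis=0) - grand) ** 2)
ss_subj  = k * np.sum((y.mean(axis=1) - grand) ** 2)
ss_total = np.sum((y - grand) ** 2)
ss_error = ss_total - ss_cond - ss_subj

df_cond, df_error = k - 1, (k - 1) * (n - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
print(round(F, 2))
```

Because consistent subject differences go into ss_subj rather than the error term, the F ratio is far larger than a between-groups ANOVA on the same numbers would give; this is exactly the gain the GLM Repeated Measures procedure exploits.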

Linear Mixed Models (cont'd)
[Figure]

MANOVA, Repeated Measures ANOVA and Linear Mixed Models
- Demo
- Q&A

Exploratory Factor Analysis
- The purpose of data reduction is to remove redundant (highly correlated) variables from the data file, perhaps replacing the entire data file with a smaller number of uncorrelated variables.
- The purpose of structure detection is to examine the underlying (or latent) relationships between the variables.

EFA Methods
- For Data Reduction: The principal components method of extraction begins by finding a linear combination of variables (a component) that accounts for as much variation in the original variables as possible. It then finds another component that accounts for as much of the remaining variation as possible and is uncorrelated with the previous component, continuing in this way until there are as many components as original variables. Usually, a few components will account for most of the variation, and these components can be used to replace the original variables. This method is most often used to reduce the number of variables in the data file.
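The extraction process just described amounts to an eigen-decomposition of the correlation matrix. A small numpy sketch on simulated data (two underlying dimensions hidden in four observed variables; all values hypothetical) shows the key properties:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate 200 cases on 4 variables that really carry only 2 dimensions:
# variables 1-2 echo one latent source, variables 3-4 echo another.
x = rng.normal(size=(200, 2))
data = np.column_stack([x[:, 0], x[:, 0] + 0.3 * rng.normal(size=200),
                        x[:, 1], x[:, 1] + 0.3 * rng.normal(size=200)])

r = np.corrcoef(data, rowvar=False)              # 4 x 4 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(r))[::-1]   # component variances, descending

# The eigenvalues sum to the number of variables (the diagonal of ones),
# and here the first two components carry almost all the variation.
print(eigvals.round(3), round(eigvals.sum(), 6))
```

With four variables the eigenvalues sum to 4, and the first two components account for the bulk of it, which is why a few components can replace the original variables.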

EFA Methods (cont'd)
- For Structure Detection: Other factor analysis extraction methods go one step further by adding the assumption that some of the variability in the data cannot be explained by the components (usually called factors in other extraction methods). As a result, the total variance explained by the solution is smaller; however, the addition of this structure to the factor model makes these methods ideal for examining relationships between the variables.

EFA Methods (cont'd)
- Principal components attempts to account for the maximum amount of variance in the set of variables. Since the diagonal of a correlation matrix (the ones) represents standardized variances, each principal component can be thought of as accounting for as much of the variation remaining in the diagonal as possible.
- Principal axis factoring attempts to account for correlations between the variables, which in turn accounts for some of their variance. Therefore, this method focuses more on the off-diagonal elements in the correlation matrix.
- Unweighted least squares produces a factor solution that minimizes the residual between the observed and the reproduced correlation matrix.
- Generalized least squares does the same thing, only it gives more weight to variables with stronger correlations.
- Maximum likelihood generates the solution that is the most likely to have produced the correlation matrix if the variables follow a multivariate normal distribution.
- Alpha factoring considers the variables in the analysis, rather than the cases, to be sampled from a universe of all possible variables. As a result, eigenvalues and communalities are not derived from factor loadings.
- Image factoring decomposes each observed variable into a common part (partial image) and a unique part (anti-image) and then operates with the common part. The common part of a variable can be predicted from a linear combination of the remaining variables (via regression), while the unique part cannot be predicted (the residual).

EFA - Rotation
[Figure: Two Factors Based on Six Variables]

EFA Result
- Communalities indicate the amount of variance in each variable that is accounted for. Initial communalities are estimates of the variance in each variable accounted for by all components or factors. For principal components extraction, this is always equal to 1.0 for correlation analyses.
- Extraction communalities are estimates of the variance in each variable accounted for by the components.
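For a solution with uncorrelated (orthogonally rotated) components, the extraction communality of each variable is simply the sum of its squared loadings across the retained components. A sketch with a hypothetical 4-variable, 2-component loading matrix:

```python
import numpy as np

# Hypothetical rotated loading matrix: 4 variables x 2 components
# (invented values chosen to show a clean simple structure).
loadings = np.array([[0.85, 0.10],
                     [0.80, 0.05],
                     [0.12, 0.78],
                     [0.08, 0.82]])

# Extraction communality of a variable = sum of its squared loadings:
# the share of that variable's variance the components account for.
communalities = (loadings ** 2).sum(axis=1)

# Variance accounted for by each component = column sums of squared loadings.
component_var = (loadings ** 2).sum(axis=0)
print(communalities.round(3), component_var.round(3))
```

Each communality is below 1.0, the gap being the variance the two retained components fail to reproduce; with all components retained (as in initial principal components estimates) every communality would equal 1.0.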

EFA Result (cont'd)
- For the initial solution of Total Variance Explained, there are as many components as variables, and in a correlation analysis, the sum of the eigenvalues equals the number of components. Extracted components are those with eigenvalues greater than 1.
- The rotated component matrix helps you to determine what the components represent.
- For each case and each component, the component score is computed by multiplying the case's standardized variable values (computed using listwise deletion) by the component's score coefficients.

Exploratory Factor Analysis
- Demo
- Q&A

Confirmatory Factor Analysis
- Tests whether measures of a construct are consistent with a researcher's understanding of the nature of that construct (or factor). As such, the objective of confirmatory factor analysis is to test whether the data fit a hypothesized measurement model. This hypothesized model is based on theory and/or previous analytic research.

Difference between EFA and CFA
- Both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) are employed to understand shared variance of measured variables that is believed to be attributable to a factor or latent construct. Despite this similarity, however, EFA and CFA are conceptually and statistically distinct analyses.

Difference between EFA and CFA (cont'd)
- The goal of EFA is to identify factors based on data and to maximize the amount of variance explained. The researcher is not required to have any specific hypotheses about how many factors will emerge, or what items or variables these factors will comprise.
- By contrast, CFA evaluates a priori hypotheses and is largely driven by theory. CFA analyses require the researcher to hypothesize, in advance, the number of factors, whether or not these factors are correlated, and which items/measures load onto and reflect which factors.

CFA and SEM
- Structural equation modeling software is typically used for performing confirmatory factor analysis. CFA is also frequently used as a first step to assess the proposed measurement model in a structural equation model. Many of the rules of interpretation regarding assessment of model fit and model modification in SEM apply equally to CFA. CFA is distinguished from structural equation modeling by the fact that in CFA, there are no directed arrows between latent factors. In the context of SEM, the CFA is often called 'the measurement model', while the relations between the latent variables (with directed arrows) are called 'the structural model'.

Structural Equation Modeling
- In general, SEM is used when you have a model to test with hypothesized relationships between variables. Typically, we want to assess which variables are important in explaining/predicting another variable (or explaining/predicting other variables, as we can have more than one dependent variable).

SEM Concepts and Definitions
- SEM procedures incorporate both observed and unobserved variables.
- Latent Variables (or Factors)
  - These cannot be observed, nor measured directly.
  - We define latent variables in terms of behaviour believed to represent them (observed, or manifest, variables).
- Exogenous Variables
  - Synonymous with independent variables; in other words, they 'cause' fluctuations in the values of other latent variables in the model.
- Endogenous Variables
  - Synonymous with dependent variables; they are influenced by the exogenous variables in the model, either directly or indirectly.
- Note: In SEM, variables are only either dependent or independent, but cannot be both, although it may appear this way.

AMOS can be used for
- Correlation – measures relationships between 2 variables.
- Simple Regression – an extension of correlation, where we attempt to measure the extent to which one variable (the predictor) can be used to make a prediction about a criterion measure.
- Multiple Regression – extends simple regression by incorporating several predictor variables.
- Factor Analysis – investigates relationships between sets of observed variables and latent variables.
- Path Analysis – extends multiple regression by incorporating several predictor variables to explain or predict several dependent variables.
- SEM – extension of Path Analysis, using latent variables.

SEM Model Notation
[Figure]

Introduction: types of models
[Figure-only slides: a sequence of example model diagrams]

Introduction: real life example
[Figure]

How to calculate degrees of freedom
- A simple formula allows us to calculate the degrees of freedom for any model. The most basic version of the formula is this:
  Df = (number of pieces of information in sample) - (number of parameters estimated)
- By "pieces of information" we mean the sample means, variances, and covariances in the data, the information available to Amos to do its calculations. By "parameters estimated" we mean whatever we ask Amos to calculate, which usually includes effects (single-headed arrows), covariances (double-headed arrows), and even population means and variances.
- Technically, the information in the sample is referred to as "sample moments," just as the name Amos stands for "Analysis of Moment Structures." As we have learned, the estimates Amos makes for our model are called, generically, parameters. Thus, another more general version of the above formula is this:
  Df = (number of distinct sample moments) - (number of parameters estimated)

How to calculate degrees of freedom (cont'd)
- Number of distinct sample moments = p * (p + 1) / 2, where p is the number of observed variables.
- Number of parameters estimated = direct effects + variances + covariances.

Amos - how to operate
- Steps involved:
  - Open data
  - Draw model
  - Run analysis
  - Interpret results
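The two bullets above combine into a one-line calculation. A small sketch (the parameter counts in the example are hypothetical, standing in for whatever a particular path diagram estimates):

```python
def sem_degrees_of_freedom(p, n_params):
    """Df = (distinct sample moments) - (parameters estimated).

    p: number of observed variables. Moments counted here are the
    variances and covariances only, p * (p + 1) / 2, as in a
    covariance-structure analysis where means are not estimated.
    """
    moments = p * (p + 1) // 2
    return moments - n_params

# Example: 4 observed variables -> 4 * 5 / 2 = 10 distinct moments.
# A hypothetical model estimating 3 direct effects + 4 variances = 7
# parameters then has 10 - 7 = 3 degrees of freedom.
print(sem_degrees_of_freedom(4, 7))
```

A negative result means the model asks for more parameters than the data supply moments, i.e. it is unidentified and Amos cannot fit it.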

Testing model adequacy
- We re-calculate estimates for this model, but first ask for extra output (for instructional purposes): in Analysis Properties, tab Output, check Sample moments and Implied moments.

Testing model adequacy (cont'd)
- The Chi2 value, number of degrees of freedom and probability level can be displayed in the diagram automatically: add a title to the diagram and type in the text \cmin, \df, \p. These are "macro names"; Amos will replace them with the actual results.

Testing model adequacy (cont'd)
- Every model implies specific (population) correlations between the variables, so every model implies a certain population correlation matrix.
- The model is our null hypothesis.
- On the other hand, we have the observed correlations, so we have a sample correlation matrix.
- A Chi2 test is used to assess the discrepancy between these two matrices*. If probability < 0.05, we reject our model; if probability > 0.05, we do not reject our model.
  - *Technical note: actually the discrepancy between the sample variance/covariance matrix and the implied variance/covariance matrix is used in the Chi2 test, not the correlation matrix.

Testing model adequacy (cont'd)
- In traditional testing we (as the researcher) have a certain hypothesis we want to "prove" by trying to reject a null hypothesis which states the contrary. We stick to this null hypothesis until it's very unlikely, in which case we accept our own hypothesis. Here, the null hypothesis has the benefit of the doubt.
- In SEM we (as the researcher) postulate a model and we believe in this model (and nothing but the model), until this model appears to be unlikely. Now, we (our model) have the benefit of the doubt.

How to correct a model
- In Analysis Properties, tab Output, check the Modification indices option (note: by default the threshold is 4; if the MI for a particular restriction is < 4, then it will not be reported in the output).

Multiple Group Analysis
- We run a multiple group analysis when we want to test whether a particular model holds for each and every group within our dataset.
- In other words, we are testing to see if there is an interaction effect: is the model group-dependent?

Multiple Group Analysis (cont'd)
- Double click Group number 1 to display the Manage Groups dialog box.
- Rename the group into girls.
- Click New.
- Rename the group into boys and Close this window.

Factor Analysis in Amos
[Figure]

Factor Analysis in Amos (cont'd)
- Note:
  - In CFA, only certain items are proposed to be indicators of each factor.
  - The curved line indicates the relationship that could exist between the factors.
  - Again, the errors in measurement are shown by the circles with the E.

The General Model in Amos
- The model fits, so we can interpret the results.
- The R-squared value is 0.32, compared to .22 in SPSS.
- We have a better result analysing the data in the correct way.
- In general, the lower the loadings are, the more we under-estimate the R-squared value.

Fit Measures
- Amos reports fit for three models: the model under test (your model); a model where the number of estimated parameters equals the number of data points (the saturated model); and a model of complete independence of all variables in the model (the independence model).
- Absolute measures of fit: does the model reproduce the data (= variance/covariance matrix)?
- Incremental measures of fit: how does the model describe the data, compared to a baseline model?

Absolute Fit Measures
- Standardized Residual Covariances: In 'big' samples these elements are N(0, 1). Values less than -2 or greater than 2 indicate problems (where covariances can't be reproduced by the model). This table appears when you request residual moments in your output.

Absolute Fit Measures (cont'd)
- Chi2/df (Wheaton, Muthén, Alwin & Summers, 1977)
- Problem: a distribution for this statistic does not exist, so people use rules of thumb:
  - Wheaton (1977): Chi2/df < 5 is acceptable fit.
  - Carmines: Chi2/df < 3 is acceptable fit.
  - Byrne (1989): "it seems clear that a ratio > 2.00 represents an inadequate fit."
  - Amos User Guide: 'close to 1 for correct models'.
  - Note: Wheaton (1987) later advocated that this ratio not be used.

Absolute Fit Measures (cont'd)
- Population discrepancy
  - Idea: how far is the Chi2 value from its expected value? This difference divided by sample size (labelled F0).
- Root Mean Square Error of Approximation
  - Browne et al.: 'RMSEA of 0.05 or less indicates a close fit.'
  - It can be tested: H0: "RMSEA < 0.05" (compare with the regular Chi2 test: "RMSEA = 0").
  - Amos gives this probability (H0: RMSEA < 0.05) in Pclose. In words: Pclose is the probability that the model is almost correct.
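The link between the chi-square, the population discrepancy F0 and the RMSEA can be sketched in a few lines. The formula below is the standard point estimate; the chi-square, df and sample size used are hypothetical output values, not from a real Amos run:

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from the model chi-square.

    Standard formula: sqrt(max(chi2 - df, 0) / (df * (n - 1))), i.e.
    the estimated population discrepancy per degree of freedom,
    scaled by sample size. Truncated at 0 when chi2 <= df.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical output: chi2 = 18.3 on df = 12 with n = 250 cases.
chi2, df, n = 18.3, 12, 250
print(round(chi2 / df, 3), round(rmsea(chi2, df, n), 4))
```

Here Chi2/df is about 1.5 (acceptable by all the rules of thumb above) and RMSEA is just under 0.05, i.e. a close fit by Browne's criterion.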

Relative Fit Measures
- NFI - Normed Fit Index (Bentler & Bonett, 1980)
  - Was the practical criterion of choice for several years.
  - Addressing evidence that the NFI has shown a tendency to underestimate fit in small samples, Bentler revised this measure, taking into account sample size: the CFI, Comparative Fit Index.
  - Both range from 0 to 1.
  - A value of .90 was originally proposed as indicating a well-fitting model.
  - A revised value of .95 is advised by Hu & Bentler (1999).
  - Note: Bentler (1990) suggested the CFI was the measure of choice.

Relative Fit Measures (cont'd)
- RFI - Relative Fit Index
  - Derivative of the NFI.
  - Range of values from 0 to 1, with values close to 0.95 indicating superior fit (Hu & Bentler, 1999).
- IFI - Incremental Fit Index
  - Issues of parsimony and sample size with the NFI led Bollen (1989) to develop this measure.
  - Same calculation as the NFI, but degrees of freedom are taken into account.
  - Again, values range from 0 to 1, with those close to 0.95 indicating well-fitting models.
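The NFI/CFI relationship described above can be made concrete. The NFI is the proportional drop in chi-square relative to the independence (baseline) model; the CFI does the same with non-centrality (chi-square minus df), which is what corrects the small-sample underestimation. All numeric inputs below are hypothetical:

```python
def nfi(chi2_model, chi2_null):
    """Normed Fit Index: proportional reduction in chi-square
    relative to the baseline (independence) model."""
    return (chi2_null - chi2_model) / chi2_null

def cfi(chi2_model, df_model, chi2_null, df_null):
    """Comparative Fit Index: like the NFI but based on
    non-centrality (chi-square minus df), truncated at 0."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - (d_model / d_null if d_null > 0 else 0.0)

# Hypothetical output: model chi2 = 18.3 (df 12); independence model
# chi2 = 480 (df 21) for the same data.
print(round(nfi(18.3, 480.0), 4), round(cfi(18.3, 12, 480.0, 21), 4))
```

Both indices land above the .95 benchmark here; note the CFI exceeds the NFI because subtracting the degrees of freedom removes the part of the chi-square expected even for a correct model.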

Relative Fit Measures (cont'd)
- GFI - Goodness of Fit Index
  - A measure of the relative amount of variance & covariance in the sample covariance matrix (of observed variables) that is jointly explained by the population matrix.
  - Values range from 0 to 1 (though negative values are theoretically possible), with 1 being the perfect model fit. The rule of thumb is either .8 or .9.
- AGFI - Adjusted Goodness of Fit Index
  - Correction of the GFI to include degrees of freedom.
  - Values are interpreted as above.

Relative Fit Measures (cont'd)
- PGFI - Parsimony Goodness of Fit Index
  - Takes into account the complexity (i.e. number of estimated parameters).
  - Provides a more realistic evaluation of the model (Mulaik et al., 1989).
  - Typically, parsimony fit indices have lower thresholds, so values in the .50s are not uncommon and can accompany other indices in the .90s.

Other Fit Measures
- AIC - Akaike's Information Criterion and CAIC - Consistent Akaike's Information Criterion
  - Both address the issue of parsimony and goodness of fit, but AIC only relates to degrees of freedom. Bozdogan (1987) proposed CAIC to take into account sample size.
  - Used in the comparison of 2 or more models, with smaller values representing a better fit of the model.
- BIC (Bayes Information Criterion) and BCC (Browne-Cudeck Criterion)
  - Operate in the same way as AIC and CAIC, but impose greater penalties for model complexity.

Other Fit Measures (cont'd)
- Hoelter's Critical N:
  - The last goodness-of-fit statistic appearing in the Amos output.
  - In fact two values, for significance levels of .05 and .01.
  - Differs substantially from those previously mentioned.
  - Focuses directly on sample size, rather than model fit.
  - Its purpose is to estimate a sample size that would be sufficient to yield an adequate model fit for a χ2 test.
  - Hoelter proposed that a value > 200 is indicative of a model that adequately represents the sample data.

AMOS (CFA and SEM)
- Demo
- Q&A


The facts and extensive procedural history of Albert Woodfox’s case have been recounted time and again, but they bear repeatingsince they factored into theunconditional writ granted by the district court On April 17, 1972, . Correctional Officer Brent Millerof the Louisiana State Penitentiary in , Angola, Louisiana, was found murderedin the prison dormitory , havingbeen stabbed 32 times. The .