Weakly Informative Priors - Harvard University


Weakly informative priors

Andrew Gelman and Aleks Jakulin
Department of Statistics and Department of Political Science, Columbia University
3 Mar 2007

Outline: Weakly informative priors; Static sensitivity analysis; Conservatism of Bayesian inference; A hierarchical framework; Conclusion; References

Themes
- Informative, noninformative, and weakly informative priors
- The sociology of shrinkage, or conservatism of Bayesian inference

Collaborators
- Yu-Sung Su (Dept of Poli Sci, City Univ of New York)
- Masanao Yajima (Dept of Statistics, Columbia Univ)
- Maria Grazia Pittau (Dept of Economics, Univ of Rome)
- Gary King (Dept of Government, Harvard Univ)
- Samantha Cook (Statistics group, Google)
- Francis Tuerlinckx (Dept of Psychology, Univ of Leuven)


What does this have to do with MCMC?
- I'm speaking at Jun Liu's MCMC conference
- We don't have to be trapped by decades-old models
- The folk theorem about computation and modeling
- The example of BUGS


Information in prior distributions
- Informative prior distribution: a full generative model for the data
- Noninformative prior distribution: let the data speak; goal: valid inference for any θ
- Weakly informative prior distribution: purposely include less information than we actually have; goal: regularization, stabilization


Weakly informative priors: some examples
- Variance parameters
- Covariance matrices
- Logistic regression coefficients
- Population variation in a physiological model
- Mixture models
- Intentional underpooling in hierarchical models


Weakly informative priors for variance parameters
- Basic hierarchical model
- The traditional inverse-gamma(0.001, 0.001) prior can be highly informative (in a bad way)!
- A noninformative uniform prior works better
- But if the number of groups is small (J = 2, 3, even 5), a weakly informative prior helps by shutting down huge values of τ

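The contrast between these priors can be seen directly from their densities. A minimal numerical sketch (not from the slides) using scipy: the half-Cauchy(25) density is nearly flat over the plausible range of τ, while the prior on τ implied by an inverse-gamma(0.001, 0.001) prior on τ² keeps expressing a strong preference in exactly the region where a small number of groups provides little likelihood information.

```python
import numpy as np
from scipy import stats

def invgamma_density_on_tau(tau, a=0.001, b=0.001):
    # Prior density on tau implied by tau^2 ~ Inv-Gamma(a, b):
    # change of variables gives p(tau) = p_invgamma(tau^2) * 2 * tau.
    return stats.invgamma(a=a, scale=b).pdf(tau**2) * 2 * tau

hc = stats.halfcauchy(scale=25)

# Half-Cauchy(25) is nearly flat between tau = 1 and tau = 5 ...
print(hc.pdf(1.0) / hc.pdf(5.0))                                     # ~1.04
# ... while the inverse-gamma(0.001, 0.001) prior changes about
# five-fold over the same range:
print(invgamma_density_on_tau(1.0) / invgamma_density_on_tau(5.0))   # ~5
```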

Priors for variance parameter: J = 8 groups

[Figure: three panels showing the posterior for σα in the 8-schools example, given (a) a uniform prior on σα, (b) an inverse-gamma(1, 1) prior on σα², and (c) an inverse-gamma(0.001, 0.001) prior on σα²; each panel plots σα from 0 to 30.]
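The J = 8 comparison can be partly reproduced without MCMC: with the group-level mean integrated out, the marginal posterior of τ has a closed form that can be evaluated on a grid. A sketch, assuming the standard 8-schools data from Gelman et al.'s Bayesian Data Analysis (the numbers are not in the deck) and a uniform prior on τ:

```python
import numpy as np

# Standard 8-schools data (assumed; taken from Bayesian Data Analysis).
y = np.array([28., 8., -3., 7., -1., 1., 18., 12.])
sigma = np.array([15., 10., 16., 11., 9., 11., 10., 18.])

def log_marginal_post_tau(tau):
    # Under y_j ~ N(theta_j, sigma_j^2), theta_j ~ N(mu, tau^2), flat priors
    # on mu and tau, the marginal posterior of tau is
    #   p(tau | y) ∝ V_mu^{1/2} * prod_j N(y_j | mu_hat, sigma_j^2 + tau^2).
    v = sigma**2 + tau**2
    V_mu = 1.0 / np.sum(1.0 / v)
    mu_hat = V_mu * np.sum(y / v)
    return (0.5 * np.log(V_mu)
            - 0.5 * np.sum(np.log(v))
            - 0.5 * np.sum((y - mu_hat)**2 / v))

taus = np.linspace(0.01, 50, 2000)
dt = taus[1] - taus[0]
logp = np.array([log_marginal_post_tau(t) for t in taus])
p = np.exp(logp - logp.max())
p /= p.sum() * dt                     # normalize on the grid

post_mean = np.sum(taus * p) * dt
print(round(post_mean, 1))  # moderate: the data themselves rule out huge tau
```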

Priors for variance parameter: J = 3 groups

[Figure: two panels showing the posterior for σα with 3 schools, given (a) a uniform prior on σα and (b) a half-Cauchy(25) prior on σα; each panel plots σα from 0 to 200.]

Weakly informative priors for covariance matrices
- The inverse-Wishart prior has problems
- Correlations can be between −1 and 1
- Set up models so the prior expectation of correlations is 0
- Goal: to be weakly informative about correlations and variances
- The scaled inverse-Wishart model uses a redundant parameterization

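The redundant parameterization can be sketched as follows (an illustration of the idea, not the authors' exact formulation): draw an unscaled matrix Q from an inverse-Wishart, draw separate scale factors ξ, and set Σ = diag(ξ) Q diag(ξ). The correlations of Σ come only from Q, so their prior expectation is 0 by symmetry, while the ξ's (here an assumed half-Cauchy-type prior) control the variances independently.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d = 2

def scaled_inv_wishart_draw():
    # Q controls the correlations; with df = d + 1 and an identity scale
    # matrix, the prior on each correlation is symmetric around 0.
    Q = stats.invwishart(df=d + 1, scale=np.eye(d)).rvs(random_state=rng)
    # xi controls the standard deviations (illustrative scale prior).
    xi = np.abs(rng.standard_cauchy(d))
    return np.diag(xi) @ Q @ np.diag(xi)

draws = [scaled_inv_wishart_draw() for _ in range(4000)]
corrs = np.array([S[0, 1] / np.sqrt(S[0, 0] * S[1, 1]) for S in draws])

print(corrs.min() >= -1 and corrs.max() <= 1)   # always valid correlations
print(abs(corrs.mean()))                        # small: prior mean correlation near 0
```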

Separation in logistic regression

glm(vote ~ female + black + income, family = binomial(link = "logit"))

[The slide shows glm output (coef.est, coef.se) for the 1960, 1964, 1968, and 1972 elections; the coefficient tables are garbled in this transcription.]
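What separation does to maximum likelihood can be reproduced on made-up data (a sketch, not the slides' actual election data): when a binary predictor perfectly predicts the outcome, the MLE of its coefficient runs off to infinity, stopping only where the optimizer gives up.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data with complete separation: x = 1 always implies y = 0.
x = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
y = np.array([1., 0., 1., 0., 0., 0., 0., 0.])
X = np.column_stack([np.ones_like(x), x])

def nll(beta):
    # Numerically stable Bernoulli negative log-likelihood.
    logits = X @ beta
    return np.sum(np.logaddexp(0.0, logits) - y * logits)

mle = minimize(nll, np.zeros(2), method="BFGS").x
print(mle[1])  # a huge negative number: the likelihood has no finite maximum
```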

Weakly informative priors for logistic regression coefficients
- Separation in logistic regression
- Some prior information: logistic regression coefficients are almost always between −5 and 5:
  - 5 on the logit scale takes you from 0.01 to 0.50, or from 0.50 to 0.99
  - Smoking and lung cancer
- Independent Cauchy prior distributions with center 0 and scale 2.5
- Rescale each predictor to have mean 0 and sd 1/2
- Fast implementation using EM; easy adaptation of glm
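A minimal sketch of the proposal on the same kind of separated toy data (data and names are illustrative; the actual implementation described above is an EM adaptation of glm, not this direct optimization): rescale the predictor to mean 0 and sd 1/2, put independent Cauchy(0, 2.5) priors on the coefficients, and maximize the posterior instead of the likelihood.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data with complete separation: x = 1 always implies y = 0.
x = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
y = np.array([1., 0., 1., 0., 0., 0., 0., 0.])
# Rescale the predictor to mean 0 and sd 1/2, as recommended.
x_s = (x - x.mean()) / (2 * x.std())
X = np.column_stack([np.ones_like(x_s), x_s])

def neg_log_posterior(beta):
    logits = X @ beta
    nll = np.sum(np.logaddexp(0.0, logits) - y * logits)
    # Independent Cauchy(0, 2.5) log-priors (up to an additive constant).
    log_prior = -np.sum(np.log(1.0 + (beta / 2.5) ** 2))
    return nll - log_prior

map_est = minimize(neg_log_posterior, np.zeros(2), method="BFGS").x
print(map_est[1])  # finite and moderate, despite the separation
```

The Cauchy prior is weak enough to leave ordinary, well-identified coefficients essentially alone, but its pull toward 0 is what keeps the separated coefficient finite.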


