Learning Bayesian Networks and Causal Discovery


Learning Bayesian Networks and Causal Discovery

Marek J. Druzdzel
University of Pittsburgh
School of Information Sciences and Intelligent Systems Program
marek@sis.pitt.edu
http://www.pitt.edu/~druzdzel

Overview

- Motivation
- Constraint-based learning
- Bayesian learning
- Example
- Software demo
- Concluding remarks

(Essentially, a handful of slides interleaved with software demos.)

Bayesian networks

A Bayesian network (also referred to as a belief network, probabilistic network, or causal network) is an acyclic directed graph (DAG) consisting of:
- The qualitative part, encoding a domain's variables (nodes) and the probabilistic (usually causal) influences among them (arcs).
- The quantitative part, encoding the joint probability distribution over these variables.
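
To make the two parts concrete, here is a minimal Python sketch of a three-node network. The structure echoes the hepatitis example used on later slides, but all names and numbers here are hypothetical. The joint distribution factorizes along the DAG as P(H, J, F) = P(H) P(J | H) P(F | H).

```python
# Minimal sketch of a Bayesian network Hepatitis -> {Jaundice, Fatigue}.
# All probabilities below are made up for illustration.
p_h = {True: 0.1, False: 0.9}                     # prior: P(Hepatitis)
p_j_given_h = {True:  {True: 0.70, False: 0.30},  # CPT: P(Jaundice | Hepatitis)
               False: {True: 0.05, False: 0.95}}
p_f_given_h = {True:  {True: 0.80, False: 0.20},  # CPT: P(Fatigue | Hepatitis)
               False: {True: 0.30, False: 0.70}}

def joint(h, j, f):
    """Joint probability via the chain-rule factorization of the DAG."""
    return p_h[h] * p_j_given_h[h][j] * p_f_given_h[h][f]

# Sanity check: the joint sums to 1 over all eight configurations.
print(sum(joint(h, j, f)
          for h in (True, False)
          for j in (True, False)
          for f in (True, False)))  # -> 1.0
```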

Bayesian networks: Numerical parameters

- Prior probability distribution tables for nodes without predecessors (History of viral hepatitis, History of alcohol abuse, Obesity).
- Conditional probability distribution tables for nodes with predecessors (Fatigue, Jaundice, ...).

Where do the numbers come from?

- Textbooks
- Literature
- Expert opinion
- Databases

Reasoning in Bayesian networks

The most important type of reasoning in Bayesian networks is updating the probability of a hypothesis (e.g., a diagnosis) given new evidence (e.g., medical findings, test results).

Example:
- What is the probability of Chronic Hepatitis in an alcoholic patient with jaundice and ascites?
- Which disease is most likely?
- Which tests should we perform next?

P(Chronic Hepatitis | alcoholism present, jaundice present, ascites present) = ?
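
On a network as small as the hypothetical three-node sketch above, this update can be carried out by brute-force enumeration: sum the joint over everything except the query variable and normalize. The network and its numbers remain illustrative assumptions.

```python
# Toy network from the earlier sketch (hypothetical numbers).
p_h = {True: 0.1, False: 0.9}
p_j_given_h = {True: {True: 0.70, False: 0.30}, False: {True: 0.05, False: 0.95}}
p_f_given_h = {True: {True: 0.80, False: 0.20}, False: {True: 0.30, False: 0.70}}

def joint(h, j, f):
    return p_h[h] * p_j_given_h[h][j] * p_f_given_h[h][f]

def posterior_hepatitis(j, f):
    """P(Hepatitis | Jaundice=j, Fatigue=f): Bayes' rule by enumeration."""
    numerator = joint(True, j, f)
    return numerator / (numerator + joint(False, j, f))

print(posterior_hepatitis(j=True, f=True))  # belief given both findings, ~0.81
```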

Example: Hepar II

70 variables; 2,139 numerical parameters (instead of over 2^70 ≈ 10^21!)

Learning Bayesian networks from data

There exist algorithms with the capability to analyze data, discover causal patterns in them, and build models based on these data.

[Diagram: data feeding into both the structure and the numerical parameters of a network.]

Causality and probability

The only reference to causality in a typical statistics textbook is: "correlation does not mean causation" (if the textbook contains the word "causality" at all).

Many confusing substitute terms: "confounding factor," "latent variable," "intervening variable," etc.

What does correlation mean then (with respect to causality)?

The goal of experimental design is often to establish (or disprove) causation. We use statistics to interpret the results of experiments (i.e., to decide whether a manipulation of the independent variable caused a change in the dependent variable).

How are causality and probability actually related, and what does one tell us about the other? Not knowing this constitutes a handicap!

The problem of learning

Given a set of variables (a.k.a. attributes) X and a data set D of simultaneous values of the variables in X:
1. Obtain insight into the causal connections among the variables X (for the purpose of understanding and predicting the effects of manipulation).
2. Learn the joint probability distribution over the variables X.

Why are we also interested in causality?

Reason 1: Ease of model building and model enhancement: experts already think in causal terms.
Reason 2: Predicting the effects of manipulation.

Given (2), is (1) really surprising?

Causality and probability

Causality and probability are closely related, and their relation should be made clear in statistics.

Probabilistic dependence is considered a necessary condition for establishing causation (is it sufficient?).

[Diagram: weather -> barometer reading.] Weather and barometer reading are correlated because the weather causes the barometer reading.

A cause can cause an effect, but it does not have to. Causal connections result in probabilistic dependencies (or correlations in the linear case).

Causal graphs

Acyclic directed graphs (hence, no time and no dynamic reasoning) representing a snapshot of the world at a given time.

Nodes are random variables and arcs are direct causal dependencies between them.

[Example graph with nodes: glass on the road, thorns on the road, nails on the road, a knife, flat tire, bumpy feeling, noise, steering problems, an accident, injury, and car damage. The road hazards cause flat tire; flat tire causes bumpy feeling, noise, and steering problems; these lead to an accident, which causes injury and car damage.]

Causal connections result in correlation (in general, probabilistic dependence):
- glass on the road will be correlated with flat tire
- glass on the road will be correlated with noise
- bumpy feeling will be correlated with noise

Causal Markov condition

An axiomatic condition describing the relationship between causality and probability:

A variable in a causal graph is probabilistically independent of its non-descendants given its immediate predecessors.

Axiomatic, but used by almost everybody in practice, and no convincing counterexamples to it have been shown so far (at least outside the quantum world).

Markov condition: Implications

Variables A and B are probabilistically dependent if there exists a directed active path from A to B or from B to A.

Example: thorns on the road are correlated with car damage because there is a directed path from thorns to car damage.

[Same flat-tire graph as before.]

Markov condition: Implications

Variables A and B are probabilistically dependent if there exists a C such that there is a directed active path from C to A and a directed active path from C to B.

Example: car damage is correlated with noise because there is a directed path from flat tire to both (flat tire is a common cause of both).

[Same flat-tire graph as before.]

Markov condition: Implications

Variables A and B are probabilistically dependent if there exists a D such that D is observed (conditioned upon), there exists a C such that A is dependent on C and there is a directed active path from C to D, and there exists an E such that B is dependent on E and there is a directed active path from E to D.

Example: nails on the road are correlated with glass on the road given flat tire, because there is a directed path from glass on the road to flat tire and from nails on the road to flat tire, and flat tire is observed (conditioned upon).

[Same flat-tire graph as before.]

Markov condition: Summary of implications

Variables A and B are probabilistically dependent if:
- there exists a directed active path from A to B, or there exists a directed active path from B to A;
- there exists a C such that there is a directed active path from C to A and a directed active path from C to B;
- there exists a D such that D is observed (conditioned upon), there exists a C such that A is dependent on C and there is a directed active path from C to D, and there exists an E such that B is dependent on E and there is a directed active path from E to D.

Markov condition: Conditional independence

(also known as "screening off")

Once we know all direct causes of an event E, the causes and effects of those causes do not tell us anything new about E and its successors.

E.g.:
- Glass and thorns on the road are independent of noise, bumpy feeling, and steering problems conditioned on flat tire.
- Noise, bumpy feeling, and steering problems become independent conditioned on flat tire.

[Same flat-tire graph as before.]

Intervention

Manipulation theorem [Spirtes, Glymour & Scheines 1993]: Given an external intervention on a variable A in a causal graph, we can derive the posterior probability distribution over the entire graph by simply modifying the conditional probability distribution of A.

If this intervention is strong enough to set A to a specific value, we can view the intervention as the only cause of A and reflect this by removing all edges coming into A. Nothing else in the graph needs to be modified.

[Diagram: the intervention and the other causes of A point into A; A points to its effects.]
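
The theorem translates into a two-line graph operation. The following minimal sketch (my illustration, not the authors' implementation) implements the hard-intervention case, do(A = a): cut A's incoming edges and make the intervention the only cause of A.

```python
# Sketch of "graph surgery" for a hard intervention do(target = value).
def do_intervention(parents, cpds, target, value):
    """parents: dict node -> list of parent nodes; cpds: dict node -> CPD.
    Returns modified copies reflecting do(target = value)."""
    new_parents = dict(parents)
    new_cpds = dict(cpds)
    new_parents[target] = []          # remove all edges coming into target
    new_cpds[target] = {value: 1.0}   # point mass: intervention is the only cause
    return new_parents, new_cpds

# Flat-tire example: do(flat_tire = True) makes glass, thorns, and nails
# irrelevant to flat tire; the rest of the graph is untouched.
parents = {"flat_tire": ["glass", "thorns", "nails"], "noise": ["flat_tire"]}
cpds = {"flat_tire": "P(flat_tire | glass, thorns, nails)",
        "noise": "P(noise | flat_tire)"}
parents2, cpds2 = do_intervention(parents, cpds, "flat_tire", True)
print(parents2["flat_tire"])  # -> []             (no remaining causes)
print(parents2["noise"])      # -> ['flat_tire']  (unchanged)
```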

Intervention: Example

Suicide eliminates cancer as a cause of this brave samurai's death.

Intervention: Example

Making the tire flat with a knife makes glass, thorns, nails, and what-have-you irrelevant to flat tire. The knife is the only cause of flat tire.

[Same flat-tire graph, with knife cut now the sole arc into flat tire.]

Experimentation

Empirical research is usually concerned with testing causal hypotheses.

Smoking and lung cancer are correlated. Can we reduce the incidence of lung cancer by reducing smoking? In other words: is smoking a cause of lung cancer?

Each of the following causal structures is compatible with the observed correlation (G = genetic factors, S = smoking, C = lung cancer):

[Diagrams: several structures over G, S, and C, e.g., S -> C; C -> S; S <- G -> C; and combinations of these.]

Selection bias

Observing correlation is in general not enough to establish causality.

[Diagram: genetic factors -> smoking and genetic factors -> lung cancer, with a questioned arc smoking -> lung cancer.]

- If we do not randomize, we run the danger that there are common causes of smoking and lung cancer (for example, genetic factors). These common causes will make smoking and lung cancer dependent.
- It may, in fact, also be the case that lung cancer causes smoking. This will also make them dependent without smoking causing lung cancer.

Experimentation

[Diagram: coin -> smoking, with a questioned arc smoking -> lung cancer; genetic factors and asbestos also influence lung cancer.]

- In a randomized experiment, the coin becomes the only cause of smoking.
- Smoking and lung cancer will be dependent only if there is a causal influence from smoking to lung cancer.
- If Pr(C | S) ≠ Pr(C | ¬S), then smoking is a cause of lung cancer.
- Asbestos will simply cause variability in lung cancer (add noise to the observations).

But can we really experiment in this domain?

Science by observation

"... Does smoking cause lung cancer or does lung cancer cause smoking? ..."
— Sir Ronald A. Fisher, a prominent statistician, father of experimental design

"... George Bush taking credit for the end of the Cold War is like a rooster taking credit for the daybreak ..."
— Vice-president Al Gore to Dan Quayle during their first debate, Fall 1992

- Experimentation is not always possible.
- We can do quite a lot by just observing.
- Assumptions are crucial in both experimentation and observation, although they are usually stronger in the latter.
- New methods in causal discovery: squeezing data to the limits.

Approaches to learning Bayesian networks

Constraint search-based learning: Search the data for independence relations to give us a clue about the causal relations [Spirtes, Glymour & Scheines 1993].

Bayesian learning: Search over the space of models and score each model using the posterior probability of the model given the data [Cooper & Herskovits 1992; many others].

Constraint search-based learning

Constraint search-based learning

Principles:
- Search for independencies among variables in the database.
- Use the independencies in the data to infer (lack of) causal links among the variables (given some basic assumptions).

Constraint search-based learning

"Correlation does not imply causation." True, but only in limited settings, and typically abused by the "statistics mafia."

If x and y are dependent, we have indeed at least four possible cases:

[Diagrams: x -> y; y -> x; a hidden common cause h, x <- h -> y; and a conditioned-upon common effect b, x -> b <- y.]

Constraint search-based learning

Not necessarily true in the case of three variables:
- x and z are dependent
- y and z are dependent
- x and y are independent
- x and y are dependent given z

We can establish causality! [Diagram: x -> z <- y.]
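
This pattern is easy to check empirically. The following short simulation (an illustration with made-up data, not from the slides) generates x and y independently, lets z = x + y + noise, and shows that conditioning on z induces dependence between x and y.

```python
# Simulation of the collider pattern x -> z <- y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = rng.normal(size=n)                # generated independently of x
z = x + y + 0.1 * rng.normal(size=n)  # z is a common effect of x and y

print(np.corrcoef(x, y)[0, 1])        # ~0.00: marginally independent
near_zero = np.abs(z) < 0.1           # crude conditioning on z ~ 0
print(np.corrcoef(x[near_zero], y[near_zero])[0, 1])  # ~ -1: dependent given z
```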

Foundations of causal discovery: (1) The Causal Markov Condition

Relates a causal graph to a probability distribution.

Intuition: In a causal graph, the parents of each node "shield" the node from its ancestors.

Formally: For any node Xi in the graph, we have P[Xi | X', Pa(Xi)] = P[Xi | Pa(Xi)], where Pa(Xi) are the parents of Xi in the graph, and X' is any set of non-descendants of Xi in the graph.

Theorem: A causal graph obeys the Markov condition if and only if every d-separation in the graph corresponds to an independence in the probability distribution.

[Example graph with nodes A through G.]

The Causal Markov Condition: d-separation

Restatement of "the rules":
- Each node is a "valve."
- v-structures are "off" by default.
- Other nodes are "on" by default.
- Conditioning on a node flips its state.
- Conditioning on a v-structure's descendants also flips its state.

[Example graph with nodes A through J:]
- I(B, F)? Yes
- I(B, F | D)? No
- I(B, F | C, D)? Yes
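
The valve rules translate directly into an "active path" test. Below is a compact sketch (my illustration; fine for small graphs, where enumerating simple paths is affordable, while a production implementation would use Bayes-ball or moralization instead):

```python
# Sketch of a d-separation test via active-path search.
def descendants(dag, node):
    """All nodes reachable from `node` along directed edges."""
    out, stack = set(), [node]
    while stack:
        for child in dag.get(stack.pop(), []):
            if child not in out:
                out.add(child)
                stack.append(child)
    return out

def d_separated(dag, x, y, z):
    """dag: dict node -> list of children; z: set. True iff X _|_ Y | Z."""
    nbrs = {}
    for u, children in dag.items():
        for v in children:
            nbrs.setdefault(u, set()).add(v)
            nbrs.setdefault(v, set()).add(u)

    def active(path):
        for i in range(1, len(path) - 1):
            a, b, c = path[i - 1], path[i], path[i + 1]
            if b in dag.get(a, []) and b in dag.get(c, []):  # collider at b
                if b not in z and not descendants(dag, b) & z:
                    return False        # closed valve blocks the path
            elif b in z:                # chain or fork conditioned upon
                return False
        return True

    def simple_paths(path, seen):
        if path[-1] == y:
            yield path
            return
        for n in nbrs.get(path[-1], set()) - seen:
            yield from simple_paths(path + [n], seen | {n})

    return not any(active(p) for p in simple_paths([x], {x}))

dag = {"A": ["C"], "B": ["C"], "C": ["D"]}   # A -> C <- B, C -> D
print(d_separated(dag, "A", "B", set()))      # True: collider C is off
print(d_separated(dag, "A", "B", {"D"}))      # False: conditioning on a
                                              # collider's descendant opens it
```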

Foundations of causal discovery: (2) Faithfulness condition

- Markov condition: d-separation in the graph ⇒ independence in the data.
- Faithfulness condition: independence in the data ⇒ d-separation in the graph.

In other words: all independences in the data are structural, i.e., are consequences of the Markov condition.

Violations of the faithfulness condition

The faithfulness assumption is more controversial. While every scientist makes it in practice, it does not need to hold.

Example: given that HIV infection has not taken place, needle sharing is independent of intercourse.

Violations of the faithfulness condition

The effect of staying up late before an exam on exam performance may happen to be exactly zero: being tired may cancel out the effect of more knowledge. But is it likely?

Equivalence criterion

Two graphs are statistically indistinguishable (belong to the same equivalence class) iff they have the same adjacencies and the same v-structures.

[Figure: examples of statistically indistinguishable graphs and a statistically unique one.]

Constraint search-based learning

All possible networks can be divided into equivalence classes.

Causal model search

1. Start with data.
2. Find conditional independencies in the data.
3. Infer which causal structures could have given rise to these independencies.

Theorems useful in search

Theorem 1: There is no edge between X and Y if and only if X and Y are independent given some subset (possibly the null set) of the other variables.

Theorem 2: If X — Y — Z, X and Z are not adjacent, and X and Z are independent given some set W, then X → Y ← Z if and only if W does not contain Y.

PC algorithm

Input: a set of conditional independencies.
Output: a "pattern," which represents a Markov equivalence class of causally sufficient causal models.

PC algorithm (sketch)

Step 0: Begin with a complete undirected graph.

Step 1 (Find adjacencies): For each pair of variables X, Y: if X and Y are independent given some subset of the other variables, remove the X–Y edge.

Step 2 (Find v-structures): For each triple X–Y–Z with no edge between X and Z: if X and Z are independent given some set not containing Y, then orient X–Y–Z as X → Y ← Z.

Step 3 (Avoid new v-structures and cycles):
- If X → Y — Z, but there is no edge between X and Z, then orient Y–Z as Y → Z.
- If X — Z and there is already a directed path from X to Z, then orient X — Z as X → Z.

(A code sketch of Steps 0–2 follows below.)
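
Here is a compact sketch of Steps 0–2, with two deliberate simplifications of mine: real PC grows conditioning sets gradually and draws them only from current neighbors, and Step 3's orientation rules are omitted. It is driven by an independence oracle; in practice the oracle is a statistical test on the data.

```python
# Simplified sketch of PC Steps 0-2, driven by an oracle indep(x, y, cond).
from itertools import combinations

def pc_steps_0_to_2(variables, indep):
    # Step 0: complete undirected graph; remember separating sets.
    adj = {v: set(variables) - {v} for v in variables}
    sepset = {}
    # Step 1: remove X-Y if some conditioning set renders them independent.
    for x, y in combinations(variables, 2):
        others = [v for v in variables if v not in (x, y)]
        for size in range(len(others) + 1):
            conds = (set(c) for c in combinations(others, size))
            hit = next((c for c in conds if indep(x, y, c)), None)
            if hit is not None:
                adj[x].discard(y); adj[y].discard(x)
                sepset[frozenset((x, y))] = hit
                break
    # Step 2: orient unshielded triples X-Y-Z as X -> Y <- Z whenever Y
    # is absent from the separating set of X and Z.
    arrows = set()
    for y in variables:
        for x, z in combinations(sorted(adj[y]), 2):
            if z not in adj[x] and y not in sepset[frozenset((x, z))]:
                arrows.add((x, y)); arrows.add((z, y))
    return adj, arrows

# Oracle encoding the example on the next slide:
# true graph A -> C <- B, C -> D, B -> D.
independencies = {frozenset("AB"): [set()],
                  frozenset("AD"): [{"B", "C"}]}
def oracle(x, y, cond):
    return cond in independencies.get(frozenset((x, y)), [])

adj, arrows = pc_steps_0_to_2(list("ABCD"), oracle)
print(sorted(adj["C"]))  # -> ['A', 'B', 'D']: skeleton edges at C
print(sorted(arrows))    # -> [('A', 'C'), ('B', 'C')]: v-structure A -> C <- B
```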

PC algorithm: Example

Causal graph: A → C, B → C, C → D, B → D.

Independencies entailed by the Markov condition:
- A ⊥ B
- A ⊥ D | {B, C}

(0) Begin with the complete undirected graph over A, B, C, D.
(1) From A ⊥ B, remove A—B.

PC algorithm: Example (continued)

(1) From A ⊥ D | {B, C}, remove A—D.
(2) From A ⊥ B, orient A–C–B as A → C ← B.
(3) Avoid a new v-structure (A → C ← D): orient C–D as C → D.
(3) Avoid a cycle (B → C → D → B): orient B–D as B → D.

Patterns: Output of the PC algorithm

The PC algorithm outputs a "pattern," a kind of graph containing directed (→) and undirected (—) edges, which represents a Markov equivalence class of models:
- An undirected edge A–B in the pattern indicates that there is an edge between these variables in every graph in the Markov equivalence class.
- A directed edge A → B in the pattern indicates that there is an edge oriented A → B in every graph in the Markov equivalence class.

Continuous data

- Causal discovery is independent of the actual distribution of the data. The only thing that we need is a test of (conditional) independence.
- No problem with discrete data.
- In the continuous case, we have a test of (conditional) independence (the partial correlation test) when the data come from a multivariate Normal distribution.
- We need to make the assumption that the data are multivariate Normal.
- The discovery algorithm turns out to be very robust to this assumption [Voortman & Druzdzel, 2008].
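
The partial correlation test mentioned above is standard (Fisher's z transform); the sketch below, with simulated data and a scipy dependency, is my illustration of it rather than any particular package's implementation.

```python
# Partial-correlation test of conditional independence.
import numpy as np
from scipy.stats import norm

def indep_partial_corr(data, i, j, cond, alpha=0.05):
    """Test X_i _|_ X_j | X_cond under multivariate normality.
    data: (n, d) array; cond: list of column indices."""
    cols = [i, j] + list(cond)
    precision = np.linalg.inv(np.corrcoef(data[:, cols], rowvar=False))
    r = -precision[0, 1] / np.sqrt(precision[0, 0] * precision[1, 1])
    z = 0.5 * np.log((1 + r) / (1 - r))               # Fisher z transform
    stat = np.sqrt(data.shape[0] - len(cond) - 3) * abs(z)
    p_value = 2 * (1 - norm.cdf(stat))
    return p_value > alpha                             # True = independent

# Chain x -> z -> y: x and y are dependent, but independent given z.
rng = np.random.default_rng(1)
x = rng.normal(size=5000)
z = 2 * x + rng.normal(size=5000)
y = z + rng.normal(size=5000)
data = np.column_stack([x, y, z])
print(indep_partial_corr(data, 0, 1, []))   # False: marginally dependent
print(indep_partial_corr(data, 0, 1, [2]))  # True:  independent given z
```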

Normality

Multivariate normality is equivalent to two conditions: (1) Normal marginals and (2) linear relationships.

[Figure illustrating condition (1), normality of the marginals.]

Linearity

[Figure illustrating condition (2), linearity of the pairwise relationships.]

Bayesian learning

Elements of a search procedure

- A representation for the current state (a network structure).
- A scoring function for each state (the posterior probability).
- A set of search operators: AddArc(X,Y), DelArc(X,Y), RevArc(X,Y).
- A search heuristic (e.g., greedy search).

The size of the search space for n variables is almost 3^C(n,2) possible graphs!
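
A greedy search over these operators can be sketched in a few lines. Everything here is an illustrative simplification of mine: `score` is any structure score (for instance the marginal likelihood on the next slide), and the demo's toy score is invented purely to show convergence.

```python
# Greedy hill-climbing over DAGs using AddArc / DelArc / RevArc.
from itertools import permutations

def is_acyclic(edges, nodes):
    """DFS cycle check; edges is a set of (parent, child) pairs."""
    children = {v: [] for v in nodes}
    for u, v in edges:
        children[u].append(v)
    state = {}  # 1 = on current DFS stack, 2 = finished

    def dfs(v):
        state[v] = 1
        for w in children[v]:
            if state.get(w) == 1 or (w not in state and not dfs(w)):
                return False
        state[v] = 2
        return True

    return all(v in state or dfs(v) for v in nodes)

def hill_climb(nodes, score):
    """Repeatedly apply the three operators while the score improves."""
    edges, improved = set(), True
    while improved:
        improved = False
        for x, y in permutations(nodes, 2):
            if (x, y) in edges:
                moves = [edges - {(x, y)},             # DelArc(x, y)
                         edges - {(x, y)} | {(y, x)}]  # RevArc(x, y)
            elif (y, x) not in edges:
                moves = [edges | {(x, y)}]             # AddArc(x, y)
            else:
                continue
            for cand in moves:
                if is_acyclic(cand, nodes) and score(cand) > score(edges):
                    edges, improved = cand, True       # take the greedy step
                    break
    return edges

# Demo with a toy score that rewards exactly the edges of a target DAG.
target = {("A", "C"), ("B", "C")}
toy_score = lambda e: len(e & target) - 2 * len(e - target)
print(hill_climb(["A", "B", "C"], toy_score))  # -> {('A','C'), ('B','C')}
```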

Posterior probability score

$$P(S \mid D) = \frac{P(D \mid S)\,P(S)}{P(D)} \propto P(D \mid S)\,P(S)$$

The "marginal likelihood" P(D | S), given a database and assuming Dirichlet priors over the parameters:

$$P(D \mid S) = \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})}$$
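
In log form, and for a single node family, the score above can be computed directly from counts. A minimal sketch, assuming a uniform Dirichlet prior with every α_ijk = α (the helper and its interface are my own):

```python
# Log marginal likelihood of one family (node X_i with its parents).
from collections import Counter
from math import lgamma

def family_log_score(rows, child, parents, r, alpha=1.0):
    """rows: list of dicts var -> state; r: number of states of `child`.
    Log of: prod_j G(a_ij)/G(a_ij+N_ij) prod_k G(a_ijk+N_ijk)/G(a_ijk)."""
    n_ij, n_ijk = Counter(), Counter()
    for row in rows:
        j = tuple(row[p] for p in parents)  # parent configuration
        n_ij[j] += 1
        n_ijk[(j, row[child])] += 1
    score = 0.0
    for j, nij in n_ij.items():             # unseen configs contribute 0
        score += lgamma(alpha * r) - lgamma(alpha * r + nij)
        for k in range(r):
            score += lgamma(alpha + n_ijk[(j, k)]) - lgamma(alpha)
    return score

rows = [{"A": 0, "B": 0}, {"A": 0, "B": 1}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
print(family_log_score(rows, child="B", parents=["A"], r=2))
```

Summing this quantity over all nodes gives log P(D | S); that the total decomposes per family is exactly the separability property discussed near the end of the deck.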

Constraint-based learning: Open problems

Pros:
- Efficient, O(n^2) for sparse graphs.
- Hidden variables can be discovered in a modest way.
- "Older" technology; many researchers do not seem to be aware of it.

Cons:
- Discrete independence tests are computationally intensive → heuristic independence tests?
- Missing data are difficult to deal with → a Bayesian independence test?

Bayesian learning: Open problems

Pros:
- Missing data and hidden variables are easy to deal with (in principle).
- More flexible means of specifying prior knowledge.
- Many open research questions!

Cons:
- Essentially intractable.
- Search heuristics (most efficient) typically lead to local maxima.
- Monte Carlo techniques (more accurate) are very slow for most interesting problems.

Example application

- Student retention in US colleges: a large problem for US colleges.
- Correctly predicted that the main causal factor in low student retention is the quality of the incoming students.

[Druzdzel & Glymour, 1994]

Some challenges

- Scaling up, especially Monte Carlo techniques.
- Practically dealing with hidden variables: unsupervised classification.
- Applying these techniques to real data and real problems.
- Hybrid techniques: constraint-based + Bayesian (e.g., Dash & Druzdzel, 1999).
- Learning causal graphs in time-dependent domains (Dash & Druzdzel, 2002).
- Learning causal graphs and causal manipulation (Dash & Druzdzel, 2002).
- Learning dynamic causal graphs from time series data (Voortman, Dash & Druzdzel, 2010).

Our software

A developer's environment for graphical decision models (http://genie.sis.pitt.edu/):
- Model developer module: GeNIe, implemented in Visual C++ in the Windows environment.
- Reasoning engine: SMILE (Structural Modeling, Inference, and Learning Engine), a platform-independent library of C++ classes for graphical models.
- Learning and discovery module: SMiner.
- Wrappers (SMILE.NET, jSMILE, Pocket SMILE) allow SMILE to be accessed from applications other than a C++ compiler.
- Further modules supporting model building and diagnosis, among them ImaGeNIe and GeNIeRate.

The rest ...

Concluding remarks

- Observation is a valid scientific method. Observation often allows us to restrict the class of possible causal structures that could have generated the data.
- Learning Bayesian networks/causal graphs is very exciting: it is a different and powerful way of doing science.
- There is a rich assortment of unsolved problems in causal discovery / learning Bayesian networks, both practical and theoretical.
- Learning has been an active area of research in my research group (GeNIe, http://genie.sis.pitt.edu/, is a product of this work).


Separability

A criterion C(S, D) is separable if

$$C(S, D) = \prod_{i=1}^{n} c(X_i, Pa_i, D_i)$$

For C(S, D) = P(D | S) P(S) [assuming P(S) = 1]:

$$P(D \mid S) = \prod_{i=1}^{n} \left[ \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})} \right]$$

$$c(X_i, Pa_i, D_i) = \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})}$$

Learning Bayesian Networks and Causal Discovery

Marek J. Drużdżel
Decision Systems Laboratory, School of Information Sciences and Intelligent Systems Program, University of Pittsburgh
Faculty of Computer Science, Technical University of Bialystok
marek@sis.pitt.edu
http://aragorn.pb.bialystok.pl/~druzdzel
http://www.pitt.edu/~druzdzel
