A Tutorial on Computational Cognitive Neuroscience


Journal of Mathematical Psychology, in press

A Tutorial on Computational Cognitive Neuroscience: Modeling the Neurodynamics of Cognition

F. Gregory Ashby & Sebastien Helie
University of California, Santa Barbara

Computational Cognitive Neuroscience (CCN) is a new field that lies at the intersection of computational neuroscience, machine learning, and neural network theory (i.e., connectionism). The ideal CCN model should not make any assumptions that are known to contradict the current neuroscience literature and at the same time provide good accounts of behavior and at least some neuroscience data (e.g., single-neuron activity, fMRI data). Furthermore, once set, the architecture of the CCN network and the models of each individual unit should remain fixed throughout all applications. Because of the greater weight they place on biological accuracy, CCN models differ substantially from traditional neural network models in how each individual unit is modeled, how learning is modeled, and how behavior is generated from the network. A variety of CCN solutions to these three problems are described. A real example of this approach is described, and some advantages and limitations of the CCN approach are discussed.

Keywords: computational cognitive neuroscience, neural network modeling, neuroscience

1. Introduction

The emerging new field of Computational Cognitive Neuroscience (CCN) lies at the intersection of computational neuroscience and the similar fields of machine learning, neural network theory, connectionism, and artificial intelligence. Like computational neuroscience, CCN strives for neurobiological accuracy, and like connectionism, a major goal is to account for behavior. In other words, using Marr's (1982) nomenclature, CCN strives to develop models that bridge the computational, algorithmic, and implementation levels. One main advantage of CCN is that it offers many more constraints on the resulting models than more traditional approaches.
As a result, two researchers independently modeling the same behavior are more likely to converge on highly similar models with this new approach, and for this reason the resulting models should have a permanence that is unusual with older approaches. A growing number of researchers build and test CCN models (e.g., Anderson, Fincham, Qin, & Stocco, 2008; Ashby, Ell, Valentin, & Casale, 2005; Frank, 2005; Hartley, Taylor, & Taylor, 2006; Leveille, Versace, & Grossberg, 2010), and an annual CCN conference is now included as a satellite to the Annual Meeting of the Psychonomic Society.

Author note: This research was supported in part by Award Number P01NS044393 from the National Institute of Neurological Disorders and Stroke and by support from the U.S. Army Research Office through the Institute for Collaborative Biotechnologies under grant W911NF-07-1-0072. We thank Todd Maddox, Dennis Rünger, Darrell Worthy, and Martijn Meeter for their helpful comments. Correspondence concerning this article should be addressed to F. Gregory Ashby, Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA 93106 (e-mail: ashby@psych.ucsb.edu).

2. A brief history

The field of computational neuroscience became popular with Hodgkin and Huxley's (1952) Nobel Prize winning efforts to model the generation of action potentials in the giant squid axon. Most models in this field include, at most, only a single neuron. For example, a common computational neuroscience approach, called compartment modeling, models a neuron's axons and dendrites as cylinders and the soma as a sphere. Next, partial differential equations that describe the propagation of action potentials are written for each of these compartments. A standard application will try to account for patch-clamp data collected from a variety of locations on the cell. Some compartment models are extremely accurate and complex.
For example, some single-cell models have hundreds or even thousands of compartments (e.g., Bhalla & Bower, 1993; Segev & Burke, 1998). Historically, computational neuroscience models have almost never tried to account for behavior.

In most cases, such a goal is precluded by the complexity of the single-cell models that are used.

Neural network theory began with similar origins in the work of McCulloch and Pitts (1943). However, because the goal quickly became to model behavior, neural network theory diverged from computational neuroscience with the work of Newell, Shaw, and Simon (1958) and Rosenblatt (1958). At that time, there simply was not enough known about the neural basis of behavior to support a research program that tried to model behavior in a biologically accurate way. Thus the fields of artificial intelligence and the more modern related field of machine learning place almost all emphasis on behavior and almost none on neuroscience.

Modern neural network theory (e.g., Haykin, 2008) and connectionism (Rumelhart & McClelland, 1986) take an intermediate approach in the sense that biologically plausible properties are often seen as advantages, although they rarely are requirements. Neural network models have some features in common with the brain, including distributed representation, continuous flow, and the modeling of memory as changes in synaptic strengths. Even so, almost all neural network models include many features that are now known to be incompatible with brain function. For example, there is generally no attempt to identify units in neural network models with specific brain regions, and even when there is, there is little attempt to model inputs and outputs to these regions in a biologically accurate way. Similarly, units in neural network models typically do not behave like real neurons, and the learning algorithms that are used often have little biological plausibility (e.g., backpropagation).

These observations are not criticisms. The vast majority of computational neuroscientists are not psychologists and many have no fundamental interest in behavior.
Similarly, artificial intelligence and machine learning researchers are generally interested in optimizing the performance of their models, not in modeling human behavior. Neural network theory (i.e., connectionism) often does have the goal of modeling behavior and generally does view the neural-like properties of neural network models as an advantage of this approach. Even so, many applications of neural network models are to behaviors that are so complex or so poorly understood that it would be premature to attempt to build more biologically detailed models (these fields focus on Marr's 'algorithmic' level). The focus of these earlier approaches is well motivated because "the explication of each level involves issues that are rather independent of the other two" (Marr, 1982, p. 25). However, Marr also acknowledged that "these three levels are coupled" (p. 25). So, the new field of CCN is not meant to supplant these older approaches, but rather to fill a new niche by trying to focus on the "coupling" of the levels by using recent discoveries in psychology and neuroscience.

The field of CCN began shortly after the cognitive neuroscience revolution of the 1990s. The first break with existing approaches came with attempts to associate nodes in fairly traditional connectionist or neural network models with specific brain regions. This trend toward increased biological detail continued with more biologically plausible learning algorithms and more realistic models of the individual units (e.g., Ashby, Alfonso-Reese, Turken, & Waldron, 1998; Cohen, Braver, & O'Reilly, 1996; Cohen & Servan-Schreiber, 1992; McClelland, McNaughton, & O'Reilly, 1995). During this time there were also attempts to formulate general modeling principles of this new approach (O'Reilly, 1998; O'Reilly & Munakata, 2000). The present article represents a natural extension and summary of this earlier work.

This article is organized as follows. Section 3 gives some motivation for adopting a CCN approach.
Section 4 describes the CCN principles that guide model development and model testing. Section 5 describes some common approaches used in CCN to model individual units or neurons. Section 6 briefly reviews the biochemistry that underlies some common forms of long-term synaptic plasticity, and describes simple computational models of these learning-related changes in synaptic strength. Section 7 reviews some solutions to the problem of generating behavior from single-unit activity. Section 8 describes an example of the CCN approach, and Section 9 closes with some general comments and conclusions.

3. Why use CCN models?

Before getting into the details of how to develop CCN models, it is natural to ask what advantages CCN has over other approaches. First, CCN modeling increases the number of constraints on behavioral models. Newell (1992) argued that "cognitive theory is radically underdetermined by data" (p. 426). Although Newell was arguing for the use of 'unified' theories of cognition (e.g., cognitive architectures), another possible solution to this problem is to add neuroscience constraints to the modeling process (i.e., going deep instead of going wide). Rather than just selecting among models based on goodness-of-fit to behavioral data, CCN adds the extra constraint that the winning model should also function in a manner that is consistent with existing neuroscience data. Adding neuroscience constraints should reduce the class of candidate models, and equally important, it should reduce the heterogeneity of this class. For example, if neuroscience data implicate

the hippocampus in some behavior, then models of this behavior must all share some hippocampal-like properties. This reduction in model heterogeneity should cause different labs to converge on similar models and thereby facilitate rapid scientific progress.

Second, attending to the neuroscience data can expose relationships between seemingly unrelated behaviors. For example, cognitive neuroscience models of (information-integration) category learning and (implicit) sequence learning had independently identified similar cortical-striatal circuits (e.g., Ashby et al., 1998; Grafton, Hazeltine, & Ivry, 1995). This raised the possibility that these two seemingly disparate behaviors shared some previously unknown deep functional similarity. Several studies have explored this possibility. First, Willingham, Wells, Farrell, and Stemwedel (2000) showed that implicit motor sequence production is disrupted when the response key locations are switched, but not when the hands used to depress the keys are switched. Ashby, Ell, and Waldron (2003) showed that this same pattern of results holds for information-integration categorization. Without the link between categorization and sequence learning through their hypothesized underlying neural circuits, this dependence of information-integration categorization on response location learning would have been much more difficult to discover. More recently, an inactivation study showed that the basal ganglia are not required for the production of overlearned motor sequences (Desmurget & Turner, 2010), thereby suggesting that the same may be true of information-integration categorization (as was predicted by the CCN model of Ashby et al., 2007). This prediction was recently confirmed with an fMRI experiment (Waldschmidt & Ashby, 2011).

Third, in many cases, studying the underlying neuroscience leads to surprising and dramatic behavioral predictions that would be difficult or impossible to derive from a purely cognitive approach.
For example, information-integration category learning is hypothesized to depend on dopamine-mediated reinforcement learning at cortical-striatal synapses (Ashby et al., 1998). Because the reward follows the behavior, the dopamine must operate on a memory trace that identifies recently active synapses. The most likely candidate for this trace is partially phosphorylated CaMKII, which loses its sensitivity to dopamine after just a few seconds (e.g., Lisman, Schulman, & Cline, 2002; see Section 6.1 for details). Thus, attention to the underlying neuroscience generates a novel and surprising prediction: delaying feedback by just a few seconds should impair information-integration category learning, but not other forms of category learning (e.g., rule-based) thought to rely on executive attention and working memory. Several studies have confirmed these predictions (Maddox, Ashby, & Bohil, 2003; Maddox & Ing, 2005).

Fourth, CCN models are especially amenable to a converging-operations approach to model testing because they make predictions about both behavioral and neuroscience data. Thus, rather than simply testing them against behavioral data, it should also be possible to test CCN models against a variety of neuroscience data, including single-unit recording data, lesion data, psychopharmacological data, fMRI data, and possibly even EEG data. The ability to test CCN models against such a broad spectrum of data should facilitate the process of testing, rejecting, and refining new models.

4. CCN Ideals

One thing that sets CCN apart from previous modeling traditions is that its principles of model building and testing are unique. In traditional cognitively based mathematical modeling of behavior, the overriding criterion for establishing the validity of a model is goodness-of-fit to the behavioral data (usually penalized for model complexity; see, e.g., Helie, 2006; Pitt, Kim, Navarro, & Myung, 2006).
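Complexity-penalized fit is often computed with standard information criteria such as AIC or BIC. As a minimal illustration (a generic sketch, not code or notation taken from the sources cited above):

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: BIC = k ln n - 2 ln L (lower is better)."""
    return k * math.log(n) - 2 * log_likelihood

# A model with a worse raw fit can still win once complexity is penalized:
# model 1: ln L = -100 with 3 parameters; model 2: ln L = -98 with 10 parameters
print(aic(-100, 3), aic(-98, 10))   # prints: 206 216, so model 1 is preferred
```

Both criteria trade raw likelihood against the number of free parameters k; BIC additionally grows its penalty with the sample size n.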
In general, there is a secondary goal of encapsulating existing cognitive theory, but most cognitive theories are extremely difficult to falsify, and as a result, if a cognitive model fits data well then it is almost never rejected because of the cognitive assumptions it makes. There are many examples where models making very different cognitive assumptions provide approximately equal levels of goodness-of-fit, so in many cognitive domains there are many competing mathematical models that make very different cognitive assumptions. For example, the results from many memory experiments can be well fit by a variety of models that make radically different cognitive assumptions (e.g., Raaijmakers & Shiffrin, 2004), and unfortunately, appeals to cognitive theory have not done much to winnow down this crowded field. Many other examples exist in the cognitive literature, including the well-known difficulty in discriminating between serial and parallel models of visual or memory search (e.g., Townsend & Ashby, 1983), the ability of exemplar, prototype, and decision bound models of categorization to mimic each other (Ashby & Maddox, 1993), and the difficulty in discriminating between single- and dual-process models of recognition memory (e.g., Diana, Reder, Arndt, & Park, 2006; Jang, Wixted, & Huber, 2009).

These problems are greatly reduced in CCN because goodness-of-fit to behavioral data is only one of a number of criteria that are used to assess model validity. This section describes four ideal principles used during model building and testing in CCN. It should be stressed

that these are ideals. Arguably, no existing models meet all these criteria. Nevertheless, these principles are useful for helping researchers build and evaluate CCN models.

4.1. The Neuroscience Ideal. A CCN model should not make any assumptions that are known to contradict the current neuroscience literature.

In general, the Neuroscience Ideal means that when building or evaluating a CCN model, the validity of four types of assumptions should be considered. First, the model should only postulate connections among brain regions that have been verified in neuroanatomical tracing studies. Second, the model should correctly specify whether each projection is excitatory or inhibitory. Third, the qualitative behavior of units in each brain region should agree with studies of single neurons in these regions. Finally, any learning assumptions that are made should agree with existing data on neural plasticity [e.g., long-term potentiation (LTP) and long-term depression (LTD)]. If a model makes an assumption that is known to be incompatible with the neuroscience literature, then the model should be rejected, regardless of how well it accounts for behavioral data.

Note that the Neuroscience Ideal does not say that a CCN model must be compatible with all existing neuroscience data. In other words, not all errors are equal, and the Neuroscience Ideal weighs errors of commission much more heavily than errors of omission (Meeter, Jehee, & Murre, 2007). Every model is an abstraction and thus omits some of the complexity found in the natural world. One key to building a successful CCN model is to identify the critical features of the existing neuroscience literature that are most functionally relevant to the behavior being modeled.
For example, neuroanatomical tracing studies will identify more interconnections among brain regions than typically should be included in a CCN model, because for the behavior under study some of these interconnections are likely to be more functionally important than others.

A related problem is that when building most CCN models it will be necessary to make some choices for which the neuroscience literature is little help – either because there are no neuroscience data or because the existing data are equivocal. Thus, the Neuroscience Ideal should not be interpreted as suggesting that all known neuroscience must be incorporated into a CCN model or that every feature of a CCN model must be grounded in neuroscience, but rather only that the neuroscience that is incorporated should not contradict the existing neuroscience literature. Finally, it is important to keep in mind that the Neuroscience Ideal is just that: an ideal. No model is ever correct and, even if one were eventually able to design a model fully compatible with the Neuroscience Ideal, this would make the model so complex that it likely would be impossible to test.[1] For these reasons, the Neuroscience Ideal should be balanced with the following heuristic.

4.2. The Simplicity Heuristic. No extra neuroscientific detail should be added to the model unless there are data to test this component of the model or the model cannot function without this detail.

This is just a version of Occam's razor. It is especially important with CCN models, however, because unlike cognitive models, there will almost always be many extra neuroscientific details that one could add to a CCN model. For example, one could use multi-compartment models of each neuron or even model specific ion channels. Adding untested complexity, even if it is neuroscientifically valid, increases the number of free parameters in the model and the computing time required for fitting.
In addition, when untested details are added, it becomes difficult to determine whether the success of the model is due to these details or to the more macroscopic properties that inspired the model in the first place.

The phrase "to test this component of the model" in the Simplicity Heuristic should be interpreted loosely. For example, if previous research shows that a behavior is dependent on the cerebellum, then the cerebellum could be included in the model, even if no cerebellar data will be fit by the model (e.g., single-unit recordings or lesion data).

4.3. The Set-in-Stone Ideal. Once set, the architecture of the network and the models of each individual unit should remain fixed throughout all applications.

Connections between brain regions do not change from task to task, nor does the qualitative nature via which a neuron responds to input. Thus, the model's analogues of these features should also not change when the empirical application changes. This ideal greatly reduces the mathematical flexibility of CCN models. Ideally, the overall architecture is constrained by known neuroanatomy and the model of each individual unit is constrained by existing single-unit recording data from the analogous brain region. Thus, although a CCN model

[1] The reader is referred to Meeter et al. (2007) for further discussion of other common untested assumptions in CCN models.

will initially have many unknown constants, most of these will be set by single-unit recording data and then, by the Set-in-Stone Ideal, they will remain invariant across all applications of the model. If some of the details turn out to be incorrect after they have been 'set in stone', then the incorrect details should be changed and a new model should be constructed. However, note that such revisions do not add flexibility to the existing model; rather, they lead to the creation of a new model. Thus, after a constant is set in stone, it should not be considered a free parameter in any future application of the model.

The Set-in-Stone Ideal applies to the brain areas that constitute the focus of explanation of the model, and should not be expected to apply to brain regions that are either upstream or downstream from this hypothesized network. For example, many models of learning, memory, or cognition will require visual input. In tasks where variation in behavior depends primarily on processing within the hypothesized network rather than on details of the visual processing, it is common to grossly oversimplify the model of this visual input. For example, a simple square wave might be used, rather than a spiking model. Applying such a model to a different task, which depends on different visual inputs, might require changing the abstract model of visual input. Similarly, a model of working memory might include a greatly oversimplified model of motor responding that could change when the model is applied to a new task with different motor requirements. So in summary, the Set-in-Stone Ideal is meant to apply to the brain regions that are the focus of the model and not to the inputs or outputs of that model.

4.4. The Goodness-of-Fit Ideal. A CCN model should provide good accounts of behavioral data and at least some neuroscience data.

A model must make predictions at both the behavioral and neuroscience levels to classify as a CCN model. If it only makes behavioral predictions then it should be classified as a cognitive model, whereas if it only makes neuroscience predictions then it should be classified as a computational neuroscience model. Thus, in general, CCN models are more ambitious than traditional cognitive models because CCN models are expected to account simultaneously for a wider range of data than cognitive models. Note that the only term that makes this statement an ideal rather than a requirement is the word "good". Every CCN model should make both behavioral and neuroscience predictions, but the ideal CCN model provides good accounts of both data types.

There are many different types of neuroscience data, so there is wide latitude in how this ideal can be approached. For example, a CCN model might be tested against single-unit recording data, BOLD responses from fMRI experiments, or even behavioral data collected from animal or human participants with some specific brain lesion, or under the influence of some particular psychoactive drug.

Because a CCN model can be tested against data from multiple sources, it can gain or lose support more easily than a cognitive model. For instance, following the Neuroscience Ideal (Section 4.1), the collection of new neuroscience data could invalidate a model by turning an error of omission into an error of commission. For example, a model might posit a direct projection from the pre-supplementary motor area (preSMA) to SMA. After all, given their names, this seems a sensible assumption. Recent neuroanatomy studies, however, suggest that preSMA does not project to SMA (Dum & Strick, 2005), and therefore this discovery invalidates any CCN model that posits such a projection, regardless of how well it fits behavioral data. Another possibility, however, is that new neuroscience data could verify a previously unsupported assumption, thereby lending new support to the CCN model. Of course, similar outcomes could follow the collection of behavioral data; a model prediction could be verified or invalidated after new behavioral data are collected. What is crucial here is that both types of data can be used to argue for or against the CCN model. This is different from cognitive or computational neuroscience models that restrict their application to only one data type.

4.5. Relation to previous lists of CCN characteristics

The list of ideals proposed in Sections 4.1-4.4 is not the first attempt to specify the essential characteristics of CCN models explicitly. For instance, O'Reilly (1998) proposed six principles for computational models of the cortex: (1) biological realism, (2) distributed representations, (3) inhibitory competition, (4) bidirectional activation propagation, (5) error-driven learning of specific tasks and, (6) Hebbian learning of task-free statistical properties of the environment. Most of these principles are related to the Neuroscience Ideal above in that they specify biological constraints that should be included in any CCN model of the cortex. Hence, they can be seen as an unpacking of the Neuroscience Ideal.

A decade later, Meeter et al. (2007) proposed a list of more general criteria. Specifically, they suggested that a good CCN model (1) has few assumptions, (2) is inflexible and, (3) exhibits ontological clarity. The first of these is similar to our Simplicity Heuristic, while the second is similar to the Set-in-Stone Ideal. Both sets of criteria emphasize that a model should be simple and inflexible (two ideas that are mathematically related). The last criterion, ontological clarity, is similar to our Goodness-of-Fit Ideal, in that the scope of the model must be clearly established in order to determine what kind of data needs to be fit and what kind of experiments should be run. In other words, the rules need to be set early on to specify what counts as evidence for or against the CCN model.

4.6. Discussion

The Neuroscience Ideal makes the relationship between computational neuroscience and CCN explicit by ensuring that no biological detail in the CCN model is inconsistent with existing neuroscientific data (as in computational neuroscience). However, following the Simplicity Heuristic, CCN models typically make simplifying assumptions about the biological details included in the model. This is because the lowest level of data usually accounted for by CCN models is single-cell recordings. Hence, although an increasing amount of data is now available about the molecular neurobiology of neurons, these data are usually not accounted for by CCN models. This makes CCN models biologically simpler (and thus more scalable) than most computational neuroscience models.

The Set-in-Stone Ideal is used to control the growth of complexity in the model. Theoretically, the Set-in-Stone Ideal is an implementational constraint: the same brain is used in every task. Computationally, the Set-in-Stone Ideal is used to fix the value of most constants in the model, thus drastically reducing the CCN model complexity. Once set in stone, the Goodness-of-Fit Ideal states that many different types of data should be used to test the adequacy of CCN models, at least some of which are behavioral and some neuroscientific.
In addition, theGoodness-of-Fit Ideal makes the relationship betweenconnectionism and CCN models explicit: The biologicaldetails in the CCN model should not make the modelunscalable and prevent it from explaining behavioraldata (i.e., it should scale up like a regular connectionistmodel). However, before doing any data fit, one needs toclearly define the scope of the model.These principles are used to guide modeldevelopment and evaluation. Because of theNeuroscience Ideal there are three areas where themathematical details of CCN models differ substantiallyfrom traditional connectionist models. The firstfundamental difference is in how each individual unit ismodeled and the second is in how learning is modeled.The third difference concerns the generation of behaviorfrom neurally realistic individual units. Sections 5 - 7discuss some example CCN solutions to these problems.65. Modeling Individual UnitsThere are many choices for modeling individualunits. The classic solution is still the Hodgkin-Huxleymodel (1952). This is a set of four coupled differentialequations. One describes fast changes in intracellularvoltage and three describe slow changes in various ionconcentrations (i.e., for Na , K , and Cl-). The modelcorrectly accounts for action potentials (both theupstroke and downstroke), the refractory period, andsubthreshold depolarizations that fail to produce a spike.From a computational perspective, perhaps the greatestdrawback is that four differential equations must besolved for every unit in the model. Also, for most CCNapplications the Hodgkin-Huxley model violates theSimplicity Heuristic because rarely do such applicationsattempt to account for data that depend on intracellularconcentrations of sodium, potassium, or chloride.For these reasons, there have been a number ofattempts to produce models with fewer equations thatdisplay as many of the desirable properties of theHodgkin-Huxley model as possible. 
Some of these attempts are described in the following subsections.

5.1 The leaky integrate-and-fire model

The simplest cell model, and also the oldest (Lapicque, 1907), is the leaky integrate-and-fire model (e.g., Koch, 1999). Suppose neuron B receives an excitatory projection from neuron A. Let VA(t) and VB(t) denote the intracellular voltages at time t in neurons A and B, respectively. Then the leaky integrate-and-fire model assumes that the rate of change of VB(t) is given by

    dVB(t)/dt = α f[VA(t)] + β - γ VB(t),        (1)

where α, β, and γ are constants and the function f[VA(t)] models temporal delays in the propagation of an action potential from the pre- to the postsynaptic neuron. This function is described in detail in Section 5.4. The parameter α is a measure of synaptic strength because the larger this value, the greater the effect of an action potential in the presynaptic cell. In many applications, learning is modeled by assuming that α changes as a function of experience. The parameter β det
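A minimal simulation of Equation 1 might look like the following sketch. Here the delay function f is replaced by a brief unit pulse after each presynaptic spike (a crude stand-in for the function described in Section 5.4), a standard threshold-and-reset rule is added so the unit emits spikes, and the specific parameter values are illustrative, not taken from the tutorial:

```python
def simulate_lif(presyn_spike_times, alpha=1.2, beta=0.05, gamma=0.1,
                 v_thresh=1.0, v_reset=0.0, dt=0.1, t_max=100.0):
    """Euler integration of dVB/dt = alpha*f[VA(t)] + beta - gamma*VB(t).
    f[VA(t)] is approximated by a 1-ms unit pulse after each presynaptic
    spike; crossing v_thresh emits a spike and resets VB (standard LIF).
    Returns the voltage trace and the postsynaptic spike times (ms)."""
    n_steps = int(t_max / dt)
    pulse = [0.0] * n_steps
    for t_spk in presyn_spike_times:               # build f[VA(t)] as pulses
        start = int(t_spk / dt)
        for i in range(start, min(start + int(1.0 / dt), n_steps)):
            pulse[i] = 1.0
    v, trace, spike_times = 0.0, [], []
    for i in range(n_steps):
        v += dt * (alpha * pulse[i] + beta - gamma * v)   # Equation 1
        if v >= v_thresh:                                 # threshold-and-reset
            spike_times.append(i * dt)
            v = v_reset
        trace.append(v)
    return trace, spike_times
```

Without presynaptic input the voltage relaxes to the subthreshold equilibrium β/γ (here 0.5), and with these parameter values each input pulse drives the unit over threshold exactly once, illustrating how α scales the postsynaptic effect of a presynaptic action potential.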

