Learning Physical Parameters From Dynamic Scenes


Tomer D. Ullman (a, *), Andreas Stuhlmüller (a), Noah D. Goodman (b), Joshua B. Tenenbaum (a)

(a) Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, USA
(b) Department of Psychology, Stanford University, Stanford, USA

Abstract

Humans acquire their most basic physical concepts early in development, and continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical parameters at multiple levels. In contrast to previous Bayesian models of theory acquisition (Tenenbaum et al., 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model to human learners on a challenging task of estimating multiple physical parameters in novel microworlds given short movies. This task requires people to reason simultaneously about multiple interacting physical laws and properties. People are generally able to learn in this setting and are consistent in their judgments. Yet they also make systematic errors indicative of the approximations people might make in solving this computationally demanding problem with limited computational resources. We propose two approximations that complement the top-down Bayesian approach. One approximation model relies on a more bottom-up feature-based inference scheme. The second approximation combines the strengths of the bottom-up and top-down approaches, by taking the feature-based inference as its point of departure for a search in physical-parameter space.

Keywords: learning, intuitive physics, probabilistic inference, physical reasoning, intuitive theory

*Corresponding author. Tel.: 1 617 452 3894
Email address: tomeru@mit.edu (Tomer D. Ullman)

Preprint submitted to Cognitive Psychology

1. Introduction

Reasoning about the physical properties of the world around us is a ubiquitous feature of human mental life. Not a moment passes when we are not, at least at some implicit level, making physical inferences and predictions. Glancing at a book on a table, we can rapidly tell if it is about to fall, or how it will slide if pushed, tumble if it falls on a hard floor, sag if pressured, bend if bent. The capacity for physical scene understanding begins to develop early in infancy, and has been suggested as a core component of human cognitive architecture (Spelke & Kinzler, 2007).

While some parts of this capacity are likely innate (Baillargeon, 2008), learning also occurs at multiple levels from infancy into adulthood. Infants develop notions of containment, support, stability, and gravitational force over the first few months of life (Needham & Baillargeon, 1993; Baillargeon, 2002), as well as differentiating between liquid substances and solid objects (Hespos et al., 2009; Rips & Hespos, 2015). Young children build an intuitive understanding of remote controls, magnets, touch screens, and other physical devices that did not exist over most of our evolutionary history. Astronauts and undersea divers learn to adapt to weightless or partially weightless environments (McIntyre et al., 2001), and video game players can adjust to a wide range of game worlds with physical laws differing from our everyday natural experience.

Not only can we learn or adapt our intuitive physics, but we also seem to do so from remarkably limited and impoverished data. While extensive experience may be necessary to achieve expertise and fluency, only a few exposures are sufficient to grasp the basics of how a touch screen device works, or to recognize the main ways in which a zero-gravity environment differs from a terrestrial one. While active intervention and experimentation can be valuable in discovering hidden causal structure, they are often not necessary; observation alone is sufficient to infer how these and many aspects of physics operate. People can also gain an intuitive appreciation of physical phenomena which they can only observe or interact with indirectly, such as the dynamics of weather fronts, ocean waves, volcanoes or geysers.

Several questions naturally follow. How, in principle, can people learn aspects of intuitive physics from experience? What is the form of the knowledge that they learn? How can they grasp structure at multiple levels, ranging from deep enduring laws acquired early in infancy to the wide spectrum of novel and unfamiliar dynamics that adults encounter and can adapt to? How much, and what kind of data are required for learning different aspects of physics, and how are the data brought to bear on candidate hypotheses? In this paper we present a theoretical framework that aims to begin to answer these questions in computational terms, and a large-scale behavioral experiment that tests the lower levels of the framework, as an account of how people estimate basic aspects of physical dynamics from brief moving scenes.

Our modeling framework takes as a starting point the computational-level view of theory learning as rational statistical inference over hierarchies of structured representations (Tenenbaum et al., 2011; Gopnik & Wellman, 2012). Previous work in this tradition focused on relatively spare and static logical descriptions of theories and data; for example, a law of magnetism might be represented as 'if magnet(x) and magnet(y) then attract(x, y)', and the learner's data might consist of propositions such as 'attracts(object_a, object_b)' (Kemp et al., 2010). Here we adopt a more expressive representational framework suitable for learning the force laws and latent properties governing how objects move and interact with each other, given observations of scenes unfolding dynamically over time. Our representation includes logical machinery to express abstract properties and laws, but also numerical and vector resources needed to express the observable trajectories of objects in motion, and the underlying force dynamics causally responsible for those motions. We can express all of this knowledge in terms of a probabilistic program in a language such as Church (Goodman et al., 2008; Goodman, 2014).

In addition to extending the framework of learning as inference over generative programs, this work follows a growing body of work that represents intuitive physical notions using probabilistic programs and data structures similar to those used in game "physics engines": computer software that efficiently simulates approximate Newtonian interactions between large numbers of objects in real time for the purposes of interactive video games (Battaglia et al., 2013). This framework is different from previously proposed feature-based or heuristic-based decision rules for intuitive physics (see e.g. Gilden & Proffitt, 1989; Todd & Warren, 1982), as well as more recent models based on Bayesian networks (although it is certainly more closely related to the latter). A contrastive illustration of the kind of dynamic scenes we study, and the accompanying representation, is shown in Fig. 1. Previous research on learning physics from dynamical scenes has tended to focus on the inference of object properties under known force laws, and typically on only the simplest cases, such as inferring the relative mass of two objects in motion from observing a single collision between them, usually with one object starting at rest (see for example Runeson et al., 2000; Gilden & Proffitt, 1989; Todd & Warren, 1982; Andersson & Runeson, 2008, and the top row of Fig. 1a and b). People's inferences in these studies have traditionally been modeled as combinations of decision rules based on different heuristics and features, or more recently, Sanborn et al. (2013) have proposed a Bayesian network framework for capturing similar inferences. In both heuristic and Bayesian network models, however, the accounts proposed apply directly to just one inferential scenario, such as inferring a single property (relative mass) from a single collision between two objects (bottom row of Fig. 1a and b). Neither the experiments nor the models have attempted to elucidate general mechanisms that people might use to infer multiple properties of multiple objects from complex dynamic scenes that could involve many interactions over time. That is our goal here.

Specifically, we consider a much more complex, challenging learning task that unfolds in scenarios such as that illustrated in the top row of Fig. 1c. Imagine this as something like an air hockey table viewed from above. There are four disk-shaped "pucks" moving in a two-dimensional rectangular environment under the influence of various causal laws and causally relevant properties. In a physical domain the causal laws are force laws, and these forces may be either local and pairwise (analogous to the way two magnetic objects typically interact) or global (analogous to the way gravity operates in our typical environment). The properties are physical properties that determine how forces act on objects, and may include both object-based and surface-based properties, analogous to inertial mass and friction respectively. A child or adult looking at such a display might come to a conclusion such as 'red pucks attract

one another' or 'green patches slow down objects.' With the right configuration different physical properties begin to interact, such that an object might be seen as heavy, but in the presence of a patch that slowed it down its 'heaviness' might be explained away by the roughness of the patch.

[Figure 1: panel (c) includes fragments of Church code such as (define puck (make-dynamic-entity pos shape mass vel ...)) and the recursive (define (run-dynamics entities forces init-cond steps dt) ...) simulator.]

Figure 1: Illustration of several previous physics domains and models, and the current domain and model. (a) Heuristic-based decision model for judging mass from two-dimensional collisions with one object at rest (adapted from Gilden & Proffitt, 1989). (b) Causal Bayes net for judging mass from one-dimensional collisions with both bodies in motion (adapted from Sanborn et al., 2013). (c) Part of a generative-program based model for inferring the properties of multiple pucks moving and interacting on a two-dimensional plane, governed by latent physical properties and dynamical laws, such as mass, friction, global forces and pairwise forces.

Such dynamical displays are still far simpler than the natural scenes people see from early in development, but they are much richer than what has been studied in previous experiments on learning intuitive physics, and learning in intuitive causal theories more generally¹. Accordingly, we consider a richer representation that can accommodate multiple objects and entities, as part of a generative probabilistic program (illustrated in the bottom row of Fig. 1c and detailed in the next section). Our framework describes several levels of a general hierarchy for estimating and learning physical parameters at different levels, but the contribution of the higher levels remains as a theoretical alternative to the frameworks discussed above.
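To make the kind of representation sketched in Fig. 1c concrete, the pairwise forces these displays use can be modeled as inverse-square interactions whose sign depends on the pair of puck types involved. The following Python sketch is our own illustrative translation, not the paper's code; the function name, the constant k, and the dictionary layout are all assumptions.

```python
import math

def pairwise_force(p1, p2, sign, k=1.0):
    """Inverse-square pairwise force on p1 due to p2.

    sign = +1 for attraction (force on p1 points toward p2),
    sign = -1 for repulsion, sign = 0 for no interaction.
    """
    dx = p2["pos"][0] - p1["pos"][0]
    dy = p2["pos"][1] - p1["pos"][1]
    r = math.hypot(dx, dy)
    if r == 0 or sign == 0:
        return (0.0, 0.0)
    mag = sign * k / r**2                # inverse-square law, as in gravity or Coulomb's law
    return (mag * dx / r, mag * dy / r)  # components along the line from p1 to p2

# Two pucks separated by distance 5; attraction pulls p1 toward p2.
p1 = {"pos": (0.0, 0.0)}
p2 = {"pos": (3.0, 4.0)}
fx, fy = pairwise_force(p1, p2, sign=1)
```

A learner in this setting never observes the sign or strength directly; it observes only trajectories and must infer which pairwise interactions (if any) best explain them.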
The dynamic displays considered here will test only the lower levels of the general hierarchy, much in the same way that exploring how people infer relative mass from single one-dimensional collisions is a test of a more general proposal regarding the use of features and heuristics in estimating physical parameters.

¹The tradition of 'qualitative physics' has considered more complex inferences, but has focused more on qualitative scene descriptions and sketches (Forbus, 1988).

Some research on causal learning more generally has looked at the joint inference of causal laws and object attributes, but only in the presence of simple discrete events rather than a rich dynamical scene (Gopnik & Sobel, 2000; Gopnik & Schulz, 2004; Griffiths et al.,

2004). For example, from observing that a "blicket-detector" lights up when objects A or B are placed on it alone or together, but does not light up when objects C or D are placed on it alone or in combination with A or B, people may infer that only objects A and B are blickets, and that the blicket detector only lights up when all the objects on it are blickets (Lucas et al., 2014). It is not clear that studying how people learn from a small number of isolated discrete events presented deliberately and pedagogically generalizes to how they learn physics properties in the real world, where configurations of objects move continuously in space and time, and interact in complex ways that are hard to demarcate or discretize.

In this sense our experiments are intended to capture more of how we learn and estimate the physics of the real world. Participants observe multiple objects in motion over a period of five seconds, during which the objects typically collide multiple times with each other as well as with stationary obstacles, pass over surfaces with different frictional properties, and move with widely varying velocities and accelerations. We compare the performance of human learners in these scenarios with the performance of an ideal Bayesian learner who can represent precisely the dynamical laws and properties at work in these stimuli. While people are generally able to perform this challenging task in ways broadly consistent with an ideal observer model, they also make systematic errors which are suggestive of how they might use feature-based inference schemes to approximate ideal Bayesian inference. Hence we also compare people's performance to a more feature-based model. Finally, we propose a rational approximation model that combines the strengths of the ideal and feature-based inferences into a more psychologically plausible account that is based on both heuristics and an implicit understanding of Newtonian-like mechanics.

2. Formalizing physics learning

The core of our formal treatment is a hierarchical probabilistic generative model for theories (Kemp et al., 2010; Ullman et al., 2012; Goodman et al., 2011), specialized to the domain of intuitive physical theories (Fig. 2). The hierarchy consists of several levels, with more concrete (lower-level) concepts being generated from more abstract versions in the level above, and ultimately bottoming out in data that take the form of dynamic motion stimuli. While in this work we only test the framework on the lower and more concrete levels, we detail the higher level as a theoretical proposal. This theoretical proposal conveys a commitment to a knowledge representation that is in contrast to the heuristic-based decision rules and Bayes-net formalisms for intuitive physics mentioned in the introduction. The kind of knowledge representation specified below, in the form of probabilistic programs over dynamics, is proposed as a way of reasoning and learning about multiple interacting aspects of physics from complex ongoing interactions, not just learning about single physical parameters from single, isolated interactions as explored previously.

In our framework, generative knowledge at each level is represented formally using (define ...) statements in Church, a stochastic programming language (Goodman et al., 2008). The (define x v) statement binds the value v to the variable x, much as the statement a = 3 binds the value 3 to the variable a in many programming languages. In probabilistic programming, however, we often bind variables with values that come from

probability distributions, and thus on each run of the program the variable might have a different value. For example, (define dice (uniform-draw 1 6)) stochastically assigns a value between 1 and 6 to the variable dice. Whenever the program is run, a different value is sampled and assigned to dice, drawing from the uniform distribution.

Probabilistic programs are useful for representing knowledge with uncertainty (see for example Goodman et al., 2008; Stuhlmüller & Goodman, 2013; Goodman & Stuhlmüller, 2013). Fig. 2(iii) shows examples of probabilistic definition statements within our domain of intuitive physics, using Church. Fig. 2(i) shows the levels associated with these statements, and the arrows from one level to the next show that each level is sampled from the definitions and associated probability distributions of the level above it. The definition statements provide a formalization of the main parts of the model. The full forward generative model implementing levels 2 to 0 is available at http://forestdb.org/models/learning-physics.html

In the text below we explain these ideas further, using informal English descriptions whenever possible, but see Goodman et al. (2008) for a more formal treatment of the programming language Church, and probabilistic programming in general. We emphasize that we are not committed to Church in particular as the proposed underlying psychological representation, but rather to the notion of probabilistic programs as cognitive representations.

Framework level. The top-most level N represents general framework knowledge (Wellman & Gelman, 1992) and expectations about physical domains. The concepts at this level include entities, which are collections of properties, and forces, which are functions of properties and govern how these properties change over time. Forces can be fields that apply uniformly in space and time, such as gravity, or can be event-based, such as the force impulses exerted between two objects during a collision or the forces of kinetic friction between two objects moving over each other.

Properties are named values or distributions over values. While different entities can have any number of properties, a small set of properties are "privileged": it is assumed all entities have them. In our setup, the properties location and shape are privileged in this sense.

Entities are further divided into 'static' and 'dynamic.' Dynamic entities are those that can potentially move, and all dynamic entities have the privileged property mass. Dynamic entities correspond then to the common-sense definition of matter as 'a thing with mass that occupies space².'

²The static/dynamic distinction is motivated by similar atomic choices in most computer physics engines that are used for approximate dynamic simulations, engines that were suggested as models of human intuitive physics (e.g. Battaglia et al., 2013). In these physics engines the static/dynamic divide allows computational speed-up and memory conservation, since many forces and properties don't have to be calculated or updated for static entities. It is an interesting possibility that the same kind of short-cuts developed by engineers trying to quickly simulate physical models might also represent a cognitive distinction. Similar notions have been proposed in cognitive development in the separation of 'objects' from more stable 'landscapes' (Lee & Spelke, 2010).

The framework level assumes a 'Newtonian-like' dynamics, where acceleration is proportional to the sum of the forces acting on an object's position, relative to the object's

[Figure 2: panel (iii) includes Church definitions such as (define (mass) (pair "mass" (uniform-draw '(1 3 9)))), (define (friction) (pair "friction" (uniform-draw '(0 5 20)))), pairwise and global force definitions, and the recursive run-dynamics simulator.]

Figure 2: Formal framework for learning intuitive physics in different domains: (i) The general hierarchy going from abstract principles and assumptions to observable data. The top-most level of the hierarchy assumes a general noisy-Newtonian dynamics. (ii) Applying the principles in the left-most column to the particular domain illustrated by Fig. 1. (iii) Definition statements in Church, capturing the notions shown in the middle column with a probabilistic programming language.
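Read procedurally, the recursive run-dynamics definition in Fig. 2(iii) is a forward Euler simulator: at each step it computes accelerations as forces divided by masses, integrates to get new conditions, and recurses. A loop-based Python paraphrase (our own sketch; the net_force signature is an assumption, and the noise term of the original is omitted for clarity) might look like:

```python
def run_dynamics(entities, net_force, init_cond, steps, dt):
    """Forward-simulate positions and velocities: a = F / m, Euler-integrated.

    entities: list of dicts with a 'mass' key.
    net_force(i, conds): total (fx, fy) acting on entity i given current conditions.
    init_cond: list of {'pos': (x, y), 'vel': (vx, vy)} dicts.
    """
    conds = [dict(c) for c in init_cond]
    path = [conds]
    for _ in range(steps):
        new = []
        for i, c in enumerate(conds):
            fx, fy = net_force(i, conds)
            m = entities[i]["mass"]
            ax, ay = fx / m, fy / m                              # Newtonian: a = F / m
            vx, vy = c["vel"][0] + ax * dt, c["vel"][1] + ay * dt
            x, y = c["pos"][0] + vx * dt, c["pos"][1] + vy * dt  # step with updated velocity
            new.append({"pos": (x, y), "vel": (vx, vy)})
        conds = new
        path.append(conds)
    return path

# One puck of mass 2 pushed rightward by a constant force of 4 accelerates at 2 units/s^2.
path = run_dynamics([{"mass": 2}], lambda i, cs: (4.0, 0.0),
                    [{"pos": (0.0, 0.0), "vel": (0.0, 0.0)}], steps=10, dt=0.1)
```

The Church original additionally perturbs each integration step with noise, which is what makes the generative model 'noisy-Newtonian' rather than deterministic.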

mass. This is consistent with suggestions from several recent studies of intuitive physical reasoning in adults (Smith & Vul, 2013; Gerstenberg et al., 2012; Sanborn et al., 2013; Hamrick et al., 2016) and infants (Téglás et al., 2011). As Sanborn et al. (2013) show, such a 'noisy-Newtonian' representation of intuitive physics can account for previous findings in dynamical perception that have supported a heuristic account of physical reasoning (Gilden & Proffitt, 1989, 1994; Todd & Warren, 1982), or direct perception models (Runeson et al., 2000; Andersson & Runeson, 2008). This basic assumption regarding dynamics raises the question of whether and how Newtonian-like dynamics are themselves learned. We do not attempt to solve this question here, assuming instead that this level is either innately established or learned very early (Spelke & Kinzler, 2007), through mechanisms outside the scope of this paper.

Descending the hierarchy. Descending from Level N to Level 0, concepts are increasingly grounded by sampling from the concepts and associated probability distributions of the level above (Fig. 2(i)). Each level in the hierarchy can spawn a large number of instantiations in the level below it. Each lower level of the hierarchy contains more specific entities, properties, and forces than the level above it. An example of moving from Level N to Level N-1 would be grounding the general concepts of entities and forces as, more specifically, 2-dimensional masses acting under collisions. An alternative would ground the same general entities and forces as 3-dimensional masses acting under conservative forces. This grounding can proceed through an indeterminate number of levels, until it ultimately grounds out in observable data (Level 0).

Space of learnable theories. Levels 0-2 in Fig. 2 capture the specific sub-domain of intuitive physics we study in this paper's experiments: two-dimensional discs moving over various surfaces, generating and being affected by various forces, colliding elastically with each other and with barriers bounding the environment (cf. Fig. 1).

Levels 0-2 represent the minimal framework needed to explain behavior in our task, and we remain agnostic about more abstract background knowledge that might also be brought to bear. We give participants explicit instructions that help determine a single Level 2 schema for the task, which generates a large hypothesis space of candidate Level 1 theories, which they are asked to infer by using observed data at Level 0.

Level 2: The "hockey-puck" domain. This level specifies the entity types puck and surface. All entities within the type puck have the properties mass, elasticity, color, shape, position, and velocity. Level 2 also specifies two types of force: Pairwise forces cause pucks to attract or repel, following the inverse-square form of Newton's gravitation law and Coulomb's law. Global forces push all pucks in a single compass direction. We assume forces of collision and friction that follow their standard forms, but they are not the subject of inference here.

Level 1: Specific theories. The hockey-puck domain can be instantiated as many different specific theories, each describing the dynamics of a different possible world in this domain. A Level 1 theory is determined by sampling particular values for all free parameters in the force types, and for all entity subtypes and their subtype properties (e.g., masses of pucks, friction coefficients of surfaces). Each of the sampled values is drawn from a probability distribution that the Level 2 theory specifies. So, Level 2 generates a prior

distribution over candidate theories for possible worlds in its domain.

The domain we study here allows three types of pucks, indexed by the colors red, blue, and yellow. It allows three types of surfaces (other than the default blank surface), indexed by the colors brown, green, and purple. Puck mass values are 1, 3, or 9, drawn with equal probability. Surface friction coefficient values are 0, 5, or 20, drawn with equal probability. Different pairwise forces (attraction, repulsion, or no interaction) can act between each of the different pairs of puck types, drawn with equal prior probability. Finally, a global force may push all pucks in one of the four compass directions, or be absent (0), drawn with equal probability. We further restrict this space by considering only Level 1 theories in which all subclasses differ in their latent properties (e.g. blue, red, and yellow pucks must all have different masses). While this restriction (together with the discretization) limits the otherwise-infinite space of theories, it is still a very large space, containing 131,220 distinct theories³.

Level 0: Observed data. The bottom level of our hierarchical model (Fig. 2) is a concrete scenario, specified by the precise individual entities under observation and the initial conditions of their dynamically updated properties. Each Level 1 theory can be instantiated in many different scenarios. The pucks' initial conditions were drawn from a zero-mean Gaussian distribution for positions, and a Gamma distribution for velocities, and filtered to remove cases in which the pucks began in overlap. Once the entities and initial conditions are set, the positions and velocities of all entities are updated according to the Level 1 theory's specific force dynamics for T time-steps, generating a path of multi-valued data points, d_0, ..., d_T. The probability of a path is then simply the product of the probabilities of all the choices used to generate the scenario.
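The size of this Level 1 hypothesis space can be checked by direct enumeration. The sketch below is our own bookkeeping, not the authors' code, and assumes that "pairs of puck types" includes same-color pairs (e.g. red-red), giving six unordered pairs:

```python
from itertools import combinations_with_replacement, permutations, product

colors = ["red", "blue", "yellow"]

# Puck types get distinct masses, surface types get distinct frictions: 3! each.
mass_assignments = list(permutations([1, 3, 9]))
friction_assignments = list(permutations([0, 5, 20]))

# Global force: one of four compass directions, or none.
global_forces = ["N", "S", "E", "W", None]

# Each unordered pair of puck colors, including same-color pairs:
# attract (+1), repel (-1), or no interaction (0).
pairs = list(combinations_with_replacement(colors, 2))               # 6 pairs
pairwise_assignments = list(product([-1, 0, 1], repeat=len(pairs)))  # 3**6 = 729

n_theories = (len(mass_assignments) * len(friction_assignments)
              * len(global_forces) * len(pairwise_assignments))
# 6 * 6 * 5 * 729 = 131,220 distinct Level 1 theories, slightly more than 2**17
```

This reproduces the 131,220 figure quoted in the text, and makes concrete why selecting the correct theory is roughly as hard as getting 17 independent binary choices right.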
Finally, the actual observed positions and velocities of all entities are assumed to be displaced from their true values by Gaussian noise.

³More precisely, the cross product N(mass)! × N(friction coefficients)! × N(direction) × N(force constants)^N(pairwise combinations) = 3! × 3! × 5 × 3^6 = 131,220. Selecting the right theory in this space is equivalent to correctly choosing 17 independent binary choices.

The framework grounds out at the level of observed data, and it is assumed that all of the models discussed below have reliable access to a basic input representation that includes object position, identity, velocity, size, shape, and collision detection. Thus, the models below are not 'end-to-end' in the sense of learning from a pixel-based representation. Several recent machine-learning based approaches to physics learning have also found that an initial underlying representation of object position, velocity, and interaction (Chang et al., 2017; Battaglia et al., 2016) is better suited for learning physical relations and properties than a pixel-based representation.

2.1. Learning physical parameters as ideal Bayesian inference: The IO model

Having specified our overall generative model, and the particular version of it underlying our "hockey puck" domain, we now turn to the question of learning. The model described so far allows us to formalize different kinds of learning as inference over different levels of the hierarchy. This approach can in principle be used for reasoning about all levels of the hierarchy, including the general shape of forces and types of entities, the unobserved physical properties of entities, as well as the existence, shape and parameters of unseen dynamical rules (although it cannot in its present form be used to reason about the assumption of Newtonian-like dynamics). In this paper, we constrain the inference to the specific levels considered in our experiment. Given observations, an ideal learner can invert the generative framework to obtain the posterior over all possible theories that could have produced the observed data. We can then marginalize out nuisance parameters (other irrelevant aspects of the theory) to obtain posterior probabilities over the dynamic quantity of interest. In the following sections we refer to this learning model as the Ideal Observer (IO).

Inference at multiple levels includes both continuous parameter estimation (e.g. the strength of an inverse-square attractive force or the exact mass value of an object) and more discrete notions of structure and form (e.g. the very existence and shape of an attractive force, the fact that an object has a certain property). This parallels a distinction between two modes of learning. The distinction appears in AI research as well as cognitive development, where it is referred to as parameter setting vs. conceptual change (Carey, 2004). In general, inferring structure and form (or conceptual change) is seen as harder than parameter estimation.

Learning at different levels could unfold over different spans of time depending on the size and shape of the learning space, as well as on background knowledge and the available evidence. Estimating the mass of an object from a well-known class in a familiar setting could take adults under a second, while understanding that there is a general gravitational force pulling things downwards, given little initial data, might take infants several months to grasp (Kim & Spelke, 1992). In



the entire power system is expanded with standard dynamic models for generators and their appropriate control devices by means of allocation criteria. Afterwards the initial parameters Fig. 3: The development process of the Dynamic Study Model of the standard dynamic models are tuned on the basis of measurement values of a system event as well .

1996 VLSI Circuits Workshop Dynamic Logic and Latches -Part II Dynamic Logic - Coupling Circuit Diagram Static Nodes Dynamic Nodes Note: High-up coupling on stored “1” nodes and Low-down coupling on stored “0” nodes can be a problem as well. 1996 VLSI Circuits Workshop Dynamic Logic and Latches -Part II

8 Rockwell Automation Publication PFLEX-AT001L-EN-P - September 2017 Chapter 1 Understanding How Dynamic Braking Works Dynamic Brake Components A Dynamic Brake consists of a Chopper (the chopper transistor and related control components are built into PowerFlex drives) and a Dynamic Brake Resistor

Why dynamic programming? Lagrangian and optimal control are able to deal with most of the dynamic optimization problems, even for the cases where dynamic programming fails. However, dynamic programming has become widely used because of its appealing characteristics: Recursive feature: ex

dynamic model to ensure that different WTGs from different manufacturers can be represented as accurately as possible to the actual turbines. Section II presents a discussion on WPP representation. Section III presents the dynamic model validation, followed by Section IV, which presents the dynamic simulations to validate the dynamic models.

Dynamic analyses can generate "dynamic program invariants", i.e., invariants of observed execution; static analyses can check them Dynamic analyses consider only feasible paths (but may not consider all paths); static analyses consider all paths (but may include infeasble paths) Scope Dynamic analyses examine one very long program path

Keyboards Together 2 Music Medals Bronze Ensemble Pieces (ABRSM) B (T) In the Meadow Stood a Little Birch Tree Trad. Russian, arr. Mike Cornick: p. 3 B (T) Jazz Carousel Jane Sebba: p. 4 B (T) Heading for Home John Caudwell: p. 5 B (T) Don’t Mess with Me! Peter Gritton: p. 6