
www.nature.com/npjdigitalmed

REVIEW ARTICLE | OPEN

Integrating machine learning and multiscale modeling—perspectives, challenges, and opportunities in the biological, biomedical, and behavioral sciences

Mark Alber1, Adrian Buganza Tepole2, William R. Cannon3, Suvranu De4, Salvador Dura-Bernal5, Krishna Garikipati6, George Karniadakis7, William W. Lytton5, Paris Perdikaris8, Linda Petzold9 and Ellen Kuhl10*

Fueled by breakthrough technology developments, the biological, biomedical, and behavioral sciences are now collecting more data than ever before. There is a critical need for time- and cost-efficient strategies to analyze and interpret these data to advance human health. The recent rise of machine learning as a powerful technique to integrate multimodality, multifidelity data, and reveal correlations between intertwined phenomena presents a special opportunity in this regard. However, machine learning alone ignores the fundamental laws of physics and can result in ill-posed problems or non-physical solutions. Multiscale modeling is a successful strategy to integrate multiscale, multiphysics data and uncover mechanisms that explain the emergence of function. However, multiscale modeling alone often fails to efficiently combine large datasets from different sources and different levels of resolution. Here we demonstrate that machine learning and multiscale modeling can naturally complement each other to create robust predictive models that integrate the underlying physics to manage ill-posed problems and explore massive design spaces. We review the current literature, highlight applications and opportunities, address open questions, and discuss potential challenges and limitations in four overarching topical areas: ordinary differential equations, partial differential equations, data-driven approaches, and theory-driven approaches.
Towards these goals, we leverage expertise in applied mathematics, computer science, computational biology, biophysics, biomechanics, engineering mechanics, experimentation, and medicine. Our multidisciplinary perspective suggests that integrating machine learning and multiscale modeling can provide new insights into disease mechanisms, help identify new targets and treatment strategies, and inform decision making for the benefit of human health.

npj Digital Medicine (2019) 2:115

Would it not be great to have a virtual replica of ourselves to explore our interaction with the real world in real time? A living, digital representation of ourselves that integrates machine learning and multiscale modeling to continuously learn and dynamically update itself as our environment changes in real life? A virtual mirror of ourselves that allows us to simulate our personal medical history and health condition using data-driven analytical algorithms and theory-driven physical knowledge? These are the objectives of the Digital Twin.1 In health care, a Digital Twin would allow us to improve health, sports, and education by integrating population data with personalized data, all adjusted in real time, on the basis of continuously recorded health and lifestyle parameters from various sources.2–4 But, realistically, how long will it take before we have a Digital Twin by our side? Can we leverage our knowledge of machine learning and multiscale modeling in the biological, biomedical, and behavioral sciences to accelerate developments towards a Digital Twin? Do we already have digital organ models that we could integrate into a full Digital Twin? And what are the challenges, open questions, opportunities, and limitations? Where do we even begin?
Fortunately, we do not have to start entirely from scratch. Over the past two decades, multiscale modeling has emerged as a promising tool to build individual organ models by systematically integrating knowledge from the tissue, cellular, and molecular levels, in part fueled by initiatives like the United States Federal Interagency Modeling and Analysis Group IMAG.5 Depending on the scale of interest, multiscale modeling approaches fall into two categories, ordinary differential equation-based and partial differential equation-based approaches. Within both categories, we can distinguish data-driven and theory-driven machine learning approaches. Here we discuss these four approaches towards developing a Digital Twin.

Ordinary differential equations characterize the temporal evolution of biological systems

Ordinary differential equations are widely used to simulate the integral response of a system during development, disease, environmental changes, or pharmaceutical interventions. Systems of ordinary differential equations allow us to explore the dynamic interplay of key characteristic features to understand the sequence of events, the progression of disease, or the timeline of treatment. Applications range from the molecular, cellular, tissue, and organ levels all the way to the population level, including immunology to correlate protein–protein interactions

1 Department of Mathematics, University of California, Riverside, CA, USA. 2 Department of Mechanical Engineering, Purdue University, West Lafayette, IN, USA. 3 Computational Biology Group, Pacific Northwest National Laboratory, Richland, WA, USA. 4 Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA. 5 SUNY Downstate Medical Center and Kings County Hospital, Brooklyn, NY, USA. 6 Departments of Mechanical Engineering and Mathematics, University of Michigan, Ann Arbor, MI, USA. 7 Division of Applied Mathematics, Brown University, Providence, RI, USA.
8 Department of Mechanical Engineering, University of Pennsylvania, Philadelphia, PA, USA. 9 Department of Computer Science and Mechanical Engineering, University of California, Santa Barbara, CA, USA. 10 Departments of Mechanical Engineering and Bioengineering, Stanford University, Stanford, CA, USA. *email: ekuhl@stanford.edu

Scripps Research Translational Institute

and immune response,6 microbiology to correlate growth rates and bacterial competition, metabolic networks to correlate genome and physiome,7,8 neuroscience to correlate protein misfolding to biomarkers of neurodegeneration,9 oncology to correlate perturbations to tumorigenesis,10 and epidemiology to correlate disease spread to public health. In essence, ordinary differential equations are a powerful tool to study the dynamics of biological, biomedical, and behavioral systems in an integral sense, irrespective of the regional distribution of the underlying features.

Partial differential equations characterize the spatio-temporal evolution of biological systems

In contrast to ordinary differential equations, partial differential equations are typically used to study spatial patterns of inherently heterogeneous, regionally varying fields, for example, the flow of blood through the cardiovascular system11 or the elastodynamic contraction of the heart.12 Unavoidably, these equations are nonlinear and highly coupled, and we usually employ computational tools, for example, finite difference or finite element methods, to approximate their solution numerically. Finite element methods have a long history of success at combining ordinary differential equations and partial differential equations to pass knowledge across the scales.13 They are naturally tailored to represent the small-scale behavior locally through constitutive laws using ordinary differential equations and spatial derivatives and embed this knowledge globally into physics-based conservation laws using partial differential equations. Assuming we know the governing ordinary and partial differential equations, finite element models can predict the behavior of the system from given initial and boundary conditions measured at a few selected points.
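The finite difference idea mentioned above can be sketched in a few lines for a one-dimensional diffusion problem. The diffusion coefficient, grid, initial spike, and boundary conditions below are illustrative assumptions, not values taken from the article:

```python
import numpy as np

# 1D diffusion equation u_t = D * u_xx on [0, 1], solved with explicit
# finite differences. D, grid size, and boundary values are illustrative.
D = 0.1                 # diffusion coefficient
nx, nt = 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / D    # respects the explicit stability limit dt <= dx^2 / (2 D)

u = np.zeros(nx)
u[nx // 2] = 1.0        # initial condition: concentration spike in the middle

for _ in range(nt):
    # interior update: central difference in space, forward Euler in time
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0  # homogeneous Dirichlet boundary conditions

print(u.max())          # the initial spike has diffused and decayed
```

The same update pattern, with constitutive laws evaluated at each grid or integration point, is what finite element codes generalize to unstructured meshes and coupled fields.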
This approach is incredibly powerful, but requires that we actually know the physics of the system, for example through the underlying kinematic equations, the balance of mass, momentum, or energy. Yet, to close the system of equations, we need constitutive equations that characterize the behavior of the system, which we need to calibrate either with experimental data or with data generated via multiscale modeling.

Multiscale modeling seeks to predict the behavior of biological, biomedical, and behavioral systems

Toward this goal, the main objective of multiscale modeling is to identify causality and establish causal relations between data. Our experience has taught us that most engineering materials display an elastic, viscoelastic, or elastoplastic constitutive behavior. However, biological and biomedical materials are often more complex, simply because they are alive.14 They continuously interact with and adapt to their environment and dynamically respond to biological, chemical, or mechanical cues.15 Unlike classical engineering materials, living matter has amazing abilities to generate force, actively contract, rearrange its architecture, and grow or shrink in size.16 To appropriately model these phenomena, we not only have to rethink the underlying kinetics, the balance of mass, and the laws of thermodynamics, but often have to include the biological, chemical, or electrical fields that act as stimuli of this living response.17 This is where multiphysics multiscale modeling becomes important:18,19 multiscale modeling allows us to thoroughly probe biologically relevant phenomena at a smaller scale and seamlessly embed the relevant mechanisms at the larger scale to predict the dynamics of the overall system.20 Importantly, rather than making phenomenological assumptions about the behavior at the larger scale, multiscale models postulate that the behavior at the larger scale emerges naturally from the collective action at the smaller scales. Yet, this attention to detail comes at a price.
While multiscale models can provide unprecedented insight into mechanistic detail, they are not only expensive, but also introduce a large number of unknowns, both in the form of unknown physics and unknown parameters.21,22 Fortunately, with the increasing ability to record and store information, we now have access to massive amounts of biological and biomedical data that allow us to systematically discover details about these unknowns.

Machine learning seeks to infer the dynamics of biological, biomedical, and behavioral systems

Toward this goal, the main objective of machine learning is to identify correlations among big data. The focus in the biological, biomedical, and behavioral sciences is currently shifting from solving forward problems based on sparse data towards solving inverse problems to explain large datasets.23 Today, multiscale simulations in the biological, biomedical, and behavioral sciences seek to infer the behavior of the system, assuming that we have access to massive amounts of data, while the governing equations and their parameters are not precisely known.24–26 This is where machine learning becomes critical: machine learning allows us to systematically preprocess massive amounts of data, integrate and analyze it from different input modalities and different levels of fidelity, identify correlations, and infer the dynamics of the overall system. Similarly, we can use machine learning to quantify the agreement of correlations, for example by comparing computationally simulated and experimentally measured features across multiple scales using Bayesian inference and uncertainty quantification.27

Machine learning and multiscale modeling mutually complement one another

Where machine learning reveals correlation, multiscale modeling can probe whether the correlation is causal; where multiscale modeling identifies mechanisms, machine learning, coupled with Bayesian methods, can quantify uncertainty.
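The Bayesian quantification of uncertainty mentioned above can be sketched with a minimal random-walk Metropolis sampler. The model (exponential decay), true parameter, noise level, and prior range are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurements" of exponential decay x(t) = exp(-k t);
# k_true, the noise level, and the prior are illustrative assumptions.
k_true, sigma = 1.5, 0.05
t = np.linspace(0.0, 2.0, 20)
data = np.exp(-k_true * t) + sigma * rng.normal(size=t.size)

def log_posterior(k):
    if not 0.0 < k < 10.0:                    # uniform prior on (0, 10)
        return -np.inf
    residual = data - np.exp(-k * t)
    return -0.5 * np.sum(residual**2) / sigma**2

# Random-walk Metropolis sampling of the decay rate k
samples, k = [], 1.0
logp = log_posterior(k)
for _ in range(5000):
    k_new = k + 0.1 * rng.normal()
    logp_new = log_posterior(k_new)
    if np.log(rng.uniform()) < logp_new - logp:
        k, logp = k_new, logp_new
    samples.append(k)

posterior = np.array(samples[1000:])          # discard burn-in
print(posterior.mean(), posterior.std())      # point estimate and its uncertainty
```

The posterior standard deviation is the quantified uncertainty: it tells us how strongly the data constrain the mechanism encoded in the model.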
This natural synergy presents exciting challenges and new opportunities in the biological, biomedical, and behavioral sciences.28 On a more fundamental level, there is a pressing need to develop the appropriate theories to integrate machine learning and multiscale modeling. For example, it seems intuitive to a priori build physics-based knowledge in the form of partial differential equations, boundary conditions, and constraints into a machine learning approach.22 Especially when the available data are limited, we can increase the robustness of machine learning by including physical constraints such as conservation, symmetry, or invariance. On a more translational level, there is a need to integrate data from different modalities to build predictive simulation tools of biological systems.29 For example, it seems reasonable to assume that experimental data from cell and tissue level experiments, animal models, and patient recordings are strongly correlated and obey similar physics-based laws, even if they do not originate from the same system. Naturally, while data and theory go hand in hand, some of the approaches to integrate information are more data-driven, seeking to answer questions about the quality of the data, identify missing information, or supplement sparse training data,30,31 while some are more theory-driven, seeking to answer questions about robustness and efficiency, analyze sensitivity, quantify uncertainty, and choose appropriate learning tools. Figure 1 illustrates the integration of machine learning and multiscale modeling on the parameter level by constraining their spaces, identifying values, and analyzing their sensitivity, and on the system level by exploiting the underlying physics, constraining design spaces, and identifying system dynamics. Machine learning provides the appropriate tools for supplementing training data, preventing overfitting, managing ill-posed problems, creating surrogate models, and quantifying uncertainty.
Multiscale modeling integrates the underlying physics for identifying relevant features, exploring their interaction, elucidating mechanisms, bridging scales, and understanding the emergence of function.

Fig. 1 Machine learning and multiscale modeling in the biological, biomedical, and behavioral sciences. Machine learning and multiscale modeling interact on the parameter level via constraining parameter spaces, identifying parameter values, and analyzing sensitivity, and on the system level via exploiting the underlying physics, constraining design spaces, and identifying system dynamics. Machine learning provides the appropriate tools towards supplementing training data, preventing overfitting, managing ill-posed problems, creating surrogate models, and quantifying uncertainty, with the ultimate goal of exploring massive design spaces and identifying correlations. Multiscale modeling integrates the underlying physics towards identifying relevant features, exploring their interaction, elucidating mechanisms, bridging scales, and understanding the emergence of function, with the ultimate goal of predicting system dynamics and identifying causality.

We have structured this review around four distinct but overlapping methodological areas: ordinary and partial differential equations, and data- and theory-driven machine learning. These four themes roughly map into the four corners of the data-physics space, where the amount of available data increases from top to bottom and physical knowledge increases from left to right. For each area, we identify challenges, open questions, and opportunities, and highlight various examples from the life sciences. For convenience, we summarize the most important terms and technologies associated with machine learning with examples from multiscale modeling in Box 1.
We envision that our article will spark discussion and inspire scientists in the fields of machine learning and multiscale modeling to join forces towards creating predictive tools to reliably and robustly predict the behavior of biological, biomedical, and behavioral systems for the benefit of human health.

CHALLENGES

A major challenge in the biological, biomedical, and behavioral sciences is to understand systems for which the underlying data are incomplete and the physics are not yet fully understood. In other words, with a complete set of high-resolution data, we could apply machine learning to explore design spaces and identify correlations; with a validated and calibrated set of physics equations and material parameters, we could apply multiscale modeling to predict system dynamics and identify causality. By integrating machine learning and multiscale modeling we can leverage the potential of both, with the ultimate goal of providing quantitative predictive insight into biological systems. Figure 2 illustrates how we could integrate machine learning and multiscale modeling to better understand the cardiac system.

Ordinary differential equations encode temporal evolution into machine learning

Ordinary differential equations in time are ubiquitous in the biological, biomedical, and behavioral sciences. This is largely because it is relatively easy to make observations and acquire data at the molecular, cellular, organ, or population scales without accounting for spatial heterogeneity, which is often more difficult to access. The descriptions typically range from single ordinary differential equations to large systems of ordinary differential equations or stochastic ordinary differential equations.
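As a minimal sketch of such a system, consider a hypothetical two-species Lotka-Volterra competition model, in the spirit of the bacterial-competition applications mentioned earlier; the growth rates and competition coefficients are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-species Lotka-Volterra competition; all rates and coefficients
# are illustrative assumptions, not values from the article.
r1, r2 = 1.0, 0.8        # growth rates
a12, a21 = 0.5, 0.6      # competition coefficients

def rhs(t, y):
    n1, n2 = y
    dn1 = r1 * n1 * (1.0 - n1 - a12 * n2)
    dn2 = r2 * n2 * (1.0 - n2 - a21 * n1)
    return [dn1, dn2]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.1], rtol=1e-8, atol=1e-10)
n1_end, n2_end = sol.y[:, -1]
print(n1_end, n2_end)    # densities approach the stable coexistence equilibrium
```

Even this two-variable toy model carries four parameters; realistic reaction or signaling networks scale this up rapidly.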
Consequently, the number of parameters is large and can easily reach thousands or more.32,33 Given adequate data, the challenge begins with identifying the nonlinear, coupled driving terms.34 To analyze the data, we can apply formal methods of system identification, including classical regression and stepwise regression.24,26 These approaches are posed as nonlinear optimization problems to determine the set of coefficients multiplying combinations of algebraic and rate terms that results in the best fit to the observations. Given adequate data, system identification works with notable robustness and can learn a parsimonious set of coefficients, especially when using stepwise regression. In addition to identifying coefficients, system identification should also address uncertainty quantification and account for both measurement errors and model errors. The Bayesian setting provides a formal framework for this purpose.35 Recent system identification techniques24,26,36–40 start from a large space of candidate terms in the ordinary differential equations to systematically control and treat model errors. Machine learning can provide a powerful approach to reduce the number of dynamical variables and parameters while maintaining the biological relevance of the model.24,41

Partial differential equations encode physics-based knowledge into machine learning

The interaction between the different scales, from the cell to the tissue and organ levels, is generally complex and involves temporally and spatially varying fields with many unknown parameters.42 Prior physics-based information in the form of partial differential equations, boundary conditions, and constraints can regularize a machine learning approach in such a way that it can robustly learn from small and noisy data that evolve in time and space.
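The sparse system identification idea described above can be sketched with a SINDy-style sequentially thresholded least-squares loop (a close relative of the stepwise-regression approaches cited in the text). The true dynamics, candidate library, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Recover logistic growth dx/dt = x - x^2 from noisy rate data with
# sequentially thresholded least squares. Model, library of candidate
# terms, and noise level are illustrative assumptions.
t = np.linspace(0.0, 4.0, 200)
x0 = 0.1
x = x0 * np.exp(t) / (1.0 + x0 * (np.exp(t) - 1.0))     # exact logistic trajectory
dxdt = x * (1.0 - x) + 1e-3 * rng.normal(size=t.size)   # noisy rate measurements

theta = np.column_stack([np.ones_like(x), x, x**2])     # candidate terms 1, x, x^2
xi, *_ = np.linalg.lstsq(theta, dxdt, rcond=None)

for _ in range(5):              # iterate: threshold small terms, refit the rest
    xi[np.abs(xi) < 0.1] = 0.0
    active = np.abs(xi) > 0.0
    xi[active], *_ = np.linalg.lstsq(theta[:, active], dxdt, rcond=None)

print(xi)  # a parsimonious logistic model: constant term eliminated
```

The thresholding step is what enforces parsimony: candidate terms whose coefficients stay small are removed, and the surviving terms are refit to the data.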

Box 1 | Terms and technologies associated with machine learning with examples from multiscale modeling in the biological, biomedical, and behavioral sciences

Active learning is a supervised learning approach in which the algorithm actively chooses the input training points. When applied to classification, it selects new inputs that lie near the classification boundary or minimize the variance. Example: classification of arrhythmogenic risk.74

Bayesian inference is a method of statistical inference that uses Bayes' theorem to update the probability of a hypothesis as more information becomes available. Examples: selecting models and identifying parameters of liver,55 brain,56 and cardiac tissue.59

Classification is a supervised learning approach in which the algorithm learns from a training set of correctly classified observations and uses this learning to classify new observations, where the output variable is discrete. Examples: classifying the effects of individual single nucleotide polymorphisms on depression;75 of ion channel blockage on arrhythmogenic risk in drug development;74 and of chemotherapeutic agents in personalized cancer medicine.73

Clustering is an unsupervised learning method that organizes members of a dataset into groups that share common properties. Example: clustering the effects of simulated treatments.76,77

Convolutional neural networks are neural networks that apply the mathematical operation of convolution, rather than linear transformation, to generate the following output layer. Examples: predicting mechanical properties using microscale volume elements through deep learning,78 classifying red blood cells in sickle cell anemia,79 and inferring the solution of multiscale partial differential equations.80

Deep neural networks or deep learning are a powerful form of machine learning that uses neural networks with a multiplicity of layers.
Examples: biologically inspired learning, where deep learning aims to replicate mechanisms of neuronal interactions in the brain,81 and predicting the sequence specificities of DNA- and RNA-binding proteins.82

Domain randomization is a technique for randomizing the field of an image so that the true image is also recognized as a realization of this space. Example: supplementing training data.83

Dropout neural networks are a regularization method for neural networks that avoids overfitting by randomly deleting, or dropping, units along with their connections during training. Example: detecting retinal diseases and making diagnoses with various qualities of retinal image data.84

Dynamic programming is a mathematical optimization formalism that enables the simplification of a complicated decision-making problem by recursively breaking it into simpler sub-problems. Example: de novo peptide sequencing via tandem mass spectrometry and dynamic programming.85

Evolutionary algorithms are generic population-based optimization algorithms that adopt mechanisms inspired by biological evolution, including reproduction, mutation, recombination, and selection, to characterize biological systems. Example: automatic parameter tuning in multiscale brain modeling.86

Gaussian process regression is a nonparametric, Bayesian approach to regression to create surrogate models and quantify uncertainty. Examples: creating surrogate models to characterize the effects of drugs on features of the electrocardiogram70 or of material properties on the stress profiles from reconstructive surgery.58

Genetic programming is a heuristic search technique of evolving programs that starts from a population of random unfit programs and applies operations similar to natural genetic processes to identify a suitable program.
Example: predicting metabolic pathway dynamics from time-series multi-omics data.72

Generative models are statistical models that aim to capture the joint distribution between a set of observed or latent random variables. Example: using deep generative models for chemical space exploration and matter engineering.87

Multifidelity modeling is a supervised learning approach that synergistically combines abundant, inexpensive, low-fidelity data and sparse, expensive, high-fidelity data from experiments and simulations to create efficient and robust surrogate models. Examples: simulating the mixed convection flow past a cylinder29 and cardiac electrophysiology.27

Physics-informed neural networks are neural networks that solve supervised learning tasks while respecting physical constraints. Examples: diagnosing cardiovascular disorders non-invasively using four-dimensional magnetic resonance images of blood flow and arterial wall displacements11 and creating computationally efficient surrogates for velocity and pressure fields in intracranial aneurysms.23

Recurrent neural networks are a class of neural networks that incorporate a notion of time by accounting not only for current data, but also for history, with tunable extents of memory. Example: identifying unknown constitutive relations in ordinary differential equation systems.88

Reinforcement learning is a technique that circumvents the notions of supervised and unsupervised learning by exploring and combining decisions and actions in dynamic environments to maximize some notion of cumulative reward. Examples: understanding common learning modes in biological, cognitive, and artificial systems through the lens of reinforcement learning.89,90

Regression is a supervised learning approach in which the algorithm learns from a training set of correctly identified observations and then uses this learning to evaluate new observations where the output variable is continuous.
Example: exploring the interplay between drug concentration and drug toxicity in cardiac electrophysiology.27

Supervised learning defines the task of learning a function that maps an input to an output based on example input–output pairs. Typical examples include classification and regression tasks.

System identification refers to a collection of techniques that identify the governing equations from data on a steady state or dynamical system. Examples: inferring operators that form ordinary37 and partial differential equations.26

Uncertainty quantification is the science of quantitative characterization and reduction of uncertainties that seeks to determine the likelihood of certain outputs if the inputs are not exactly known. Examples: quantifying the effects of experimental uncertainty in heart failure91 or the effects of estimated material properties on stress profiles in reconstructive surgery.57

Unsupervised learning aims at drawing inferences from datasets consisting of input data without labeled responses. The most common types of unsupervised learning techniques include clustering and density estimation, used for exploratory data analysis to identify hidden patterns or groupings.

Gaussian processes and neural networks have proven particularly powerful in this regard.43–45 For Gaussian process regression, the partial differential equation is encoded in an informative function prior;46 for deep neural networks, the partial differential equation induces a new neural network coupled to the standard uninformed data-driven neural network,22 see Fig. 3. This coupling of data and partial differential equations into a deep neural network presents itself as an approach to impose physics as a constraint on the expressive power of the latter.
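The coupling of sparse data with a governing-equation residual can be sketched in a toy setting. To keep the example short, a polynomial surrogate stands in for the neural network, so this is a physics-constrained least-squares fit rather than a full physics-informed neural network; the equation u' + u = 0, the basis, the weights, and the data are all illustrative assumptions:

```python
import numpy as np

# Fit a polynomial surrogate u(x) that balances a few noisy observations
# against the residual of an assumed governing equation u' + u = 0 with
# u(0) = 1. Equation, basis, weights, and data are illustrative assumptions.
rng = np.random.default_rng(2)
deg = 8
xc = np.linspace(0.0, 2.0, 50)                 # collocation points for the physics
xd = np.array([0.3, 1.0, 1.7])                 # sparse noisy "measurements"
ud = np.exp(-xd) + 0.01 * rng.normal(size=xd.size)

V = np.vander(xc, deg + 1, increasing=True)    # u(xc) in the monomial basis
dV = np.zeros_like(V)                          # u'(xc) in the same basis
dV[:, 1:] = V[:, :-1] * np.arange(1, deg + 1)
Vd = np.vander(xd, deg + 1, increasing=True)
V0 = np.vander(np.array([0.0]), deg + 1, increasing=True)

# Stack physics residual (u' + u = 0), data misfit, and initial condition
A = np.vstack([dV + V, Vd, 10.0 * V0])
b = np.concatenate([np.zeros(xc.size), ud, [10.0]])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

u_pred = np.vander(np.array([1.5]), deg + 1, increasing=True) @ coef
print(float(u_pred))   # close to the exact solution exp(-1.5)
```

The physics rows act exactly as the regularizer described above: with only three data points, the equation residual constrains the surrogate everywhere else in the domain.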
New theory-driven approaches are required to extend this approach to stochastic partial differential equations using generative adversarial networks, to fractional partial differential equations in systems with memory using high-order discrete formulas, and to coupled systems of partial differential equations in multiscale multiphysics modeling. Multiscale modeling is a critical step, since biological systems typically possess a hierarchy of structure, mechanical properties, and function across the spatial and temporal scales. Over the past decade, modeling multiscale phenomena has been a major point of attention, which has advanced detailed deterministic models and their coupling across scales.13 Recently, machine learning has permeated into the multiscale modeling of hierarchical engineering materials3,44,47,48 and into the solution of high-dimensional partial differential equations with deep learning methods.34,43,49–53 Uncertainty quantification in material properties is also gaining relevance,54 with examples of Bayesian model selection to calibrate strain energy functions55,56 and uncertainty propagation with Gaussian processes of nonlinear mechanical systems.57–59 These trends for non-biological systems point towards immediate opportunities for integrating machine learning and multiscale modeling in the biological, biomedical, and behavioral sciences and open new perspectives that are unique to the living nature of biological systems.

Data-driven machine learning seeks correlations in big data

Machine learning can be regarded as an extension of classical statistical modeling that can digest massive amounts of data to identify high-order correlations and generate predictions.
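The Gaussian-process surrogates and uncertainty propagation referenced above can be sketched in a few lines of plain numpy; the kernel, its hyperparameters, and the stand-in "simulation" outputs are illustrative assumptions:

```python
import numpy as np

# Minimal Gaussian process regression: build a surrogate with uncertainty
# estimates from a handful of expensive evaluations. Kernel, hyperparameters,
# and the target function are illustrative assumptions.
def rbf(a, b, ell=0.5, s=1.0):
    return s**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

x_train = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y_train = np.sin(2.0 * x_train)       # stand-in for expensive simulation outputs
noise = 1e-6                          # small nugget for numerical stability

K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
x_test = np.linspace(0.0, 2.0, 5)
Ks = rbf(x_test, x_train)

alpha = np.linalg.solve(K, y_train)
mean = Ks @ alpha                                    # posterior mean (surrogate)
cov = rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))      # posterior uncertainty

print(mean)   # interpolates the training data
print(std)    # near zero at the training points, larger in between
```

Once built, such a surrogate can be evaluated millions of times at negligible cost, which is what makes uncertainty propagation through expensive multiscale models tractable.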
This is not only important in view of the rapid developments of ultra-high-resolution measurement techniques,60 including cryo-EM, high-resolution imaging flow cytometry, or four-dimensional-flow magnetic resonance imaging, but also when analyzing large-scale health data from wearable and smartphone apps.61,62

Fig. 2 Machine learning and multiscale modeling of the cardiac system. Multiscale modeling can teach machine learning how to exploit the underlying physics described by, e.g., the ordinary differential equations of cellular electrophysiology and the partial differential equations of electro-mechanical coupling, and constrain the design spaces; machine learning can teach multiscale modeling how to identify parameter values, e.g., the gating variables that govern local ion channel dynamics, and identify system dynamics.
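The gating variables mentioned in the caption obey simple relaxation ordinary differential equations of the Hodgkin-Huxley type, dn/dt = (n_inf(V) - n) / tau(V). A minimal sketch under a voltage clamp, with illustrative (not physiological) steady-state and time-constant curves:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hodgkin-Huxley-style gating kinetics dn/dt = (n_inf(V) - n) / tau(V):
# at a fixed voltage the gate relaxes exponentially to its steady state.
# The sigmoid and time-constant parameters are illustrative assumptions.
def n_inf(v):
    return 1.0 / (1.0 + np.exp(-(v + 40.0) / 10.0))       # steady-state activation

def tau(v):
    return 1.0 + 4.0 * np.exp(-((v + 40.0) / 30.0)**2)    # time constant (ms)

v_clamp = -20.0                                           # voltage-clamp step (mV)
sol = solve_ivp(lambda t, n: (n_inf(v_clamp) - n) / tau(v_clamp),
                (0.0, 50.0), [0.0], rtol=1e-8)
print(sol.y[0, -1])   # the gate has relaxed to n_inf at -20 mV
```

Fitting the shapes of n_inf and tau to patch-clamp recordings is precisely the kind of parameter-identification task where machine learning can assist the multiscale model.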

