
Verification and Validation of an Agent-based Simulation Model

Nadine Shillingford, Gregory Madey, Ryan C. Kennedy
Department of Computer Science and Engineering
University of Notre Dame
Notre Dame, IN 46556
nshillin@nd.edu, gmadey@nd.edu, rkenned1@nd.edu

Abstract

Verification and validation (V&V) are two important tasks that need to be done during software development. There are many suggestions in the literature for techniques that can be used during V&V. These techniques are varied and need to be adapted to the particular software product being developed. There are no defined V&V techniques that can be applied specifically to agent-based simulations. It is, therefore, left to the software team to develop a subset of V&V techniques that can be used to increase confidence in the simulation. This paper introduces a subset of techniques gathered from several authors and applies them to a computer simulation of natural organic matter.

Keywords: Verification, Validation, Computer Simulation, Natural Organic Matter, Agent-based Model

1 Introduction

The terms verification and validation are often used together. However, these two tasks are quite different. In simple terms, verification asks the question "did I produce the simulation right?" Validation, on the other hand, asks the question "did I produce the right simulation?" Sargent [1] states that a simulation is valid "if its accuracy is within its acceptable range of accuracy which is the amount of accuracy required for the model's intended purpose." The goal of validation is two-fold. First, it serves to produce a model that represents true system behavior closely enough that the model can be used as a substitute for the actual system for the purpose of experimenting with the system. Secondly, it increases to an acceptable level the credibility of the model [2]. Verification is concerned with the representation of the model structure. Proper software engineering practice indicates that the model should be analyzed and designed sufficiently before coding begins. This design, referred to as the model in simulation studies, needs to be followed accurately. Verification is complete when it can be asserted that the design/model has been implemented accurately.

The organization of this paper is as follows: First, we introduce the principles of V&V in Section 2. Section 3 discusses some of the V&V techniques found in the literature. Next, we introduce the natural organic matter case study that was used for this research in Section 4. In Section 5, we give some technical details on the project. Section 6 discusses the V&V techniques used in the case study, followed by Section 7, which gives a summary of the results. Conclusions and future work are discussed in Section 8.

2 Principles of V&V

Balci [3] gives fifteen (15) principles of VV&T (he includes testing along with verification and validation). Five (5) of these principles are:

1) VV&T must be conducted throughout the life cycle of the simulation study. In his chapter entitled Verification, Validation and Testing, he displays a diagram of the different phases of the life cycle of a simulation [3].

2) The outcome of the simulation model VV&T should not be considered as a binary variable where the model is absolutely correct or absolutely incorrect. He notes that perfect representation is never expected.
It is up to the individual or team performing the VV&T to judge at what point the model can be considered sufficiently correct for its purpose.

3) A simulation model is built with respect to the study objectives, and its credibility is judged with respect to those objectives.

4) Complete simulation model testing is not possible.

5) Simulation VV&T must be planned and documented.

These principles are a result of Balci's research and of knowledge gained through his experience with VV&T. A complete discussion of the principles can be found in his chapter Verification, Validation and Testing [3].

3 V&V Techniques

There are several authorities on V&V techniques, including Sargent, Banks, Balci and Adrion.

The following are some of the techniques suggested by Sargent [1]. He states that a simulation model can be compared to other models built for a similar purpose; this process is commonly called docking. He also details the ideas of face validity and Turing tests. Face validity involves allowing an expert to look at the results of the simulation and tell whether the results appear to be correct. In a Turing test, an expert is asked to distinguish between unmarked simulation results and unmarked system results. The aim is to determine whether the results are so similar that the expert is not able to tell the difference. This would be a very valuable test in V&V. Unfortunately, since simulations are usually built precisely because actual experiments are impractical (due to lack of time and resources), a Turing test is often hard to produce.

In their book Discrete-event System Simulation, Banks et al. give several V&V techniques. Some of these include:

1) Have the computerized version checked by someone other than the developer.

2) Closely examine the model output for reasonableness under a variety of settings of the input parameters.

3) Have the computerized representation print out the input parameters at the end of the simulation.

4) Verify that what is seen in the animation imitates the actual system.

5) Analyze the output data using goodness-of-fit tests, such as the Kolmogorov-Smirnov test, and other graphical methods (a code sketch of this check appears at the end of this section).

Along with his fifteen (15) VV&T principles, Balci specified seventy-five (75) different techniques. He categorized them into informal, static, dynamic and formal techniques. The informal techniques rely heavily on "human reasoning and subjectivity without stringent mathematical formalism" [3]. Informal techniques, as well as static techniques, do not require machine execution of the model. Static techniques assess the accuracy of the model based on the characteristics of the static model design and source code. Dynamic techniques require model execution and are intended for evaluating the model based on its execution behavior. Formal techniques are based on mathematical proof of correctness.

Adrion et al. [4] discuss several interesting techniques for V&V. Manual testing includes tasks such as desk checking, peer reviews, walk-throughs, inspections and reviews. They also mention proof-of-correctness techniques, which they state have been around since "von Neumann time."

Other V&V techniques that can be used include structural testing, such as coverage-based testing. Coverage-based testing is similar to what Pressman [5] refers to as basis path testing using flow graph notation. During this process, the model is analyzed to ensure that all possible paths have been thoroughly tested. Functional testing includes boundary value analysis and cause-effect graphing. Both structural and functional testing can be rather difficult depending on the complexity of the software product being tested. However, there are several software testing tools, such as JStyle and TestWorks TCAT, that simplify this process.
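To make technique 5 from Banks et al. concrete, the sketch below runs a two-sample Kolmogorov-Smirnov test on two sets of output values. The choice of Apache Commons Math and the sample data are assumptions made for illustration; the paper does not prescribe a particular library.

```java
import org.apache.commons.math3.stat.inference.KolmogorovSmirnovTest;

/**
 * Minimal sketch of a goodness-of-fit check: compare two output samples
 * (e.g., model output vs. reference data) with a two-sample
 * Kolmogorov-Smirnov test. Requires commons-math3 on the classpath.
 */
public class GoodnessOfFitCheck {
    public static void main(String[] args) {
        // Hypothetical final molecule counts from two sources.
        double[] modelOutput   = {742, 755, 760, 749, 751, 758, 747, 753};
        double[] referenceData = {745, 752, 759, 748, 750, 757, 746, 754};

        KolmogorovSmirnovTest ks = new KolmogorovSmirnovTest();
        double d = ks.kolmogorovSmirnovStatistic(modelOutput, referenceData);
        double p = ks.kolmogorovSmirnovTest(modelOutput, referenceData);

        // A small p-value would indicate the samples differ significantly.
        System.out.printf("D = %.4f, p = %.4f%n", d, p);
        System.out.println(p < 0.05 ? "Samples differ (reject H0)"
                                    : "No significant difference");
    }
}
```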
4 Case Study - NOM Simulation

Natural organic matter (NOM) is a "polydisperse mixture of molecules with distributions, molecular weights and reactivities that forms primarily as the breakdown product of animal and plant debris" [6]. It can be found almost anywhere and plays an important role in the biogeochemistry of aquatic and terrestrial systems. Some of the processes that involve NOM include the evolution of soils, the transport of pollutants and the global biogeochemical cycling of elements [7].

Over the years, it has been rather difficult to study the true structure and behavior of NOM. Most research has involved the investigation of NOM on a wide scope; little work has been done focusing on individual NOM molecules. Perhaps the main reason for this is the amount of time and resources required for research at the molecular level.

A computer simulation is a program written to perform research that would be costly and resource-intensive in an actual lab situation. An agent-based computer simulation is a special type of simulation that can "track the actions of multiple agents that can be defined as objects with some type of autonomous behavior" [6]. In the case of NOM, each agent is a molecule; the molecules include protein, cellulose, lignin and so on (a minimal code sketch of this agent structure follows the list of implementations below).

The NOM simulation developed by a group of graduate students at the University of Notre Dame is a stochastic agent-based simulation of NOM behavior in soils. There are presently four different model implementations:

1) SorptionFlowModel
2) SorptionBatchModel
3) ReactionFlowModel
4) ReactionBatchModel
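As an illustration of the agent-based idea, the minimal sketch below steps a population of molecule agents through simulated time. All class and parameter names are hypothetical; the actual implementations are built on the Swarm and Repast libraries.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Toy sketch of an agent-based simulation: each molecule is an agent
 * with autonomous behavior executed once per time step. Names and
 * numbers are illustrative, not taken from the NOM code base.
 */
public class ToyAgentSimulation {
    static class Molecule {
        String type;                  // e.g. "cellulose", "lignin", "protein"
        Molecule(String type) { this.type = type; }

        // One autonomous action per time step, driven by a shared RNG
        // so that a fixed seed reproduces the run.
        void step(Random rng) {
            if (rng.nextDouble() < 0.01) {
                // placeholder for a reaction, move, split, etc.
            }
        }
    }

    public static void main(String[] args) {
        Random rng = new Random(42);            // fixed seed for reproducibility
        List<Molecule> agents = new ArrayList<>();
        for (int i = 0; i < 250; i++) agents.add(new Molecule("cellulose"));
        for (int t = 0; t < 1000; t++)          // simulated time steps
            for (Molecule m : agents) m.step(rng);
        System.out.println("Final population: " + agents.size());
    }
}
```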

This case study focuses on the ReactionBatchModel, which models laboratory batch adsorption experiments.

In order to successfully represent molecules as individual agents, several properties need to be included. In the ReactionBatchModel, the data used to represent individual molecules include the elemental formula, functional group counts and a record of molecular origin. The elemental formula consists of the number of C, H, O, N, S and P atoms in each molecule. The functional groups may include carboxylic acid, alcohol and ester groups. The record of molecular origin is the initial molecule, its starting position in the system and its time of entry into the system.

In an actual system, NOM molecules move around, interacting and reacting with each other. Twelve (12) types of reactions are built into the ReactionBatchModel. These reactions are categorized into:

1) First order reactions with split
2) First order reactions without split
3) First order reactions with the disappearance of a molecule
4) Second order reactions

Table 1 gives examples of reactions that fall into each of these categories.

Reaction Name               Reaction Type
Ester condensation          Second order
Ester hydrolysis            First order with split
Amide hydrolysis            First order with split
Microbial uptake            First order with molecule disappearance
Dehydration                 First order with split
Strong C=C oxidation        First order with split (50% of the time)
Mild C=C oxidation          First order without split
Alcohol (C-O-H) oxidation   First order without split
Aldehyde C=O oxidation      First order without split
Decarboxylation             First order without split
Hydration                   First order without split
Aldol condensation          Second order

Table 1. Chemical reactions in the conceptual model [6]

The ReactionBatchModel is designed to use probabilities to determine which reaction will occur with a particular molecule within a 2D space. These are expressed in terms of intrinsic and extrinsic factors. The intrinsic factors are derived from the molecular structure, including the number of functional groups and many other structural factors. The extrinsic factors arise from the environment, including the concentrations of inorganic chemical species, light intensity, availability of surfaces, presence of microorganisms, etc. [6]. A sketch of how such a probabilistic choice might look in code is given below.
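The following sketch illustrates one way such a probabilistic reaction choice could be coded. Combining intrinsic and extrinsic factors as a simple product and using a roulette-wheel pick are assumptions made for illustration; the paper does not specify the exact weighting scheme.

```java
import java.util.Random;

/**
 * Sketch of probabilistic reaction selection for one molecule in one
 * time step. Each reaction's probability combines an intrinsic
 * (structural) factor and an extrinsic (environmental) factor; the
 * product and the cumulative pick below are illustrative assumptions.
 */
public class ReactionPicker {
    public static void main(String[] args) {
        String[] reactions = {"ester hydrolysis", "decarboxylation", "hydration"};
        double[] intrinsic = {0.02, 0.01, 0.005};   // from functional group counts
        double[] extrinsic = {0.8, 0.5, 1.0};       // from pH, light, enzymes, ...

        double[] p = new double[reactions.length];
        double total = 0;
        for (int i = 0; i < p.length; i++) {
            p[i] = intrinsic[i] * extrinsic[i];
            total += p[i];
        }

        Random rng = new Random(7);
        double r = rng.nextDouble();                // may also land on "no reaction"
        double cumulative = 0;
        String chosen = "no reaction";
        for (int i = 0; i < p.length; i++) {
            cumulative += p[i];
            if (r < cumulative) { chosen = reactions[i]; break; }
        }
        System.out.println("This step: " + chosen + " (total rate " + total + ")");
    }
}
```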
5 Technical Details

The ReactionBatchModel was programmed using J2EE, Repast and an Oracle database. The front end is a web interface where the user can log on and run simulations. To begin a simulation, the user enters extrinsic parameters such as pH, light intensity and enzyme concentrations, as well as parameters such as the number of time steps and a random seed. Next, the user selects which types of molecules to include in the sample, along with the percentage of each molecule. The user is also given the option of creating a new molecule by entering a new name and its structural properties. The user then submits the simulation, which is given a unique simulation id. Each user who logs on to the web interface can view the status of the simulations that he has submitted.

On the back end there are eight (8) simulation servers and two (2) database servers. A dedicated file server runs a Java program which apportions simulations to the simulation servers using a load-balancing algorithm; a sketch of one possible policy is shown below. The simulation servers then communicate with the database servers to retrieve input parameters and store results. Figure 1 is a diagram of the structure of the system.

Figure 1. ReactionBatchModel structure [8]
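The paper does not describe the load-balancing algorithm itself; the sketch below shows one plausible policy, assigning each submitted simulation to the currently least-loaded of the eight simulation servers. The class and server names are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of apportioning submitted simulations to simulation servers.
 * The NOM system uses a Java load-balancing program whose policy is not
 * documented in the paper; least-loaded assignment is one possibility.
 */
public class LoadBalancerSketch {
    private final Map<String, Integer> running = new HashMap<>();

    LoadBalancerSketch(int servers) {
        for (int i = 1; i <= servers; i++) running.put("sim-server-" + i, 0);
    }

    /** Assign a simulation to the server with the fewest running jobs. */
    String assign() {
        String best = null;
        for (Map.Entry<String, Integer> e : running.entrySet())
            if (best == null || e.getValue() < running.get(best))
                best = e.getKey();
        running.merge(best, 1, Integer::sum);  // record the new job
        return best;
    }

    public static void main(String[] args) {
        LoadBalancerSketch lb = new LoadBalancerSketch(8); // eight servers
        for (int id = 1; id <= 10; id++)
            System.out.println("simulation " + id + " -> " + lb.assign());
    }
}
```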

The data produced by the simulation and stored in the database are listed in Table 2.

Name                  Comments
Number of Molecules   Number of molecules in the system. It changes as molecules condense, split or are consumed.
MWn                   The number-average molecular weight.
MWw                   The weight-average molecular weight.
Z average             The average charge on each molecule at pH 7.
Element mass          Weight of C, O, H, etc. in the system.
Percent of element    The weight percentages of each element.
Reactions (1 - n)     Number of reactions that occur for each type of reaction.

Table 2. Molecular properties

6 V&V Techniques Used in this Case Study

The V&V techniques used for this case study are a subset of the techniques mentioned in Section 3. They include manual testing, static and structural testing, docking and internal validation.

The first test we conducted on the ReactionBatchModel was manual testing, starting with desk checking. This served as a means of familiarizing ourselves with the model as well as checking the code for errors that can be picked out without actually running the system. The other type of manual testing conducted was a face validation of the system. Sargent states that "'Face validity' is asking people knowledgeable about the system whether the model and/or its behavior are reasonable. This technique can be used in determining if the logic in the conceptual model is correct and if a model's input-output relationships are reasonable" [1]. For this test, we asked two post-doctoral scholars to look at the data and give an opinion on the results.

The second set of tests we conducted was static and structural testing. The focus was on code review and complexity analysis using an evaluation copy of JStyle, a tool by Man Systems. We performed this testing on the Java source files. The results of these tests were on three (3) levels: project, file and class. The project level tested features such as the reuse ratio, specialization ratio, average inheritance depth and average number of methods per class. On the file level, two features were noted: the number of source lines per file and the comment density. The features noted on the class level included cyclomatic complexity (CC), weighted methods per class (WMC), response for a class (RFC) and so on. Each of these results was compared against recommended values found in the literature. See Table 3 for further information on each of these metrics [9].

Metric                        Measurement Method                                   Interpretation
Cyclomatic complexity         # algorithmic test paths                             Low = decisions deferred through message passing; low is not necessarily less complex
Weighted methods per class    # methods implemented within a class                 Larger = greater impact on children through inheritance; application specific
Response for a class          # methods invoked in response to a message           Larger = greater complexity and decreased understandability
Lack of cohesion of methods   Similarity of methods within a class by attributes   High = good class subdivision; low = increased complexity
Depth of inheritance tree     Maximum length from class node to root               Higher = more complex; more reuse

Table 3. Structural metrics
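As a rough illustration of the first metric in Table 3, the sketch below estimates cyclomatic complexity as one plus the number of decision points in a piece of source text. A real tool such as JStyle works from a full parse of the code; the keyword scan here is only an approximation written for illustration.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Naive cyclomatic complexity estimate: 1 + number of decision points
 * (branch keywords and short-circuit operators) found in the source.
 */
public class CyclomaticSketch {
    static int estimate(String source) {
        Pattern decisions =
            Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");
        Matcher m = decisions.matcher(source);
        int count = 0;
        while (m.find()) count++;
        return count + 1;
    }

    public static void main(String[] args) {
        String method = "if (x > 0 && y > 0) { for (int i = 0; i < x; i++) f(i); }";
        // if, &&, for are three decision points, so the estimate is 4.
        System.out.println("Estimated CC: " + estimate(method));
    }
}
```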

For the docking process, we compared the AlphaStep implementation to the ReactionBatchModel. The AlphaStep simulation was developed by Steve Cabaniss. Its conceptual model is the same as that of the ReactionBatchModel, although its implementation is slightly different. Table 4 lists the differences between the two model implementations.

Feature                   AlphaStep                    ReactionBatchModel
Language                  Delphi 6, Pascal             Java
Platform                  Windows                      Red Hat Linux cluster
Interface                 Standalone                   Web based, standalone
Libraries                 None                         Swarm, Repast libraries
Initial population        Actual number                Distribution of different molecules
Spatial representation    None                         Yes (2D grid)
Second order reaction     Random pick one from list    Choose the nearest neighbor
First order with split    Add to list                  Find empty cell nearby

Table 4. Differences of features in the AlphaStep and ReactionBatchModel implementations

Table 5 lists the input parameters we used during the docking process. We used only three different types of molecules for all the experiments: cellulose, lignin and proteins. Since the AlphaStep and ReactionBatchModel implementations have different ways of expressing the initial population of molecules (see Table 4), it was important to estimate the initial populations so that the numbers of molecules used in each implementation were comparable. In previous experiments we observed that the average number of molecules in the initial population of the ReactionBatchModel was about 754. This value was therefore used in the proportions of 34%, 33% and 33% for cellulose, lignin and protein respectively. The equivalent in the AlphaStep implementation was 256, 249 and 249 molecules for cellulose, lignin and protein respectively.

Parameter                                                        AlphaStep               ReactionBatchModel
pH (affects oxidation and decarboxylation rates)                 7                       7
Light intensity (affects oxidation and decarboxylation rates)    0.0001 µmol cm-2 hr-1   0.0001 µmol cm-2 hr-1
Dissolved O2 (affects oxidation)                                 0.1 mM                  0.0001 M
Temperature (affects thermal (dark) reaction rates)              24.8 C                  298 K
Water (scaled 0-1, affects hydrolysis and hydration reactions)   1                       1
Bacterial density (scaled 0-1, affects microbial utilization)    0.1                     0.1
Protease (scaled 0-1, affects hydrolysis rate)                   0.1                     0.1
Oxidase (scaled 0-1, affects oxidation rate)                     0.1                     0.1
Decarboxylase (scaled 0-1, affects decarboxylation rate)         0.1                     0.1
Reaction time (hrs)                                              1000                    1000
Delta T                                                          0.25                    0.1
Sample interval                                                  500                     1

Table 5. Parameters used

Table 6 lists the elemental composition and functional groups (C=C bonds, rings, phenyl groups, amines and amides) of each of the molecules used.

Table 6. Elemental and functional group composition of the molecules used

We ran 25 simulations of each implementation and graphed the average number of molecules, the number-average molecular weight, the weight-average molecular weight, the weight percentage of carbon and the total mass of carbon after a total of 1000 simulation hours. In addition to graphing, which allowed for visual analysis of the results, we ran statistical tests on each of the results. Two different types of tests were run: Student's t tests and Kolmogorov-Smirnov tests.

Xiang et al. state that "a simulation model that uses random seeds must have statistical integrity in that independent simulations with the same input data should have similar results. If the simulation model produced large variabilities because of random seeds, there would be a considerable problem with it" [6]. In order to assess the internal validity of the ReactionBatchModel simulation, we ran 141 simulations. The actual number of simulations attempted was much higher; however, a problem with the load-balancing algorithm resulted in a significant number of failed simulations. Each of the 141 simulations was run for 1000 simulation hours. A histogram of the total number of molecules was generated; a sketch of this seed-variability check appears below.
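The sketch below illustrates the seed-variability check. The runOnce() stand-in merely draws from a normal distribution shaped like the reported results (mean 1469.816, variance 9192.294); in practice it would be a full 1000-hour ReactionBatchModel run, and the summary statistics would be computed over the real outputs.

```java
import java.util.Random;
import org.apache.commons.math3.stat.descriptive.SummaryStatistics;

/**
 * Sketch of internal validation: run the model many times with
 * different random seeds and check that the output varies only
 * modestly. Requires commons-math3 on the classpath.
 */
public class InternalValidationSketch {
    // Hypothetical stand-in: final molecule count for a given seed,
    // drawn from roughly N(1469.8, 9192.3) to mimic the reported fit.
    static double runOnce(long seed) {
        Random rng = new Random(seed);
        return 1470 + rng.nextGaussian() * 96;
    }

    public static void main(String[] args) {
        SummaryStatistics stats = new SummaryStatistics();
        for (long seed = 1; seed <= 141; seed++)   // 141 runs, as in the paper
            stats.addValue(runOnce(seed));

        // A large spread relative to the mean would signal a
        // seed-sensitivity problem in the model.
        System.out.printf("mean = %.1f, sd = %.1f, cv = %.3f%n",
                stats.getMean(), stats.getStandardDeviation(),
                stats.getStandardDeviation() / stats.getMean());
    }
}
```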

7 V&V Results

Unfortunately, the results of the face validation were inconclusive. The post-doctoral scholars informed us that it was difficult to determine whether the results were reasonable, given the lack of actual system data involving similar processes. They were concerned that the study did not indicate the number of each type of reaction that occurs in the system. They also noted that microbiology was not their specialty and therefore suggested repeating the experiments with a bacterial uptake of 0.

The results of the static and structural tests conducted using JStyle were generally good. The results on the file level, however, were questionable. The average number of source lines per file was too high; longer files are usually harder to debug and are generally not recommended. The comment density was also too low. Although the overall results for the structural tests were good, this does not indicate that the entire simulation was structurally acceptable, since only a subset of the files was tested. Table 7 displays the results.

Parameters                                  Value       Interpretation
Project level
  Number of classes                         19          Good
  Number of abstract classes                3           Good
  Reuse ratio                               0.05        Low (but Repast modules were not considered)
  Average inheritance depth                 1.2         Good
  Depth of inheritance tree                 1.33        Good
  Number of methods per class               10.650      Good
  Method size (mean)                        13.620      Good
File level
  Average number of source lines per file   185.8095    Very high, not good
  Comment density                           16.5995     Very low
Class level
  Cyclomatic complexity                     2.6         Good
  Weighted methods per class                30.77       Not good

Table 7. Structural results

The docking tests had to be conducted twice. The results of the initial test did not appear to be correct, based on previous tests performed by Xiang [6]. After further investigation, the following factors were altered, giving much more accurate results:

1) Units of measurement. The units for the input parameters differ between the AlphaStep and ReactionBatchModel implementations. For example, the AlphaStep implementation accepts temperature values in degrees Celsius while the ReactionBatchModel accepts temperature values in kelvins. A conversion sketch is shown below.

2) The initial population of molecules had to be adjusted. As noted in Table 4, the initial population of molecules in the AlphaStep implementation is the actual number of molecules, while the ReactionBatchModel implementation uses the distribution or percentage of different molecules.

Figure 2 a-d shows the results of the first trial, while Figure 3 a-e shows the results of the second trial.
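The unit mismatch in factor 1 can be removed with a small conversion helper like the sketch below (written for illustration; not taken from either code base).

```java
/**
 * Sketch of the unit harmonization that fixed the first docking trial:
 * AlphaStep takes degrees Celsius and millimolar concentrations, while
 * the ReactionBatchModel takes kelvins and molar concentrations, so
 * inputs must be converted before results are compared.
 */
public class UnitHarmonizer {
    static double celsiusToKelvin(double c) { return c + 273.15; }
    static double millimolarToMolar(double mM) { return mM / 1000.0; }

    public static void main(String[] args) {
        // The docking parameters from Table 5.
        System.out.println("24.8 C = " + celsiusToKelvin(24.8) + " K");   // ~298 K
        System.out.println("0.1 mM = " + millimolarToMolar(0.1) + " M");  // 0.0001 M
    }
}
```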

Figure 2 a) Number of Molecules (first trial)
Figure 2 b) MWn (first trial)
Figure 2 c) MWw (first trial)
Figure 2 d) Weight percentage of Carbon (first trial)

Figure 3 a) Number of Molecules (second trial)
Figure 3 b) MWn (second trial)
Figure 3 c) MWw (second trial)
Figure 3 d) Weight percentage of Carbon (second trial)
Figure 3 e) Total mass of Carbon (second trial)
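The statistical comparison described in Section 6 (25 runs per implementation, Student's t and Kolmogorov-Smirnov tests at alpha = 0.05) can be scripted as in the sketch below. The per-run values are synthetic stand-ins for one output metric, and Apache Commons Math is an assumed statistics library.

```java
import org.apache.commons.math3.stat.inference.KolmogorovSmirnovTest;
import org.apache.commons.math3.stat.inference.TTest;

/**
 * Sketch of the docking statistics: compare 25 runs of each
 * implementation with a two-sample t test and a Kolmogorov-Smirnov
 * test. The arrays stand in for per-run values of a single metric
 * (e.g., final number of molecules).
 */
public class DockingStats {
    public static void main(String[] args) {
        double[] alphaStep = new double[25];
        double[] reactionBatch = new double[25];
        java.util.Random rng = new java.util.Random(1);
        for (int i = 0; i < 25; i++) {            // hypothetical run results
            alphaStep[i] = 1450 + rng.nextGaussian() * 90;
            reactionBatch[i] = 1480 + rng.nextGaussian() * 95;
        }

        double tP = new TTest().tTest(alphaStep, reactionBatch);
        double ksP = new KolmogorovSmirnovTest()
                .kolmogorovSmirnovTest(alphaStep, reactionBatch);

        System.out.printf("t test:  p = %.3f -> %s%n", tP,
                tP < 0.05 ? "significant" : "not significant");
        System.out.printf("KS test: p = %.3f -> %s%n", ksP,
                ksP < 0.05 ? "significant" : "not significant");
    }
}
```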

Although the results of the second trial appeared to indicate that both simulations produced the same molecular properties, the statistical tests proved otherwise. Table 8 gives the results of the statistical tests. For the Student's t test results, "significant" indicates a rejection of the null hypothesis of equality of means at a significance level alpha = 0.05 with a confidence interval at 95%; "not significant" indicates that the difference between the means is not significant. For the Kolmogorov-Smirnov test, "significant" indicates a rejection of the null hypothesis that the samples are not different; "not significant" indicates that the difference between the samples is not significant.

                                Student's t test    Kolmogorov-Smirnov test
Number of Molecules             Significant         Significant
MWn                             Significant         Significant
MWw                             Not significant     Not significant
Weight Percentage of Carbon     Not significant     Not significant
Total Mass of Carbon            Not significant     Significant

Table 8. Results of statistical tests

The result of the internal validation is shown in Figure 4. We also applied Kolmogorov-Smirnov and chi-square tests to these results. The Kolmogorov-Smirnov test indicates whether the difference between the empirical and theoretical cumulative distributions is significant. The chi-square test indicates whether the difference between the observed frequencies and the theoretical frequencies (where µ = 1469.816 and σ² = 9192.294) is significant. Table 9 gives the results.

Figure 4. Histogram of observed and theoretical frequencies; probability distribution fitted to the data: Normal N(µ = 1469.816, σ² = 9192.294)

Test                     Significance
Kolmogorov-Smirnov       Not significant
Chi-square               Not significant

Table 9. Results of statistical tests on 141 simulation runs of the ReactionBatchModel

8 Conclusions and Future Work

The verification and validation of simulation models is quite important. Unfortunately, even after conducting a battery of tests, it is still often difficult to come to a conclusion. In this case, although tests such as the internal validation appear to indicate that the model is valid, other tests such as the statistical tests and the face validation are inconclusive. Several activities could help improve confidence in the system in the future:

1) Face validation: include an analysis of the different types of reactions that occur during the simulation hours. The tests included in this study focused on the molecular properties retrieved from the system. Further work would have to be done in graphing and statistically testing the number of reactions. This would help in determining the validity of the system.

2) Static and structural tests: include all the code, including the GUI, in the tests.

3) Internal validation: redo the test with more than 141 simulation runs.

References

[1] Robert G. Sargent. Verification and Validation of Simulation Models. In D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds., Proceedings of the 1998 Winter Simulation Conference, 1998.

[2] J. Banks, J.S. Carson, B.L. Nelson and D.M. Nicol. Discrete-event System Simulation. Industrial and Systems Engineering, Prentice Hall, 3rd ed., 2001.

[3] O. Balci. Handbook of Simulation: Principles, Methodology, Advances, Applications and Practice, Chapter 10: Verification, Validation and Testing. John Wiley & Sons, New York, 1998.

[4] W. Richards Adrion, Martha A. Branstad and John C. Cherniavsky. Validation, Verification and Testing of Computer Software. Computing Surveys, Vol. 14, No. 2, June 1982.

[5] Roger S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill Publishing, 6th ed., 2005.

[6] Xiaorong Xiang, Ryan Kennedy and Gregory Madey. Verification and Validation of Agent-based Scientific Simulation Models. Agent-Directed Simulation Conference, San Diego, CA, April 2005.

[7] S. E. Cabaniss. Modeling and Stochastic Simulation of NOM Reactions, working paper, http://www.nd.edu/~nom/Papers/WorkingPapers.pdf, July 2002.

[8] Yingping Huang and Greg Madey. "Towards Autonomic Computing for Web-based Simulations." International Conference on Cybernetics and Information Technologies, Systems and Applications (CITSA 2004), Orlando, July 2004.

[9] National Aeronautics and Space Administration. Software Quality Metrics for Object Oriented System Environments, June 1995.
