An Introduction To Experimental Software Engineering


An Introduction to Experimental Software Engineering
Fernando Brito e Abreu (fba@di.fct.unl.pt)
Universidade Nova de Lisboa (http://www.unl.pt)
QUASAR Research Group (http://ctp.di.fct.unl.pt/QUASAR)

Technical Portuguese, here and there (Brazil / Portugal):
- Treinamento / Formação profissional (training)
- Pesquisa, Pesquisador / Investigação, Investigador (research, researcher)
- Usuário / Utilizador (user)
- Planilha / Folha de cálculo (spreadsheet)
- Banco de dados / Base de dados (database)
Everything else is the same, apart from the pronunciation.

What is Experimental Software Engineering?
It is a branch of Software Engineering in which, by means of experimentation, we want to validate hypotheses raised by induction (and abduction), aiming at building theories that will allow us to:
- help understand the virtues and limitations of methods, techniques and tools, namely by assessing current SE claims
- express quantitatively the cause-effect relationships among software process characteristics (resources and activities) and software product characteristics

Software development as a feedback loop
[Diagram: RESOURCES (project team members, users, methods, techniques, tools, operating system, hardware, physical environment, schedule) feed ACTIVITIES (project planning and staffing, project management, requirements elicitation, designing, coding, configuration management, inspection, black-box testing, debugging, deployment, user training), which produce PRODUCTS (project plans, requirements specs, design models, source code, component libraries, estimation models, test batteries, executable code, installation manuals). Process and product metrics feed a QUANTITATIVE EVALUATION that yields improvement actions back into the loop.]

Experimentation in Software Engineering
- Supports arguments concerning the suitability, limits, costs and risks inherent to software engineering tools and techniques, with experimental evidence
- Is a facilitator of the evolution of the Software Engineering body of knowledge as a quality-driven process
- Is in line with the requirements of the highest levels of maturity models (e.g. CMMI)

Software Engineering: the state of the art
- There are no "Laws" in Software Engineering
- Software Engineering is traditionally qualitative
- Software Engineering is full of:
  - "theories" about the effectiveness of software engineering practices, methods and techniques
  - unsubstantiated claims about the efficiency of engineering practices, methods and techniques

Some Software Engineering "theories"
ABOUT THE PRODUCT:
- Cohesion should be maximized and coupling should be minimized
  - Is this what practitioners are doing in practice?
- The complexity of a software system increases non-linearly with its age (Lehman's "Law" of Software Evolution)
- A decade ago: OOSD (Object-Oriented Software Development) improves modularity upon PSD (Procedural Software Development)
- Nowadays: AOSD (Aspect-Oriented Software Development) improves systems' modularity upon OOP

More Software Engineering "theories"
ABOUT THE PROCESS:
- Accurate effort estimates can be produced without a detailed design (e.g. using Function Points analysis)
- Software inspections are more efficient than testing
- Agile processes lead to shorter development cycles in the long term (until final deployment is fully achieved)
- More controlled processes lead to a higher effectiveness in defect removal

More Software Engineering "theories"
ABOUT IT SERVICE MANAGEMENT:
- Some defect types take longer to correct than others
- The distribution of defect types is not uniform
- Aspects such as culture/nation, business area, licensing level (customer value) or size of the user community influence the bug-solving process
  - Is there a concordance in the ordering between fault impact (user perspective) and urgency (support perspective)? Does that concordance vary from country to country?
  - Does business area (e.g. public administration, banking, military, education, utilities) have an influence on incident priority?
  - Is there a relation between licensing levels and the volume of incidents (by country, by incident type, by business area)?

How can we assess these "theories" or evaluate these claims?

Evaluating the claims
- Engineering method: prototype, test and improve the solution until it requires no further improvement!
  - A trial-and-error approach
  - When should we stop?
- Empirical method
- Analytical method

Evaluating the claims
- Engineering method
- Empirical method: a model is proposed and evaluated through empirical studies
  - The evaluation can follow different strategies, such as surveys, case studies or controlled experiments
  - Criteria for stopping the evaluation can be defined in advance
  - Systematic approaches (with a well-defined process) to validation exist
  - The precision of the results depends on collecting a large sample
- Analytical method

Evaluating the claims
- Engineering method
- Empirical method
- Analytical method: develop a formal theory and compare its derived results with empirical observations
  - Formal theories are often hard to conceive and express
  - Requires fewer observations than the empirical method

Empirical strategies: Survey
- Objectives: descriptive (of a population), explorative
- Process:
  - Document possible relationships (e.g. build a taxonomy)
  - Poll after events have occurred (post-mortem)
  - Collect data through interviews or questionnaires
- Example: attempt to determine the major problems encountered in doing program maintenance by sending a survey to subscribers of a practitioners' journal, asking respondents to rank a list of problem types by order of importance

Empirical strategies: Case study / action research
- Objectives: comparison, phenomena interpretation
- Process:
  - Identify key factors that affect the outcome of an activity, and then document the activity (inputs, constraints, resources, and outputs)
  - Collect data in a work environment or real-world situation
- Example: for one project, observe how practitioners use a given tool. Train them in the use of the tool, observe how proficiency improved, and build an interpretation of the phenomenon

Empirical strategies: Controlled experiment
- Objectives: confirm theories or conventional wisdom, explore relationships, evaluate the accuracy of models, validate measures
- Process:
  - Controlled investigation of an activity: identify the key factors and manipulate them in order to document their effects on the outcomes
  - Data is collected from subject performance
- Example: indentation study, where different groups are assigned maintenance activities based upon code indented differently

Comparison of empirical strategies

FACTOR              | Survey   | Case Study | Experiment
Execution control   | None/Low | Medium     | High
Measurement control | Low      | Low        | High
Investigation cost  | Low      | Medium     | High
Ease of replication | Low      | Medium     | Medium

Execution control describes how much control the researcher has over the study.
Measurement control is the degree to which the researcher can decide which measures are to be collected, and which to include or exclude during the study.

The Scientific Method
- It is the fundamental technique used by scientists to raise hypotheses and produce theories
- Assumption: the world is a cosmos, not a chaos
  - Scientific knowledge is predictive (positivist philosophy)
  - Cause-and-effect relationships exist
- Knowledge in an area is expressed as a set of theories
  - Theories are raised based upon hypotheses
  - Hypotheses are tested through experiments
- The scientific method progresses through a series of steps

Steps in the Scientific Method (i/iii)
1 - Observe facts
- Fact means the "quality of being actual" or "a piece of information presented as having objective reality"
2 - Formulate a hypothesis
- A hypothesis is a tentative theory that has not been tested (knowledge before experimental work is performed)
- Formulation can be performed through:
  - induction (generalization of observed facts)
  - abduction (suggestion that something could be)

Steps in the Scientific Method (ii/iii)
3 - Test the hypothesis
- Build experiments to see if the hypothesis holds
- Collect data
- Check that measurements are valid and eliminate outliers
- Perform statistical tests
- Document results to communicate to colleagues (results packaging)
- Use the hypothesis to make predictions and compare them with newly observed facts
- Experiments can only prove that a hypothesis is false
- If unsure, revise the hypothesis (step 2) in light of new experiments or observations

Steps in the Scientific Method (iii/iii)
4 - Raise a theory
- After extensive experimentation corroborating the hypothesis
- A theory is a conceptual framework that explains existing facts or predicts new facts
5 - Express a law
- A law is a theory or group of theories that has been widely confirmed
- Confirmation can be obtained with intensive "in vivo" evidence
- A law should delimit its own application scope, e.g. Newton's laws (hold for velocities much less than the speed of light)
- Laws (as well as theories) are open to rebuttal

Question: Do we do experimentation in Software Engineering?

Survey on experimental practices in Software Engineering [Sjøberg et al., 2005]
- 5,453 articles in 12 major journals and conferences in software engineering (1993-2002)
- Only 1.9% (103) reported controlled experiments
- 14 series of experiments (only 6 performed independently)
  - 5 series included partial rejection of the claims of the initiating experiment
  - Only 1 of the rejections was reported by the initial authors
- Other surveys report less than 10% of experiment-based validation papers in software engineering and computer science [Glass et al., 2002; Ramesh et al., 2004]

Why is experimentation not common practice in Software Engineering?
The five fears of experimental validation (F. Brito e Abreu, 1997):
- Shareholders: commercial fear
- Project managers: budgeting fear
- Team members: evaluation fear
- Software engineers: misinterpretation fear
- Researchers: apathy fear (not anymore)

Why is experimentation not common practice in Software Engineering? (Walter Tichy, 1998)
- The traditional scientific method isn't applicable
- The current level of experimentation is good enough
- Experiments cost too much
- Demonstrations will suffice
- There's too much noise in the way
- Experimentation will slow progress
- Technology changes too fast
- Software developers, not trained in the importance of the scientific method, are not sure how to analyze data

Replication
- Replication of experiments is required for wide acceptance of theories (e.g. F&DA). Why?
  - A single experiment can't be expected to provide definitive evidence on an issue, namely because threats to validity may bias the results
  - Replication can be used to remove those biases
- Fear of lack of originality of the replicated experiment? The major venues of experimentation encourage replication

Replication
Many fundamental results in Software Engineering suffer from threats to validity that can be addressed by replication studies. The primary goal of this workshop is to raise the perceived value of replication work by creating both recognition for, and awareness of, replication studies. The workshop aims to encourage revisiting results, including those that have long been accepted but which in fact have only weak empirical support. In addition, the workshop seeks to identify and suggest solutions for recurring practical problems in selecting, designing, and performing replication studies. The workshop also seeks to advance the state of research reporting techniques and tool development and deployment, with a focus on making experiments repeatable and tools more reusable. By providing a venue in which researchers can discuss tools, methods, results and philosophical foundations of replication, this workshop will help to advance the empirical methods and scientific rigor of the Software Engineering community.
More info: ...

Replication
- Replicating experiments is harder than it seems (the tacit knowledge problem), especially if people are involved
  - It is not easy to repeat an experiment under the same conditions if the human factor has a strong influence
  - However, if the sample is large, the individual influences get averaged out and thus cancelled
- Guidelines for experimentation could certainly help mitigate those difficulties!
  - You can find a set of proposed software engineering experiment reporting guidelines on the ESE course page at FCT/UNL: http://ctp.di.fct.unl.pt/phd/ese

Replication requirements
- To perform replication we need access to the experiments' data samples
  - Early attempts to build experimental data repositories in the software field had limited success; a good exception is the International Software Benchmarking Standards Group (AU)
- Experimentation in general requires samples of considerable size, relative to real-world software development projects, namely including process data (efforts, schedules, defect data, etc.)
  - This often implies a relationship between universities and software companies, often encompassing non-disclosure agreements

Who's teaching ESE?
Several CS departments have recently started dedicated ESE courses or include this topic very strongly in their SE courses:
- Colorado State University (USA)
- George Mason University (USA)
- University of Texas at Austin (USA)
- University of Maryland (USA)
- Walden University (USA)
- Worcester Polytechnic Institute (USA)
- University of Calgary (Canada)
- Oregon State University (USA)
- Florida Atlantic University (USA)
- University of Otago (New Zealand)
- University of Sannio (Italy)
- University of Oulu (Finland)
- Linköpings Universitet (Sweden)
- Lund University (Sweden)
- University of Skövde (Sweden)
- Kaiserslautern University (Germany)
- NTNU University (Norway) - 2005 version
- Technical University of Sydney (Australia)
- Universidade Nova de Lisboa (Portugal)

Some landmark references
- A journal: Empirical Software Engineering: An International Journal, http://www.kluweronline.com/issn/1382-3256
- ... and a book: Experimentation in Software Engineering: An Introduction, Claes Wohlin et al., Kluwer Academic Publishers, November 1999, ISBN: 0792386825

Competence centers in ESE
- SERG - Software Engineering Research Group, Lund University (Sweden)
- ESEG - Experimental Software Engineering Group (USA)
- Fraunhofer IESE (Germany), founded in 1996, directed by Prof. Dieter Rombach

USA Government support
- CeBASE: an initiative sponsored by the National Science Foundation
  - University of Maryland College Park
  - University of Southern California
  - Fraunhofer Center for Experimental Software Engineering - Maryland
  - University of Nebraska-Lincoln
  - Mississippi State University

Tutorial outline
- Requirements Definition
- Design Planning
- Experiment Execution
- Data Analysis
- Results Packaging

– Requirements Definition –
Fernando Brito e Abreu (fba@di.fct.unl.pt)
Universidade Nova de Lisboa (http://www.unl.pt)
QUASAR Research Group (http://ctp.di.fct.unl.pt/QUASAR)

Experimental Process
- Presents a process model that acts as:
  - a guideline for conducting experiments
  - a framework for supporting the comparison of experiments
- Integration of contributions from:
  - experiment conduction guidelines
  - experiment reporting guidelines
  - models for representing experimental data
- Proposed in: [Goulão & Abreu, 2007] "Modeling the experimental software engineering process", QUATIC'2007, FCT/UNL, Sept. 2007

Overview of the experimental process
[Process model diagram. Callout: this is the whole process model, but we also need to understand the ontology behind it.]

Experimental steps
- Requirements definition: what are the goals of the experiment?
- Design planning: define who, when and how the experiment will be conducted
- Experiment execution: prepare and collect data in a controlled way
- Analysis and interpretation: check measurements and analyze data to test the hypotheses
- Results packaging: interpret and document results to communicate to colleagues

Problem statement
- what is the problem that the experiment will address
- where can it be observed
- when can it be observed
- who can observe it or is concerned with it
- how does the problem impact those experiencing it
- why solving the identified problem is important

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
Rudyard Kipling, Literature Nobel Prize, 1907

Objectives definition
- The objectives can be summarized using an abstract template with just five items (next slide)
- This systematic description:
  - promotes an explicit goal-driven approach to experimental work
  - helps the researcher delimit the experiment's boundaries and focus on its essential goals
  - provides an abstract matching mechanism, helping other researchers search for experiments that are relevant to their own area of concern; this is common practice in other sciences (e.g. medicine)

Objectives definition template
- Analyze <Object of study>: the entity being studied
- For the purpose of <Purpose>: the intention of the study
- With respect to their <Quality focus>: the primary effect under study
- From the point of view of the <Perspective>: the viewpoint for interpreting the results
- In the context of <Context>: the environment in which the experiment is run

Objectives definition template
- Object of study: e.g. product, process, resource, model, theories
- Purpose: e.g. compare two different techniques, characterize a learning curve, analyze the impact of using a tool, replicate a study
- Quality focus: e.g. effectiveness, cost, reliability, maintainability, portability
- Perspective: e.g. developer, program manager, customer, user, researcher
- Context: e.g. practitioners in a software house, students in a course

Example 1 (indentation study):
- Analyze program indentation levels
- For the purpose of evaluation
- With respect to their effectiveness in program comprehension
- From the point of view of the researcher
- In the context of undergraduate students in a programming course
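The five-slot template is just structured data, so it can be captured in code. The following is a minimal sketch (the class name and fields are our own illustration, not part of the template's definition) that stores the five items and prints them back as the canonical sentence, using Example 1 above:

    from dataclasses import dataclass

    @dataclass
    class ExperimentGoal:
        object_of_study: str   # entity being studied
        purpose: str           # intention of the study
        quality_focus: str     # primary effect under study
        perspective: str       # viewpoint for interpreting the results
        context: str           # environment in which the experiment is run

        def __str__(self) -> str:
            return (f"Analyze {self.object_of_study} "
                    f"for the purpose of {self.purpose} "
                    f"with respect to their {self.quality_focus} "
                    f"from the point of view of the {self.perspective} "
                    f"in the context of {self.context}")

    # Example 1 (indentation study) expressed with the template
    goal = ExperimentGoal(
        object_of_study="program indentation levels",
        purpose="evaluation",
        quality_focus="effectiveness in program comprehension",
        perspective="researcher",
        context="undergraduate students in a programming course",
    )
    print(goal)

Keeping goals in such uniform records is one way to realize the "abstract matching mechanism" mentioned above: two experiments can be compared slot by slot.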

Example 2 (ITSM study):
- Analyze SLAs in providing IT services
- For the purpose of comparing system- and model-based compliance verification
- With respect to their effectiveness and efficiency
- From the point of view of the service provider
- In the context of a network of financial self-service terminals

Context definition
The context includes:
- where the experiment will take place (environment): university course, software house, client company
- who will be involved in the experiment: those subjects should be characterized (e.g. number, experience, workload)
- which software artifacts are used in the experiment: those artifacts should be characterized (e.g. type, size, complexity, application domain)

Context definition
- A specified context:
  - has its own benefits, costs, and risks
  - facilitates comparability among different studies
  - allows practitioners to evaluate to what extent the results obtained in a study apply to their specific needs
  - determines our ability to generalize from the experimental results
- At this phase, an informal assessment of the context is sufficient; the context will be detailed during the design phase

– Design Planning –
Fernando Brito e Abreu (fba@di.fct.unl.pt)
Universidade Nova de Lisboa (http://www.unl.pt)
QUASAR Research Group (http://ctp.di.fct.unl.pt/QUASAR)

Experiment steps
- Requirements definition: what are the goals of the experiment?
- Design planning: define who, when and how the experiment will be conducted
- Experiment execution: prepare and collect data in a controlled way
- Analysis and interpretation: check measurements and analyze data to test the hypotheses
- Results packaging: interpret and document results to communicate to colleagues

BASIC CONCEPTS

Population and sample
- A population is the universe of all the elements from which a sample can be drawn for an experiment
  - Examples: Java practitioners, UML 2.0 models, programs produced in C#
- A sample is a number of elements drawn from a population
  - A sample is usually a subset of the population
  - Examples: my company's programmers, internal projects developed, ...

What are samples for?
- Sample items are used to test hypotheses about the population
- A sample should be representative of the population
  - E.g. in digital communications the analog signal (voice) is sampled, that is, its frequency spectrum (amplitude at several frequencies) is sampled at regular time intervals in order to convert it to digital form
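A simple random sample, in which every population element has the same probability of being drawn, is the most direct way to argue representativeness. A minimal sketch in Python (the population frame below is invented illustration data):

    import random

    # Sampling frame: the population elements we can actually enumerate
    population = [f"java_practitioner_{i}" for i in range(1000)]

    random.seed(42)                           # fixed seed so the draw is repeatable
    sample = random.sample(population, k=30)  # simple random sample of size 30

    print(len(sample), sample[:3])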

Subjects
- A subject (aka case, aka entity) is a population or sample member from which we collect data in the experiment
  - Examples: a specific practitioner, a given UML model, a given program
- Subjects in ESE are usually persons or artifacts, e.g. developers, users, tools, project deliverables
- Subjects of the same kind are characterized by a common set of variables

Variables
- Variables are features (descriptors) of subjects that we measure, control, or manipulate in research
- Each variable is expected to represent a given subject characteristic or "quality"
- A variable has:
  - an identifier
  - a role in our experimental research (independent or dependent), to be discussed later
  - a measurement scale type
  - a statistical distribution
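Since every subject of a kind is described by the same set of variables, a variable itself can be modeled as a small record holding the four attributes just listed. A minimal sketch (the type and field names are our own, for illustration; the scale types are the classic nominal/ordinal/interval/ratio):

    from dataclasses import dataclass
    from typing import Literal

    @dataclass
    class Variable:
        identifier: str
        role: Literal["independent", "dependent"]
        scale: Literal["nominal", "ordinal", "interval", "ratio"]
        distribution: str  # e.g. "normal", or "unknown" until checked empirically

    language = Variable("language", "independent", "nominal", "n/a (factor)")
    effort = Variable("effort_hours", "dependent", "ratio", "unknown")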

Factors and treatments
- A factor level is a distinct value of a given discrete independent variable (aka factor)
  - E.g. 3 factor levels for the language variable: {C, Java, C#}
- A treatment is a tuple of factor levels to be considered in the experiment
  - E.g. for the factors language and IDE we could have the treatments {(Java, Eclipse), (Java, NetBeans), (C#, VisualStudio), ...} (see the sketch after the next slide)

Types of empirical studies
- Correlation studies
  - Attempt to determine how much of a relationship (association or collinearity) exists among variables
  - Cannot establish cause and effect (e.g., life-time expectancy vs. literacy)
  - To measure the strength of the association we use a correlation coefficient
- Regression studies
  - Attempt to produce an estimation model that allows us to determine the values of a dependent (outcome) variable based upon a set of independent variables (aka predictors or regressors)
  - Model parameters are determined by calibration (regression)
  - Models can be linear or non-linear
  - Outcome variables can be continuous or discrete (e.g., logistic regression)
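When every combination of factor levels is used, the set of treatments is the Cartesian product of the factors (a full factorial design); an experiment may of course use only a subset, as in the language/IDE example above. A minimal sketch:

    from itertools import product

    factors = {
        "language": ["C", "Java", "C#"],  # 3 factor levels
        "ide": ["Eclipse", "NetBeans", "VisualStudio"],
    }

    # Each treatment is a tuple of factor levels; full factorial: 3 x 3 = 9
    treatments = list(product(*factors.values()))
    print(treatments)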

Types of empirical studies
- Controlled experiments (in vitro)
  - Attempt to establish cause and effect
  - Independent variables are manipulated (controlled / blocked)
  - Subjects are randomly assigned to groups (a minimal randomization sketch follows below)
- Quasi-experiments (or field or natural experiments)
  - The values of the independent variables are usually predefined (they are "found" in the sample)
  - Subjects cannot be randomly assigned to groups and/or independent variables cannot be fully controlled
- In this course we will concentrate on these latter two types of empirical studies

Independent variables (aka explanatory variables or factors)
- These are the ones whose effect on the dependent variable(s) we want to investigate
- They are the variables that are (expected to be) manipulated in experimental research
- Examples: programming language, development environment, design size, experience/background of subjects
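The random assignment of subjects to groups is what lets a controlled experiment support cause-and-effect claims. A minimal sketch of that randomization step (subject names and treatments are invented illustration data): shuffle the subjects, then deal them round-robin into one group per treatment.

    import random

    subjects = [f"subject_{i:02d}" for i in range(12)]
    treatments = ["indented_code", "non_indented_code"]  # one group each

    random.seed(7)
    random.shuffle(subjects)  # the randomization step
    groups = {t: subjects[i::len(treatments)] for i, t in enumerate(treatments)}

    for treatment, group in groups.items():
        print(treatment, group)

In a quasi-experiment this step is exactly what is missing: the groups arrive pre-formed (e.g., two existing project teams), so any pre-existing group difference can masquerade as a treatment effect.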

Dependent variables (aka outcome or response variables)
- Those on which we want to assess the effect of the independent variables in experimental research
- For each hypothesis we usually have only one dependent variable
- Examples: productivity, design complexity, effort to produce a given deliverable, project schedule, defects found in code inspection, system faults in operation (e.g. MTBF, MTTR)

Exercise: identify the independent and dependent variables
[Diagram with the variables: practitioner, component, component type, assembly complexity, developer, reviewer]

HYPOTHESES FORMULATION

Bringing up hypotheses
- Observation is required
  - Informal: e.g. looking around and asking questions about possible causes for known problems, or for noticeable successes in previous projects
  - Formal: survey related papers or books (e.g. start with this course's bibliography); use qualitative approaches such as the laddering technique or cognitive maps to derive hypothetical causalities perceived by domain experts
- Top-down decomposition
  - Often the hypotheses formulation process progresses from an abstract to a concrete version, whose level of detail is appropriate for clearly identifying the adequate variables

Abstract vs. concrete hypotheses
Example 1:
- Method A produces higher quality code than Method B. (abstract)
- Using Method A will result in fewer defects being discovered during integration testing than using Method B. (concrete)
Example 2:
- Programmers will create better test suites using Tool A than using Tool B. (abstract)
- Test suites created by programmers using Tool A will have higher branch coverage than test suites created by programmers using Tool B. (concrete)

Example hypotheses: abstract or concrete?
1. Programmers using object-oriented programming will produce higher quality programs.
2. Testers will obtain better code coverage using the C Test tool from Parasoft than testers not using this tool.
3. Programmers using a visual language are more productive than programmers using a procedural language.
4. Functional programs are more understandable than procedural programs.
5. Java programs are easier to maintain than C programs.
6. Object-oriented languages encourage more reuse than procedural languages.

Hypotheses formulation
The formulation is formally defined by two complementary statements:
- Null hypothesis (H0): no cause-effect can be observed
- Alternative hypothesis (H1): some effect appears to be present
When we accept the null hypothesis, we reject the alternative one, and vice-versa!

Null hypothesis (H0)
- States that there is no statistically significant difference between treatments (tools, techniques, methods, ...), or that there is no underlying trend or pattern in the outcome variable due to the factors
- The only reasons for differences in the observations are coincidental (due to random error)

Alternative hypothesis (H1)
- States that there is a statistically significant difference between treatments, or an underlying trend or pattern in the outcome variable due to the factors

False positives
- If we inadequately reject a null hypothesis (accept the alternative one), we have a false positive
  - This is called a Type I error (to be seen again later)
  - False positives occur when you think you have observed an effect when in fact that observation is not sustainable
  - P(Type I error) = P(reject H0 | H0 true)
- False positives may be due to an inadequate definition of the confidence level, sampling bias, coincidental data, ...
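To make the H0/H1 mechanics concrete, here is a minimal sketch of testing the Method A vs. Method B hypothesis from the earlier example with an independent two-sample t-test (the defect counts are invented illustration data; scipy is assumed to be available):

    from scipy import stats

    # Defects discovered during integration testing (hypothetical data)
    method_a = [3, 5, 4, 2, 4, 3, 5, 4]
    method_b = [6, 7, 5, 8, 6, 7, 5, 6]

    t_stat, p_value = stats.ttest_ind(method_a, method_b)

    alpha = 0.05  # accepted probability of a Type I error (false positive)
    if p_value < alpha:
        print(f"p = {p_value:.4f} < {alpha}: reject H0 in favour of H1")
    else:
        print(f"p = {p_value:.4f} >= {alpha}: H0 cannot be rejected")

The significance level alpha is exactly the Type I error probability the experimenter is willing to accept: with alpha = 0.05, one experiment in twenty is expected to produce a false positive even when H0 is true.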

False negatives
- If we inadequately accept a null hypothesis (reject the alternative one), we have a false negative
  - This is called a Type II error (to be seen again later)
  - False negatives occur when you could not observe an effect that in fact occurs
  - P(Type II error) = P(accept H0 | H0 false)
- False negatives are due to reduced test power and are usually less "harmful" than false positives

Test power
- Many different statistical tests can be used to test hypotheses
  - The result of a test evaluates the outcome of an experiment
- Statistical tests have several characteristics, such as preconditions for application and test power
  - The power of a statistical test is the probability that it will reveal a true effect
  - Power = 1 - P(Type II error) = P(reject H0 | H0 false)
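Power can be computed, and more usefully planned for, before running the experiment. A minimal a priori power-analysis sketch, assuming the statsmodels package is available; the effect size, alpha and target power below are conventional illustrative choices, not values prescribed by the text:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()  # power analysis for a two-sample t-test
    n_per_group = analysis.solve_power(
        effect_size=0.5,  # assumed true effect (Cohen's d, "medium")
        alpha=0.05,       # P(Type I error)
        power=0.8,        # 1 - P(Type II error)
    )
    print(f"subjects needed per group: {n_per_group:.1f}")  # about 64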

Purpose of the experiment
- Usually, the experimenter would like to reject the null hypothesis with as high a confidence as possible
  - In other words, he wants to find some causality in a given phenomenon
  - This is what the positivist approach to science is all about (believing the world is not a chaos)
- To reject H0, the researcher must conduct an experiment or quasi-experiment where he obtains subject data that show (by applying an adequate statistical test) that there is a significant difference between the treatments

VARIABLES SELECTION

GQM (Goal-Question-Metric)
- Often the variables of interest (e.g., the *-abilities) are not directly measurable and we have to identify indirect metrics
- We can use the Goal-Question-Metric approach to find them:
  1. List the major research goals (usually expressed upon characteristics of the object of study)
  2. Derive from each goal the questions that must be answered to determine if the goals are being met
  3. Decide what to measure in order to answer the questions adequately
- Two examples follow (student admission and the testing process); a small data-structure sketch of a GQM tree comes after them

CS graduate student admission
Major goals:
- Assess the quality of the graduate students admitted
- Evaluate the effectiveness of the admission process
Questions:
- What are the admission criteria?
- Are the criteria working?
- What is the graduate student quality?

CS graduate admission
Measures:
- CS degree, GRE score 1300, positive letters of recommendation
- % of admitted students who complete the degree (MSc and PhD)
- Academic record: comprehensive exam performance, course grades
- Scientific record: number of published papers, theses, citation index

Software testing process
Major goals:
- Improve the design testing process
Questions:
- How is design testing done currently?
- How much time does design testing take?
- How much does design testing cost?
- How efficient is the design testing process?
- How effective is the design testing process?

Software testing process
Measures:
- Number of tests (reviews) per module
- Time spent testing per module
- Cost of design testing
- Weighted errors found / cost of design testing
- % of specification errors found in design testing
- % of design errors found during integration testing
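As promised above, a minimal data-structure sketch of a GQM tree (the dictionary layout is our own illustration): each goal owns the questions derived from it, and each question lists the measures that answer it, using a slice of the design-testing example.

    gqm = {
        "goal": "Improve the design testing process",
        "questions": {
            "How much time does design testing take?": [
                "time spent testing per module",
            ],
            "How efficient is the design testing process?": [
                "weighted errors found / cost of design testing",
            ],
            "How effective is the design testing process?": [
                "% of design errors found during integration testing",
            ],
        },
    }

    for question, measures in gqm["questions"].items():
        print(question, "->", measures)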

SUBJECTS SELECTION

Choosing material subjects (artifacts)
Materials should:
- vary only in the experimental difference being tested
- be representative
- be of an appropriate level of difficulty, neither too hard nor too easy; do not use toy programs (e.g., 100 lines)
- be comparable across the different experimental conditions

Choosing human subjects
- Subjects should be both representative and relatively uniform
  - How do we select a representative sample?
- Subjects should be uniform in characteristics and abilities
  - E.g. there are large individual differences between programmers; how can a homogeneous set be selected from a heterogeneous population?
- Subjects should reflect the characteristics of the population
  - Students are often selected because of convenience (class, grade)
  - What is a typical professional programmer? A software engineer? Do students reflect these characteristics? Can we assess abilities before selection?

Choosing human subjects
- We should have enough subjects to reflect diversity
  - Programmers are characterized by diversity, which requires a large number of subjects
- Solution: 1. ... 2. Gro...
