Data-Driven And Keyword-Driven Test Automation Frameworks


HELSINKI UNIVERSITY OF TECHNOLOGY
Department of Computer Science and Engineering
Software Business and Engineering Institute

Pekka Laukkanen

Data-Driven and Keyword-Driven Test Automation Frameworks

Master's thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Technology.

Espoo, February 24, 2006

Supervisor: Professor Reijo Sulonen
Instructor: Harri Töhönen, M.Sc.

HELSINKI UNIVERSITY OF TECHNOLOGY
ABSTRACT OF THE MASTER'S THESIS

Author: Pekka Laukkanen
Name of the thesis: Data-Driven and Keyword-Driven Test Automation Frameworks
Date: February 24, 2006
Number of pages: 98 + 0
Department: Department of Computer Science and Engineering
Professorship: T-76
Supervisor: Prof. Reijo Sulonen
Instructor: Harri Töhönen, M.Sc.

The growing importance and stringent quality requirements of software systems are increasing demand for efficient software testing. Hiring more test engineers or lengthening the testing time are not viable long-term solutions; rather, there is a need to decrease the amount of resources needed. One attractive solution to this problem is test automation, i.e. allocating certain testing tasks to computers. There are countless approaches to test automation, and they work differently in different contexts. This master's thesis focuses on only one of them, large-scale frameworks for automated test execution and reporting, but other key approaches are also briefly introduced.

The thesis opens its discussion of test automation frameworks by defining their high-level requirements. The most important requirements are identified as ease-of-use, maintainability and, of course, the ability to automatically execute tests and report results. More detailed requirements are derived from these high-level requirements: data-driven and keyword-driven testing techniques, for example, are essential prerequisites for both ease-of-use and maintainability.

The next step in the thesis is constructing and presenting a framework concept fulfilling the defined requirements. The concept and its underlying requirements were tested in a pilot where a prototype of the framework and some automated tests for different systems were implemented. Based on the pilot results, the overall framework concept was found to be feasible. Certain changes to the framework and original requirements are presented, however. The most interesting finding is that it is possible to cover all the data-driven testing needs with the keyword-driven approach alone.

Keywords: test automation, test automation framework, data-driven testing, keyword-driven testing

HELSINKI UNIVERSITY OF TECHNOLOGY
ABSTRACT OF THE MASTER'S THESIS (FINNISH VERSION, TRANSLATED)

Author: Pekka Laukkanen
Name of the thesis: Data-Driven and Keyword-Driven Test Automation Frameworks
Date: February 24, 2006
Pages: 98 + 0
Department: Department of Computer Science and Engineering
Professorship: T-76
Supervisor: Prof. Reijo Sulonen
Instructor: Harri Töhönen, M.Sc.

The growing importance of software systems and their quality requirements put increasing pressure on software testing. The number of test engineers cannot be increased, nor the testing time lengthened, indefinitely; rather, the resources needed should be reduced. One attractive solution is test automation, i.e. handing part of the testing work over to computers. There are numerous different ways to automate testing, and they work differently in different situations and environments. This thesis examines in detail only one approach, large automation frameworks for executing and reporting tests automatically, but the other approaches are important as well.

The discussion of test automation frameworks begins by defining high-level requirements for them. The most important requirements are found to be ease of use, maintainability and, of course, the ability to execute tests and report their results automatically. These lead to more detailed requirements, among them that data-driven and keyword-driven testing techniques are prerequisites for both ease of use and maintainability.

Next, the thesis designs a test automation framework fulfilling the defined requirements. The framework and the requirements behind it are tested in a pilot in which a prototype of the framework itself, as well as automated tests for different kinds of software, are implemented. Based on the results of the pilot, the designed framework can be judged workable in its main principles. Based on the experiences, some changes to the framework and to the original requirements are also proposed. The most interesting finding is that all data-driven tests can be implemented using only the keyword-driven approach.

Keywords: test automation, test automation framework, data-driven testing, keyword-driven testing

Acknowledgements

This master's thesis has been done for a Finnish software testing consultancy company Qentinel, mainly during the year 2005. I wish to thank my instructor Harri Töhönen, M.Sc., and all other Qentinelians for comments, feedback and patience. From the Department of Computer Science and Engineering I first of all want to thank my supervisor, Professor Reijo Sulonen. Additionally, I am grateful to Juha Itkonen and Casper Lassenius for their support and valuable comments.

I also want to express my gratitude to Mark Fewster, who was kind enough to review the thesis in its early form. Mark's comments and positive feedback made me believe that the ideas I present are valid and that the remaining hard work is worth the effort. I also want to thank Petri Haapio, who has been in charge of both the automation project where I got the original idea for this thesis and a new one where the automation framework presented in this thesis has been successfully taken into real use.

Finally, I would like to thank my family and my wonderful girlfriend Sonja for everything. Kiitos.

Espoo, February 24, 2006

Pekka Laukkanen

Contents

Terms

1 Introduction
  1.1 Promises and Problems of Test Automation
  1.2 Different Test Automation Approaches
    1.2.1 Dynamic vs. Static Testing
    1.2.2 Functional vs. Non-Functional Testing
    1.2.3 Granularity of the Tested System
    1.2.4 Testing Activities
    1.2.5 Small Scale vs. Large Scale Test Automation
  1.3 Scope
  1.4 Methodology
  1.5 Goals
  1.6 Structure

2 Requirements for Test Automation Frameworks
  2.1 High Level Requirements
  2.2 Framework Capabilities
    2.2.1 Executing Tests Unattended
    2.2.2 Starting and Stopping Test Execution
    2.2.3 Handling Errors
    2.2.4 Verifying Test Results
    2.2.5 Assigning Test Status
    2.2.6 Handling Expected Failures
    2.2.7 Detailed Logging
    2.2.8 Automatic Reporting
  2.3 Modularity
    2.3.1 Linear Test Scripts
    2.3.2 Test Libraries and Driver Scripts
    2.3.3 Promises and Problems
  2.4 Data-Driven Testing
    2.4.1 Introduction
    2.4.2 Editing and Storing Test Data
    2.4.3 Processing Test Data
    2.4.4 Promises and Problems
  2.5 Keyword-Driven Testing
    2.5.1 Introduction
    2.5.2 Editing and Storing Test Data
    2.5.3 Processing Test Data
    2.5.4 Keywords in Different Levels
    2.5.5 Promises and Problems
  2.6 Other Implementation Issues
    2.6.1 Implementation Language
    2.6.2 Implementation Technique
    2.6.3 Testware Architecture
  2.7 Testability
    2.7.1 Control
    2.7.2 Visibility
  2.8 Roles
    2.8.1 Test Automation Manager
    2.8.2 Test Automation Architect
    2.8.3 Test Automator
    2.8.4 Test Designer
  2.9 Detailed Requirements
  2.10 Chapter Summary

3 Concept for Large Scale Test Automation Frameworks
  3.1 Framework Structure
    3.1.1 Test Design System
    3.1.2 Test Monitoring System
    3.1.3 Test Execution System
  3.2 Presenting and Processing Data-Driven Test Data
    3.2.1 Presenting Test Cases
    3.2.2 Using Test Data
    3.2.3 Example
  3.3 Presenting and Processing Keyword-Driven Test Data
    3.3.1 Presenting Test Cases
    3.3.2 Presenting User Keywords
    3.3.3 Using Test Data
  3.4 Chapter Summary

4 Implementation and Pilot
  4.1 Implementation Decisions
    4.1.1 Technical Decisions
    4.1.2 Decisions Regarding the Pilot
  4.2 Implementing Reusable Framework Components
    4.2.1 Test Data Parser
    4.2.2 Logger
    4.2.3 Summary
  4.3 Data-Driven Windows Application Testing
    4.3.1 Test Data
    4.3.2 Driver Script
    4.3.3 Test Library
    4.3.4 Test Log
    4.3.5 Summary
  4.4 Keyword-Driven Windows Application Testing
    4.4.1 Test Data
    4.4.2 Driver Script
    4.4.3 Test Library
    4.4.4 Test Log
    4.4.5 Summary
  4.5 Keyword-Driven Web Testing
    4.5.1 Test Data
    4.5.2 Driver Script
    4.5.3 Test Library
    4.5.4 Test Log
    4.5.5 Summary
  4.6 Chapter Summary

5 Results
  5.1 Feasibility of the Framework Concept
  5.2 Changes to the Framework and Requirements
    5.2.1 Using Only Keyword-Driven Approach
    5.2.2 Set Up and Tear Down
    5.2.3 Test Suites
    5.2.4 Generic Driver Script
  5.3 Revised List of Requirements
  5.4 Chapter Summary

6 Conclusions

Bibliography

Terms

Acceptance Testing: A level of testing conducted from the viewpoint of the customer, used to establish the criteria for acceptance of a system. Typically based upon the requirements of the system. (Craig and Jaskiel, 2002)

Action Word: See keyword.

Actual Outcome: Outputs and data states that the system under test produces from test inputs. See also expected outcome. (Fewster and Graham, 1999)

Automation: See test automation.

Automation Framework: See test automation framework.

Base Keyword: A term defined in this thesis for keywords implemented in a test library of a keyword-driven test automation framework. See also user keyword.

Black-Box Testing: A type of testing where the internal workings of the system are unknown or ignored. Testing to see if the system does what it is supposed to do. (Craig and Jaskiel, 2002)

Bug: See defect.

Capture and Replay: A scripting approach where a test tool records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Often also called record and playback. (BS 7925-1)

Component: One of the parts that make up a system. A collection of units with a defined interface towards other components. (IEEE Std 610.12-1990)

Component Testing: Testing of individual components or groups of related components. (IEEE Std 610.12-1990)

Context-Driven Testing: A testing methodology that underlines the importance of the context where different testing practices are used over the practices themselves. The main message is that there are good practices in a context but there are no general best practices. (Kaner et al., 2001)

Control Script: See driver script.

Data-Driven Testing: A scripting technique that stores test inputs and expected outcomes as data, normally in a tabular format, so that a single driver script can execute all of the designed test cases. (Fewster and Graham, 1999)

Defect: Introduced into software as the result of an error. A flaw in the software with potential to cause a failure. Also called fault or, informally, bug. (Craig and Jaskiel, 2002; Burnstein, 2003)

Domain Code: Part of the application code which contains system functionality. See also presentation code. (Fowler, 2001)

Dynamic Testing: The process of evaluating a system or component based on its behavior during execution. See also static testing. (IEEE Std 610.12-1990)

Driver: A software module that invokes and, perhaps, controls and monitors the execution of one or more other software modules. (IEEE Std 610.12-1990)

Driver Script: A test script that drives the test execution process using testing functionality provided by test libraries and may also read test data from external sources. Called a control script by Fewster and Graham (1999).

Error: A mistake, misconception, or misunderstanding on the part of a software developer. (Burnstein, 2003)

Expected Failure: Occurs when a test case which has failed previously fails again similarly. Derived from Fewster and Graham (1999).

Expected Outcome: Outputs and data states that should result from executing a test. See also actual outcome. (Fewster and Graham, 1999)

Failure: Inability of a software system or component to perform its required function within specified performance criteria. The manifestation of a defect. (IEEE Std 610.12-1990; Craig and Jaskiel, 2002)
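To make the data-driven testing and driver script definitions above concrete, here is a minimal sketch in Python. The file name testdata.csv, its column layout, and the add function standing in for the system under test are all invented for this illustration; they are not taken from the thesis pilot.

    import csv

    def add(a, b):
        # Hypothetical stand-in for the system under test.
        return a + b

    def run_data_driven_tests(path):
        """Driver script: executes the same test logic for every row of
        externally stored test data (inputs and expected outcome)."""
        failures = 0
        with open(path, newline="") as data:
            for row in csv.DictReader(data):  # assumed columns: a, b, expected
                actual = add(int(row["a"]), int(row["b"]))
                if actual != int(row["expected"]):
                    failures += 1
                    print(f"FAIL: add({row['a']}, {row['b']}) returned {actual}, "
                          f"expected {row['expected']}")
        print("All tests passed" if not failures else f"{failures} test(s) failed")

    if __name__ == "__main__":
        run_data_driven_tests("testdata.csv")

Adding a new test case is then a matter of adding one row of data with no new script code, which is exactly the maintainability argument behind the technique.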

Fault: See defect.

Functional Testing: Testing conducted to evaluate the compliance of a system or component with specified functional requirements. (IEEE Std 610.12-1990)

Feature: A software characteristic specified or implied by requirements documentation. (IEEE Std 610.12-1990)

Framework: An abstract design which can be extended by adding more or better components to it. An important characteristic of a framework that differentiates it from libraries is that the methods defined by the user to tailor the framework are called from within the framework itself. The framework often plays the role of the main program in coordinating and sequencing application activity. (Johnson and Foote, 1988)

Integration Testing: A level of test undertaken to validate the interface between internal components of a system. Typically based upon the system architecture. (Craig and Jaskiel, 2002)

Keyword: A directive that represents a single action in keyword-driven testing. Called action words by Buwalda et al. (2002).

Keyword-Driven Testing: A test automation approach where test data and also keywords instructing how to use the data are read from an external data source. When test cases are executed, keywords are interpreted by a test library which is called by a test automation framework. See also data-driven testing. (Fewster and Graham, 1999; Kaner et al., 2001; Buwalda et al., 2002; Mosley and Posey, 2002)

Library: A controlled collection of software and related documentation designed to aid in software development, use, or maintenance. See also framework. (IEEE Std 610.12-1990)

Non-Functional Testing: Testing of those requirements that do not relate to functionality, for example performance and usability. (BS 7925-1)

Manual Testing: Manually conducted software testing. See also test automation.

Oracle: A document or piece of software that allows test engineers or automated tests to determine whether a test has been passed or not. (Burnstein, 2003)

Precondition: Environmental and state conditions which must be fulfilled before a test case can be executed. (BS 7925-1)
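As a rough sketch of the keyword-driven testing definition above: a test library implements the base keywords, the framework maps keyword names found in the test data to that library, and test cases become sequences of keyword rows. All keyword names and the in-memory "application" below are invented for the example; a real framework would read the rows from an external file and log results instead of stopping at the first failed assertion.

    # Hypothetical test library: each function implements one base keyword
    # against an imaginary single-text-field application.
    state = {"text": ""}

    def input_text(value):
        state["text"] = value

    def clear_text():
        state["text"] = ""

    def verify_text(expected):
        assert state["text"] == expected, \
            f"got {state['text']!r}, expected {expected!r}"

    # Mapping from keyword names, as written in the test data, to library code.
    KEYWORDS = {
        "Input Text": input_text,
        "Clear Text": clear_text,
        "Verify Text": verify_text,
    }

    def execute(test_case):
        """Interpret rows of (keyword, arguments...) read from test data."""
        for keyword, *args in test_case:
            KEYWORDS[keyword](*args)

    # One test case as it might appear in an external test design table.
    execute([
        ("Input Text", "hello"),
        ("Verify Text", "hello"),
        ("Clear Text",),
        ("Verify Text", ""),
    ])

A user keyword, in the thesis's terminology, would be a named sequence of such rows defined in the test design system rather than in the programmed library.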

Predicted Outcome: See expected outcome.

Presentation Code: Part of the application code which makes up the user interface of the system. See also domain code. (Fowler, 2001)

Quality: (1) The degree to which a system, component, or process meets specified requirements. (2) The degree to which a system, component, or process meets customer or user needs or expectations. (IEEE Std 610.12-1990)

Record and Playback: See capture and replay.

Regression Testing: Retesting previously tested features to ensure that a change or a defect fix has not affected them. (Craig and Jaskiel, 2002)

Requirement: A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documents. Can be either functional or non-functional. (IEEE Std 610.12-1990)

Set Up: Code that is executed before each automated test case in one particular test suite. A related term used in manual testing is precondition. See also tear down.

Smoke Testing: A test run to demonstrate that the basic functionality of a system exists and that a certain level of stability has been achieved. (Craig and Jaskiel, 2002)

Software Test Automation: See test automation.

Software Testing: The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. (IEEE Std 610.12-1990)

Static Testing: The process of evaluating a system or component based on its form, structure, content, or documentation. See also dynamic testing. (IEEE Std 610.12-1990)

System Testing: A comprehensive test undertaken to validate an entire system and its characteristics. Typically based upon the requirements and design of the system. (Craig and Jaskiel, 2002)

System Under Test (SUT): The entire system or product to be tested. (Craig and Jaskiel, 2002)

Tear Down: Code that is executed after each automated test case in one particular test suite. Test automation frameworks run tear downs regardless of the test status, so actions that must always be done (e.g. releasing resources) should be done there. See also set up.

Test Automation: The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. (BS 7925-1)

Test Automation Framework: A framework used for test automation. Provides some core functionality (e.g. logging and reporting) and allows its testing capabilities to be extended by adding new test libraries.

Test Case: A set of inputs, execution preconditions and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (BS 7925-1)

Test Oracle: See oracle.

Test Outcome: See actual outcome.

Test Runner: A generic driver script capable of executing different kinds of test cases, not only variations with slightly different test data.

Test Suite: A collection of one or more test cases for the software under test. (BS 7925-1)

Test-Driven Development (TDD): A development technique where automated unit tests are written before the system code. Tests drive the design and development of the system, and a comprehensive regression test suite is obtained as a by-product. (Beck, 2003)

Testability: A characteristic of the system under test that defines how easily it can be tested. Consists of visibility and control. (Pettichord, 2002)

Testing: See software testing.

Testware: The artifacts required to plan, design and execute test cases, such as documentation, scripts, inputs, expected outcomes, set up and tear down procedures, files, databases, environments and any additional software or utilities used in testing. (Fewster and Graham, 1999)

Unit: A piece of code that performs a function, typically written by a single programmer. (Craig and Jaskiel, 2002)

Unit Testing: A level of test undertaken to validate a single unit of code. Unit tests are typically automated and written by the programmer who has written the code under test. (Craig and Jaskiel, 2002)

User Keyword: A term defined in this thesis for keywords constructed from base keywords and other user keywords in a test design system. User keywords can be created easily even without programming skills.

White-Box Testing: Testing based upon knowledge of the internal structure of the system. Testing not only what the system does, but also how it does it. (Craig and Jaskiel, 2002)

xUnit Frameworks: Frameworks that ease writing and executing automated unit tests, provide set up and tear down functionalities for them, and allow constructing test suites. The most famous xUnit framework is JUnit for Java, but implementations exist for most programming languages. (Hamill, 2004)
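The set up, tear down and xUnit entries above map directly onto Python's standard unittest module, one of the many xUnit implementations; a minimal illustration with an invented temporary-file resource follows. As the Tear Down entry notes, tearDown runs whether the test passes or fails, so the resource is always released.

    import os
    import tempfile
    import unittest

    class TemporaryFileTest(unittest.TestCase):

        def setUp(self):
            # Set up: executed before each test case in this suite.
            self.fd, self.path = tempfile.mkstemp()

        def tearDown(self):
            # Tear down: executed after each test case regardless of its
            # status, so the resource is always released.
            os.close(self.fd)
            os.remove(self.path)

        def test_file_exists(self):
            self.assertTrue(os.path.exists(self.path))

        def test_file_is_initially_empty(self):
            self.assertEqual(os.path.getsize(self.path), 0)

    if __name__ == "__main__":
        unittest.main()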

Chapter 1

Introduction

Software systems are getting more and more important for organizations and individuals alike, and at the same time they are growing bigger and more complex. It is thus only logical that the importance of software quality¹ is also rising. Software faults have caused losses of huge sums of money and even human lives. If quality does not get better as systems grow in size, complexity and importance, these losses are only getting bigger. (Burnstein, 2003)

The need for better quality means more pressure for software testing and for the test engineers taking care of it. Test automation, i.e. giving some testing tasks to computers, is an obvious way to ease their workload. Computers are relatively cheap, they are faster than humans, they do not get tired or bored, and they work over weekends without extra pay. They are not ideal workhorses, however, as they only find defects in places where they are explicitly told to search for them, and they easily get lost if something in the system under test (SUT) changes. Giving computers all the needed details is not easy and takes time. (Fewster and Graham, 1999)

Test automation can be used in multiple ways. It can and should be used differently in different contexts, and no single automation approach works everywhere. Test automation is no silver bullet either, but it has a lot of potential, and when done well it can significantly help test engineers to get their work done. (Fewster and Graham, 1999)

This thesis concentrates on larger test automation frameworks designed for test execution and reporting. Before the scope can be defined in a more detailed manner, some background information about different automation approaches is needed, and that is presented in Section 1.2. Even before that, it is time to investigate a bit more thoroughly why test automation is needed and what the main challenges are in taking it into use.

¹ New terms are emphasized when used for the first time, and their explanations can be found in the list of terms above.

1.1 Promises and Problems of Test Automation

A comprehensive list of test automation promises, as presented by Fewster and Graham (1999), is shown in Table 1.1. Similar promises have also been reported by other authors like Pettichord (1999), Nagle (2000) and Kaner et al. (2001).

Most of the benefits in Table 1.1 can be summarized with the words efficiency and reuse. Test automation is expected to help run lots of test cases consistently again and again on different versions of the system under test. Automation can also ease test engineers' workload and release them from repetitive tasks. All this has the potential to increase software quality and shorten testing time.

All these promises make test automation look really attractive, but achieving them in real life requires plenty of hard work. If automation is not done well, it will be abandoned and the promises will never be realized. A list of common test automation problems, again by Fewster and Graham (1999), can be found in Table 1.2.

The general problem with test automation seems to be forgetting that any larger test automation project is a software project in its own right. Software projects fail if they do not follow processes and are not managed adequately, and test automation projects are no different. Of all people, test engineers ought to realize how important it is to have a disciplined approach to software development. (Kaner, 1997; Zambelich, 1998; Fewster and Graham, 1999; Pettichord, 1999; Kaner et al., 2001; Zallar, 2001; Rice, 2003)

1.2 Different Test Automation Approaches

This section briefly introduces the main test automation categories as an introduction and background for the rest of this thesis. The focused scope and target of this thesis are defined in the next section.

Run existing regression tests on a new version of a program: Being able to run previously created tests without extra effort clearly makes testing more efficient.

Run more tests more often: Automation means faster test execution, which means more test rounds. Automation should also make creating new test cases easy and fast.

Perform tests which would be difficult or impossible to do manually: For example, performance and stress tests are nearly impossible to conduct without automation.

Better use of resources: Automating repeating and boring tasks releases test engineers for more demanding and rewarding work.

Consistency and repeatability of tests: Tests are always run the same way, so test results can be consistently compared to results from previous testing rounds. Tests can also be easily repeated in different environments.

Reuse of tests: Reusing tests from earlier projects gives a kick-start to a new project.

Earlier time to market: Reusing tests and shortening test execution time fastens the feedback cycle to developers. In the end that shortens the time to market.

Increased confidence: Running an extensive set of tests often, consistently and on different environments successfully increases the confidence that the product really is ready to be released.

Table 1.1: Common test automation promises (Fewster and Graham, 1999)

Unrealistic expectations: Managers may believe test automation will solve all their testing problems and magically make the software quality better. Automation experts should help managers set their expectations right.

Poor testing practice: If testing practices and processes are inadequate, it is better to start improving them than bringing in test automation. Automating chaos just gives faster chaos.

Expectation that automated tests will find a lot of new defects: After an automated test has been run successfully once, it is not very likely to find new bugs unless the tested functionality changes. Automators normally find more defects while they are developing tests than when tests are re-executed.

False sense of security: Just seeing a test report with no failures does not mean that the SUT did not have any. Tests may be incomplete, either not testing all features or not able to see failures when they occur. Tests may also have defects and show wrong results.

Maintenance: When the SUT changes, its tests also change. Human test engineers are able to handle even major changes without problems, but automated tests can fail after the slightest change. If maintaining the test automation system takes more time than testing manually, it will surely be abandoned. The same will happen if adding new features to the automation system is too cumbersome.

Technical problems: Building a test automation system and taking it into use is a technical challenge which is unlikely to proceed without problems. Tools may be incompatible with the tested system.

Table 1.2: Common test automation problems (Fewster and Graham, 1999)
