Chapter VIII

PRINCIPLES OF EXPERIMENTAL DESIGNS

H. Bradley Wells*


Many field studies have been conducted to determine the impact of family planning programmes on fertility but, when examined carefully, few would qualify as true experiments. Some design features and limitations of the better known experiments are reviewed by several authors.1 Another report2 examines the design and methodological features of 40 selected studies of "fertility planning" and classifies them under eight categories. These categories are implicitly arranged more or less in order of design strength, in a range from strong to weak, from true randomized experiments to a series of non-randomized quasi-experiments.3

The aim in this chapter is to describe and illustrate the common elements of both types of designs, true randomized experiments and non-randomized quasi-experiments, and to amplify their major differences with particular reference to the way in which those experiments might be useful in determining the impact of family planning programmes on fertility levels. Before proceeding to the major purpose, an attempt is made to set the stage by describing the potential place of experiments and quasi-experiments in family planning programme evaluation.

In practice, evidence of programme effects is usually determined in a piecemeal fashion. The logical processes and data required for informed decision making are of such magnitude that an over-all evaluation plan is required as part of the programme itself. Such a plan could include a variety of different information systems, including both randomized experiments and prospective and retrospective non-randomized quasi-experiments, all of them phased and carried out over time so that administrators, planners and politicians have the best available evidence at each point in time for programme planning and evaluation. Sound evaluation requires advance planning, which must be included as an integral component of programme management.

The 40 studies of fertility planning referred to above include a wide range of programmes or treatments intended to increase the acceptance or continued use of fertility regulating practices.4 Although the ultimate aim was to reduce fertility rates, in these studies with few exceptions the outcome measures used for assessing programme impact were (short-term measures of) reported behaviour indicating acceptance of contraceptive methods rather than fertility rates. Use of such intermediate behavioural measures can be justified on the grounds that programme administrators need "useful information early"5 for making decisions about programme organization.

* Director, International Program of Laboratories for Population Statistics, University of North Carolina at Chapel Hill, North Carolina.
1 W. Parker Mauldin and John A. Ross, "Family planning experiments: a review of design", in Proceedings of the Social Statistics Section, 1965 (Washington, D.C., American Statistical Association, 1966); Roberto Cuca and Catherine S. Pierce, "Experimentation in family planning delivery systems: an overview", Studies in Family Planning, vol. 8, No. 12 (December 1977), pp. 302-310.
2 E. T. Hilton and A. A. Lumsdaine, "Field designs in gauging the impact of fertility planning programs", in C. A. Bennett and A. A. Lumsdaine, eds., Evaluation and Experiment (New York, Academic Press, 1975), chap. 5.
3 Somewhat along the lines given in Donald T. Campbell and Julian C. Stanley, Experimental and Quasi-Experimental Designs for Research (Chicago, Ill., Rand McNally, 1963).
However, perhaps the major reason that intermediate impact measures are used is the difficulty of maintaining control over the experience of the different treatment and non-treatment groups for a period sufficiently long to provide useful measures of fertility.6 This difficulty gives rise to the need to utilize one or more of the various indirect methods, which are described in the other chapters of this Manual,7 in order either to convert such short-term measures as acceptor data into births averted or to correlate programme and non-programme variables with fertility measures. The uncertainties and ambiguities that must remain after even the most careful application of these non-experimental approaches to estimating fertility effects argue for more extensive use of randomized experiments. The over-all price in terms both of costs and of time requirements for conducting controlled randomized experiments may well be less than the costs of such non-experimental approaches. If it is sufficiently important to know more reliably the programme impact on fertility, then the experimental method may well be worth the added costs.

4 E. T. Hilton and A. A. Lumsdaine, loc. cit.
5 W. P. Mauldin and J. A. Ross, loc. cit.
6 E. T. Hilton and A. A. Lumsdaine, loc. cit.
7 Some of the virtues and disadvantages of this general approach to evaluation are discussed in chapter II, "Standard couple-years of protection".

Unfortunately, practitioners use different terms more or less interchangeably to refer to the same principle or practice in design of randomized experiments as well as in non-randomized studies. In this chapter the following conventions, more or less consistent with the statistical literature, are followed (this listing of definitions from experimental design literature is very limited):

(a) "Experiment" with no adjective is used in a restricted sense to refer to a "randomized (true) experiment", i.e., a study in which the experimental units (subjects) are allocated randomly to two or more treatment groups. As used by Campbell and Stanley,8 this procedure helps to "control for" internal validity, i.e., within the experiment extraneous (non-treatment) variables are balanced at the beginning of treatment by prior randomization. The term "controlled trial" is sometimes used to refer to a "field experiment" where randomization is possible;

(b) "Control group" refers to the (randomly allocated) treatment group which gets no treatment, although it sometimes refers to a known effective treatment. "Control", as a noun, is often used in lieu of control group;

(c) "Quasi-experiment" and "observational study" are used interchangeably to refer to studies or trials in which experimental units either cannot be or are not randomly assigned to treatments;

(d) "Comparison group" refers to a non-randomly selected group of experimental units in non-randomized studies. Comparison groups are analogous to control groups in experiments. They might better be called "pseudo-control groups";

(e) "Control", as a verb, refers to the use of statistical procedure(s) either to eliminate, adjust for or balance (average out) the influence of intervening (non-treatment) variables on the response(s) to the treatment variable. To "control" thus includes randomization in experiments as well as stratification, blocking, matching, covariance analysis and adjusted rates in both experiments and quasi-experiments.

A. USE OF EXPERIMENTAL DESIGNS

Measuring the impact of a family planning programme on fertility is analogous to any other scientific effort to determine cause and effect. In the language of research science a family planning programme can be described as a treatment applied to (a group of) people in order to produce (a response of) a lowered fertility rate. In this context the group of people, or the community, can be considered an experimental unit in which the hypothesized demographic effect is lowered fertility caused by the family planning programme.9

To determine the magnitude of the programme impact on fertility is not simple. Suppose that fertility rates do decline after the programme is initiated. How can it be "proved" that the programme caused the decline? Researchers generally agree that the most acceptable approach to the determination of cause-and-effect relationships is through the experimental method. Strict application of the experimental approach to measuring the impact of a family planning programme on fertility is of course very difficult. A brief review of the fundamentals may be useful in order to illustrate the strengths and weaknesses.

1. The experimental approach

The experimental design is a plan for carrying out an individual experiment.10 An experiment is a test or a trial of the (hypothesized) response of the treatment units or subjects to two or more treatments. Because the essence of the experimental method is to compare results, two or more treatments are required. One of the treatments may be nothing, i.e., a presumably effective treatment may be compared with no treatment (a control group). The simplest type of true experiment would require assigning people or population (groups) at random either to receive the programme or to receive no programme. After the two treatments (programme and no programme) are administered, any differences in fertility levels between the two groups would be due to, or caused by, the programme. If a known treatment is effective, unknown treatment(s) can be compared with the known treatment rather than with no treatment.

The key requirement is random assignment of experimental units to treatment groups, prior to initiating treatments. When randomization cannot be done, by definition the study is quasi-experimental. Contamination or spill-over effects can occur with either experimental or quasi-experimental designs. The evaluators must continuously be alert for such departures from desirable design practices. If they do occur, attempts must be made to adjust for them at the analysis stage and/or to temper the conclusions which are drawn from the results. After randomization, all treatment groups should be handled alike in all respects except for differences in the treatments. These two requirements, randomization and maintaining uniformity of treatments (over time), are the prime reasons that pure experiments are difficult to conduct in human populations.

8 Op. cit.
9 The programme or demographic effect refers to the effect in the population. It should not be confused with clinical or method effectiveness in prevention of births to individuals.
10 Henry W. Riecken and Robert F. Boruch, eds., Social Experimentation: A Method for Planning and Evaluating Social Intervention (New York, Academic Press, 1974).
11 Ibid., chap. III.
12 W. G. Cochran and G. M. Cox, Experimental Designs, 2nd ed. (New York, John Wiley and Sons, 1957); and B. J. Winer, Statistical Principles in Experimental Designs (New York, McGraw Hill, 1962).
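To make the logic concrete, the following is a minimal sketch in Python of the simplest type of true experiment just described: experimental units (districts, say) are assigned at random to a programme group and a no-programme group, and the difference between the two post-programme fertility indices estimates the impact. The number of districts, the assumed 10-point programme effect and the simulated index values are all invented for illustration and are not taken from any study cited in this chapter.

```python
# A minimal, hypothetical sketch of a two-group randomized experiment:
# districts are assigned at random to "programme" or "no programme",
# and the difference in a post-programme fertility index (R1 - R0)
# estimates the programme impact. All numbers are invented.
import random

random.seed(1)

N = 200                                   # experimental units (districts)
districts = list(range(N))
random.shuffle(districts)                 # the randomization step
programme = set(districts[:N // 2])       # treated group
control = set(districts[N // 2:])         # control group (no programme)

# Simulated fertility index per district: a common baseline plus noise,
# with an assumed 10-point reduction where the programme operates.
observed = {}
for d in range(N):
    effect = -10.0 if d in programme else 0.0
    observed[d] = random.gauss(180.0, 15.0) + effect

r1 = sum(observed[d] for d in programme) / len(programme)   # mean index, programme group
r0 = sum(observed[d] for d in control) / len(control)       # mean index, control group
print(f"R1 - R0 = {r1 - r0:+.1f}")   # negative: fertility lower in programme districts
```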

Other elements of experimental designs include: stratification or grouping of subjects into more homogeneous groups; an increase of the number of subjects (sample size); replication of the experiment; controlling for the effect of intervening or related factors; use of unbiased measurement methods; taking of additional measurements; and application of appropriate analytical procedures. All these aspects of an experimental design are interrelated; for example, analysis is based on the design. A good general discussion of experimental design with emphasis on social action experiments is given in a recent publication;11 and a more technical, statistically oriented approach is provided in two other studies.12 As the intent in this chapter is to illustrate principles rather than cover the details of experimental design, attention is focused on the logic of fundamentals and some of the difficulties in conducting experiments. An understanding of the rationale of the experimental method is essential in order to appreciate the requirements for making inferences from quasi-experimental studies.

Randomization of subjects to the treatments is done in an attempt to obtain balance between treatment groups of the influence of all the intervening variables which are not controlled by other design features, such as stratification, or through analysis. Randomization provides insurance against biases which might influence the subject or the investigator if non-random assignment procedures were followed.

Logically, then, after deliberate stratification on the important variables to control for their effect and randomly balancing on (all other) variables between treatment groups, differences in effects greater than those due to random error are attributed to treatment differences. If the subjects or experimental units are representative of the population about which one wishes to generalize, an experiment is, in theory, self-contained; i.e., all factors, other than treatments, that might affect the results are either controlled or balanced through randomization; hence, inferences can be made about the effect(s) of treatments.

Randomized experiments are used extensively with non-human subjects. However, random assignment of individual human subjects to specific treatments is not usually possible unless they volunteer for the experiments. The results of experiments that test treatments with volunteers are at best generalizable only to populations of such volunteers. Nevertheless, useful programme data can be obtained in this way; e.g., intra-uterine device (IUD) acceptors who volunteer for an experiment can be randomly assigned to any of the IUDs being tested.13

However, experiments can be designed with groups of people as the experimental units, and the response of the group is the variable of interest. For example, the extent to which different education/propaganda efforts would influence women to accept different family planning methods was tested in the randomized experiment conducted in Country E and is reported as illustration A (see section B.2).14 Unfortunately, the experiment was not designed to measure the impact of the different approaches on fertility changes, although that measure presumably would have been possible by compiling fertility data for each small area. But the illustration does suffice as an example of a randomized experiment.

The strengths of experiments in reaching conclusions about cause and effect were briefly described above, but there are some serious limitations in the conduct of experiments for testing the effect(s) of social action programmes such as family planning.

2. Limitations in use of experimental designs

The following difficulties are important to bear in mind when attempting to design randomized experiments in family planning using groups of people in administrative areas or subareas as experimental units.

Randomization of areas

Randomization of areas either to receive a programme or not to receive the programme (serve as a control group) presents political and sometimes ethical problems. Most politicians and the public are willing for their area to become the recipient of a new programme but will not willingly serve as a control group. It may also be unethical to withhold a potentially beneficial programme from those who want it.
One approach to the latter problem would be to test planned variations in types or levels of programme rather than having no programme as one of the treatments.15

Maintenance of uniformity of programme

Maintenance of uniformity of programme (treatment) is especially difficult in human populations because of contamination or spill-over effects through communication, migration and shifts in political attitudes and in funding policies over time. Unless experimental areas are far enough apart so that very little communication or migration takes place during the course of the experiment, it is likely that differences between treatments will be diluted and the results therefore underestimated. Similarly, changes in local funding from other sources for related programme activities might cause changes in the programme.16 Another pitfall is confounding the merits of the programme with the staff involved in it.17 For example, programme staff members who personally prefer one type of contraceptive might, perhaps inadvertently, influence the choice of method accepted by individual patients. Many experiments have been damaged because a unique personality has created an exceptional effect in a particular group's performance. It is thus advisable to rotate leadership personnel among the experimental areas. It is essential that experiments and other evaluation methods take into account the time required for a response to show up; e.g., the time-lag of nine months from conception to live birth assures that the effects of a contraceptive programme show up, at the earliest, about a year after programme initiation.

Unbiased measurement methods

Unbiased measurement methods may be difficult to achieve unless the measurement system is independent of the programme being evaluated.

13 Christopher Tietze and Sarah Lewit, "Comparison of the copper T and loop D: a research report", Studies in Family Planning, vol. 3, No. 11 (1972), pp. 277-278.
14 Bernard Berelson and Ronald Freedman, "A study in fertility control", Scientific American, No. 210 (May 1964), pp. 29-37.
15 Bernard G. Greenberg, "Evaluation of social programs", Review of the International Statistical Institute, vol. 36 (1968), pp. 260-278.
16 Karl E. Bauman, "An experimental design for family planning and program evaluation", in J. Richard Udry and Earl E. Huyck, eds., The Demographic Evaluation of Domestic Family Planning Programs (Cambridge, Mass., Ballinger Publishing Co., 1975), pp. 67-79.
17 B. G. Greenberg, loc. cit.

For example, field workers whose primary job is conducting educational programmes to induce women or men to become family planning acceptors might well obtain different answers to a Knowledge, Attitude, Practice (KAP) and fertility survey than would interviewers hired and trained by the statistical office. The preferred source of data for measuring fertility levels is a civil registration system, which records all live births and tabulates and analyses the resulting data for the various administrative units and geographical areas of the country. (It is hoped that such units would be used in allocation of programme and control areas for experimental design purposes.) In addition to an adequate civil registration system, census data are required in the same geographical detail to serve as the denominators for calculation of the appropriate fertility indices for each area or combination of areas. A major requirement of the measurement system for most experimental designs is that it must be capable of measuring time trends. Unfortunately, most countries for which family planning programme evaluation is likely to be undertaken do not have adequate vital statistics registration systems and are thus lacking essential data for the design of experiments. In the absence of civil registration data, fertility measures may be derived from census data and/or special surveys. But these surveys are rarely conducted at sufficiently frequent intervals; and, at any rate, their use often introduces additional data problems.

Increase in size of experiment

Increasing the size of the experiment can be expensive, depending upon what programme is being tested. In measuring demographic response, the number of experimental units (subjects) is the number of areas rather than the number of persons involved in the experiments.

Controlling for effect of intervening variables

Controlling for the effect of intervening variables is usually difficult. Prior to randomization, stratification, blocking or matching on such factors as urban/rural residence, marital status, age and parity of women, educational level, prior use of contraception, desired number of children and other known factors related to fertility change can increase the precision of the results and increase the chances of detecting relatively small changes due to the programme. Similarly, post-stratification on such variables after the experiment and/or the use of such techniques as mean adjustment and covariance analysis can enhance precision in randomized experiments.

Attention is increasingly being given to the ways in which the experimental method can be used in evaluating field trials of social programmes.18 It appears fair to conclude that, because of the difficulties of implementing true experiments plus the general lack of knowledge and experience of planners and decision-makers in using the experimental method, it is not used to the extent that it should be in evaluating social programmes, including family planning.

18 R. L. Light, F. Mosteller and H. S. Winokur, Jr., "Using controlled field studies to improve public policy", in Report of the President's Commission on Federal Statistics (Washington, D.C., Government Printing Office, 1971), chap. 6; Carl A. Bennett and Arthur A. Lumsdaine, eds., Evaluation and Experiment: Some Critical Issues in Assessing Social Programs (New York, Academic Press, 1975); and Robert F. Boruch and Henry W. Riecken, eds., Experimental Testing of Public Policy, Proceedings of the Social Science Research Council Conference on Social Experiments, 1974 (Boulder, Colo., Westview Press, 1975).
The same factors undoubtedly also operate to result in poorly designed non-randomized studies.

3. Example of an experimental design for a hypothetical country

To illustrate how experiments can be designed, consider a hypothetical country and make different assumptions about how a programme might actually be implemented. The hypothetical country is divided into 20 provinces and the provinces are further subdivided into an average of 10 districts per province. The central Government, president and legislature decide to implement a programme with the aim of reducing fertility rates. Programmes will be implemented through district health departments. Each province is administered by a governor and a local council which determines programme policy and allocates resources to the districts through the provincial health directorate. An agreement is reached by all concerned that an experiment will be conducted for three years, 1978-1980, and a formula for allocating the funds and resources to the provinces is accepted by all. Districts will be experimental units, thus yielding 200 units for experimentation.

Assume further that even in this politically utopian country civil registration of births is not complete in all districts but a national census is planned for 1981. All or a high proportion of district governments must be willing to abide by a random allocation to serve either as a control (no programme) or as a treated (programme) area for three years. (Note that if some but not all agree, randomization may be done within those districts, although this situation raises the question of representativeness of the "volunteers".) For simplicity, it is assumed here that all agree to participate.

Assume that resources are such that the programme can be conducted only in 100 districts and the experimental objective is to determine what the programme impact will be over a three-year period. The simplest experimental design in the absence of time-series data or fertility data would be an "after only" randomized experiment. To accomplish this design, there is first stratification by province. Then a selection is made, as nearly random as possible, of one half of the districts within each province to be either treatment or control group. By collecting the required data for each district in the 1981 census, fertility can be measured for each experimental unit (district). After the 1981 census data are processed, the (minimal) extent to which the family planning programme has influenced the fertility measure would be as shown in table 106.

TABLE 106. DESIGN NO. 1: AFTER ONLY, TWO-GROUP RANDOMIZED

Sample size (number of districts)   Treatment (type of programme)   Fertility index, 1981
100                                 No family planning              R0
100                                 Family planning                 R1
Treatment difference: R1 - R0

Thus, R1 - R0 would be a measure of the programme impact. Note that R0 and R1 can refer to any fertility measure whether based on a census or on other data, for example, civil registration or survey data.19 However, if time-series data on fertility measures are available, better designs are possible and desirable, as discussed below.

Advantages of the design

The major advantage of this experimental design is that random allocation should balance out such pre-programme and concurrent factors as:
(a) Levels of fertility;
(b) Previous use of contraception;
(c) Non-programme use of contraception;
(d) Quantity and quality of health staff and facilities;
(e) Access of people to services;
(f) Desire to practise contraception;
(g) Type of economy;
(h) Practice of abortion;
(i) Registration level(s) of births;
(j) Other factors that may have an influence on the fertility measure.
Another advantage is that each province is allocated a certain amount of money, although only half of the districts will receive any of it.

Disadvantages of the design

Aside from the ethical question whether a family planning programme can be withheld from some districts, the major problems of this design are mainly administrative and political; for three years the districts are arbitrarily defined as "programme" and "no programme" regardless of what is actually done. Some "no programme" districts may begin providing services without any provincial support; some "programme" districts may provide no family planning services even if the money is included in their budget. Records should be kept of these deviations from the experimental design, where they are known, so that they may be taken into account in analysis of the results. Another short-coming of this design is the possibility of contamination and spill-over effects. If services are not available in a district, people may be sufficiently motivated that they would travel to an adjoining district for services or put pressure on their own government to provide them. Contamination would tend to cause underestimation of the over-all impact of the programme. A serious limitation of this experiment is that only the short-term impact of the particular programme on fertility would be measured.20 Another short-coming is that no variation in type or intensity of programme, except those which would result from unplanned district-to-district variations in implementation, can be studied. Design No. 2 does permit study of planned variation in programmes.

It probably is unrealistic to test "programme" versus "no programme"; and, as has been pointed out,21 it would be more acceptable to test different types of programme. Types of programme could be defined as offering different services or, alternatively, as different levels of funding in the district. (Ideally, one type of programme should still be "no programme" in order to take full advantage of the randomized design.) If four different types of programme are tested in a randomized design with stratification by province, the results in 1981 could be depicted as shown in table 107.

TABLE 107. DESIGN NO. 2: AFTER ONLY, FOUR-GROUP RANDOMIZED

Sample size (number of districts)   Treatment (type of programme)   Fertility index, 1981
50                                  Type A: no family planning      R0
50                                  Type B: family planning         R1
50                                  Type C: family planning         R2
50                                  Type D: family planning         R3

Thus, R1 - R0, R2 - R0 and R3 - R0 would be measures of the impact of the respective programmes. All of the advantages and disadvantages described above for design No. 1 still apply, except that an additional advantage is gained in that administrators and policy-makers have information that may be useful for deciding which type of programme among those tested is best.

More complex after-only designs are available which would usually require more "treatment" groups, but they are beyond the scope of this discussion.22

Additional data, either in the form of time-series measures of fertility or of related measures of associated variables, if available, can often be utilized with appropriate statistical techniques23 to strengthen conclusions in after-only designs.

19 Although time trends in fertility over the course of the programme would not be available from (a single) census, recent trends may be estimated from "own children" or related techniques. See Lee-Jay Cho, "The own-children approach to fertility estimation: an elaboration", in International Population Conference, Liege, 1973 (Liege, International Union for the Scientific Study of Population, 1974), vol. 2, pp. 263-280.
20 Far-sighted planners and evaluators would plan for longer range studies based on the 1991 census and/or on interim measures of fertility.
21 B. G. Greenberg, loc. cit.
22 For a good illustration of the logic of factorial designs, see N. K. Namboodiri, "A statistical exposition of the before-after and after-only designs and their combinations", American Journal of Sociology, vol. 76 (July 1970), pp. 83-102.
23 Discussion of appropriate statistical techniques is beyond the scope of this chapter. The regression techniques discussed in chapter VII might also be appropriate for experimental designs. See also, for example, W. G. Cochran and G. M. Cox, op. cit., for appropriate techniques.
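To illustrate the mechanics behind design No. 2, the following Python sketch allocates the four programme types at random within each province of the hypothetical country (20 provinces of 10 districts, 50 districts per type over all) and computes R1 - R0, R2 - R0 and R3 - R0 from a simulated 1981 fertility index. The province baselines, the assumed effects of types B, C and D and the noise terms are invented; only the stratified allocation and the contrasts follow the design described above.

```python
# Hypothetical sketch of design No. 2: stratification by province, random
# allocation of four programme types (A = no programme) within each province,
# and after-only contrasts against the no-programme group. Data are simulated.
import random
from collections import defaultdict

random.seed(1981)

TYPES = ["A", "B", "C", "D"]                                   # A = no family planning
ASSUMED_EFFECT = {"A": 0.0, "B": -5.0, "C": -8.0, "D": -12.0}  # invented effects

assignment = {}                        # district id -> programme type
for p in range(20):                    # 20 provinces (strata)
    districts = [f"P{p:02d}-D{d}" for d in range(10)]   # 10 districts per province
    random.shuffle(districts)          # randomize within the stratum
    for i, district in enumerate(districts):
        # Deal districts to the four types in rotation; rotating the starting
        # type by province keeps the overall allocation at 50 districts per type.
        assignment[district] = TYPES[(i + p) % 4]

# Simulated 1981 fertility index: province baseline + district noise + effect.
baseline = {p: random.gauss(180.0, 20.0) for p in range(20)}
observed = {}
for district, t in assignment.items():
    p = int(district[1:3])
    observed[district] = baseline[p] + random.gauss(0.0, 8.0) + ASSUMED_EFFECT[t]

groups = defaultdict(list)
for district, t in assignment.items():
    groups[t].append(observed[district])
r = {t: sum(v) / len(v) for t, v in groups.items()}            # R0..R3 by type
for t in ["B", "C", "D"]:
    print(f"R({t}) - R(A) = {r[t] - r['A']:+.1f}")             # impact of each type
```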

In particular, pre-programme measures of fertility levels and trends by district used as covariables may strengthen conclusions and increase the precision of analysis.

Also, when districts are randomly assigned to experimental groups, time trends in numbers and rates for births recorded in the civil registration system, even though coverage is not 100 per cent, might prove to be a useful indicator for post-programme comparisons. This indicator, or any other that is influenced by the family planning programme, might, however, be of limited value. For example, if the programme emphasizes infant care after delivery, birth registration levels might improve more in programme than in non-programme districts, thus tending to underestimate programme effects.

After-only experimental designs are appropriate when no fertility measure is available nor can be made prior to randomization. If fertility levels can be measured prior to randomization, a number of alternative before-after designs can be used. One possibility would be to stratify districts within provinces on fertility level before randomization, but this experiment would be more complex to conduct than simply using before measures as covariables in the analysis of after results. For example, i

deciding who begins when. It is assumed here that 50 new districts could be initiated each year and that all other previously established programmes would continue each year. The results in 1981 could be depicted as shown in table 109.
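The use of before measures as covariables mentioned above can also be sketched briefly. The hypothetical Python fragment below applies a standard analysis-of-covariance adjustment: the post-programme difference between programme and control districts is corrected by the pooled within-group regression of the post measure on the pre-programme fertility index. The simulated data, the 0.8 slope and the assumed 10-point effect are invented; the sketch is meant only to show how a pre-programme covariable can sharpen an after-only comparison, not to prescribe a particular analysis.

```python
# Hypothetical sketch: covariance adjustment of an after-only comparison,
# using a pre-programme fertility index for each district as the covariable.
# Simulated data; a textbook analysis-of-covariance estimate, shown only to
# illustrate the precision gain discussed in the text.
import random

random.seed(7)

n = 100                                                      # districts per group
pre_t = [random.gauss(180.0, 15.0) for _ in range(n)]        # pre index, programme group
pre_c = [random.gauss(180.0, 15.0) for _ in range(n)]        # pre index, control group
# Post index depends strongly on the pre index; programme districts are
# assumed (for illustration only) to be reduced by a further 10 points.
post_t = [0.8 * x + 36.0 - 10.0 + random.gauss(0.0, 5.0) for x in pre_t]
post_c = [0.8 * x + 36.0 + random.gauss(0.0, 5.0) for x in pre_c]

def mean(v):
    return sum(v) / len(v)

def cross_products(xs, ys):
    """Within-group sums of squares and cross-products about the group means."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy, sxx

sxy_t, sxx_t = cross_products(pre_t, post_t)
sxy_c, sxx_c = cross_products(pre_c, post_c)
b = (sxy_t + sxy_c) / (sxx_t + sxx_c)         # pooled within-group slope of post on pre

raw = mean(post_t) - mean(post_c)                        # unadjusted R1 - R0
adjusted = raw - b * (mean(pre_t) - mean(pre_c))         # covariance-adjusted estimate
print(f"unadjusted: {raw:+.1f}   adjusted: {adjusted:+.1f}")
```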

