Technology Assisted Review (TAR) Guidelines


[EDRM Update, 01 Sep 2021] The following paper was published in 2019 and emphasizes TAR 1.0. EDRM is grateful to the project team that produced this work product, and Duke Bolch for producing this point-in-time guidance. The EDRM Analytics and Machine Learning Project is in the process of creating more updated guidance, including Continuous Active Learning (CAL) and other more advanced and generally accepted protocols and research. To join a subteam on the Analytics and Machine Learning Project, please contact info@edrm.net.

TECHNOLOGY ASSISTED REVIEW (TAR) GUIDELINES
January 2019

FOREWORD†

In December 2016, more than 25 EDRM/Duke Law members volunteered to develop and draft guidelines providing guidance to the bench and bar on the use of technology assisted review (TAR). Three drafting teams were formed and immediately began work. The teams gave a progress status report and discussed the scope of the project at the annual EDRM May 16-17, 2017, workshop, held at the Duke University campus in Durham, N.C. The number of team volunteers swelled to more than 50.

The augmented three teams continued to refine the draft during the summer of 2017 and presented their work at a Duke Distinguished Lawyers' conference, held on September 7-8, 2017, in Arlington, Virginia. The conference brought together 15 federal judges and 75-100 practitioners and experts to develop separate "best practices" to accompany the TAR Guidelines. An initial draft of the best practices is expected in summer 2019. While the EDRM/Duke "TAR Guidelines" are intended to explain the TAR process, the "best practices" are intended to provide a protocol on whether and under what conditions TAR should be used. Together, the documents provide a strong record and roadmap for the bench and bar, which explain and support the use of TAR in appropriate cases.

The draft TAR Guidelines were revised in light of the discussions at the September 2017 TAR Conference, which highlighted several overriding bench and bar concerns as well as shed light on new issues about TAR. The Guidelines are the culmination of a process that began in December 2016. Although Duke Law retained editorial control, this iterative drafting process provided multiple opportunities for the volunteers on the three teams to confer, suggest edits, and comment on the Guidelines. Substantial revisions were made throughout the process. Many compromises, affecting matters on which the 50 volunteer contributors hold passionate views, were also reached. But the Guidelines should not be viewed as representing unanimous agreement, and individual volunteer contributors may not necessarily agree with every recommendation.

James Waldron
Director, EDRM

John Rabiej, Deputy Director
Bolch Judicial Institute

† Copyright 2019, All Rights Reserved. This document does not necessarily reflect the views of Duke Law School or the Bolch Judicial Institute or its faculty, or any other organization, including the Judicial Conference of the United States or any other government unit.

ACKNOWLEDGEMENTS

The Technology Assisted Review (TAR) Guidelines is the work product of more than 50 experienced practitioners and experts, who devoted substantial time and effort to improve the law. Three of them assumed greater responsibility as team leaders:

TEAM LEADERS
Mike Quartararo, eDPM Advisory Services
Matt Poplawski, Winston & Strawn
Adam Strayer, Paul, Weiss, Rifkind, Wharton & Garrison

The following practitioners and ediscovery experts helped draft particular sections of the text:

CONTRIBUTORS
Kelly Atherton, NightOwl Discovery
Doug Austin, CloudNine
Ben Barnett, Dechert
Lilith Bat-Leah, BlueStar
Chris Bojar, Barack Ferrazzano
Michelle Briggs, Goodwin Procter
Jennifer Miranda Clamme, Keesal Young & Logan
David Cohen, Reed Smith
Xavier Diokno, Consilio
Tara Emory, Driven Inc.
Brian Flatley, Ellis & Winters
Paul Gettmann, Ayfie, Inc.
David Greetham, RICOH USA, Inc.
Robert Keeling, Sidley Austin
Deborah Ketchmark, Consilio
Jonathan Kiang, Epiq
John Koss, Mintz Levin
Jon Lavinder, Epiq
Brandon Mack, Epiq
Rachi Messing, Microsoft
Michael Minnick, Brooks Pierce
Connie Morales, Capital Digital and Califorensics
Lynne Nadeau Wahlquist, Trial Assets
Tim Opsitnick, TCDI
Constantine Pappas, Relativity
Chris Paskach, The Claro Group
Donald Ramsey, Stinson Leonard Street
Niloy Ray, Littler Mendelson
Philip Richards, Consilio
Bob Rohlf, Exterro
Herbert Roitblat, Mimecast
John Rosenthal, Winston & Strawn
Justin Scranton, Consilio
Dharmesh Shingala, Knovos
Michael Shortnacy, King & Spalding LLP
Mimi Singh, Evolver Inc.

CONTRIBUTORS (CONT.)
Clara Skorstad, Kilpatrick Townsend
Patricia Wallace, Murphy & McGonigle
Harsh Sutaria
Hope Swancy-Haslam, Stroz Friedberg
Ian Wilson, Servient
Tiana Van Dyk, Burnet Duckworth & Palmer
Carolyn Young, Consilio

We thank Patrick Bradley, Leah Brenner, Matthew Eible, and Calypso Taylor, the four Duke Law, Bolch Judicial Institute Student Fellows, who proofread, cite checked, and provided valuable comments and criticisms. George Socha, co-founder of EDRM and BDO managing director, provided rock-solid advice and overview guidance.

In particular, we gratefully acknowledge the innumerable hours that Matt Poplawski, Tim Opsitnick, Mike Quartararo, and James Francis, Distinguished Lecturer at the City University of New York School of Law and former United States Magistrate Judge, devoted to reviewing every public comment submitted on the draft. Their dedication markedly improved the document's clarity and precision. We also want to single out the helpful and extensive suggestions made during the public comment period by Tara Emory, Xavier Diokno, Bill Dimm, and Trena Patton. We are indebted to them for their input, along with comments and suggestions submitted by many others during the public-comment period.

The feedback of the judiciary has been invaluable in exploring the challenges faced by judges and the viability of the proposed guidelines. The ways in which these guidelines have benefitted from the candid assessment of the judiciary cannot be overstated. It is with the greatest of thanks that we recognize the contributions of the 14 judges who attended the conference and the six judges who reviewed early drafts and provided comments and suggestions.

EDRM/Duke Law School
January 2019

PREFACE

Artificial Intelligence (AI) is quickly revolutionizing the practice of law. AI promises to offer the legal profession new tools to increase the efficiency and effectiveness of a variety of practices. A machine learning process known as technology assisted review (TAR) is an early iteration of AI for the legal profession. TAR is redefining the way electronically stored information (ESI) is reviewed. Machine learning processes like TAR have been used to assist decision-making in commercial industries since at least the 1960s, leading to efficiencies and cost savings in healthcare, finance, marketing, and other industries. Now, the legal community is also embracing machine learning, via TAR, to automatically classify large volumes of documents in discovery. These guidelines will provide guidance on the key principles of the TAR process. Although these guidelines focus specifically on TAR, they are written with the intent that, as technology continues to change, the general principles underlying the guidelines will also apply to future iterations of AI beyond the TAR process.

TAR is similar conceptually to a fully human-based document review; the computer just takes the place of much of the human-review work force in conducting the document review. As a practical matter, in many document reviews, the computer is faster, more consistent, and more cost effective in finding relevant documents than human review alone. Moreover, a TAR review can generally perform as well as that of a human review, provided that there is a reasonable and defensible workflow. Similar to a fully human-based review where subject-matter attorneys train a human-review team to make relevancy decisions, the TAR review involves human reviewers training a computer, such that the computer's decisions are just as accurate and reliable as those of the trainers.

Notably, Rule 1 of the Federal Rules of Civil Procedure calls on courts and litigants "to secure the just, speedy, and inexpensive determination of every action and proceeding." According to a 2012 Rand Corporation report, 73% of the cost associated with discovery is spent on review.

The potential for significant savings in time and cost, without sacrificing quality, is what makes TAR most useful. Document-review teams can work more efficiently because TAR can identify relevant documents faster than human review and can reduce time wasted reviewing nonrelevant documents.

Moreover, the standard in discovery is reasonableness, not perfection. Traditional linear or manual review, in which teams of lawyers billing clients review boxes of paper or countless online documents, is an inefficient method. Problems with high cost, exorbitant time to complete review, fatigue, human error, disparate attorney views regarding document substance, and even gamesmanship are all

associated with manual document review. Studies have shown a rate of discrepancy as high as 50% among reviewers who identify relevant documents by linear review. The TAR process is also imperfect, and although no one study is definitive, research suggests that, in some contexts, TAR can be at least as effective as human review. Indeed, judges have accepted the use of TAR as a reasonable method of review, and importantly, no reported court decision has found the use of TAR invalid. 1

The most prominent law firms in the world, on both the plaintiff and the defense side of the bar, are using TAR. Several large government agencies, including the DOJ, SEC, and IRS, have recognized the utility and value of TAR when dealing with large document collections. But in order for TAR to be more widely used and accepted in discovery, the bench and bar must become more familiar with it, and certain standards of validity and reliability should be considered to ensure its accuracy. These guidelines will not only demonstrate the validity and reliability of TAR but will also demystify the process.

The TAR Guidelines reflect the considered views and consensus of the participants. They may not necessarily reflect the official position of Duke Law School or the Bolch Judicial Institute as an entity or of Duke Law's faculty or any other organization, including the Judicial Conference of the United States.

One final note. There are several different variations of TAR software in the marketplace. TAR 1.0 and TAR 2.0 are the two most commonly marketed versions. Although one or the other version may be more prevalent, both continue to be widely used. These Guidelines are intended to provide guidance to all users of TAR and apply across the different variations of TAR. These Guidelines assiduously take no position on which variation is more effective, which may depend on various factors, including the size and richness of the TAR data population.

1 As a further example of its reasonableness and legitimacy as a review process, the committee note to F. R. Evid. 502 states that "Depending on the circumstances, a party that uses advanced analytical software and linguistic tools in screening for privilege and work product may be found to have taken 'reasonable steps' to prevent inadvertent disclosure."

TECHNOLOGY ASSISTED REVIEW (TAR) GUIDELINES
EDRM/DUKE

CHAPTER ONE
DEFINING TECHNOLOGY ASSISTED REVIEW

A. INTRODUCTION
B. THE TAR PROCESS
   1. ASSEMBLING THE TAR TEAM
   2. COLLECTION AND ANALYSIS
   3. "TRAINING" THE COMPUTER USING SOFTWARE TO PREDICT RELEVANCY
   4. QUALITY CONTROL AND TESTING
   5. TRAINING COMPLETION AND VALIDATION

A. INTRODUCTION

Technology assisted review (referred to as "TAR," and also called predictive coding, computer assisted review, or supervised machine learning) is a review process in which humans work with software ("computer") to train it to identify relevant documents. 2 The process consists of several steps, including collection and analysis of documents, training the computer using software, quality control and testing, and validation. It is an alternative to the manual review of all documents in a collection.

Although there are different TAR software, all allow for iterative and interactive review. A human reviewer 3 reviews and codes (or tags) documents as "relevant" or "nonrelevant" and feeds this information to the software, which takes that human input and uses it to draw inferences about unreviewed documents. The software categorizes documents in the collection as relevant or nonrelevant, or ranks them in order of likely relevance. In either case, the number of documents reviewed manually by humans can be substantially limited while still identifying the documents likely to be relevant, depending on the circumstances.

2 In fact, the computer classification can be broader than "relevancy," and can include discovery relevance, privilege, and other designated issues. For convenience purposes, "relevant" as used in this paper refers to documents that are of interest and pertinent to an information or search need.
3 A human reviewer is part of a TAR team. A human reviewer can be an attorney or a non-attorney working at the direction of attorneys. They review documents that are used to teach the software. We use the term to help keep distinct the review humans conduct versus that of the TAR software.
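To make the iterative review loop described in this Introduction concrete, the following is a minimal sketch of how human-coded documents can train a classifier that then ranks unreviewed documents by likely relevance. It is an illustration only, not a description of any TAR product: the choice of scikit-learn and all variable names (reviewed_texts, reviewed_labels, unreviewed_texts) are assumptions made for the example.

```python
# Minimal sketch of the iterative TAR loop: reviewed documents coded
# "relevant"/"nonrelevant" train a text classifier, which then ranks the
# unreviewed documents by likely relevance. Library and data are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: texts coded by human reviewers and their labels.
reviewed_texts = ["email about the disputed contract", "weekly cafeteria menu"]
reviewed_labels = [1, 0]  # 1 = relevant, 0 = nonrelevant
unreviewed_texts = ["draft amendment to the contract", "holiday party invite"]

# Turn text into numeric features, then fit a simple supervised model.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(reviewed_texts)
model = LogisticRegression().fit(X_train, reviewed_labels)

# Score every unreviewed document and rank from most to least likely relevant.
scores = model.predict_proba(vectorizer.transform(unreviewed_texts))[:, 1]
ranked = sorted(zip(unreviewed_texts, scores), key=lambda p: p[1], reverse=True)
for text, score in ranked:
    print(f"{score:.2f}  {text}")
```

Commercial TAR software uses far more sophisticated, proprietary models and training loops; the point of the sketch is only the relevant/nonrelevant supervision and the resulting ranking of unreviewed documents.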

B. THE TAR PROCESS

The phrase "technology assisted review" can imply a broader meaning that theoretically could encompass a variety of nonpredictive coding techniques and methods, including clustering and other "unsupervised" 4 machine learning techniques. And, in fact, this broader use of the TAR term has been made in industry literature, which has added confusion about the function of TAR, defined as a process. In addition, the variety of software, each with unique terminology and techniques, has added to the confusion of the bench and bar about how each of these software works. Parties, the court, and the service provider community have been talking past each other on this topic because there has been no common starting point to have the discussion.

These guidelines are that starting point. As these guidelines make clear, all TAR software share the same essential workflow components; it is just that there are variations in the software processes that need to be understood. What follows is a general description of the fundamental steps involved in TAR. 5

1. ASSEMBLING THE TAR TEAM

A team should be selected to finalize and engage in TAR. Members of this team may include: service provider; software provider; workflow expert; case manager; lead attorneys; and human reviewers. Chapter Two contains details on the roles and responsibilities of these members.

2. COLLECTION AND ANALYSIS

TAR starts with the team identifying the universe of electronic documents to be reviewed. A member of the team inputs documents into the software to build an analytical index. During the indexing process, the software's algorithms 6 analyze each document's text. Although various algorithms work slightly differently, most analyze the relationship between words, phrases, and characters, the frequency and pattern of terms, or other features and characteristics in a document. The software uses this features-and-characteristics analysis to form a conceptual representation of the content of each document, which allows the software to compare documents to one another.

4 Unsupervised means that the computer does not use human coding or instructions to categorize the documents as relevant or nonrelevant.
5 Chapter Two describes each step in greater detail.
6 All TAR software has algorithms. These algorithms are created by the software makers. TAR teams generally cannot and do not modify the feature extraction algorithms.
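The indexing described in "Collection and Analysis" can be pictured with a small sketch: each document is converted into a numeric feature vector (here, simple TF-IDF term weights), and those vectors let the software measure how similar two documents are. This is an assumption-laden illustration; as footnote 6 notes, real TAR platforms use their own feature extraction algorithms, and the sample documents below are invented.

```python
# Illustrative only: build an analytical index as TF-IDF feature vectors and
# compare documents by cosine similarity. Commercial TAR tools use their own
# (typically unmodifiable) feature extraction, as noted in footnote 6.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

collection = [
    "Merger agreement draft circulated to the board",
    "Board minutes discussing the merger agreement",
    "Office fantasy football standings",
]

index = TfidfVectorizer().fit_transform(collection)  # one vector per document
similarity = cosine_similarity(index)                 # pairwise comparison

# Documents 0 and 1 share merger-related terms, so their similarity is high;
# document 2 shares little vocabulary with either.
print(similarity.round(2))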

3. "TRAINING" THE COMPUTER USING SOFTWARE TO PREDICT RELEVANCY

The next step is for human reviewers with knowledge of the issues, facts, and circumstances of the case to code or tag documents as relevant or nonrelevant. The first documents to be coded may be selected from the overall collection of documents through searches, identification through client interviews, creation of one or more "synthetic documents" based on language contained, for example, in document requests or the pleadings, or the documents might be randomly selected from the overall collection. In addition, after the initial training documents are analyzed, the TAR software itself may begin selecting documents that it identifies as: (i) most helpful to refine its classifications; or (ii) most relevant, based on the human reviewer's feedback.

From the human reviewer's relevancy choices, the computer learns the reviewer's preferences. Specifically, the software learns which combinations of terms or other features tend to occur in relevant documents and which tend to occur in nonrelevant documents. The software develops a model that it uses to predict and apply relevance determinations to unreviewed documents in the overall collection.

4. QUALITY CONTROL AND TESTING

Quality control and testing are essential parts of TAR, which ensure the accuracy of decisions made by a human reviewer and by the software. TAR teams have relied on different methods to provide quality control and testing. One popular method is to identify a significant number of relevant documents from the outset and then test the results of the software against those documents. Other software test the effectiveness of the computer's categorization and ranking by measuring how many individual documents have had their computer-coded categories "overturned" by a human reviewer. Yet other methods involve testing random samples from the set of unreviewed documents to determine how many relevant documents remain. Methods for quality control and testing continue to emerge and are discussed more fully in Chapter Two.

5. TRAINING COMPLETION AND VALIDATION

No matter what software is used, the goal of TAR is to effectively categorize or rank documents both quickly and efficiently, i.e., to find a reasonable number of relevant documents while keeping the number of nonrelevant documents to be reviewed by a human as low as possible. The heart of any TAR process is to categorize or rank documents from most to least likely to be relevant. Training completion is the point at which the team has identified a reasonable number of relevant documents proportional to the needs of the case.
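The "active" selection of training documents mentioned in step 3 (documents the software identifies as most helpful to refine its classifications) is often implemented as some form of uncertainty sampling. The sketch below illustrates that idea under simplifying assumptions; the function name, batch size, and scores are invented, and actual products use their own selection strategies.

```python
# Sketch of one way software might "actively" pick the next training documents:
# after each round, ask the reviewer about the documents whose predicted
# probability of relevance is closest to 0.5 (the least certain predictions).
import numpy as np

def select_uncertain(scores: np.ndarray, batch_size: int) -> np.ndarray:
    """Return indices of the documents the model is least certain about."""
    uncertainty = np.abs(scores - 0.5)          # 0.0 = maximally uncertain
    return np.argsort(uncertainty)[:batch_size]

# Hypothetical relevance probabilities for five unreviewed documents.
scores = np.array([0.97, 0.52, 0.08, 0.49, 0.71])
print(select_uncertain(scores, batch_size=2))   # -> indices 3 and 1
```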

How the team determines that training is complete varies depending upon the software, the number of documents reviewed, and the results targeted to be achieved after a cost-benefit analysis. Under the training process in software commonly marketed as TAR 1.0, 7 the software is trained based upon a review and coding of a subset of relevant and nonrelevant documents, with a resulting predictive model that is applied to all nonreviewed documents. Here, the goal is not to have humans review all predicted relevant documents during the TAR process, but instead to review a smaller proportion of the document set that is most likely to help the software be reasonably accurate in predicting relevancy on the entire TAR set. The software selects training documents either randomly or actively (i.e., it selects the documents it is uncertain about for relevancy that it "thinks" will help it learn the fastest), resulting in the predictive model being updated after each round of training. The training continues until the predictive model is reasonably accurate in identifying relevant and nonrelevant documents. At this point, all documents have relevancy rankings, and a "cut-off" point is identified in the TAR set, with documents ranked at or above the cut-off point identified as the predicted relevant set, and documents below the cut-off point as the nonrelevant set.

In many TAR 1.0 processes, the decision whether the predictive model is reasonably accurate is often measured based on the use of a control set, which is a random sample taken from the entire TAR set, typically at the beginning of training, and is designed to be representative of the entire TAR set. The control set is reviewed for relevancy by a human reviewer and, as training progresses, the computer's classifications of relevance of the control set documents are compared against the human reviewer's classifications. When training no longer substantially improves the computer's classifications of the control set documents, training is viewed as having reached completion. At that point, the predictive model's relevancy decisions are applied to the unreviewed documents in the TAR set. Under TAR 1.0, the parameters of a search can be set to target a particular recall rate. It is important to note, however, that this rate will be achieved regardless of whether the system is well trained. If the system is undertrained, an unnecessarily large number of nonrelevant documents will be reviewed to reach the desired recall, but it will be reached. Ceasing training at the optimal point is not an issue of defensibility (achieving high recall), but rather a matter of reasonableness, minimizing the cost of reviewing many extra nonrelevant documents included in the predicted relevant set. 8

7 It is important to note that the terms TAR 1.0 and 2.0 can be seen as marketing terms with various meanings. They may not truly reflect the particular processes used by the software, and many software use different processes. Rather than relying on the term to understand a particular TAR workflow, it is more useful and efficient to understand the underlying processes and, in particular, how training documents are selected and how training completion is determined.
8 In many TAR 1.0 workflows, this point of reaching optimal results has been known as reaching "stability." It is a measurement that reflects whether the software was undertrained at a given point during the training process. The term "stability" has multiple meanings. The term "optimum results" is used throughout to eliminate potential confusion.
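One way to picture the TAR 1.0 control-set check and the recall-based cut-off described above is the sketch below. It is an interpretation under stated assumptions, not any vendor's implementation: it assumes the control set has already been human-coded and scored after each training round, and the threshold values and function names are invented for illustration.

```python
# Illustrative TAR 1.0-style checks: (1) stop training when control-set recall
# stops improving, and (2) choose a ranking cut-off that keeps a target share
# of the relevant control-set documents at or above it (a proxy for recall).
import numpy as np

def training_stable(control_recall_by_round, min_gain=0.01):
    """True once the last training round improved control-set recall by < min_gain."""
    if len(control_recall_by_round) < 2:
        return False
    return control_recall_by_round[-1] - control_recall_by_round[-2] < min_gain

def cutoff_for_recall(control_scores, control_labels, target_recall=0.80):
    """Score threshold at which >= target_recall of relevant control docs rank at or above it."""
    relevant_scores = np.sort(control_scores[control_labels == 1])
    drop = int(np.floor((1 - target_recall) * len(relevant_scores)))
    return relevant_scores[drop]

recall_history = [0.41, 0.63, 0.74, 0.745]        # hypothetical per-round recall
print(training_stable(recall_history))            # True: gains have flattened

scores = np.array([0.95, 0.80, 0.62, 0.40, 0.15, 0.07])
labels = np.array([1,    1,    1,    1,    0,    0])
print(cutoff_for_recall(scores, labels, 0.75))    # 0.62: keeps 3 of 4 relevant docs
```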

Compare this process with software commonly marketed as TAR 2.0. Here, the human review and software training process are melded together; review and training occur simultaneously. From the outset, the software continuously analyzes the entire document collection and ranks the population based on relevancy. Human coding decisions are submitted to the software, the software re-ranks the documents, and then presents back to the human additional documents for review that it predicts as most likely relevant. This process continues until the TAR team determines that the predictive model is reasonably accurate in identifying relevant and nonrelevant documents, and that the team has identified a reasonable number of relevant documents for production.

There are at least three indicators of when completeness has been reached. The first is when a reasonable recall rate is reached (the human review team has reviewed a set of documents that reached a certain level of recall rate, which is calculated/tracked by the TAR software or by a TAR team member during the review). The second is the point at which the software appears to be offering up for review only nonrelevant or a low number of marginally relevant documents. The third is the point at which the human review team has identified an expected, pre-calculated number of relevant documents. In other words, the team took a sample before review started to estimate the number of relevant documents in the TAR set, and then the human team reviewed documents until it reached approximately that number. When training is complete, the human reviewers will have reviewed all the documents that the software predicted as relevant up to that point of the review. If the system is undertrained, then the human reviewers will not have reviewed a reasonable number of relevant documents for production, and the process should continue until that point is reached.

Before the advent of TAR, producing parties rarely provided statistical estimates as evidence to support the effectiveness of their document reviews and productions. Only on a showing that the response was inadequate did the receiving party have an opportunity to question whether the producing party fulfilled its discovery obligations to conduct a reasonable inquiry.

But when TAR was first introduced to the legal community, parties provided statistical evidence supporting the TAR results, primarily to give the bench and bar comfort that the use of the new technology was reasonable. As the bench and bar become more familiar with TAR and the science behind it, the need to substantiate TAR's legitimacy in every case should be diminished. 9

Nonetheless, because the development of TAR protocols and the case law on the topic is evolving, statistical estimates to validate review continue to be discussed. Accordingly, it is important to understand the commonly cited statistical metrics and

9 The Federal Rules of Civil Procedure do not specifically require parties to use statistical estimates to satisfy any discovery obligations.
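The continuous active learning loop described above can be sketched in a short, self-contained example. Everything in it is invented for illustration: the tiny document set, the "truth" list standing in for the human reviewer, and the stopping rule (stop when the served document turns out to be nonrelevant, a simplified version of the second indicator). Real TAR 2.0 products implement their own models, batching, and stopping criteria.

```python
# Simplified, self-contained sketch of a TAR 2.0 / continuous active learning
# loop: train on everything coded so far, serve the highest-ranked unreviewed
# document next, and stop when the served document is not relevant.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "pricing agreement between the competitors",   # relevant
    "meeting notes on the pricing agreement",       # relevant
    "lunch order for the team offsite",             # nonrelevant
    "parking garage access instructions",           # nonrelevant
    "draft pricing agreement redlines",              # relevant
    "company picnic photos attached",                # nonrelevant
]
truth = [1, 1, 0, 0, 1, 0]            # stand-in for the human reviewer's coding
coded = {0: 1, 3: 0}                  # seed coding: one relevant, one not

vec = TfidfVectorizer().fit(docs)
while True:
    ids = list(coded)
    model = LogisticRegression().fit(vec.transform([docs[i] for i in ids]),
                                     [coded[i] for i in ids])
    unreviewed = [i for i in range(len(docs)) if i not in coded]
    if not unreviewed:
        break
    # Re-rank unreviewed documents and "review" the most likely relevant one.
    scores = model.predict_proba(vec.transform([docs[i] for i in unreviewed]))[:, 1]
    best = unreviewed[int(scores.argmax())]
    coded[best] = truth[best]         # the simulated reviewer codes the document
    if coded[best] == 0:              # stopping indicator: batch had no relevant docs
        break

print("documents coded during CAL:", coded)
```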

related terminology. At a high level, statistical estimates are generated to help the bench and bar answer the following questions:

• How many documents are in the TAR set?
• What percentage of documents in the TAR set are estimated to be relevant, how many are estimated to be nonrelevant, and how confident is the TAR team in those estimates?
• How many estimated relevant documents did the team identify out of all the estimated relevant documents that exist in the review set, and how confident is the team in that estimate?
• How did the team know that the computer's training was complete?

TAR typically ends with validation to determine its effectiveness. Ultimately, the validation of TAR is based on reasonableness and on proportionality considerations: How much could the result be improved by further review and at what cost? To that end, what is the value of the relevant information that may be found by further review versus the additional review effort required to find that information?

There is no standard measurement to validate the results of TAR (or any other review process). One common measure is "recall," which measures the proportion of truly relevant documents that have been identified by TAR. However, while recall is a typical validation measure, it is not without limitations and depends on several factors, including consistency in coding and the prevalence of relevant documents. "Precision" measures the percentage of actual relevant documents contained in the set of documents identified by the computer as relevant.

The training completeness and validation topics will be covered in more detail later in these guidelines.
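Because recall and precision are referenced throughout the validation discussion, a short worked example may help. The counts below are invented; in practice they would come from a human-reviewed validation sample.

```python
# Worked example of the two validation measures defined above, using invented
# counts from a hypothetical human-reviewed validation sample.
found_relevant = 800        # truly relevant documents identified by TAR
missed_relevant = 200       # truly relevant documents TAR did not identify
found_nonrelevant = 400     # nonrelevant documents TAR marked as relevant

recall = found_relevant / (found_relevant + missed_relevant)
precision = found_relevant / (found_relevant + found_nonrelevant)

print(f"recall:    {recall:.0%}")     # 80% of all relevant documents were found
print(f"precision: {precision:.0%}")  # 67% of the predicted relevant set is relevant
```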

CHAPTER TWO
TAR WORKFLOW

A. INTRODUCTION
B. FOUNDATIONAL CONCEPTS & UNDERSTANDINGS
   1. KEY TAR TERMS
   2. TAR SOFTWARE: ALGORITHMS
      a. FEATURE EXTRACTION ALGORITHMS
      b. SUPERVISED MACHINE LEARNING ALGORITHMS (SUPERVISED LEARNING METHODS)
      c. VARYING INDUSTRY TERMINOLOGY RELATED TO VARIOUS SUPERVISED MACHINE LEARNING METHODS
C. THE TAR WORKFLOW
   1. IDENTIFY THE TEAM TO ENGAGE IN THE TAR WORKFLOW
   2. SELECT THE SERVICE PROVIDER AND SOFTWARE
   3. IDENTIFY, ANALYZE, AND PREPARE THE TAR SET
      a. TIMING AND THE TAR WORKFLOW
   4. THE HUMAN REVIEWER PREPARES FOR ENGAGING IN TAR
   5. HUMAN REVIEWER TRAINS THE COMPUTER TO DETECT RELEVANCY, AND THE COMPUTER CLASSIFIES THE TAR SET DOCUMENTS
   6. IMPLEMENT REVIEW QUALITY CONTROL MEASURES DURING TRAINING
      a. DECISION LOG
      b. SAMPLING
      c. REPORTS
   7. DETERMINE WHEN COMPUTER TRAINING IS COMPLETE AND VALIDATE
      a. TRAINING COMPLETION
         i. TRACKING OF SAMPLE-BASED EFFECTIVENESS ESTIMATE
         ii. OBSERVING SPARSENESS OF RELEVANT DOCUMENTS RETURNED BY THE COMPUTER DURING ACTIVE MACHINE LEARNING
         iii. COMPARISON OF PREDICTIVE MODEL BEHAVIORS
         iv. COMPARING TRADITIONAL TAR 1.0 AND TAR 2.0 TRAINING COMPLETION PROCESSES
      b. VALIDATION
   8. FINAL IDENTIFICATION, REVIEW, AND PRODUCTION OF THE PREDICTED RELEVANT SET
   9. WORKFLOW ISSUE SPOTTING
      a. EXTREMELY LOW OR HIGH RICHNESS OF THE TAR SET
      b. SUPPLEMENTAL COLLECTIONS
      c. CHANGING SCOPE OF RELEVANCY
      d. UNREASONABLE TRAINING RESULTS

A. INTRODUCTION

TAR can be used for many tasks throughout the Electronic Discovery Reference Model (EDRM), from information governance to deposition and trial preparation, which are discussed in Chapter Three. This chapter focuses on the use of TAR to determine relevancy of documents. To be more specific, the chapter focuses on a suggested workflow by which a human reviewer works with a computer that can be taught to classify relevant and nonrelevant documents in support of document production obligations. When the human training and computer review are complete, the documents capable of being analyzed will be classified into two piles: the predicted relevant set, which may have been reviewed by humans or may be unreviewed but predicted to be relevant (i.e., documents subject to potential production), and the predicted nonrelevant set, which are typically not reviewed by humans (i.e., documents not subject to potential production). 10

Under this workflow, a human reviewer will have reviewed, or will have the option to review, the predicted relevant set prior to production. The documents in the predicted nonrelevant set typically are omitted from human review based on the classification decisions made by the computer.
