Kareo EHR Usability Study
Report of Results

EXECUTIVE SUMMARY

A usability test of Kareo EHR version 4.0 was conducted on August 21 & 23, 2017, in Irvine, CA by 1M2Es, Inc. The purpose of this study was to test and validate the usability of the current user interface, and provide evidence of usability in the EHR Under Test (EHRUT). During the usability test, ten healthcare providers matching the target demographic criteria served as participants and used the EHRUT in simulated, but representative, tasks.

This study collected performance data on 18 tasks typically conducted on an EHR:

1. View Patient Summary Screen
2. Record Demographics
3. Record Medication History
4. Record Medication Allergy
5. Add a Problem
6. Create a Prescription, DDI Interactions
7. Record Radiology/Imaging Order
8. Record Laboratory Order
9. Record Implantable Device
10. Change Status of Implantable Device
11. Change Demographics
12. Change Medication Order
13. Change Laboratory Order
14. Change Radiology/Imaging Order
15. Change Medication Allergy List
16. Change Problem
17. View Clinical Decision Support
18. Clinical Information Reconciliation

During the 60-minute one-on-one usability test, each participant was greeted by the facility coordinator and asked to review and sign an informed consent/release form (example included in Appendix A); they were instructed that their session would be recorded for note-taking purposes and that they could refuse to participate. Some participants had prior experience with the EHR, and some did not. The administrator introduced the test and instructed participants to complete a series of tasks (given one at a time) using the EHRUT. During the testing, the administrator timed the test and, along with the data logger, recorded user performance data on paper.
The administrator did not give the participant assistance in how to complete the task.

The following types of data were collected for each participant:

• Number of tasks successfully completed within the allotted time without assistance
• Time to complete the tasks
• Number and types of errors
• Path deviations
• Participants' verbalizations
• Participants' satisfaction ratings of the system

All participant data was de-identified: no correspondence could be made from the identity of the participant to the data collected. Following the conclusion of the testing, participants were asked to complete a post-test questionnaire and were compensated with $100 for their time. Various recommended metrics, in accordance with the examples set forth in the NIST Guide to the Processes Approach for Improving the Usability of Electronic Health Records, were used to evaluate the usability of the EHRUT.

The risk assessment was gauged on a scale of 1 (low) to 5 (high) and was calculated from the completion rate for each task, taking into consideration any deviation patterns. Tasks where the completion rate was between 80%-100% and/or that had 1 or fewer deviations received a rating of "1," meaning there is a low risk of error when performing the task. As completion rates decreased and deviations increased, the risk level increased. Tasks with a completion rate of 60%-70% and/or 2 or fewer deviations were given a 2. Tasks with a completion rate of 40% and/or 3 or fewer deviations were given a 3, and tasks with a completion rate of 10%-20% and/or more than 3 deviations were given a 4. One task had a 0% completion rate and therefore received a 5 for risk assessment.

Following is a summary of the performance and rating data collected on the tasks. [Task time and rating columns not recoverable in this copy; the risk assessment column survives.]

#    Task                                       Risk (1 low - 5 high)
1.   View Patient Summary Screen                1
2.   Record Demographics                        1
3.   Record Medication History                  2
4.   Record Medication Allergy                  1
5.   Add a Problem                              1
6.   Create Prescription, DDI Interactions      2
7.   Record Radiology/Imaging Order             1
8.   Record Laboratory Order                    1
9.   Record Implantable Device                  2
10.  Change Status of Implantable Device        1
11.  Change Demographics                        1
12.  Change Medication Order                    2
13.  Change Laboratory Order                    2
14.  Change Radiology/Imaging Order             4
15.  Change Medication Allergy List             1
16.  Change Problem                             2
17.  View Clinical Decision Support             3
18.  Clinical Information Reconciliation        5

The results from the System Usability Scale (SUS) scored the subjective satisfaction with the system, based on performance with the tasks, at 71.3. Broadly interpreted, scores over 68 represent systems with above-average usability.

In addition to the performance data, the following qualitative observations were made:

Major Findings

• For the most part, participants seemed to be able to grasp quickly how to use Kareo's EHR. Learnability, a sign of consistency in the software, was observed as respondents gradually learned how to use the system to complete their tasks, and where to go for guidance when they felt uncertain, such as the left-side navigation.
• Half of the tasks scored the highest success rate (80%-100%), including:
  o View Patient Summary Screen (#1)
  o Record Demographics (#2)
  o Record Medication Allergy (#4)
  o Add a Problem (#5)
  o Record Radiology/Imaging Order (#7)
  o Record Laboratory Order (#8)
  o Change Status of Implantable Device (#10)
  o Change Demographics (#11)
  o Change Medication Allergy List (#15)
• In contrast, "Change Radiology/Imaging Order" scored the lowest success rate at 10%.
• At the highest risk level, "Clinical Information Reconciliation" was the only task not completed by any of the participants.
• "Create a Prescription, DDI Interactions" resulted in the highest average number of errors (2 errors).
• Six additional tasks also resulted in user error; however, they averaged 1 error each. Tasks included:
  o Record Demographics (#2)
  o Record Medication History (#3)
  o Record Radiology/Imaging Order (#7)
  o Record Laboratory Order (#8)
  o Record Implantable Device (#9)
  o Change Laboratory Order (#13)
• Observed task times were consistently lower than expected.
• Deviations were most observed when participants experienced uncertainty in completing tasks. More path deviations were consistent with a higher level of user error:
  o Create a Prescription, with Drug Interaction Alerts (#6)
  o Record Radiology/Imaging Order (#7)
  o Change Medication Allergy List (#15)
  o Change Laboratory Order (#13)
• Generally, a task scoring high in both effectiveness and efficiency indicates a clear path to completion. For example, 'Record Medication Allergy' is one of the best examples within Kareo's EHR of a clear path to completion.
  o Input fields are all left-aligned.
  o The primary action (Save) is closely aligned to the input fields.
  o Visual noise is minimal.
  o Smart defaults are useful for a couple of fields (i.e., Reactions, Date of Onset).
• In contrast, a high abandonment rate (90% – 'Change Radiology/Imaging Order') or a high difficulty rating (3 – 'Change Laboratory Order') is a strong indicator of a significant usability issue and the lack of a clear path to complete the task, causing frustration among participants.
  o As mentioned in the Effectiveness section of the full report:
    - The primary action to complete the task is hidden in both 'Change Radiology/Imaging Order' and 'Change Laboratory Order.'
    - The language used for a primary action, "Save and Close," is a barrier to clarity and is not consistent with participants' understanding of the task to "change" an order.
    - Long optimal paths add to the complexity and the vulnerability to abandoning the task.
• User interface (UI) elements (buttons, links, checkboxes, dropdown menus) are not designed with consistency, and lack uniformity.
• In addition, many UI elements were not visible to participants. Some did not invite the correct action, causing confusion and a lack of confidence in participants.
• Labels of buttons or dropdown menus were, in some cases, confusing for participants.
• Autosuggest does not function intuitively. This tool created a cumbersome, time-consuming, confusing, and frustrating experience for participants. Selections for medications, lab studies, and diagnoses, among others, are less intuitive than participants expected.
• There is a low level of flexibility for inputting data, as exact wording is required, with no allowance for acronyms or misspellings.
• The grey color used extensively throughout Kareo's EHR appears to reduce visibility, causing confusion and a lack of confidence, and in some cases contributed to incomplete tasks.
• Error handling appeared to be limited, possibly causing the abandonment rate to increase. Participants, in some cases, were not able to recover from errors and reached a stopping point without the ability to complete their task.
• Participants appeared to overlook the right side of the webpage. On most occasions, participants mentioned they didn't notice the controls on the right side of the screen.

Areas for Improvement

• Consider making adjustments to form design to invite the appropriate action from users. Luke Wroblewski's "Best Practices for Form Design" is well regarded as an excellent resource in this area. The following suggestions are consistent with this resource.
• Provide a clear path for users to complete their tasks. Examples are provided in the Major Findings section of the full report that highlight specific issues to address in this area. For example:
  o Primary actions to complete tasks need to be clearly visible, not hidden.
  o Use language familiar to users, not confusing or company-centric words.
  o Apply progressive disclosure, gradually exposing additional options if required by users.
• Create user interface (UI) design guidelines to ensure consistency and uniformity. Elements in a user interface should look and behave the same way throughout the system. This helps to constantly prove a user's assumptions about the user interface correct, creating a sense of control, familiarity, and reliability.
  o In creating Kareo's UI design guidelines, confirm each UI element creates a visual perception of how it can or should be used. Each element should invite users to take the appropriate action. UI elements should not appear invisible, nor blend in with the background, if they require users to take action.
• Invite the correct action from users by following best practices when applying the appropriate UI element.
• Improve the functionality of the autosuggest tool to be intuitive to users.
• Provide a progress indicator and/or messaging to assist users in being more effective in accomplishing complicated tasks, especially those tasks with paths that require 12 to 14 clicks to complete.
• Provide direct feedback as data is entered, using inline validation for inputs that have potentially high error rates, such as 'Create a Prescription.'
• Anticipate potential user errors that may occur, and provide the ability to easily recover from errors to reduce uncertainty and build users' confidence when using Kareo's EHR.
• Consider more intuitive smart defaults that allow flexible data input of medications, allergens, lab studies, and diagnoses. One example mentioned by participants is recognizing abbreviations of commonly used medications.
• Allow users to tailor some information or functionality to their context.

INTRODUCTION

The EHRUT tested for this study was Kareo EHR, version 4.0, designed to present medical information to healthcare providers in ambulatory physician practices. The usability testing attempted to represent realistic exercises and conditions.

The purpose of this study was to test and validate the usability of the current user interface and to provide evidence of usability in the EHRUT. To this end, measures of effectiveness, efficiency, and user satisfaction (such as time on task and path deviations) were captured during the usability testing.

METHOD

Participants

A total of ten participants were tested on the EHRUT. Participants in the test worked in the medical field and had experience using an EHR. Participants were professionally recruited by 1M2Es and were compensated $100 for their time.
In addition, participants had no direct connection to the development of the EHRUT or to the organization producing it.
For test purposes, end-user characteristics were identified and translated into a recruitment screener used to solicit potential participants; an example of a screener is provided in Appendix A.

Recruited participants had a mix of backgrounds and demographic characteristics conforming to the recruitment screener. The following is a table of participants by characteristics, including demographics, professional experience, and monthly EHR usage. Participant names were replaced with Participant IDs so that an individual's data cannot be tied back to that individual. [Participant ID and age-range columns not recoverable in this copy.]

Occupation             Prof. Experience    Monthly EHR Usage
EMT                    1-5 yrs             Every other day
Medical Coordinator    17 yrs              Daily
Nurse                  17 yrs              Daily
Admissions Director    11-16 yrs           Daily
Nurse                  11-16 yrs           Every other day
Medical Admin          10 yrs              Daily
Nurse                  17 yrs              Every other day
Nurse (PACU)           10 yrs              Every other day
Nurse                  6-10 yrs            Daily
Medical Admin          5-10 yrs            Daily

Ten participants were recruited and all participated in the usability test.

Participants were scheduled for 60-minute sessions with 30 minutes between each session for a debrief by the administrator and data logger and to reset systems to proper test conditions. A spreadsheet was used to keep track of the participant schedule and included each participant's demographic characteristics as provided by the recruiting firm.

Study Design

Overall, the objective of this test was to uncover areas where the application performed well – that is, effectively, efficiently, and with satisfaction – and areas where the application failed to meet the needs of the participants. The data from this test may serve as a baseline for future tests with an updated version of the same EHR and/or for comparison with other EHRs, provided the same tasks are used. In short, this testing serves both as a means to record or benchmark current usability and as a way to identify areas where improvements must be made.

During the usability test, participants interacted with one EHR. Each participant used the system in the same location and was provided with the same instructions.
The system was evaluated for effectiveness, efficiency, and satisfaction as defined by the measures collected and analyzed for each participant:

• Number of tasks successfully completed within the allotted time without assistance
• Time to complete the tasks
• Number and types of errors
• Path deviations
• Participants' verbalizations (comments)
• Participants' satisfaction ratings of the system

Additional information about the various measures can be found in the Usability Metrics section.

Tasks

A number of tasks were constructed that would be realistic and representative of the kinds of activities a user might do with this EHR:

1. View Patient Summary Screen
2. Record Demographics
3. Record Medication History
4. Record Medication Allergy
5. Add a Problem
6. Create a Prescription, DDI Interactions
7. Record Radiology/Imaging Order
8. Record Laboratory Order
9. Record Implantable Device
10. Change Status of Implantable Device
11. Change Demographics
12. Change Medication Order
13. Change Laboratory Order
14. Change Radiology/Imaging Order
15. Change Medication Allergy List
16. Change Problem
17. View Clinical Decision Support
18. Clinical Information Reconciliation

Tasks were selected based on their frequency of use, their criticality of function, and whether they might be troublesome for users. Tasks were constructed in light of the study objectives.

Procedures

Upon arrival, participants were greeted, their identity was verified and matched against the participant schedule, they were given their participant ID, and they reviewed and signed an informed consent and release form (see Appendix A).

To ensure that the test ran smoothly, two staff members participated in this test: 1) the usability administrator and 2) the data logger. The usability testing staff conducting the test were experienced usability practitioners with Master's degrees, trained in market research and usability research, with more than 30 years of combined experience. They regularly conduct both formal and informal usability testing using various formats, including mobile and desktop applications.

The administrator moderated the session, covering all instructions and tasks. The administrator also monitored task times, obtained post-task rating data, and took notes on participant
comments. A second person served as the data logger and took notes on task success, path deviations, the number and type of errors, and comments.

Participants were instructed to perform the tasks (see specific instructions below):

• As quickly as possible, making as few errors and deviations as possible.
• Without assistance; administrators were allowed to give immaterial guidance and clarification on tasks, but not instructions on use. If instruction was given, the task was classified as abandoned and/or failed.
  o If the task was abandoned and/or failed, the administrator continued with questioning to clarify the steps that should have been taken and to acquire feedback as to what exactly was confusing and/or what could be done to make the task easier or more straightforward.

For each task, the participants were given a written copy of the task. Task timing began once the administrator finished reading the question. The task time was stopped once the participant indicated they had successfully completed the task. Scoring is discussed below in the Data Scoring section.

Following the session, the administrator gave the participant the post-test questionnaire (e.g., the System Usability Scale; see Appendix C), compensated him or her for their time, and thanked each individual for their participation.

Participants' demographic information, task success rates, time on task, errors, deviations, and post-test questionnaire responses were recorded into a spreadsheet.

Participants were thanked for their time and compensated; they signed a receipt and acknowledgment form (see Appendix D) indicating they had received the compensation.

Test Location

The test facility included a waiting area and a quiet testing room with a table, a computer for the participant, and a recording computer for the administrator. Only the participant and the administrator were in the test room. All observers and the data logger worked from a separate room where they could see the participant's screen and face shot and listen to the audio of the session.
To ensure that the environment was comfortable for users, noise levels were kept to a minimum, with the ambient temperature within a normal range. All of the safety instructions and evacuation procedures were valid, in place, and visible to the participants.

Test Environment

The EHRUT would typically be used in a healthcare office or facility. In this instance, the testing was conducted in a research facility. For testing, a Dell E6420 laptop running Windows 7 was used. The participants used a mouse and keyboard when interacting with the EHRUT.

The EHRUT used the laptop's built-in 14-inch monitor. The screen was set to a resolution of 1600 x 900 pixels and used the True Color (32-bit) color setting. The EHRUT application itself was accessed over the internet via a virtual private network, and it was running against an
internal staging environment set up on Kareo's internal network. The system performance (i.e., response time) was representative of what actual users would experience in a field implementation. Additionally, participants were instructed not to change any of the default system settings (such as control of font size).

Test Forms & Tools

During the usability test, various documents and instruments were used, including:

1. Informed Consent
2. Moderator's Guide
3. Post-test Questionnaire
4. Incentive Receipt and Acknowledgment Form

Examples of these documents can be found in Appendices A-D, respectively. The Moderator's Guide was devised so as to capture the required data.

A video camera recorded each participant's facial expressions, and verbal comments were recorded with a microphone. The test sessions were electronically transmitted to a nearby observation room where the data logger observed the test session.

Participant Instructions

The administrator read the following instructions aloud to each participant (also see the full moderator's guide in Appendix B):

Thank you for participating in this study. Your input is very important. Our session today will last about 60 minutes. During that time you will use an example of an electronic health record. I will ask you to complete a few tasks using this system and answer some questions. Please try to complete the tasks on your own, following the instructions very closely. Please note that we are not testing you, we are testing the system; therefore, if you have difficulty, all this means is that something needs to be improved in the system. I will be here in case you need specific help, but I am not able to instruct you or provide help in how to use the application.

Overall, we are interested in how easy (or how difficult) this system is to use, what would be useful to you, and how we could improve it. I did not have any involvement in its creation, so please be honest with your opinions.
All of the information that you provide will be kept confidential, and your name will not be associated with your comments at any time. Should you feel it necessary, you are able to withdraw at any time during the testing.

Following the procedural instructions, participants were shown the EHR and, as their first task, were given a minute to explore the system and make comments. Once this task was complete, the administrator gave the following instructions:
For each task, I will read the description to you and say, "Begin." At that point, please perform the task and say "Done" once you believe you have successfully completed the task. I will ask you your impressions about the task once you are done.

Participants were then given 18 tasks to complete. Tasks are listed in the moderator's guide in Appendix B.

Usability Metrics

According to the NIST Guide to the Processes Approach for Improving the Usability of Electronic Health Records, EHRs should support a process that provides a high level of usability for all users. The goal is for users to interact with the system effectively, efficiently, and with an acceptable level of satisfaction. To this end, metrics for effectiveness, efficiency, and user satisfaction were captured during the usability testing. The goals of the test were to assess:

1. Effectiveness of Kareo EHR by measuring participant success rates and errors
2. Efficiency of Kareo EHR by measuring the average task time and path deviations
3. Satisfaction with Kareo EHR by measuring ease of use ratings

Data Scoring

The following details how tasks were scored, errors evaluated, and the time data analyzed.

Effectiveness: Task Success
A task was counted as a "Success" if the participant was able to achieve the correct outcome, without assistance, within the time allotted on a per-task basis. The total number of successes was calculated for each task and then divided by the total number of times that task was attempted. The results are provided as a percentage. Task times were recorded for successes; observed task time divided by the optimal time for each task is a measure of optimal efficiency. Optimal task performance time, as benchmarked by expert performance under realistic conditions, is recorded when constructing tasks. Target task times used in the Moderator's Guide must be operationally defined by taking multiple measures of optimal performance and multiplying by some factor (e.g., 1.25) that allows some time buffer, because the participants are presumably not trained to expert performance. Thus, if expert, optimal performance on a task was X seconds, then the allotted task time was X * 1.25 seconds. This ratio should be aggregated across tasks and reported with mean and variance scores.

Effectiveness: Task Failures
If the participant abandoned the task, did not reach the correct answer or performed it incorrectly, or reached the end of the allotted time before successful completion, the task was counted as a "Failure." No task times were taken for errors. The total number of errors was calculated for each task and then divided by the total number of times that task was attempted. Not all deviations would be counted as errors. This should also be expressed as the mean number of failed tasks per participant. On a qualitative level, an enumeration of errors and error types should be collected.

Efficiency: Task Deviations
Deviations occur if the participant, for example, went to a wrong screen, clicked on an incorrect menu item, followed an incorrect link, or interacted incorrectly with an on-screen control. This path was compared to the optimal path. The observed path is divided by the optimal path to show a ratio of path deviation.

Efficiency: Task Time
Only task times for tasks that were successfully completed were included in the average task time analysis. The average time per task was calculated for each task. Variance measures (standard deviation and standard error) were also calculated.

Satisfaction: Task Rating
A participant's subjective impression of the ease of use of the application was measured by administering both a simple post-task question and a post-session questionnaire. After each task, the participant was asked to rate "Overall, this task was:" on a scale of 1 (Very Easy) to 5 (Very Difficult). These data are averaged across participants. To measure participants' confidence in and likeability of the Kareo EHR overall, the testing team administered the System Usability Scale (SUS) post-test questionnaire. Questions included "I think I would like to use this system frequently," "I thought the system was easy to use," and "I would imagine that most people would learn to use this system very quickly." See the full System Usability Scale questionnaire in Appendix C.

RESULTS

Data Analysis and Reporting

The results of the usability test were calculated according to the methods specified in the Usability Metrics section above.
Participants who failed to follow session and task instructions had their data excluded from the analyses.

The usability testing results for the EHRUT are detailed below. The results should be seen in light of the objectives and goals outlined in the Study Design section. The data should yield actionable results that, if addressed, will have a positive impact on user performance.
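To make the scoring rules concrete, the arithmetic described in the Data Scoring section and the risk rubric from the Executive Summary can be sketched in Python. This is an illustrative reading, not the study's actual analysis code: the 1.25 buffer factor comes from the report, while the handling of the rubric's "and/or" boundary cases is an assumption (ties here resolve toward the higher risk level).

```python
from math import sqrt
from statistics import mean, stdev


def success_rate(successes, attempts):
    """Effectiveness: successes divided by attempts, as a percentage."""
    return 100.0 * successes / attempts


def allotted_time(optimal_seconds, buffer_factor=1.25):
    """Target task time: expert (optimal) time padded by a buffer factor."""
    return optimal_seconds * buffer_factor


def deviation_ratio(observed_steps, optimal_steps):
    """Efficiency: observed path length divided by the optimal path length."""
    return observed_steps / optimal_steps


def time_stats(task_times):
    """Mean, standard deviation, and standard error of successful task times."""
    m = mean(task_times)
    sd = stdev(task_times)
    return m, sd, sd / sqrt(len(task_times))


def risk_rating(completion_rate, deviations):
    """One possible reading of the report's 1-5 risk rubric.

    The rubric's 'and/or' wording leaves boundary cases open; this sketch
    requires both conditions at each level, so ambiguous cases fall through
    to a higher risk rating.
    """
    if completion_rate == 0:
        return 5
    if completion_rate >= 80 and deviations <= 1:
        return 1
    if completion_rate >= 60 and deviations <= 2:
        return 2
    if completion_rate >= 40 and deviations <= 3:
        return 3
    return 4
```

For example, a task completed by 9 of 10 participants with no deviations scores success_rate(9, 10) == 90.0 and risk_rating(90, 0) == 1, while a task no participant completed rates a 5 regardless of deviations.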
[Detailed results table: for each of the 18 tasks, the report lists mean task time (with standard deviation), post-task ratings, and a risk assessment on a scale of 1 (low) to 5 (high). The task time and rating columns were not recoverable in this copy. The surviving risk ratings are 1-2 for most tasks, 3 for View Clinical Decision Support (#17), 4 for Change Radiology/Imaging Order (#14), and 5 for Clinical Information Reconciliation (#18).]

The results from the SUS (System Usability Scale) scored the satisfaction with the system, based on performance with these tasks, at 71.3. Broadly interpreted, scores over 68 represent systems with above-average usability.
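For reference, the standard SUS computation behind a 0-100 score such as the 71.3 reported above works as follows: odd-numbered items (positively worded) contribute their response minus 1, even-numbered items (negatively worded) contribute 5 minus their response, and the 0-40 total is multiplied by 2.5. A minimal sketch, using illustrative responses rather than this study's data:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Standard SUS scoring: odd items (1st, 3rd, ...) contribute (response - 1),
    even items contribute (5 - response), and the 0-40 total is scaled to
    0-100 by multiplying by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5


# Hypothetical participant: mildly agrees with the positive items (4) and
# mildly disagrees with the negative items (2).
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Per-participant scores computed this way are then averaged across all ten participants to produce the study-level score.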
Effectiveness

• At the highest level of effectiveness, tasks with a success rate of 90%-100% also tend to score well in other areas: no deviations; no errors; perceived as easy; and completed in a fraction of the optimal time. These tasks include:
  o View Patient Summary Screen (#1)
  o Record Medication Allergy (#4)
  o Change Demographics (#11)
  o Change Medication Allergy List (#15)
• Similarly, 'Add a Problem' scored a high success rate (90%) and resulted in similarly positive scores for errors, task ease, and task times. However, various slight deviations were observed.
  o Several participants in this study have a background in non-ambulatory settings and deviated to an incorrect location (the History tab), then quickly found their way back to the correct path through the left navigation (Problem tab). "We normally find that information in History."
  o A couple of participants were observed deviating slightly within the correct location, clicking on multiple dropdown options as they struggled with the autosuggest function in 'Add a Problem,' which appeared to not work properly. "The problem/issue box seems useless. This seemed time consuming, too many unnecessary multiple steps."
• The next level of effectiveness includes tasks that scored a high success rate (80%-90%), yet scored lower in other areas. These tasks were completed with slight deviations and a slight number of errors, and were perceived as slightly more difficult. However, they were still completed faster than expected.
  o Record Demographics (#2)
  o Record Radiology/Imaging Order (#7)
  o Record Laboratory Order (#8)
• The auto suggest function in both 'Record Ra