DRAM Errors In The Wild: A Large-Scale Field Study


Bianca Schroeder (Dept. of Computer Science, University of Toronto, Toronto, Canada; bianca@cs.toronto.edu)
Eduardo Pinheiro (Google Inc., Mountain View, CA)
Wolf-Dietrich Weber (Google Inc., Mountain View, CA)

ABSTRACT

Errors in dynamic random access memory (DRAM) are a common form of hardware failure in modern compute clusters. Failures are costly both in terms of hardware replacement costs and service disruption. While a large body of work exists on DRAM in laboratory conditions, little has been reported on real DRAM failures in large production clusters. In this paper, we analyze measurements of memory errors in a large fleet of commodity servers over a period of 2.5 years. The collected data covers multiple vendors, DRAM capacities and technologies, and comprises many millions of DIMM days.

The goal of this paper is to answer questions such as the following: How common are memory errors in practice? What are their statistical properties? How are they affected by external factors, such as temperature and utilization, and by chip-specific factors, such as chip density, memory technology and DIMM age?

We find that DRAM error behavior in the field differs in many key aspects from commonly held assumptions. For example, we observe DRAM error rates that are orders of magnitude higher than previously reported, with 25,000 to 70,000 errors per billion device hours per Mbit and more than 8% of DIMMs affected by errors per year. We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which previous work suspects to be the dominant error mode. We find that temperature, known to strongly impact DIMM error rates in lab conditions, has a surprisingly small effect on error behavior in the field, when taking all other factors into account.
Finally, unlike commonly feared, we do not observe any indication that newer generations of DIMMs have worse error behavior.

Categories and Subject Descriptors: B.8 [Hardware]: Performance and Reliability; C.4 [Computer Systems Organization]: Performance of Systems

General Terms: Reliability.

Keywords: DRAM, DIMM, memory, reliability, data corruption, soft error, hard error, large-scale systems.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGMETRICS/Performance'09, June 15–19, 2009, Seattle, WA, USA. Copyright 2009 ACM 978-1-60558-511-6/09/06 ...$5.00.

1. INTRODUCTION

Errors in dynamic random access memory (DRAM) devices have been a concern for a long time [3, 11, 15–17, 23]. A memory error is an event that leads to the logical state of one or multiple bits being read differently from how they were last written. Memory errors can be caused by electrical or magnetic interference (e.g. due to cosmic rays), can be due to problems with the hardware (e.g. a bit being permanently damaged), or can be the result of corruption along the data path between the memories and the processing elements. Memory errors can be classified into soft errors, which randomly corrupt bits but do not leave physical damage, and hard errors, which corrupt bits in a repeatable manner because of a physical defect.

The consequence of a memory error is system dependent. In systems using memory without support for error correction and detection, a memory error can lead to a machine crash or to applications using corrupted data.
Most memory systems in server machines employ error correcting codes (ECC) [5], which allow the detection and correction of one or multiple bit errors. If an error is uncorrectable, i.e. the number of affected bits exceeds the limit of what the ECC can correct, typically a machine shutdown is forced. In many production environments, including ours, a single uncorrectable error is considered serious enough to replace the dual in-line memory module (DIMM) that caused it.

Memory errors are costly in terms of the system failures they cause and the repair costs associated with them. In production sites running large-scale systems, memory component replacements rank near the top of component replacements [20], and memory errors are one of the most common hardware problems to lead to machine crashes [19]. Moreover, recent work shows that memory errors can cause security vulnerabilities [7, 22]. There is also a fear that advancing densities in DRAM technology might lead to increased memory errors, exacerbating this problem in the future [3, 12, 13].

Despite the practical relevance of DRAM errors, very little is known about their prevalence in real production systems. Existing studies are mostly based on lab experiments using accelerated testing, where DRAM is exposed to extreme conditions (such as high temperature) to artificially induce errors. It is not clear how such results carry over to real production systems. The few existing studies that are based on measurements in real systems are small in scale, such as recent work by Li et al. [10], who report on DRAM errors in 300 machines over a period of 3 to 7 months.

One main reason for the limited understanding of DRAM errors in real systems is the large experimental scale required to obtain interesting measurements. A detailed study of errors requires data collection over a long time period (several years) and thousands of machines, a scale that researchers cannot easily replicate in their labs. Production sites, which run large-scale systems, often do not collect and record error data rigorously, or are reluctant to share it because of the sensitive nature of data related to failures.

This paper provides the first large-scale study of DRAM memory errors in the field. It is based on data collected from Google's server fleet over a period of more than two years, making up many millions of DIMM days. The DRAM in our study covers multiple vendors, DRAM densities and technologies (DDR1, DDR2, and FBDIMM).

The paper addresses the following questions: How common are memory errors in practice? What are their statistical properties? How are they affected by external factors, such as temperature and system utilization? And how do they vary with chip-specific factors, such as chip density, memory technology and DIMM age?

We find that in many aspects DRAM errors in the field behave very differently than commonly assumed. For example, we observe DRAM error rates that are orders of magnitude higher than previously reported, with FIT rates (failures in time per billion device hours) of 25,000 to 70,000 per Mbit and more than 8% of DIMMs affected per year. We provide strong evidence that memory errors are dominated by hard errors, rather than soft errors, which most previous work focuses on. We find that, out of all the factors that impact a DIMM's error behavior in the field, temperature has a surprisingly small effect. Finally, unlike commonly feared, we do not observe any indication that per-DIMM error rates increase with newer generations of DIMMs.

2. BACKGROUND AND METHODOLOGY

2.1 Memory errors and their handling

Most memory systems in use in servers today are protected by error detection and correction codes. The typical arrangement is for a memory access word to be augmented with additional bits to contain the error code. Typical error codes in commodity server systems today fall in the single error correct, double error detect (SECDED) category. That means they can reliably detect and correct any single-bit error, but they can only detect, and not correct, multiple bit errors.
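To make the SECDED idea concrete, the following toy sketch implements an extended Hamming(8,4) code over 4 data bits (real server ECC applies the same construction to 64-bit words with 8 check bits). All names here are our own illustration, not any real memory-controller API.

```python
# Toy SECDED code: extended Hamming(8,4). Corrects any single-bit flip
# (a correctable error, CE) and detects, but cannot correct, any
# double-bit flip (an uncorrectable error, UE). Illustrative sketch only.

def encode(data):
    """Encode 4 data bits into an 8-bit codeword (index 0 = overall parity)."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = data
    c[1] = c[3] ^ c[5] ^ c[7]          # parity over positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]          # parity over positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]          # parity over positions with bit 2 set
    c[0] = c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]  # overall parity
    return c

def decode(c):
    """Return (data bits, status) where status is 'ok', 'CE' or 'UE'."""
    c = c[:]
    syndrome = 0
    for i in range(1, 8):
        if c[i]:
            syndrome ^= i              # XOR of positions of set bits
    overall = 0
    for bit in c:
        overall ^= bit
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                 # overall parity fails: single-bit error
        c[syndrome] ^= 1               # syndrome gives the flipped position
        status = "CE"                  # (syndrome 0: the parity bit itself flipped)
    else:                              # syndrome != 0 but overall parity holds
        status = "UE"                  # double-bit error: detect only
    return [c[3], c[5], c[6], c[7]], status

word = encode([1, 0, 1, 1])
word[5] ^= 1                           # single-bit flip
print(decode(word))                    # ([1, 0, 1, 1], 'CE')
word[6] ^= 1                           # second flip: now uncorrectable
print(decode(word)[1])                 # UE
```

The chip-kill codes discussed next generalize this construction so that multi-bit errors confined to a single DRAM chip remain correctable.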
More powerful codes can correct and detect more error bits in a single memory word. For example, a code family known as chip-kill [6] can correct up to 4 adjacent bits at once, and is thus able to work around a completely broken 4-bit-wide DRAM chip. We use the terms correctable error (CE) and uncorrectable error (UE) in this paper to generalize away the details of the actual error codes used.

If done well, the handling of correctable memory errors is largely invisible to application software. Correction of the error and logging of the event can be performed in hardware for a minimal performance impact. However, depending on how much of the error handling is pushed into software, the impact can be more severe, with high error rates causing a significant degradation of overall system performance.

Uncorrectable errors typically lead to a catastrophic failure of some sort. Either there is an explicit failure action in response to the memory error (such as a machine reboot), or there is a risk of a data-corruption-induced failure such as a kernel panic. In the systems we study, all uncorrectable errors are considered serious enough to shut down the machine and replace the DIMM at fault.

Memory errors can be classified into soft errors, which randomly corrupt bits but do not leave any physical damage, and hard errors, which corrupt bits in a repeatable manner because of a physical defect (e.g. "stuck bits").

Figure 1: Collection, storage, and analysis architecture. (The figure shows "breadcrumbs" flowing from a collector on each computing node into aggregated raw data in a real-time Bigtable, from which selected raw data and summary data feed a Sawzall-based analysis tool that produces results.)

Our measurement infrastructure captures both hard and soft errors, but does not allow us to reliably distinguish these types of errors. All our numbers include both hard and soft errors.

Single-bit soft errors in the memory array can accumulate over time and turn into multi-bit errors.
In order to avoid this accumulation of single-bit errors, memory systems can employ a hardware scrubber [14] that scans through the memory while the memory is otherwise idle. Any memory words with single-bit errors are written back after correction, thus eliminating the single-bit error if it was soft. Three of the six hardware platforms we consider (Platforms C, D and F) make use of memory scrubbers. The typical scrubbing rate in those systems is 1GB every 45 minutes. In the other three hardware platforms (Platforms A, B, and E), errors are only detected on access.

2.2 The systems

Our data covers the majority of machines in Google's fleet and spans nearly 2.5 years, from January 2006 to June 2008. Each machine comprises a motherboard with some processors and memory DIMMs. We study 6 different hardware platforms, where a platform is defined by the motherboard and memory generation.

The memory in these systems covers a wide variety of the most commonly used types of DRAM. The DIMMs come from multiple manufacturers and models, with three different capacities (1GB, 2GB, 4GB), and cover the three most common DRAM technologies: Double Data Rate (DDR1), Double Data Rate 2 (DDR2) and Fully-Buffered (FBDIMM). DDR1 and DDR2 have a similar interface, except that DDR2 provides twice the per-data-pin throughput (400 Mbit/s and 800 Mbit/s, respectively). FBDIMM is a buffering interface around what is essentially DDR2 technology inside.

2.3 The measurement methodology

Our collection infrastructure (see Figure 1) consists of locally recording events every time they happen. The logged events of interest to us are correctable errors, uncorrectable errors, CPU utilization, temperature, and memory allocated. These events ("breadcrumbs") remain in the host machine and are collected periodically (every 10 minutes) and archived in a Bigtable [4] for later processing. This collection happens continuously in the background.

The scale of the system and the data being collected make the analysis non-trivial. Each of many tens of thousands of machines in the fleet logs hundreds of parameters every ten minutes, adding up to many TBytes. It is therefore impractical to download the data to a single machine and analyze it with standard tools. We solve this problem by using a parallel pre-processing step (implemented in Sawzall [18]), which runs on several hundred nodes simultaneously and performs basic data clean-up and filtering. We then perform the remainder of our analysis using standard analysis tools.

2.4 Analytical methodology

The metrics we consider are the rate and probability of errors over a given time period. For uncorrectable errors, we focus solely on probabilities, since a DIMM is expected to be removed after experiencing an uncorrectable error.

As part of this study, we investigate the impact of temperature and utilization (as measured by CPU utilization and amount of memory allocated) on memory errors. The exact temperature and utilization levels at which our systems operate are sensitive information. Instead of giving absolute numbers for temperature, we therefore report temperature values "normalized" by the smallest observed temperature. That is, a reported temperature value of x means the temperature was x degrees higher than the smallest observed temperature. The same approach does not work for CPU utilization, since the range of utilization levels is obvious (ranging from 0–100%). Instead, we report CPU utilization as a multiple of the average utilization, i.e. a utilization of x corresponds to a utilization level that is x times higher than the average utilization.
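As a minimal sketch of this normalization (the sample values below are made up for illustration, not measurements from the study):

```python
# Sketch of the normalization scheme: temperatures are reported relative
# to the smallest observed value; CPU utilization as a multiple of the
# average. Sample values below are hypothetical.

temps = [21.0, 24.5, 23.0, 28.0]       # hypothetical raw readings (deg C)
cpu_util = [0.10, 0.30, 0.50, 0.30]    # hypothetical raw utilization (0-1)

norm_temps = [t - min(temps) for t in temps]   # degrees above the minimum
avg = sum(cpu_util) / len(cpu_util)
norm_util = [u / avg for u in cpu_util]        # multiples of the average

print(norm_temps)   # degrees above the coolest reading
print(norm_util)    # 1.0 marks an average-utilization sample
```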
We follow the same approach for allocated memory.

When studying the effect of various factors on memory errors, we often want to see how much higher or lower the monthly rate of errors is compared to an average month (independent of the factor under consideration). We therefore often report "normalized" rates and probabilities, i.e. we give rates and probabilities as multiples of the average. For example, when we say the normalized probability of an uncorrectable error is 1.5 for a given month, that means the uncorrectable error probability is 1.5 times higher than in an average month. This has the additional advantage that we can plot results for platforms with very different error probabilities in the same graph.

Finally, when studying the effect of factors such as temperature, we report error rates as a function of percentiles of the observed factor. For example, we might report that the monthly correctable error rate is x if the temperature lies in the first temperature decile (i.e. the temperature is in the range of the lowest 10% of reported temperature measurements). This has the advantage that the error rates for each temperature range that we report on are based on the same number of data samples. Since error rates tend to be highly variable, it is important to compare data points that are based on a similar number of samples.

3. BASELINE STATISTICS

We start our study with the basic question of how common memory errors are in the field.

Table 1: Memory errors per year, broken down by platform (per-machine statistics in the top half, per-DIMM statistics in the bottom half; columns give CE incidence in %, mean and median CE rates, and UE incidence in %).

Since a single uncorrectable error in a machine leads to the shutdown of the entire machine, we begin by looking at the frequency of memory errors per machine.
We then focus on the frequency of memory errors for individual DIMMs.

3.1 Errors per machine

Table 1 (top) presents high-level statistics on the frequency of correctable and uncorrectable errors per machine per year of operation, broken down by the type of hardware platform. Blank lines indicate lack of sufficient data.

Our first observation is that memory errors are not rare events. About a third of all machines in the fleet experience at least one memory error per year (see column CE Incid. %), and the average number of correctable errors per year is over 22,000. These numbers vary across platforms, with some platforms (e.g. Platforms A and B) seeing nearly 50% of their machines affected by correctable errors, while in others only 12–27% are affected. The median number of errors per year, for those machines that experience at least one error, ranges from 25 to 611.

Interestingly, for those platforms with a lower percentage of machines affected by correctable errors, the average number of correctable errors per machine per year is the same or even higher than for the other platforms. We will take a closer look at the differences between platforms and technologies in Section 3.2.

We observe that for all platforms the number of errors per machine is highly variable, with coefficients of variation between 3.4 and 20.¹ Some machines develop a very large number of correctable errors compared to others. We find that for all platforms, 20% of the machines with errors make up more than 90% of all observed errors for that platform.

One explanation for the high variability might be correlations between errors. A closer look at the data confirms this hypothesis: in more than 93% of the cases, a machine that sees a correctable error experiences at least one more correctable error in the same year.

¹These are high C.V. values compared, for example, to an exponential distribution, which has a C.V. of 1, or a Poisson distribution, which has a C.V. of 1/√mean.
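The coefficient of variation used above is simply the standard deviation divided by the mean. A small sketch (with made-up yearly counts, not the study's data) shows how a few heavy-hitter machines drive the C.V. far above the value of 1 that an exponential distribution would give:

```python
import statistics

def cv(xs):
    """Coefficient of variation: population standard deviation / mean."""
    return statistics.pstdev(xs) / statistics.fmean(xs)

# Hypothetical yearly CE counts: most machines see few or no errors,
# while a couple of heavy hitters see very many (as observed in the study).
counts = [1, 2, 3, 2, 1, 5000, 20000] + [0] * 13
print(cv(counts))   # well above 1, i.e. far more skewed than an exponential
```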

Figure 2: The distribution of correctable errors over DIMMs. The graph plots the fraction Y of all errors in a platform that is made up by the fraction X of DIMMs with the largest number of errors.

While correctable errors typically do not have an immediate impact on a machine, uncorrectable errors usually result in a machine shutdown. Table 1 shows that while uncorrectable errors are less common than correctable errors, they do happen at a significant rate. Across the entire fleet, 1.3% of machines are affected by uncorrectable errors per year, with some platforms seeing as many as 2–4% affected.

3.2 Errors per DIMM

Since machines vary in the number of DRAM DIMMs and total DRAM capacity, we next consider per-DIMM statistics (Table 1, bottom).

Not surprisingly, the per-DIMM numbers are lower than the per-machine numbers. Across the entire fleet, 8.2% of all DIMMs are affected by correctable errors, and an average DIMM experiences nearly 4000 correctable errors per year. These numbers vary greatly by platform. Around 20% of DIMMs in Platforms A and B are affected by correctable errors per year, compared to less than 4% of DIMMs in Platforms C and D. Only 0.05–0.08% of the DIMMs in Platforms A and E see an uncorrectable error per year, compared to nearly 0.3% of the DIMMs in Platforms C and D. The mean numbers of correctable errors per DIMM are more comparable, ranging from 3351 to 4530 correctable errors per year.

The differences between platforms bring up the question of how chip-hardware-specific factors impact the frequency of memory errors. We observe that there are two groups of platforms, with the members of each group sharing similar error behavior: Platforms A, B, and E on one side, and Platforms C, D and F on the other.
While both groups have mean correctable error rates that are on the same order of magnitude, the first group has a much higher fraction of DIMMs affected by correctable errors, and the second group has a much higher fraction of DIMMs affected by uncorrectable errors.

We investigated a number of external factors that might explain the difference in memory error rates across platforms, including temperature, utilization, DIMM age and capacity. While we will see (in Section 5) that all of these affect the frequency of errors, they are not sufficient to explain the differences we observe between platforms.

A closer look at the data also lets us rule out memory technology (DDR1, DDR2, or FBDIMM) as the main factor responsible for the difference. Some platforms within the same group use different memory technology (e.g. DDR1 versus DDR2 in Platforms C and D, respectively), while there are platforms in different groups using the same memory technology (e.g. Platforms A, B and C all use DDR1). There is not one memory technology that is clearly superior to the others when it comes to error behavior.

We also considered the possibility that DIMMs from different manufacturers might exhibit different error behavior. Table 2 shows the error rates broken down by the most common DIMM types, where DIMM type is defined by the combination of platform and manufacturer. We note that DIMMs within the same platform exhibit similar error behavior, even if they are from different manufacturers. Moreover, we observe that DIMMs from some manufacturers (Mfg1, Mfg4) are used in a number of different platforms with very different error behavior.
These observations show two things: the differences between platforms are not mainly due to differences between manufacturers, and we do not see manufacturers that are consistently good or bad.

While we cannot be certain about the cause of the differences between platforms, we hypothesize that the observed differences in correctable errors are largely due to board and DIMM design differences. We suspect that the differences in uncorrectable errors are due to differences in the error correction codes in use. In particular, Platforms C and D are the only platforms that do not use a form of chip-kill [6]. Chip-kill is a more powerful code that can correct certain types of multiple-bit errors, while the codes in Platforms C and D can only correct single-bit errors.

Figure 3: Correlations between correctable errors in the same DIMM. The left graph shows the probability of seeing a CE in a given month, depending on whether there were other CEs observed in the same month and the previous month. The numbers on top of each bar show the factor increase in probability compared to the CE probability in a random month (three left-most bars) and compared to the CE probability when there was no CE in the previous month (three right-most bars). The middle graph shows the expected number of CEs in a month as a function of the number of CEs in the previous month. The right graph shows the autocorrelation function for the number of CEs observed per month in a DIMM.

We observe that for all platforms the number of correctable errors per DIMM per year is highly variable, with coefficients of variation ranging from 6 to 46. One might suspect that this is because the majority of the DIMMs see zero errors, while those affected see a large number of them. It turns out that even when focusing only on those DIMMs that have experienced errors, the variability is still high (not shown in table). The C.V. values range from 3 to 7, and there are large differences between the mean and the median number of correctable errors: the means range from 20,000 to 140,000, while the medians are between 42 and 167.

Figure 2 presents a view of the distribution of correctable errors over DIMMs. It plots the fraction of errors made up by the top x percent of DIMMs with errors. For all platforms, the top 20% of DIMMs with errors make up over 94% of all observed errors. For Platforms C and D, the distribution is even more skewed, with the top 20% of DIMMs comprising more than 99.6% of all errors.
Note that the graph in Figure 2 is plotted on a log-log scale and that the lines for all platforms appear almost straight, indicating a power-law distribution.

To a first order, the above results illustrate that errors in DRAM are a valid concern in practice. This motivates us to further study the statistical properties of errors (Section 4) and how errors are affected by various factors, such as environmental conditions (Section 5).

4. A CLOSER LOOK AT CORRELATIONS

In this section, we study correlations between correctable errors within a DIMM, correlations between correctable and uncorrectable errors in a DIMM, and correlations between errors in different DIMMs in the same machine.

Understanding correlations between errors might help identify when a DIMM is likely to produce a large number of errors in the future, so that it can be replaced before it starts to cause serious problems.

4.1 Correlations between correctable errors

Figure 3 (left) shows the probability of seeing a correctable error in a given month, depending on whether there were correctable errors in the same month or the previous month. As the graph shows, for each platform the monthly correctable error probability increases dramatically in the presence of prior errors. In more than 85% of the cases, a correctable error is followed by at least one more correctable error in the same month. Depending on the platform, this corresponds to an increase in probability between 13X and more than 90X, compared to an average month. Seeing correctable errors in the previous month also significantly increases the probability of seeing a correctable error: the probability increases by factors of 35X to more than 200X, compared to the case when the previous month had no correctable errors.

Seeing errors in the previous month not only affects the probability, but also the expected number of correctable errors in a month.
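The month-over-month conditioning used in this analysis can be sketched as follows; the per-DIMM monthly counts below are made up for illustration (deliberately bursty), not data from the study:

```python
# Sketch: estimate P(CE in month m | CE in month m-1) versus the
# unconditional monthly CE probability, from per-DIMM monthly CE counts.
# Counts are illustrative only.

dimm_months = [
    [0, 0, 0, 0, 0, 0],      # one list of monthly CE counts per DIMM
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 3, 7, 12, 4, 9],     # a DIMM whose errors cluster in an episode
]

# All consecutive (previous month, this month) pairs across DIMMs.
pairs = [(a, b) for row in dimm_months for a, b in zip(row, row[1:])]

p_uncond = sum(b > 0 for _, b in pairs) / len(pairs)
given_prev = [b for a, b in pairs if a > 0]
p_cond = sum(b > 0 for b in given_prev) / len(given_prev)
print(p_uncond, p_cond)   # the conditional probability is much higher
```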
Figure 3 (middle) shows the expected number of correctable errors in a month as a function of the number of correctable errors observed in the previous month. As the graph indicates, the expected number of correctable errors in a month increases continuously with the number of correctable errors in the previous month.

Figure 3 (middle) also shows that the expected number of errors in a month is significantly larger than the observed number of errors in the previous month. For example, in the case of Platform D, if the number of correctable errors in the previous month exceeds 100, the expected number of correctable errors in this month is more than 1,000. This is a 100X increase compared to the correctable error rate for a random month.

We also consider correlations over time periods longer than from one month to the next. Figure 3 (right) shows the autocorrelation function for the number of errors observed per DIMM per month, at lags up to 12 months. We observe that even at lags of several months the level of correlation is still significant.

4.2 Correlations between correctable and uncorrectable errors

Since uncorrectable errors are simply multiple bit corruptions (too many for the ECC to correct), one might wonder whether the presence of correctable errors increases the probability of seeing an uncorrectable error as well. This is the question we focus on next.

Figure 4: Correlations between correctable and uncorrectable errors in the same DIMM. The left graph shows the UE probability in a month depending on whether there were CEs in the same month or in the previous month. The numbers on top of the bars give the increase in UE probability compared to a month without CEs (three left-most bars) and the case where there were no CEs in the previous month (three right-most bars). The middle graph shows how often a UE was preceded by a CE in the same/previous month. The right graph shows the factor increase in the probability of observing a UE as a function of the number of CEs in the same month.

The three left-most bars in Figure 4 (left) show how the probability of experiencing an uncorrectable error in a given month increases if there are correctable errors in the same month. The graph indicates that for all platforms, the probability of an uncorrectable error is significantly larger in a month with correctable errors than in a month without correctable errors. The increase in the probability of an uncorrectable error ranges from a factor of 27X (for Platform A) to more than 400X (for Platform D). While not quite as strong, the presence of correctable errors in the preceding month also affects the probability of uncorrectable errors. The three right-most bars in Figure 4 (left) show that the probability of seeing an uncorrectable error in a month following a month with at least one correctable error is larger by a factor of 9X to 47X than if the previous month had no correctable errors.

Figure 4 (right) shows that not only the presence, but also the rate of observed correctable errors in the same month affects the probability of an uncorrectable error. Higher rates of correctable errors translate to a higher probability of uncorrectable errors.
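The autocorrelation function plotted in Figure 3 (right) can be estimated from a monthly count series in a few lines. The series below is made up, chosen to be bursty so that a clearly positive lag-1 correlation appears:

```python
# Sketch: sample autocorrelation of a monthly CE-count series at a given
# lag. The series below is illustrative, not data from the study.

def autocorr(xs, lag):
    """Sample autocorrelation of xs at the given lag (in [-1, 1])."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[t] - mean) * (xs[t + lag] - mean) for t in range(n - lag))
    return cov / var

# A bursty series: quiet months, then a sustained error episode.
monthly_ces = [0, 0, 1, 0, 0, 0, 40, 55, 60, 48, 2, 0]
print(autocorr(monthly_ces, 1))   # positive: error-heavy months cluster
```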
We see similar, albeit somewhat weaker, trends when plotting the probability of uncorrectable errors as a function of the number of correctable errors in the previous month (not shown in figure). The uncorrectable error probabilities are about 8X lower than if the same number of correctable errors had happened in the same month, but still significantly higher than in a random month.

Given the above observations, one might want to use correctable errors as an early warning sign for impending uncorrectable errors. Another interesting view is therefore what fraction of uncorrectable errors are actually preceded by a correctable error, either in the same

