PREFENDER: A Prefetching Defender Against Cache Side Channel Attacks as a Pretender


To Appear in the 2022 Design, Automation and Test in Europe Conference (DATE)

PREFENDER: A Prefetching Defender against Cache Side Channel Attacks as a Pretender

Luyi Li†, Jiayi Huang‡, Lang Feng†, Zhongfeng Wang†
†School of Electronic Science and Engineering, Nanjing University
‡Department of Electrical and Computer Engineering, University of California, Santa Barbara
luyli@smail.nju.edu.cn; jyhuang@ucsb.edu; flang@nju.edu.cn; zfwang@nju.edu.cn

Abstract—Cache side channel attacks are increasingly alarming in modern processors due to the recent emergence of Spectre and Meltdown attacks. A typical attack performs intentional cache accesses and manipulates cache states to leak secrets by observing the victim's cache access patterns. Different countermeasures have been proposed to defend against both general and transient execution based attacks. Despite their effectiveness, they all trade some level of performance for security. In this paper, we seek an approach to enforcing security while maintaining performance. We leverage the insight that attackers need to access the cache in order to manipulate and observe cache state changes for information leakage. Specifically, we propose PREFENDER, a secure prefetcher that learns and predicts attack-related accesses and prefetches the corresponding cachelines to simultaneously help security and performance. Our results show that PREFENDER is effective against several cache side channel attacks while maintaining or even improving performance for SPEC CPU2006 benchmarks.

Index Terms—Security, Cache Side Channel Attacks, Prefetcher

I. INTRODUCTION

The threats of cache side channel attacks [1], [2] to modern processors are growing rapidly due to the vulnerabilities of advanced microarchitecture optimizations. For example, recent Spectre [3] and Meltdown [4] attacks can exploit the transient execution enabled by branch prediction, speculative execution, and out-of-order execution to create cache timing side channels for information leakage.
These vulnerabilities were found in processors from Intel, AMD, and ARM that are used in billions of devices. Even worse, many new attack variants have been introduced in the past few years, making cache side channel attacks a serious problem [5].

Cache side channel attacks can exploit the timing variation of shared cache accesses for information leakage [6]. For example, the timing difference between a fast cache hit and a slow cache miss can reveal a victim's cache footprint [3], [4]. Different countermeasures have been proposed for either general or transient execution based attacks through isolation [7], conditional speculation [8], stateless mis-speculative cache accesses [9], and noise injection [10], [11]. Despite their effectiveness, they either trade some level of performance for security or ignore the performance effect.

In this work, we seek an approach to enforcing security while maintaining performance. Note that victims need to access the cache to change cache states, while attackers need to access the cache to manipulate and observe cache state changes. If we can learn their access patterns, we can predict future accesses and prefetch data to change the cache state, thereby confusing the attacker. From a performance perspective, effective prefetching can also save execution time for benign programs.

We propose PREFENDER, a prefetching defender that defeats cache side channel attacks while preserving performance benefits for benign programs. Specifically, we design a low-cost Data Scale Tracker to track the address calculation of memory instructions and guide prefetching address prediction targeting victim accesses. We also propose an Access Pattern Tracker to learn the cache access pattern targeting attacker accesses, leveraging the insight that only a few loads are intensively used for an attack. This helps correlate even intentionally random attacker accesses for address prediction. Furthermore, effective prefetching also maintains or improves performance.
This work includes the following main contributions:
- We propose PREFENDER, the first work that can defeat cache side channel attacks while maintaining or even improving performance, to the best of our knowledge.
- We propose a new approach to analyzing cache access patterns, and design the Data Scale Tracker (DST) and the Access Pattern Tracker (APT) to realize this runtime analysis for effective prefetching.
- Our detailed security and performance evaluation shows that PREFENDER is not only effective for cache side channel defense, but also highly compatible with other prefetchers for performance improvement.

II. BACKGROUND AND THREAT MODEL

A. Cache Side Channel Attacks

Cache side channel attacks can infer secrets of victim programs from the cache state changes made by the victim's execution. Such an attack typically consists of three phases. In phase 1, the attacker initializes the cache state by performing cache accesses. In phase 2, the victim may access the cache and change the cache state. In phase 3, the attacker measures the cache state change to infer the victim's secret.

One widely used class is the timing-based side channel attack. Fig. 1 shows the Flush+Reload [2], Evict+Reload [12], and Prime+Probe [6] attacks. For the Flush+Reload attack, in phase 1, the attacker flushes all the cachelines that may be accessed by the victim (through shared libraries). These cachelines form an eviction set, and each of them is called an eviction cacheline. In phase 2, the victim loads the secret-dependent data into the cache. In phase 3, the attacker accesses the eviction set and times each eviction cacheline's access latency. The low hit latency of the secret-dependent cacheline then reveals the secret. For example, assume the cacheline size is 64 bytes and the victim loads a secret-dependent shared datum array[s * 64] in phase 2, where s is the secret. If, during phase 3, the attacker accesses array[768] with a low cache hit latency, it can infer that the secret is s = 768 / 64 = 12. Compared with Flush+Reload, Evict+Reload and Prime+Probe differ in phase 1 and phase 3 but share the same key idea.

Fig. 1. The examples of Flush+Reload, Evict+Reload, and Prime+Probe. The secret can be revealed by the only low (or high) latency eviction cacheline.

(Author's version. © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.)

B. Threat Model

Our threat model includes the cache timing side channel attacks described in Section II-A. The attackers can manipulate the state of any cacheline (usually the eviction set) and measure the cacheline access latency. The victim may either share data with the attacker through shared libraries (for flush-based attacks) or conflict with the attacker's eviction set. The attacker can exploit the timing difference of the eviction cachelines' access latencies to infer the secret of the victim.

III. RELATED WORK

To mitigate cache side channel attacks caused by speculative execution, prior work constrains the execution of speculative loads, such as Conditional Speculation [8] and SpecShield [13]. Another category, such as InvisiSpec [9] and SafeSpec [14], designs a shadow structure to temporarily hold the data brought in by speculative loads during transient execution. However, these are ineffective in defending against general cache side channel attacks. As a result, new cache policies were introduced. DAWG [7] dynamically partitions cache ways to avoid cache sharing among different security domains. SHARP [15] designs a new cache replacement policy to prevent the spy from evicting the victim's data. All these approaches pay some level of performance for their security strength.

Prefetch-guard [10] and Reuse-trap [11] propose several methods to detect the spy and leverage prefetching to obfuscate the spy based on previously recorded information. Prefetch-guard needs a large number of history-tracking registers to record all the flushed and replaced memory lines, which may incur high hardware overhead. Reuse-trap needs to know the victim's process ID in advance to record the victim's cache misses, which may require software modifications. Last but not least, both overlook the potential performance gains that can be achieved with a prefetcher.

By contrast, PREFENDER is a completely hardware-based and resource-efficient method that modifies no speculative execution or cache policy of modern processors. While ensuring security, it further achieves a performance enhancement through accurate runtime analyses and well-designed hardware prefetching strategies.

IV. PREFENDER DESIGN

In this section, we present the overview of our proposed system with PREFENDER, shown in Fig. 2, and detail the design of the Data Scale Tracker (DST) and the Access Pattern Tracker (APT).

Fig. 2. The overall design architecture of our system.

A. Overview

According to Section II-A, a cache side channel attack has three phases. If one of the phases is interfered with, we can defend against the attack. PREFENDER is designed in the L1 D-cache to interfere with these phases by prefetching the eviction cachelines. Specifically, PREFENDER includes the Data Scale Tracker (DST) and the Access Pattern Tracker (APT) to interfere with the second and third phases, respectively. PREFENDER also supports a basic prefetcher (Basic Pref. in Fig. 2), which is a conventional prefetcher such as the Tagged or Stride prefetcher. All three of DST, APT, and the basic prefetcher can prefetch data. Note that the basic prefetcher can only help with performance, while DST and APT enforce security and also improve performance to some extent.

DST aims to predict the eviction cachelines during the victim's execution (phase 2). The prediction is based on the arithmetic calculation histories of the victim's instructions, which are stored in the Calculation History Buffer (Cal. Hist. Buf. in Fig. 2). After a victim instruction loads a secret-dependent cacheline (e.g., an eviction cacheline in Flush+Reload), DST predicts another eviction cacheline and prefetches it. The prefetched eviction cacheline can be disguised as a secret-dependent cacheline brought in by the victim to fool the attacker, as illustrated in the example at the top of Fig. 3.

APT aims to predict the access patterns of the eviction cachelines during the attacker's measurement stage (phase 3). The patterns are stored in the Access Trace Buffer (Acc. Tra. Buf. in Fig. 2). As shown in the example at the bottom of Fig. 3, APT seeks to prefetch each cacheline before the attacker accesses and measures it, which also misleads the attacker.
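Returning to the Flush+Reload example of Section II-A, the phase-3 inference that both DST and APT aim to break can be sketched as a small simulation. The hit threshold, latency values, and function names below are our own illustrative assumptions, not measurements from the paper.

```python
# Hypothetical sketch of Flush+Reload secret inference (Section II-A).
# LINE_SIZE matches the paper's example; HIT_THRESHOLD and the latency
# values are illustrative assumptions.

LINE_SIZE = 64       # cacheline size in bytes
HIT_THRESHOLD = 100  # cycles; anything faster is treated as a cache hit

def infer_secret(probe_latencies):
    """Return the secret index s from per-offset reload latencies.

    probe_latencies maps array offsets (0, 64, 128, ...) to access cycles;
    the only fast (hit) offset is the one the victim loaded as array[s * 64].
    """
    for offset, cycles in probe_latencies.items():
        if cycles < HIT_THRESHOLD:
            return offset // LINE_SIZE  # e.g. 768 // 64 == 12
    return None  # no single hit observed (e.g., a defense interfered)

# Victim loaded array[12 * 64]; only offset 768 reloads quickly.
latencies = {off: (40 if off == 768 else 300)
             for off in range(0, 16 * LINE_SIZE, LINE_SIZE)}
secret = infer_secret(latencies)  # -> 12
```

PREFENDER targets exactly this step: once DST or APT prefetches additional eviction cachelines, several offsets reload quickly, and the single-hit inference no longer identifies s.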

Since the key idea of DST and APT is to learn cache access patterns for prefetching, effective prefetching on benign loads can also improve performance while enforcing security. However, there are two major challenges for effective prefetching, one for DST and one for APT:

C1. During phase 2, the victim may access only one secret-dependent eviction cacheline. Even when more secret-dependent cachelines are accessed, they may not be simply contiguous. Effectively predicting the access pattern from limited accesses (even a single access) is challenging, which we overcome with DST.

C2. During phase 3, the attacker tends to measure the eviction cachelines in a random order to bypass prefetchers such as the Stride prefetcher. Learning the eviction set from a random access pattern is challenging, which we tackle with APT.

Fig. 3. The example of the defenses against Flush+Reload attacks (the number near an arrow represents the access time, and the number inside each rectangle represents the first time the corresponding cacheline is accessed).

B. Data Scale Tracker

DST predicts the eviction cachelines by tracking how a load's target address is calculated. For example, if the target address of a load is calculated as 128 * i, where i is an integer variable, the target address can only be 0, 128, 256, etc. After address translation, if the target physical address of the load is paddr, we can infer that this load may access addresses paddr - 128, paddr, paddr + 128, etc., if they are in the same page. Therefore, we can leverage this pattern to predict the eviction cachelines. The key question is how to track and learn the 128 in this example, which we call a scale.

Since the target address of a load is usually calculated and stored in registers, DST needs to track how the register values are calculated. The ideal approach would record all the calculations related to each register, but this is impractical due to high hardware cost. So we only consider two common calculations: addition (and subtraction) and multiplication (and shifting).

We use two values to track the history of each register r: a fixed value fva_r and a scale sc_r, which are stored in the calculation history buffers. Essentially, we maintain fva_r to help track the scale. If the register r's calculation history depends only on constant values (immediate numbers), the calculated result is assigned to fva_r. Otherwise, fva_r is not applicable (NA); for example, when the register value depends on a loaded memory value, fva_r is NA.

The scale sc_r is tracked to predict the cache access pattern. In common cases, an array access address in a loop is calculated as base + scale * i (e.g., base + 128 * i), where base is the base address, i is an index, and scale is the stride. Such an address calculation typically propagates through several registers and finally to the one used by a load instruction to access the array. Our goal is to track the scale values, with the help of fva_r, by propagating them from source registers to destination registers. With sc_r, we can predict the nearby cachelines in the access pattern even from a single access.

More complicated access patterns such as 128*i + 32*j + imm can also exist, where i and j are indices and imm is an immediate. Given an imm, if one pair of i and j makes the result 652, another pair (e.g., i incremented by 1) may make the result 652 + 128; the 128 can thus serve as sc_r in this calculation. Similarly, 32, and any multiples of these such as 256, 512, etc., can also be sc_r. Note that even more complicated patterns involving multiplications of several registers' scales (such as (128*i0*i1*i2 + 32*j0 + 16*j1) * (48*k0 + imm)) can also be handled.

We propose a set of rules to calculate sc_r (and fva_r, which helps calculate sc_r), summarized in Table I. The fixed value and scale of the destination register rd are calculated from the source operands and the propagated values of the source registers. The fixed value and scale are initialized to NA and 1, respectively, upon starting a program.

TABLE I
The rules to calculate fva_rd and sc_rd. (rd is the destination register. † The rule also covers subtraction when + is replaced by -. ‡ The rule also covers shifting when * is replaced by <<.)

  load rd, imm:        fva_rd = imm;                  sc_rd = 1
  load rd, imm(rs0):   fva_rd = NA;                   sc_rd = 1
  add rd, rs0, imm†:   fva_rd = fva_rs0 + imm         sc_rd = sc_rs0
                       (NA if fva_rs0 is NA)
  add rd, rs0, rs1†:   fva_rd = fva_rs0 + fva_rs1     sc_rd = 1 if both fva valid;
                       (NA unless both valid)         the sc of the source without a valid
                                                      fva if only one is valid;
                                                      min(sc_rs0, sc_rs1) if neither is valid
  mul rd, rs0, imm‡:   fva_rd = fva_rs0 * imm         sc_rd = sc_rs0 * imm
                       (NA if fva_rs0 is NA)
  mul rd, rs0, rs1‡:   fva_rd = fva_rs0 * fva_rs1     sc_rd = 1 if both fva valid;
                       (NA unless both valid)         sc_rs0 * fva_rs1 or fva_rs0 * sc_rs1
                                                      if only one is valid;
                                                      sc_rs0 * sc_rs1 if neither is valid
  otherwise:           fva_rd = NA;                   sc_rd = 1

For data movement instructions, if an immediate number is loaded into rd, fva_rd is set to that number. If a value is loaded from memory into rd, fva_rd and sc_rd are reinitialized, since we conservatively regard the loaded value as an unknown variable.

For addition, when rd is calculated from one immediate number and one register rs0, if fva_rs0 is NA, sc_rd is the same as sc_rs0, since adding the immediate number as an offset has no effect on the scale. If fva_rs0 is valid, fva_rd is the sum of fva_rs0 and the immediate number, since both are fixed values. When adding two registers, if only one of them has a valid fixed value, the scale of the destination register is the scale of the source register without a valid fixed value. If neither source register has a valid fixed value, the scale of the destination register is the minimum scale of the two. The reason is that when the values of two registers are added, either scale could serve as the new scale; using the minimum one reduces the possibility of making the scale larger than a page.

For multiplication, the calculations of fva_rd and sc_rd are similar to those for addition, except for the multiplicative factors introduced by the multiplication. If any other calculations are involved, to be conservative, the destination register of the calculation is reinitialized.
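As a rough software illustration, the propagation rules above can be sketched as follows. The Reg class, function names, and the example instruction sequence are our own assumptions about the rules described in the text, not the paper's hardware design.

```python
# A software sketch of DST's calculation-history tracking (Table I).
# Illustrative only: register model and example sequence are our own.

NA = None  # "not applicable" fixed value

class Reg:
    def __init__(self):
        self.fva, self.sc = NA, 1  # initial values at program start

def load_imm(rd, imm):     # load rd, imm
    rd.fva, rd.sc = imm, 1

def load_mem(rd):          # load rd, imm(rs): loaded value is unknown
    rd.fva, rd.sc = NA, 1

def add_ri(rd, rs0, imm):  # add rd, rs0, imm (also covers subtraction)
    rd.fva = NA if rs0.fva is NA else rs0.fva + imm
    rd.sc = rs0.sc         # an immediate offset does not change the scale

def add_rr(rd, rs0, rs1):  # add rd, rs0, rs1
    if rs0.fva is not NA and rs1.fva is not NA:
        rd.fva, rd.sc = rs0.fva + rs1.fva, 1
    elif rs0.fva is NA and rs1.fva is NA:
        rd.fva, rd.sc = NA, min(rs0.sc, rs1.sc)  # keep the scale small
    else:                  # keep the scale of the non-fixed source
        rd.fva, rd.sc = NA, (rs0.sc if rs0.fva is NA else rs1.sc)

def mul_ri(rd, rs0, imm):  # mul rd, rs0, imm (also covers shifting)
    rd.fva = NA if rs0.fva is NA else rs0.fva * imm
    rd.sc = rs0.sc * imm   # the factor multiplies the scale

# Example: addr = base + 128 * i, with base a constant and i loaded from memory.
base, i, tmp, addr = Reg(), Reg(), Reg(), Reg()
load_imm(base, 0x4000)   # base address known as an immediate
load_mem(i)              # index is an unknown loaded value: fva = NA, sc = 1
mul_ri(tmp, i, 128)      # tmp.sc == 128
add_rr(addr, base, tmp)  # addr.sc == 128
```

With addr.sc = 128 and the resolved physical address paddr, DST would consider paddr + 128 and paddr - 128 as prefetch candidates, subject to the cacheline-size and same-page checks of DST's prefetching policy.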

The prefetching policy of DST is as follows. When an instruction load rd, imm(rs) (or an equivalent instruction such as mov rd, 12(rs)) is executed, assuming the calculated address is addr, the candidate prefetching addresses are addr + sc_rs and addr - sc_rs. If sc_rs is larger than the cacheline size, one of the candidate addresses that is currently not in the L1 D-cache and is in the same page as addr is chosen for prefetching. Note that, to be conservative, all load instructions are assumed to be vulnerable, and DST prefetching is applied to all of them. For implementation, since DST prefetches data within the same page, the bitwidth for storing and calculating fva and sc can be small (Section V-E).

C. Access Pattern Tracker

In phase 3 of an attack, all the eviction cachelines of the victim instruction are timed for their access latencies. To mislead the attacker, besides using DST to interfere with phase 2, we propose the Access Pattern Tracker (APT) to directly interfere with phase 3. The key idea of APT is to learn the pattern of the timed accesses, and to prefetch the cachelines before they are accessed and timed by the attacker.

According to challenge C2, attackers usually access cachelines in a random rather than sequential order to bypass prefetchers (e.g., the Stride prefetcher), making it difficult to learn the access pattern. One key observation is that these random accesses are associated with only a few load instructions; this helps correlate the cache accesses that are initiated by the same load.

Fig. 4. An example of the access trace buffer.

To defend against the attacks, APT consists of a set of access trace buffers. Each buffer is associated with a load instruction and records the block addresses accessed by the associated load. The purpose of the access trace buffer is to infer the access pattern of each load. The access pattern is estimated as an arithmetic sequence with a constant difference, which is taken to be the minimum difference between the block addresses recorded in the buffer.

Fig. 4 shows the access trace buffer microarchitecture. Each buffer keeps the instruction address (InstAddr) of its associated load and maintains a DiffMin register. Each entry in a buffer records a block address (BlkAddr) accessed by the load. The minimum difference between two block addresses among all the entries is stored in the DiffMin register. For every piece of information maintained by the buffer, a valid bit indicates whether the data is valid. When the buffers are reset, all the valid bits are set to 0. Note that we discuss the conceptual idea in this section; for implementation, we do not need to store a complete block address in each entry (Section V-E).

The APT policy can be described in the following four stages:

1) Buffer Allocation: When a cache access is issued by a load, its instruction address is used to look up the buffers by comparing against their InstAddrs. If a hit happens, the associated buffer is activated. Otherwise, an empty buffer is allocated to the load; if all the buffers are occupied, the least recently used (LRU) replacement policy selects a buffer for allocation. Fig. 4 shows an example where a cache access is issued by the load with InstAddr 0x8008, and its associated buffer (Buffer[0]) is activated. When a cache access is issued by the load with InstAddr 0x8018, a buffer miss happens, and Buffer[1] is selected by LRU and allocated to the load.

2) Entry Updating: For the selected buffer, if the accessed BlkAddr is not recorded in the buffer, APT stores it into a new entry of the buffer. Once all the entries are full, we apply LRU to find an entry for the new BlkAddr.

3) DiffMin Updating: After the number of valid entries of a buffer surpasses a threshold, each time the buffer is activated, APT calculates DiffMin. To reduce hardware complexity, we use only a small number of entries for each buffer, such as 8. DiffMin is used to predict the difference between each two addresses to be timed by the attacker.

4) Data Prefetching: After the number of valid entries in a buffer surpasses a threshold, each time the buffer is activated, candidate prefetching addresses are calculated. Assuming the block address of the current load is BlkAddr', the candidate prefetching addresses are BlkAddr' + DiffMin and BlkAddr' - DiffMin. APT checks whether they exist in the activated buffer; the one that is in neither the activated buffer nor the L1 D-cache is prefetched. For example, assuming the cacheline size is 256 bytes, in Fig. 4 the colored cachelines' block addresses are recorded in the buffer entries, where the cachelines and their corresponding block addresses share the same color. When Buffer[0] is activated and the latest block address 0x1C00 is stored in the buffer, DiffMin is updated to 0x300. APT now predicts the eviction cachelines to be 0x1C00 + 0x300 * k, where k is an integer. The eviction cachelines that are not currently accessed are shown with red margins. According to our data prefetching policy, the cacheline with address 0x1C00 - 0x300 is finally prefetched, since 0x1C00 + 0x300 is already in the cache.

In this way, if a load is actively executed with some access pattern, the future target addresses of the load are predicted and prefetched by APT to mislead the attacker. We conservatively assume all loads are vulnerable and apply APT to them. By increasing the number of buffers, we can also increase the possibility of associating a buffer with an attacking load. Note that APT (and DST) prefetch only one cacheline per load execution, to reduce the risk of performance overhead and avoid additional hardware cost.
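The four stages above can be sketched as a small behavioral model of one access trace buffer. The class shape, threshold choice, and the probe sequence below are our own illustrative assumptions about the policy described in the text, not the paper's microarchitecture.

```python
# A behavioral sketch of one APT access trace buffer (illustrative only).

class AccessTraceBuffer:
    def __init__(self, inst_addr, max_entries=8, threshold=4):
        self.inst_addr = inst_addr  # InstAddr of the associated load
        self.entries = []           # recorded BlkAddrs, oldest first (LRU)
        self.diff_min = None        # DiffMin register
        self.max_entries = max_entries
        self.threshold = threshold

    def access(self, blk_addr, in_cache):
        # Stage 2 (Entry Updating): record a new BlkAddr, LRU-evicting if full.
        if blk_addr not in self.entries:
            if len(self.entries) == self.max_entries:
                self.entries.pop(0)
            self.entries.append(blk_addr)
        if len(self.entries) < self.threshold:
            return None
        # Stage 3 (DiffMin Updating): minimum pairwise difference of entries.
        addrs = sorted(self.entries)
        self.diff_min = min(b - a for a, b in zip(addrs, addrs[1:]))
        # Stage 4 (Data Prefetching): pick the +/- DiffMin neighbor that is
        # neither already recorded nor already cached.
        for cand in (blk_addr + self.diff_min, blk_addr - self.diff_min):
            if cand not in self.entries and not in_cache(cand):
                return cand  # block address to prefetch
        return None

# An attacker's load probes eviction lines with stride 0x300 in random order.
buf = AccessTraceBuffer(inst_addr=0x8008)
cached = set()
prefetched = []
for blk in (0x1900, 0x1000, 0x1600, 0x1C00):
    cached.add(blk)
    pf = buf.access(blk, in_cache=lambda a: a in cached)
    if pf is not None:
        cached.add(pf)  # line is warmed before the attacker times it
        prefetched.append(pf)
```

In this run, the fourth probe already yields DiffMin = 0x300 and warms a neighboring eviction line, so the attacker's later timing of that line reads as a hit it did not cause.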

Fig. 5. The results of different attack methods: (a) Flush+Reload, (b) Evict+Reload, (c) Prime+Probe. (Note that for PREFENDER-DST, the latency results of array indices 64-66 are the same in each figure.)

TABLE II
Performance improvement of SPEC CPU2006 benchmarks. († The basic prefetcher.) [...]

V. EVALUATION

A. Experimental Setup

In our experiments, we use the gem5 simulator [16]. The baseline is configured with a 2 GHz x86 out-of-order CPU with a 32 KB L1 I-cache, a 64 KB L1 D-cache, and a 2 MB L2 cache. For security analysis, we test different Spectre attacks using Flush+Reload, Evict+Reload, and Prime+Probe. For performance analysis, the SPEC CPU2006 benchmark suite [17] is evaluated. Building upon the baseline, PREFENDER can be combined with different basic prefetcher configurations: PREFENDER only, PREFENDER with a Tagged prefetcher [18], and PREFENDER with a Stride prefetcher [19].

B. Security Evaluation

We evaluate the security effectiveness of PREFENDER by testing different side channel attacks. In phase 3, the attacker times the latencies of the cachelines by randomly accessing an array covering the eviction cachelines. The results are shown in Fig. 5. For Flush+Reload, the attacker successfully steals the secret value by detecting the only cache hit among the array accesses. After applying DST, two cachelines near the secret-dependent one are also hit due to prefetching. After applying APT, most of the cachelines are hit, because APT successfully learns the attacker's access pattern. When both DST and APT are implemented, their effects on the cachelines overlap. Similar results hold for Evict+Reload. For Prime+Probe, the secret value is found by the attacker through the only miss in the array. Similarly, when DST is implemented, more misleading cachelines are prefetched, incurring more misses. When APT is applied, all eviction cachelines are fetched, resulting in all hits; this also misleads the attacker. When both DST and APT are applied, only the effect of APT remains, since APT performs its prefetching (phase 3) after DST (phase 2). In conclusion, by successfully defeating the attacks in the threat model, PREFENDER can enforce security the same as the previous work [8], [9], [13], [14].

C. Performance Evaluation

While enforcing security, PREFENDER can also maintain or even improve performance. The performance results for the SPEC CPU2006 benchmarks are shown in Table II, where "Basic Prefetcher" indicates the conventional prefetcher being used. The results show the improvement percentage compared with the baseline that has no prefetchers.

In Table II, the main results are in Columns 2, 6, and 10, with 32 access trace buffers. When we enable PREFENDER (Column 2), the system gains around 2% improvement on average, with the security enforcement. For Columns 6 and 10, when the conventional prefetchers are used, PREFENDER further improves performance compared with the setups where PREFENDER is disabled (Columns 4 and 8, respectively), showing the ability of PREFENDER to maintain performance.

PREFENDER improves overall performance in general, with different impacts on each benchmark. For example, 401.bzip2, 429.mcf, and 462.libquantum see the most speedup with PREFENDER. In comparison, PREFENDER has almost no performance impact on 999.specrand. For 445.gobmk, 458.sjeng, and 471.omnetpp, performance drops slightly (less than 0.5%) with PREFENDER. We further conducted a sensitivity study on the number of access trace buffers. It shows that, with the same basic prefetcher, more access trace buffers usually help improve performance; beyond 32 buffers, the improvements are marginal.

D. Analysis of Cache Miss and Defense

Prefetching can impact the cache miss rate. We tested the L1 D-cache miss rate with PREFENDER enabled and disabled. The results are shown in Fig. 6, where the shadowed bars represent the systems with PREFENDER. For each benchmark, the cache miss rate results are normalized to the baseline. In this experiment, PREFENDER is configured with 32 access trace buffers [...]

Fig. 6. The normalized cache miss rate of the L1 D-cache.

[...] during the victim's execution, and the Access Pattern Tracker (APT) to predict the access patterns during the attacker's measurement. Based on these predictions, the hardware prefetcher brings the eviction cachelines into the cache and thus obfuscates the attacker. The experiments demonstrate the defense effectiveness of PREFENDER using Spectre attacks based on Flush+Reload, Evict+Reload, and Prime+Probe. In addition, the performance evaluation shows that PREFENDER can achieve a speedup for the SPEC CPU2006 benchmarks.

