Optimistic Hybrid Analysis: Accelerating Dynamic Analysis through Predicated Static Analysis


Optimistic Hybrid Analysis: Accelerating Dynamic Analysis through Predicated Static Analysis

David Devecsery, Peter M. Chen, Jason Flinn, and Satish Narayanasamy
University of Michigan
ddevec@umich.edu, pmchen@umich.edu, jflinn@umich.edu, nsatish@umich.edu

CCS Concepts • Theory of computation → Program analysis; • Software and its engineering → Dynamic analysis; Software reliability; Software safety; Software testing and debugging;

ACM Reference Format:
David Devecsery, Peter M. Chen, Jason Flinn, and Satish Narayanasamy. 2018. Optimistic Hybrid Analysis: Accelerating Dynamic Analysis through Predicated Static Analysis. In ASPLOS '18: 2018 Architectural Support for Programming Languages and Operating Systems, March 24–28, 2018, Williamsburg, VA, USA. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3173162.3177153

Abstract
Dynamic analysis tools, such as those that detect data-races, verify memory safety, and identify information flow, have become a vital part of testing and debugging complex software systems. While these tools are powerful, their slow speed often limits how effectively they can be deployed in practice. Hybrid analysis speeds up these tools by using static analysis to decrease the work performed during dynamic analysis.

In this paper we argue that current hybrid analysis is needlessly hampered by an incorrect assumption that preserving the soundness of dynamic analysis requires an underlying sound static analysis. We observe that, even with unsound static analysis, it is possible to achieve sound dynamic analysis for the executions which fall within the set of states statically considered. This leads us to a new approach, called optimistic hybrid analysis. We first profile a small set of executions and generate a set of likely invariants that hold true during most, but not necessarily all, executions.
Next, we apply a much more precise, but unsound, static analysis that assumes these invariants hold true. Finally, we run the resulting dynamic analysis speculatively while verifying whether the assumed invariants hold true during that particular execution; if not, the program is re-executed with a traditional hybrid analysis.

Optimistic hybrid analysis is as precise and sound as traditional dynamic analysis, but is typically much faster because (1) unsound static analysis can speed up dynamic analysis much more than sound static analysis can and (2) verifications rarely fail. We apply optimistic hybrid analysis to race detection and program slicing and achieve 1.8x speedup over a state-of-the-art race detector (FastTrack) optimized with traditional hybrid analysis and 8.3x over a hybrid backward slicer (Giri).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
ASPLOS '18, March 24–28, 2018, Williamsburg, VA, USA
© 2018 Association for Computing Machinery.
ACM ISBN 978-1-4503-4911-6/18/03...$15.00
https://doi.org/10.1145/3173162.3177153

1 Introduction
Dynamic analysis tools, such as those that detect data-races [23, 46], verify memory safety [41, 42], and identify information flow [16, 20, 31], have become a vital part of testing and debugging complex software systems. However, their substantial runtime overhead (often an order of magnitude or more) currently limits their effectiveness.
This runtime overhead requires that substantial compute resources be used to support such analysis, and it hampers testing and debugging by requiring developers to wait longer for analysis results. These costs are amplified at scale. Many uses of dynamic analysis are most effective when analyzing large and diverse sets of executions. For instance, nightly regression tests should run always-on analyses, such as data-race detection and memory safety checks, over large test suites. Debugging tools, such as slicing, have been shown to be more informative when combining multiple executions, e.g., when contrasting failing and successful executions [4, 25]. Forensic analyses often analyze weeks, months, or even years of computation [16, 31]. Any substantial reduction in dynamic analysis time makes these use cases cheaper to run and quicker to finish, so performance has been a major research focus in this area.

Hybrid analysis is a well-known method for speeding up dynamic analysis tools. This method statically analyzes the program source code to prove properties about its execution. It uses these properties to prune some runtime checks during dynamic analysis [13, 14, 33, 41]. Conventionally, hybrid analysis requires sound¹ (no false negatives) static analysis, so as to guarantee that any removed checks do not compromise the accuracy of the subsequent dynamic analysis. However, soundness comes at a cost: a lack of precision (i.e., false positives) that substantially reduces the number of checks that can be removed and limits the performance improvement for dynamic analysis tools such as race detectors and slicers.

The key insight in this paper is that hybrid analysis can benefit from carefully adding unsoundness to the static analysis, and preserve the soundness of the final dynamic analysis by executing the final dynamic analysis speculatively. Allowing the static analysis to be unsound can improve its precision and scalability (Figure 1), allowing it to dramatically speed up dynamic analyses such as race detection (even after accounting for the extra cost of detecting and recovering from errors introduced by unsound static analysis).

Optimistic hybrid analysis is a hybrid analysis based on this insight. It combines unsound static analysis and speculative execution to create a dynamic analysis that is as precise and sound as traditional hybrid analysis, but is much faster. Optimistic hybrid analysis consists of three phases. First, it profiles a set of executions to derive optimistic assumptions about program behavior; we call these assumptions likely invariants. Second, it performs a static analysis that assumes these likely invariants hold true; we call this predicated static analysis. The assumptions enable a much more precise analysis, but require the runtime system to compensate when they are violated. Finally, it speculatively runs the target dynamic analysis, verifying that all likely invariants hold during the analyzed execution.
If so, both predicated static analysis and the dynamic analysis are sound. In the rare case where verification fails, optimistic hybrid analysis rolls back and re-executes the program with a traditional hybrid analysis.

We demonstrate the effectiveness of optimistic hybrid analysis by applying it to two popular analyses on two different programming languages: OptFT, an optimistic hybrid data-race detection tool built on top of a state-of-the-art dynamic race detector (FastTrack) [23] for Java, and OptSlice, an optimistic hybrid backward slicer built on the Giri dynamic slicer [45] for C. Our results show that OptFT provides speedups of 3.5x compared to FastTrack, and 1.8x compared to a hybrid-analysis-optimized version of FastTrack. Further, OptSlice analyzes complex programs for which Giri cannot run without exhausting computational resources, and it provides speedups of 8.3x over a hybrid-analysis-optimized version of Giri. We then show how predicated static analysis can improve foundational static analyses, such as points-to analysis, indicating that optimistic hybrid analysis techniques will benefit many more dynamic analyses.

¹ Following convention, we classify an analysis as sound even if it is only "soundy" [34]. For example, most "sound" static analysis tools ignore some difficult-to-model language features.

Figure 1. Sound static analysis not only considers all valid program states P, but due to sound over-approximation, it also considers a much larger set S. Using likely invariants, predicated static analysis considers a much smaller set of program states O that are commonly reached (dotted space in P).

The primary contributions of this paper are as follows:
- We present optimistic hybrid analysis, a method of dramatically reducing runtimes of dynamic analysis without sacrificing soundness by first optimizing with a predicated static analysis and recovering from any potential unsoundness through speculative execution.
- We identify properties fundamental to selecting effective likely invariants, and we identify several effective likely invariants: unused call contexts, callee sets, unreachable code, guarding locks, singleton threads, and no custom synchronizations.
- We demonstrate the power of optimistic hybrid analysis by applying the technique to data-race detection and slicing analyses. We show optimistic hybrid analysis dramatically accelerates these analyses, without changing the results of the analysis. To the best of our knowledge, OptFT is currently the fastest dynamic happens-before data-race detector for Java that is sound.

2 Design
Optimistic hybrid analysis reduces the overhead of dynamic analyses by combining a new form of unsound analysis, known as predicated static analysis, with speculative execution. The use of speculative execution allows optimistic hybrid analysis to provide correct results, even when entering states not considered by predicated static analysis. A predicated static analysis assumes dynamically-gathered likely invariants hold true to reduce the state space it explores, creating a fundamentally more precise static analysis.

Figure 1 shows how the assumptions in a predicated static analysis can dramatically reduce the state space considered. A sound static analysis must make many overly-conservative approximations that lead it to consider not just all possible executions of a program (P), but also many impossible executions (S).

Rather than paying the cost of this over-approximation, a hybrid analysis can instead construct a static analysis based only on the set of executions likely to actually be analyzed dynamically. Speculative assumptions make the state space (O) much smaller than not only S but also P, demonstrating that by using a predicated static analysis, optimistic hybrid analysis has the potential to optimize the common-case analysis more than even a perfect sound static analysis (whose results are bounded by P). The set of states in P but not in O represents the states in which predicated static analysis is unsound. Optimistic hybrid analysis uses speculation and runtime support to handle when these states are encountered. As long as the set of states commonly experienced at runtime (denoted by the dotted area) resides in O, optimistic hybrid analysis rarely mis-speculates, resulting in an average runtime much faster than that of a traditional hybrid analysis.

We apply these principles using our three-phase analysis. First, we profile a set of executions of the target program and generate optimistic assumptions from these executions that might reduce the state space the static analysis needs to explore. As these dynamically gathered assumptions are not guaranteed to be true for all executions, we call them likely invariants of the executions.

Second, we use these likely invariants to perform a predicated static analysis on the program source. Leveraging the likely invariants allows this static analysis to be far more precise and scalable than traditional whole-program analysis, ultimately allowing it to better optimize dynamic analyses.

Finally, we construct and run the final dynamic analysis optimistically.
Because predicated static analysis is not sound, we insert extra checks in this optimistic dynamic analysis to verify that the assumed likely invariants hold true for each analyzed execution. If the checks determine that the likely invariants are in fact true for this execution, the execution will produce a sound, precise, and relatively efficient dynamic analysis. If the additional checks find that the invariants do not hold, the analysis needs to compensate for the unsoundness caused by predicated static analysis.

The rest of this section describes the three analysis steps and important design considerations.

2.1 Likely Invariant Profiling
A predicated static analysis is more precise and scalable than traditional static analysis because it uses likely invariants to reduce the program states it considers. Likely invariants are learned through a dynamic profiling pass. We next discuss the desirable properties of a likely invariant, and how optimistic hybrid analysis learns the invariants by profiling executions.

Strong: By assuming the invariant, we should reduce the state space searched by predicated static analyses. This is the key property that enables invariants to help our static phase; if the invariant does not reduce the state space considered statically, the dynamic analyses will see no improvement.

Cheap: It should be inexpensive to check that a dynamic execution obeys the likely invariants. For soundness, the final dynamic analysis must check that each invariant holds during an analyzed execution. The cost of such checks increases the cost of the final dynamic analysis, so the net benefit of optimistic hybrid analysis is the time saved by eliding dynamic runtime checks minus the cost of checking the likely invariants.
Note that the time spent in the profiling stage to gather likely invariants is paid exactly once, and is therefore less important; only dynamically verifying the invariants needs to be inexpensive.

Stable: A likely invariant should hold true in most or all executions that will be analyzed dynamically. If not, the system will declare a mis-speculation, and recovering from such mis-speculations may be expensive for some analyses.

There is a trade-off between the stability and strength of invariants. We find it sufficient to consider invariants that are true for all profiled executions. However, we could aggressively assume a property that is infrequently violated during profiling as a likely invariant. This stronger, but less stable, invariant may result in a significant reduction in dynamic checks, but increase the chance of invariant violations. If the reduced checks outweigh the costs of additional invariant violations, this presents a beneficial trade-off.

2.2 Predicated Static Analysis
The second phase of optimistic hybrid analysis creates an unsound static analysis used to elide runtime checks and speed up the dynamic analysis. Traditional static analysis can elide some runtime checks. However, to ensure soundness, such static analysis conservatively analyzes not only all states that may be reached in an execution, but also many states that are not reachable in any legal execution. This conservative analysis harms both the accuracy and scalability of static analysis.

A better approach would be for the static analysis to explore precisely the states that will be visited in dynamically analyzed executions. A predicated static analysis tries to achieve this goal by predicting these states through profiling and characterizing constraints on the states as likely invariants.
By exploring only a constrained state space of the program (the states predicted reachable in future executions), predicated static analysis provides fundamentally more precise analysis.

This reduction of state space also improves the scalability of static analysis, which now needs to perform only a fraction of the computation a traditional static analysis would. Static analysis algorithms frequently trade off accuracy for scalability [27, 35, 50, 53]. In some instances this improved efficiency allows the use of more sophisticated static analyses that are more precise but often fail to scale to large programs.

2.3 Dynamic Analysis
The final phase of optimistic hybrid analysis produces a sound, precise, and relatively efficient dynamic analysis. Dynamic analysis is implemented by instrumenting a binary with additional checks that verify a property such as data-race freedom and then executing the instrumented binary to see if the verification succeeds.

In our work, the instrumentation differs from traditional dynamic analysis in two ways. First, we elide instrumentation for checks that static analysis has proven unnecessary; this is done by hybrid analysis also, but we elide more instrumentation due to our unsound static analysis. Second, we add checks that verify that all likely invariants hold true during the execution, and violation-handling code that is executed when a verification fails.

To elide instrumentation, this phase consumes the set of unneeded runtime checks from the predicated static analysis phase. For instance, a data-race detector will instrument all read/write memory accesses and synchronization operations. The static analysis may prove that some of these read/write or synchronization operations cannot contribute to any races, allowing the instrumentation to be elided. Since the overhead of dynamic analysis is roughly proportional to the amount of instrumentation, eliding checks leads to a commensurate improvement in dynamic analysis runtime.

The instrumentation also inserts the likely invariant checks. By design, these invariants are cheap to check, so this code is generally low-overhead and simple. For example, checking likely unused code requires adding an invariant violation call at the beginning of each assumed-unused basic block. This call initiates rollback and re-execution if the check fails. Rollback is necessary as predicated static analysis may optimize away prior metadata updates needed for sound execution once an invariant is violated.
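As a concrete illustration, the likely-unused-code check and the catch-all recovery path can be sketched as follows. This is a minimal Python sketch; the names and the exception-based mis-speculation signal are illustrative assumptions, not the OptFT/OptSlice implementation:

```python
class InvariantViolation(Exception):
    """Raised when an assumed likely invariant fails at runtime."""


def luc_check(block_id):
    # Inserted at the head of every assumed-unreachable basic block:
    # merely reaching this call means the invariant did not hold.
    raise InvariantViolation(block_id)


def run_analysis(execution, optimistic, traditional):
    """Speculatively run the optimized analysis; on mis-speculation,
    roll back and re-analyze the (replayed) execution soundly."""
    try:
        return optimistic(execution)    # fast path: elided checks
    except InvariantViolation:
        return traditional(execution)   # sound fallback on violation
```

The key property of this structure is that the fallback path produces exactly the result a traditional hybrid analysis would, so soundness is preserved even though the fast path elides metadata updates.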
Figure 2 shows how a metadata update for variable a on line 2 is elided by optimistic hybrid analysis because of the likely-unused code (LUC) invariant on line 4. If the invariant fails, then the metadata required to do the race check on line 5 is missing, and will be recovered by rolling back and executing line 2 with conservative analysis.

We currently handle invariant violations with a catch-all approach: roll back the entire execution and re-analyze it with traditional (non-optimistic) hybrid analysis. As we target retroactive analysis, this approach is practical for several reasons. First, with sufficient profiling, invariant violations will be rare enough that even this simple approach has minimal impact on overall analysis time. Second, restarting a deterministic replay, and guaranteeing an equivalent execution, is trivial with record/replay systems, which are commonly used in retroactive analyses. If the cost of rollback became an issue or full record/replay systems were impractical, we could reduce the costs of rollbacks through more profiling or explore cheaper rollback mechanisms, such as partial rollback or partial re-analysis.

One appealing approach to reducing the cost of invariant mis-speculation is to recover by rolling back to a predicated static analysis that does not assume the invariant just violated. However, doing so generally would require an analysis for each possible set of invariant violations (O(2^n), where n is the number of invariants), far too many static analyses to reasonably run. It may be possible to reduce this number by grouping invariants, but since we do not experience significant slowdowns with our sound-analysis recovery method, we do not explore this approach further.

3 Static Analysis Background
OptFT and OptSlice are built using several data-flow analyses, such as backward slicing, points-to, and may-happen-in-parallel. Data-flow analyses approximate how some property propagates through a program.
To construct this approximation, a data-flow analysis builds a conservative model of information flow through the program, usually using a definition-use graph (DUG). The DUG is a directed graph that creates a node per definition (def) analyzed. For example, a slicing DUG would have a node per instruction, while a points-to analysis would have nodes for pointer definitions. Edges represent information flow in the program between defs and the defs defined by uses. For example, an assignment operation in a points-to analysis creates an edge from the node representing the assignment's source operand to the node representing its destination. Once the DUG is constructed, the analysis propagates information through the graph until a closure is reached. To create optimistic versions of these data-flow analyses, we leverage likely invariants to reduce the number of paths through which information flows in the DUG.

There are many modeling decisions that an analysis makes when constructing the DUG. One critical choice is that of context-sensitivity. A call-site context-sensitive analysis logically distinguishes different call-stacks, allowing more precise analysis. A context-insensitive analysis tracks information flow between function calls, but does not distinguish between different invocations of the same function.

Logically, a context-insensitive analysis simplifies and approximates a program by assuming a function will always behave the same way, irrespective of calling context. To create this abstraction, context-insensitive analyses construct what we call local DUGs for each function by analyzing the function independently and creating a single set of nodes in the graph per function. The analysis DUG is then constructed by connecting the nodes of the local DUGs at inter-function communication points (e.g., calls and returns).

A context-sensitive analysis differs from a context-insensitive analysis by distinguishing all possible calling contexts of all functions, even those which will likely never occur in practice. To create this abstraction, the DUG of the analysis replicates the nodes defined by a function each time a new calling-context is discovered during the DUG construction. One simple method of creating such a DUG is through what is known as a bottom-up construction phase, in which the analysis begins at main, and for each call in main it creates a clone of the nodes and edges of the local DUG for the callee function. It then connects the arguments and return values to the call-site being processed. If that callee function has any call-sites, the callee is then processed in the same bottom-up manner. This recurses until all callees have been processed, resulting in a context-sensitive DUG representing the program.

Figure 2. Example of how OptFT can require rollback on invariant violation. When the likely-unused code (LUC) invariant is violated, the execution must roll back and re-execute line 2 to gather the metadata required for the check on line 5.

Figure 3. Demonstration of how context-sensitive and context-insensitive analyses parse a code segment to construct a DUG, as well as the reductions from likely-unused call contexts.
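The propagate-until-closure step over a DUG, and the way likely invariants shrink it, can be sketched as follows. This is an illustrative Python sketch; the node names and the `pruned` interface are assumptions for exposition, not the actual analysis framework:

```python
from collections import defaultdict, deque


def propagate(edges, seeds, pruned=frozenset()):
    """Worklist data-flow over a def-use graph: each node's facts are the
    union of its seed facts and everything reaching it along DUG edges.
    Nodes in `pruned` (e.g. defs in likely-unreachable code) are removed,
    along with their incident edges, before propagation begins."""
    succs = defaultdict(list)
    for src, dst in edges:
        if src not in pruned and dst not in pruned:
            succs[src].append(dst)
    facts = defaultdict(set)
    work = deque()
    for node, s in seeds.items():
        if node not in pruned:
            facts[node] |= s
            work.append(node)
    while work:                       # iterate to a fixed point (closure)
        n = work.popleft()
        for m in succs[n]:
            if not facts[n] <= facts[m]:
                facts[m] |= facts[n]  # merge and re-queue the successor
                work.append(m)
    return facts
```

For example, with edges [("a","b"), ("b","c"), ("x","c")] and seeds {"a": {"p"}, "x": {"q"}}, node "c" accumulates {"p", "q"}; pruning "x" as likely-unreachable leaves "c" with only {"p"}, mirroring how invariants cut the paths along which information flows in the DUG.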
The context-sensitive expression of the DUG is much larger than that of a context-insensitive analysis, but it also allows for more precise analysis.

Figure 3 illustrates the differences between DUGs constructed by a context-sensitive and a context-insensitive analysis. Nodes 3, 4, and 5 are replicated for each call to my_malloc(), allowing the analysis to distinguish between the different call contexts, but replicating the large do_init() function.

Context-sensitive analyses tend to be precise, but not fully scalable, while context-insensitive analyses are more scalable at the cost of accuracy. We build both context-sensitive and context-insensitive variants of several predicated static analyses.

4 OptFT
To show the effectiveness of optimistic hybrid analysis, we design and implement two sample analyses: OptFT, an optimistic variant of the FastTrack race detector for Java, and OptSlice, an optimistic dynamic slicer for C programs. This section describes OptFT and Section 5 describes OptSlice.

OptFT is a dynamic data-race detection tool that provides results equivalent to the FastTrack race detector [23]. FastTrack instruments load, store, and synchronization operations to keep vector clocks tracking the ordering among memory operations. These vector clocks are used to identify unordered read and write operations, or data-races.

OptFT uses the Chord analysis framework for static analysis and profiling, building on Chord's default context-insensitive static data-race detector [40]. For dynamic analysis we use the RoadRunner [24] analysis framework, optimizing their default FastTrack implementation [23].

4.1 Analysis Overview
The Chord static data-race detector is a context-insensitive, lockset-based detector. The analysis depends on two fundamental data-flow analyses: a may-happen-in-parallel (MHP) analysis, which determines if memory accesses may happen in parallel, and a points-to analysis, which identifies the

memory locations to which each pointer in the program may point.

The analysis first runs its static MHP analysis to determine which sets of loads and stores could dynamically happen in parallel. Once those sets are known, the analysis combines this information with a points-to analysis to construct pairs of potentially racy memory accesses which may alias and happen in parallel. Finally, the analysis uses its points-to analysis to identify the lockset guarding each memory access, and it uses these to exclude pairs of loads and stores guarded by the same lock from its set of potentially racing accesses.

To optimize the dynamic analysis, OptFT elides instrumentation around any loads or stores that predicated static analysis identifies as not racing. The analysis also elides instrumentation around some lock/unlock operations, as we discuss in Section 4.2.4.

4.2 Invariants
OptFT is optimized with four likely invariants. OptFT first gathers the invariants with a set of per-invariant profiling passes, and stores the invariant set for each profiling execution in a text file. This text file maps invariant sites to sets of invariant data (e.g., a basic block to how many times it is visited, or an indirect callsite to the functions it may call). Then, after all profiles are run, the individual runs' invariant sets are merged (by intersecting the sets of invariants, to find invariants that hold true for all runs) to gather the invariant set for all of the profiling experiments. The individual invariants gathered and used by OptFT are:

4.2.1 Likely Unreachable Code
The first, and simplest, invariant OptFT assumes is likely unreachable code. We define a basic block within the program that is unlikely to be visited in an execution as a likely unreachable code (LUC) block. To profile LUC, OptFT profiles the inverse, that is, used basic blocks. OptFT runs a basic-block counting profiling pass, which instruments each basic block to create a count of the times it was visited. OptFT uses this information to create a mapping of basic blocks to execution counts. The inverse of the profiled blocks (the set of basic blocks not in our visited basic block set) is our likely unvisited set.

This invariant easily satisfies the three criteria of good likely invariants. First, it is strong; the invariant reduces the state space our data-flow analyses consider by pruning nodes defined by likely unused code, and any edges incident upon them, from our analysis DUGs. This reduction in connectivity within the DUG can greatly reduce the amount of information that propagates within the analysis. Second, the invariant is virtually free to check at runtime, requiring only a mis-speculation call at the beginning of the likely-unused code. Finally, we observe that unused code blocks are typically stable across executions.

4.2.2 Likely Guarding Locks
Chord's race detector's final phase prunes potentially racy accesses by identifying aliasing locksets. Unfortunately, this optimization is unsound. To soundly identify if two lock sites guard a load or store, Chord needs to prove that the two sites must hold the same lock when executing. However, the alias analysis Chord uses only identifies may-alias relations. To get sound results from Chord we must either forego this lock-based pruning or use a (typically unscalable and inaccurate) must-alias analysis. In the past, hybrid analyses that use Chord have opted to remove this pruning phase for soundness [47].

Likely guarding locks attempt to overcome Chord's may-alias lockset issue by dynamically identifying must-alias lock pairs. The profiling pass instruments each lock site and tracks the objects locked, creating a set of dynamic objects locked at each lock site. If it identifies that two sites always lock only the same dynamic object, it assumes a must-alias invariant for the lock pair. The output of this profiling execution is a set of these "must-alias" relations; these pairs can then be directly consumed by Chord's lockset pruning pass.

The invariant is strong. By assuming the invariant, the Chord race detection algorithm can add in some of the lockset-based pruning discarded due to its weaker may-alias analysis. Additionally, the invariant is cheap to check at runtime. The dynamic analysis need only instrument the assumed aliasing lock sites and verify the sites are locking the same object, a check far less expensive than the lock operation itself. Finally, executions do not vary the objects locked frequently, so this invariant remains stable across executions.

4.2.3 Likely Singleton Thread

