Elixir: A High-performance and Low-cost Approach to Managing Hardware/Software Hybrid Flow Tables Considering Flow Burstiness


Elixir: A High-performance and Low-cost Approach to Managing Hardware/Software Hybrid Flow Tables Considering Flow Burstiness

Yanshu Wang and Dan Li, Tsinghua University; Yuanwei Lu, Tencent; Jianping Wu, Hua Shao, and Yutian Wang, Tsinghua University

/presentation/wang-yanshu

This paper is included in the Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation. April 4–6, 2022, Renton, WA, USA. ISBN 978-1-939133-27-4. Open access to the Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation is sponsored by USENIX.

Elixir: A High-performance and Low-cost Approach to ManagingHardware/Software Hybrid Flow Tables Considering Flow BurstinessYanshu Wang? , Dan Li? , Yuanwei Lu† , Jianping Wu? , Hua Shao? , Yutian Wang? Tsinghua University, † TencentAbstractHardware/software hybrid flow table is common in modern commodity network devices, such as NFV servers, smartNICs and SDN/OVS switches. The overall forwarding performance of the network device and the required CPU resourcesare considerably affected by the method of how to split theflow table between hardware and software. Previous worksusually leverage the traffic skewness for flow table splitting,e.g. offloading top 10% largest flows to the hardware can saveup to 90% CPU resources. However, the widely-existingbursty flows bring more challenges to flow table splitting. Inparticular, we need to identify the proper flows and propertiming to exchange the flows between hardware and softwareby considering flow burstiness, so as to maximize the overallperformance with low overhead.In this paper we present Elixir, a high-performance andlow-cost approach to managing hardware/software hybridflow tables on commodity devices. The core idea of Elixirincludes three parts, namely, combining sampling-based andcounter-based mechanisms for flow rate measurement, separating the replacement of large flows and bursty flows, aswell as decoupling the flow rate identification window andthe flow replacement window. We have implemented Elixirprototypes on both Mellanox ConnectX-5 NIC and BarefootWedge100BF-32X/65X P4 Switch, with a software libraryon top of DPDK. Our experiments based on real-world datatraces demonstrate that, compared with the state-of-the-art solutions, Elixir can save up to 50% software CPU resourceswhile keeping the tail forwarding latency 97.6% lower.1IntroductionWith more and more packet header fields taken as the inputfor forwarding rules, the size of flow table in modern networkdevices grows rapidly [3, 4, 59, 64, 71]. 
Although hardware has fast forwarding speed, the hardware on-chip memory (typically holding 6M flows [13, 49, 50]) usually cannot serve all the concurrent flows (typically on the order of O(10M) [31]). On the contrary, software has large memory but limited forwarding capacity. As a result, many commodity network devices, such as NFV servers, smart NICs and SDN/OVS switches, take a hardware/software hybrid approach to managing the large flow table [4, 15, 16, 61]. With this kind of hybrid flow table, the typical packet forwarding process is as follows. Upon receiving traffic, the hardware extracts certain fields from the received packet's header. If an entry in the hardware flow table is hit, the action associated with the entry is executed on the packet; otherwise, the packet is forwarded to the CPU to match the software flow table.

In this scenario, it is important to figure out how to split the flow table between hardware and software. The splitting method not only affects the forwarding performance, but also determines the CPU resources reserved for software forwarding, which is of particular importance in cloud environments. Previous works usually leverage the traffic skewness for flow splitting [5, 15, 18, 53, 55, 61, 73], e.g. offloading the top 10% largest flows to the hardware can save up to 90% CPU resources. However, the wide existence of bursty flows [19, 25, 33, 48, 52] brings more challenges to flow table splitting, for maximizing the overall forwarding performance with low overhead.

First, how to accurately measure all the flow rates with low overhead on commodity devices?

The dynamic flow replacement between hardware and software requires timely and accurate identification of all the flow rates, including bursty ones. As hardware flows bypass the software, the measurement cannot be done solely by software.
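The hybrid forwarding path described above (a hardware hit executes the associated action; a miss punts the packet to the CPU for a software-table lookup) can be sketched as follows. This is a minimal Python illustration, not the devices' actual data path; the packet fields and table layout are invented for the example:

```python
def extract_fields(packet):
    # Hypothetical key extraction: use a 3-tuple carried in the packet dict.
    return (packet["src"], packet["dst"], packet["proto"])

def forward(packet, hw_table, sw_table):
    """Return (action, path): the matched action and which table served it.

    hw_table: small, fast on-chip table; sw_table: large table in host memory.
    """
    key = extract_fields(packet)
    if key in hw_table:                     # hardware hit: fast path
        return hw_table[key], "hardware"
    return sw_table[key], "software"        # miss: punted to the CPU

# Example tables and a packet that only the software table can match.
hw = {("10.0.0.1", "10.0.0.2", "tcp"): "fwd_port1"}
sw = {("10.0.0.3", "10.0.0.4", "udp"): "fwd_port2"}
pkt = {"src": "10.0.0.3", "dst": "10.0.0.4", "proto": "udp"}
print(forward(pkt, hw, sw))
```

The goal of flow table splitting is then to keep the keys of the highest-rate flows inside `hw` so that as few packets as possible take the software branch.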
Commodity hardware devices, such as Mellanox NICs and P4 Switches, support both attaching a hardware counter to every flow and sampling the hardware traffic to software. The cost of using hardware counters for flow rate measurement is very high (more discussion in Section 2). If sampling a portion of the hardware traffic to software for measurement, a too-large sampling rate wastes many CPU resources while a too-small sampling rate might miss bursty flows, which can result in packet loss. Given a certain sampling rate, we should also set a proper window size for accurate flow rate identification. Therefore, we need a method to accurately measure all the flow rates with low overhead on commodity devices, so as to identify the appropriate set of flows for replacement.

Second, how to set the proper timing for flow replacement between hardware and software?

Due to traffic dynamics, some flow table entries in hardware and software should be exchanged over time. Conventional approaches make periodic replacement, i.e. if a software flow's rate becomes larger than a hardware flow's rate within a time window, the two flows are exchanged with each other. However, in practice we find it difficult to set a proper window size for periodic replacement. In order to timely offload software flows whose rates turn large, the window should be as small as possible. However, a small window leads to frequent flow table replacement. In the high-speed packet forwarding scenario, too-frequent flow table replacement causes considerable performance degradation in both hardware and software. On the other side, if we set a large replacement window to mitigate forwarding performance degradation, bursty flows with a lower average rate than stable flows might be kept in software. To prevent packet loss for these bursty flows, we have to provision much more CPU resources for the peak rate, which is usually several orders of magnitude higher than the average rate. This is a considerable waste of CPU resources. Therefore, we have to resolve the dilemma of setting a proper flow replacement window between hardware and software.

In this paper, we propose Elixir, a high-performance and low-cost approach to managing hardware/software hybrid flow tables on commodity devices, by taking flow burstiness into account. Elixir deals with the aforementioned challenges as follows.

First, Elixir combines sampling-based and counter-based mechanisms for the rate measurement of large flows and bursty flows, respectively. For large flows, packets are sampled from hardware to software with a low sampling rate, so as to reduce the processing overhead on CPU.
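The sampling-based measurement for large flows amounts to scaling each flow's sampled byte count by the inverse of the sampling rate over the identification window. A sketch, with parameter names of our own choosing (the paper does not prescribe this exact interface):

```python
def estimate_rates(sampled_packets, sampling_rate, window_secs):
    """Estimate per-flow rates (bytes/s) from packets sampled out of hardware.

    sampled_packets: iterable of (flow_key, packet_bytes) pairs seen in one
    identification window; sampling_rate: the fraction of hardware traffic
    sampled to software, e.g. 0.2 for 20%.
    """
    totals = {}
    for key, size in sampled_packets:
        totals[key] = totals.get(key, 0) + size
    # Scale up: each sampled byte stands in for 1/sampling_rate actual bytes.
    return {k: v / sampling_rate / window_secs for k, v in totals.items()}

# A flow whose sampled volume is 2,000 bytes over a 10 s window at a 20%
# sampling rate is estimated at 2000 / 0.2 / 10 = 1000 bytes/s.
```

The estimate is only statistically accurate for flows with many packets, which is exactly why the paper reserves it for large flows and handles bursty flows with hardware counters instead.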
For bursty flows that are easily missed by low-rate sampling, a hardware counter is attached to each flow, since the number of concurrent bursty flows is not very large (based on our observation from real traces). By this method, Elixir leverages the benefits of both hardware and software solutions to strike a balance between measurement accuracy and measurement overhead.

Second, Elixir separates the replacement processes of large flows and bursty flows. For large flows, due to their stable rate-changing pattern over time, Elixir periodically exchanges large flows between hardware and software. A relatively large replacement window is used to offload the large flows with the highest average rates during the window, so as to avoid the throughput degradation caused by too-frequent replacement. Meanwhile, as bursty flows appear in the system on an irregular basis, Elixir leverages an event-driven process to offload them to hardware, i.e. bursty flows are offloaded immediately when they are detected. Observing that the size of the software queue increases dramatically when a bursty flow arrives, Elixir uses it as a signal to trigger the bursty flow offloading process. In this way, Elixir makes the tradeoff between burst-aware offloading and the forwarding performance degradation caused by frequent replacement.

Third, Elixir decouples the flow rate identification window and the flow replacement window. Previous works [6, 21, 66] make no distinction between the two windows. In principle, the flow rate identification window is determined by traffic characteristics while the flow replacement window is determined by hardware/software system limitations. Consequently, using one size for the two windows may either sacrifice the flow rate identification accuracy or lead to lagged flow replacement. Elixir explicitly decouples the two and independently decides the proper sizes for them.
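The decoupling of the two windows can be pictured as two independent timers driving one loop: the identification window refreshes the rate estimates, while the replacement window decides when to act on them. A simplified sketch under our own naming; Elixir's real implementation is not necessarily structured this way:

```python
class WindowedReplacer:
    """Illustrative decoupling of the identification and replacement windows."""

    def __init__(self, ident_window, repl_window):
        self.ident_window = ident_window  # seconds of traffic per rate estimate
        self.repl_window = repl_window    # seconds between replacement decisions
        self.samples = []                 # (flow, bytes) seen in current window
        self.rates = {}                   # latest per-flow rate estimates
        self.decisions = []               # log of (time, flow chosen to offload)

    def tick(self, t, batch):
        """Called once per second with that second's sampled packets."""
        self.samples.extend(batch)
        if t % self.ident_window == 0:    # identification window closes:
            totals = {}                   # refresh the rate estimates
            for f, b in self.samples:
                totals[f] = totals.get(f, 0) + b
            self.rates = {f: b / self.ident_window for f, b in totals.items()}
            self.samples = []
        if t % self.repl_window == 0:     # replacement window closes:
            if self.rates:                # act on the freshest estimates
                top = max(self.rates, key=self.rates.get)
                self.decisions.append((t, top))
```

With, say, a 30 s identification window and a 60 s replacement window, the rates are re-estimated twice per replacement decision, so the decision never has to wait on a stale, oversized measurement interval.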
For large flows, the flow replacement window is set to the minimum replacement decision interval that brings affordable impact on the forwarding performance, while the flow rate identification window is set to the minimum window that can accurately identify flow rates, since the sampled traffic may cause inaccurate rankings of large flow rates. For bursty flows, the flow rate identification window is set small enough to catch the flow burstiness, and they are offloaded immediately once detected. By this decoupling, Elixir achieves the tradeoff between timely flow replacement and accurate flow identification.

We have implemented Elixir prototypes on both Mellanox ConnectX-5 NIC and Barefoot Wedge100BF-32X/65X P4 Switch, with a software library on top of DPDK. We run experiments based on real-world traces from the cloud gateways of an Internet content provider. The results show that Elixir can save up to 50% software CPU resources while keeping the tail forwarding latency 97.63% lower compared with TFO [61] and 97.61% lower compared with LFP [15].

2 Motivation and Challenges

In this section, we first describe the findings from traffic measurement of typical cloud gateways, then we present the design challenges for a hardware/software hybrid flow table management solution.

2.1 Traffic Measurement of Cloud Gateways

We examine three cloud gateways of an Internet content provider. The gateways run on commodity servers using hardware/software hybrid flow tables. We collect traffic traces from all three gateways. The data path of these gateways manipulates packet headers and forwards them with tunnels, e.g. GRE or VxLAN [17, 43]. For each gateway, we have collected real-time packet-level traces of a work day. In what follows we describe our key findings from the traffic traces of the three gateways.
Due to page limits, we only present the results for one gateway, since the traffic characteristics of the other two are quite similar.

[Figure 1: Accumulated change rate of large flows against different flow rate identification window sizes.]

Large flows constantly change over time: The rates of flows dynamically change over time. For the traces, we set the flow rate identification window to different sizes, from 1 second to 300 seconds. For each window size, we measure the top 10% flows with the largest flow rates in every time window (referred to as large flows), and count the number of changed large flows between neighboring windows within a 10-minute period. Then we divide this number by the total number of flows to calculate the accumulated change rates of large flows. Fig. 1 shows the accumulated change rates of large flows within the 10-minute period. We find that large flows constantly change over time. This indicates that, in order to efficiently manage the hardware/software hybrid flow table and offload large flows to hardware, the flow table entries in hardware and software need to be periodically exchanged. Moreover, given a fixed time period, a smaller flow rate identification window leads to higher accumulated changes of large flows between neighboring windows.

Bursty flows are common: Previous works on flow table splitting primarily consider offloading large flows, with a focus on the average flow rates during a time period. They pay little attention to bursty flows, whose rates surge to a high value quickly and stay there for a short period before dropping back to a low rate.
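The accumulated change rate of large flows measured above can be computed as a set difference between the top-10% flow sets of neighboring identification windows. A sketch with fabricated flow data (the function and variable names are ours):

```python
def large_flows(rates, fraction=0.1):
    """Return the set of flow IDs in the top `fraction` by rate."""
    k = max(1, int(len(rates) * fraction))
    return set(sorted(rates, key=rates.get, reverse=True)[:k])

def accumulated_change_rate(windows, total_flows, fraction=0.1):
    """windows: list of {flow_id: rate} dicts, one per identification window.

    Counts, across neighboring windows, how many large flows newly appeared,
    then normalizes by the total number of flows.
    """
    changed = 0
    prev = large_flows(windows[0], fraction)
    for w in windows[1:]:
        cur = large_flows(w, fraction)
        changed += len(cur - prev)   # large flows that entered the top set
        prev = cur
    return changed / total_flows
```

Shrinking the window size produces more neighboring-window comparisons over the same fixed period, which is consistent with the observation that smaller windows yield a higher accumulated change rate.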
Bursty flows are very common in both backbone and datacenter networks [19, 25, 33, 48, 52], and may come from bursty applications (such as video applications), TCP incast, or the batching operations of network stacks [48].

Conceptually, a bursty flow may or may not be a large flow; besides, a large flow can be a bursty large flow during some periods and a stable large flow during others. In order to study the impact of bursty flows, we measure the rate-changing pattern of each individual flow in the cloud gateway trace.

The results show that bursty flows are quite common in the trace. We use the ratio of a flow's peak rate (in a second) over its average rate, named the burst ratio, to describe the level of flow burstiness. We depict the CDF of the burst ratio in Fig. 2(a). As shown in the figure, 80% of the flows have a burst ratio above 20, with the maximum ratio as high as 80. By examining the data, we find that the peak rate duration of most bursty flows is very short, e.g. several seconds. Moreover, we make statistics about the distribution of the number of concurrent bursty flows (using bursty flows with a burst ratio above 10 in this case), shown in Fig. 2(b). We find that, although bursty flows are common, their bursty periods usually do not overlap. In other words, the number of concurrent bursty flows is limited, e.g. with the largest number being 36 in our trace.

[Figure 2: The characteristics of bursty flows in the cloud gateway trace. (a) Distribution of the burst ratio (peak/average rate). (b) Distribution of the number of concurrent bursty flows.]

[Figure 3: The cost of forwarding the bursty flow and the stable flow by software. (a) Queue size and latency. (b) Packet loss rate against different software-forwarding CPU cores.]

Bursty flows require more software forwarding resources: We further carry out experiments to quantitatively compare the cost of forwarding bursty flows and stable flows by software. We use three servers, one as the sender, one as the receiver and the third as the software forwarder.
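The burst ratio defined above is simply a flow's peak one-second rate divided by its average rate. A sketch of computing it from a per-second byte series (the series below is fabricated for illustration):

```python
def burst_ratio(per_second_bytes):
    """Peak one-second rate over average rate, for a single flow."""
    avg = sum(per_second_bytes) / len(per_second_bytes)
    return max(per_second_bytes) / avg

# A flow that sends 1 unit/s for 9 s and spikes to 91 units in one second
# has average (9*1 + 91) / 10 = 10 and peak 91, i.e. a burst ratio of 9.1.
series = [1] * 9 + [91]
print(burst_ratio(series))  # 9.1
```

A perfectly stable flow has a burst ratio of 1; the trace's observation that 80% of flows exceed 20 is what makes average-rate-only offloading policies miss so much of the software load.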
We purposely generate a stable flow and a bursty flow at the sender. Notably, the average rate of the stable flow is 5 times higher than that of the bursty flow. Each flow is forwarded only by the software.

We first use one CPU core at the software forwarder. We run the two flows separately, and record the end-to-end latency as well as the queue size at the software forwarder. The results are shown in Fig. 3(a). They indicate that, for the stable flow and the stable periods of the bursty flow, the queue size is small and the end-to-end latency is low; but during the peak rate period of the bursty flow, the queue size and the end-to-end latency sharply increase, by a maximum of 30x and 28x respectively.

Then we use different numbers of CPU cores at the software forwarder to forward the two flows and measure the packet loss rates. As demonstrated by Fig. 3(b), although the average rate of the bursty flow is only one fifth of that of the stable flow, the software forwarding resources required by the bursty flow are 4 times those of the stable flow to avoid packet loss. Specifically, to prevent packet loss, at least 4 CPU cores are required at the software forwarder for the bursty flow, while only 1 CPU core is needed for the stable flow. In conclusion, since CPU resources should be reserved for the peak rate of a flow instead of the average rate to prevent packet loss, in practice bursty flows occupy remarkably more software forwarding resources than stable flows.

2.2 Design Challenges

Previous works [15, 55, 61, 73] on managing hardware/software hybrid flow tables usually leverage the traffic skewness for flow table splitting, i.e. offloading a small portion of the largest flows to hardware can save most of the CPU resources. However, when considering flow burstiness, more challenges have to be addressed in order to maximize the overall performance with low overhead.

Challenge 1: how to accurately measure all the flow rates with low overhead on commodity devices?

An accurate and low-cost approach to measuring all the flow rates is critical for hybrid flow table management on commodity devices. Note that hardware can see all the flows but software can only see the software-forwarded flows, hence the flow rate measurement cannot be done without the support of hardware. Knowledgeable readers may consider building a sketch data structure [12, 23, 39, 41, 68–70, 72, 74] in hardware to measure all the flow rates. However, commodity hardware does not support this kind of functionality yet. Besides, a sketch for so many flows consumes too many resources in hardware, which could otherwise be used to store more forwarding rules.

Modern commodity hardware devices, such as Mellanox NICs and P4 Switches, support both attaching a hardware counter to every flow and sampling the hardware traffic to software (Mellanox plans to support the sampling functionality in a new release). Hence, one candidate solution is to place a hardware counter for each flow and use the counters to accurately measure each flow's rate. However, there are two problems with this method.
First, setting a counter for each flow occupies additional hardware resources. Based on our measurement, hardware counters result in 20% less space for hardware (NIC and switch) forwarding rules, which results in much more traffic forwarded to software and consequently much more CPU resource consumption. Second, the speed at which software can read the counters from commodity hardware devices is slow, e.g. about 20k rules per second for the Mellanox ConnectX-5 NIC. This means that several seconds are required for the software to read all the hardware counters, which is too slow for timely flow replacement.

[Figure 4: Miss probability of bursty flows against different sampling rates.]

[Figure 5: (a) Identification probability of bursty flows and (b) replacement frequency of large flows against different time window sizes.]

The other candidate solution is to sample a portion of hardware traffic to software and use the software to measure all the flow rates. However, as the peak rate duration of bursty flows is quite short, a high sampling rate is required in order not to miss them; otherwise, bursty flows might not be correctly identified for offloading from software to hardware, or might be replaced from hardware to software by mistake. As shown in Fig. 4, in the cloud gateway trace, a 20% sampling rate leads to a 76% probability of missing bursty flows. However, if we set a high sampling rate, much more CPU resources will be occupied. Hence, it is a dilemma how to set a proper sampling rate for a sampling-based measurement approach.

As a result, we have to design an accurate and low-cost flow measurement method that overcomes the problems above.

Challenge 2: How to set the proper timing for flow replacement between hardware and software?

Conventional approaches make periodic flow replacement between hardware and software, i.e.
if a software flow's rate becomes larger than that of a hardware flow within a periodic time window, the two flows are exchanged with each other. Following this approach, as discussed above, if we set a large time window to replace flows between hardware and software, bursty flows might have a lower average rate than stable flows during this window and thus be kept in software. This results in more CPU resources being reserved for software forwarding. Based on the cloud gateway trace, we further analyze the probability of bursty flows ranking higher than stable flows, i.e. bursty flows being identified as large flows to offload by previous solutions, against different identification window sizes. As shown in Fig. 5(a), for bursty flows to rank higher than stable flows with 60% probability, the identification window should be as small as 2 seconds.

On the other side, if we set a small time window for flow replacement, it usually means many more flows to exchange between hardware and software during a fixed time period

(refer to Fig. 1). In other words, a small flow replacement window results in a high replacement frequency. This is validated by Fig. 5(b), which shows the replacement frequency against different replacement window sizes based on the cloud gateway trace, e.g. a 2-second replacement window results in a replacement frequency of 3.25k/s.

[Figure 6: Forwarding speed is degraded by frequent flow replacement between hardware and software. (a) Hardware throughput against different replacement window sizes and numbers of initialized rules. (b) Software throughput against different replacement window sizes.]

Unfortunately, a high replacement frequency causes forwarding speed degradation in both hardware and software. Hardware cannot keep forwarding traffic at a high rate under too-frequent rule replacement, due to the lock mechanism used. We measure the forwarding speed under different replacement frequencies using a commodity Mellanox ConnectX-5 NIC. As shown in Fig. 6(a), with 50k initial hardware flows updated at a replacement frequency of 3.25k/s (2-second time window), the NIC throughput drops by 50%. For Barefoot P4 Switches, today the vendors limit the maximum replacement frequency to 2k/s, which can merely avoid obvious performance degradation during replacement. For software, frequent replacement leads to cache contention and many cache misses, which also degrade the forwarding throughput. Specifically, in our trace, the software throughput drops by 16% at a replacement frequency of 3.25k/s, as shown in Fig. 6(b).

Therefore, if a conventional periodic replacement method is used, we have to resolve the dilemma of how to set a proper time window size for high-performance hybrid flow table management.

3 Design

In this section, we elaborate the design of Elixir.
We first describe the design overview, and then separately present the design details of large flow replacement and bursty flow replacement.

3.1 Design Overview

Elixir takes the following key ideas to address the challenges discussed in Section 2.

First, Elixir uses different methods for measuring the rates of bursty flows and large flows. As large flows usually comprise many packets, a small portion of sampled traffic can give relatively accurate information about flow characteristics. Motivated by that, Elixir samples hardware traffic to software with a low sampling rate for large flow identification. Observing that the number of concurrent bursty flows is small (refer to Fig. 2(b)), Elixir associates each bursty flow with a hardware counter, and the software polls the counters for flow rate measurement.

As different measurement techniques are used for large flows and bursty flows, Elixir separates the hardware flow table into two disjoint areas, i.e. a large flow area and a bursty flow area. Large flows and bursty flows are inserted into the corresponding areas accordingly. It is worth noting that, if a flow is both a large flow and a bursty flow (as discussed in Section 2), we insert the flow's forwarding rule into the hardware area whose replacement mechanism is triggered first for the flow. In principle, no matter in which hardware area the forwarding rule is stored, the flow will be forwarded by hardware and will not cause packet loss in software. Hence, when the flow turns from a bursty large flow into a stable large flow or vice versa, it is unnecessary to move the flow to the other hardware area.

By this method, Elixir leverages the benefits of both hardware and software solutions to strike a balance between measurement accuracy and measurement overhead.

Second, Elixir separates the replacement of large flows and bursty flows, due to their distinct characteristics. As shown in Fig. 1, large flows constantly change over time.
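The event-driven side of this separation can be sketched as follows: the software forwarder watches its queue depth and, once the depth crosses a threshold, immediately offloads the flow dominating the recent queue growth. The threshold, the per-flow counters and the offload callback below are illustrative stand-ins, not Elixir's actual interface:

```python
def check_burst_and_offload(queue_depth, recent_counts, hw_offload,
                            depth_threshold=10_000):
    """Event-driven bursty-flow offload, triggered by software queue growth.

    queue_depth: current software queue length (packets);
    recent_counts: {flow_id: packets enqueued recently};
    hw_offload: callback that installs the flow's rule in hardware.
    Returns the offloaded flow ID, or None if no burst was detected.
    """
    if queue_depth < depth_threshold:
        return None                       # queue is healthy: nothing to do
    # The flow contributing most to the recent queue growth is treated as
    # the bursty flow and offloaded immediately, without waiting for the
    # periodic large-flow replacement window.
    bursty = max(recent_counts, key=recent_counts.get)
    hw_offload(bursty)
    return bursty
```

Because this check runs only when the queue signal fires, it adds replacement work exactly when a burst is present, rather than inflating the steady-state replacement frequency.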
If not adjusted periodically, the throughput of software-forwarded traffic may gradually increase, leading to higher CPU usage. This indicates that we need a periodic process to exchange large flows between hardware and software, and that the replacement window can be set relatively large, so as to keep the impact on the overall forwarding performance affordable.

In contrast, bursty flows appear in the system on an irregular basis, and cause a sharp increase in queue size and forwarding latency. If software-forwarded bursty flows are not offloaded quickly, severe packet loss may result. Consequently, we need to detect the existence of bursty flows quickly and offload them as soon as possible, which is an event-driven procedure.

Overall, Elixir handles large flows and bursty flows separately with distinct methods, i.e. large flows are offloaded periodically while bursty flows are offloaded immediately once detected. In this way, Elixir makes the tradeoff between burst-aware offloading and the forwarding performance degradation caused by frequent replacement.

Third, Elixir decouples the flow rate identification window and the flow replacement window. The flow rate identification window (referred to as the identification window for short in the rest of this paper) is the time window used to measure flow throughput, i.e. flows that generate the most traffic within the window are identified as the large flows. As a result, the size of the identification window should be determined by the traffic pattern. Differently, the flow replacement window (referred to as the replacement window for short in the rest of this paper) is

the time window between two adjacent hardware/software flow replacement decisions, which is closely related to hardware/software system characteristics, i.e. too-frequent flow replacement degrades the performance of both hardware and software. Consequently, using one size for the two windows may either sacrifice the flow rate identification accuracy or lead to lagged flow replacement.

[Figure 7: Architecture overview of Elixir.]

[Figure 8: Identification accuracy against different sampling rates and identification window sizes.]

Based on this observation, Elixir explicitly decouples the identification window and the replacement window. Specifically, for large flows, the replacement window is set to the minimum replacement decision interval that brings affordable impact on the forwarding performance, while the identification window is set to the minimum window that can accurately identify flow rates, since the sampled traffic may cause inaccurate rankings of large flow rates. For bursty flows, the identification window is set small enough to catch the flow burstiness, and they are offloaded immediately once detected.

By this decoupling, Elixir achieves the tradeoff between timely flow replacement and accurate flow identification.

Based on the key ideas above, the architecture overview of Elixir is shown in Figure 7. Two different policies are run separately for large flows and bursty flows.
For large flows, a periodic replacement policy is used, with sampling-based rate identification and a relatively large replacement window; for bursty flows, an event-driven replacement policy is adopted, with counter-based rate identification and the software queue size as the replacement signal.

3.2 Periodic Large Flow Replacement

As aforementioned, Elixir leverages a periodic procedure to exchange large flows between hardware and software, and the rates of large flows are identified using sampled traffic.

Setting the sampling rate and the identification window size: When sampling the hardware traffic to software, a low sampling rate is required in order to reduce the computation and storage overhead in software. To further reduce the hardware/software communication cost, the payload of every packet is cut and only the packet header is delivered from hardware to software. Since we use a low sampling rate, a flow's ranking in the sampled traffic may differ from its ranking in the actual traffic, which leads to inaccurate selection of flows to replace. Generally, a higher sampling rate or a larger identification window not only results in more accurate flow rate rankings as it sees more packets, but also means that more CPU and memory resources are required for flow rate identification in software. In a practical system, the sampling rate and the identification window size should be set by taking multiple factors into account, including CPU overhead, memory overhead and identification accuracy.

[Figure 9: Percentage of software traffic increase against different identification window sizes and replacement window sizes.]

For the cloud gateway trace, Fig. 8 shows the rate identification accuracy against different sampling rates and identification window sizes.
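One plausible way to evaluate such an identification accuracy is to compare the top-k flow set ranked by sampled traffic against the top-k set ranked by the full traffic. The paper does not spell out its exact metric, so the definition below is our assumption, as are all names in it:

```python
import random

def topk(rates, k):
    """Flow IDs of the k highest-rate flows."""
    return set(sorted(rates, key=rates.get, reverse=True)[:k])

def identification_accuracy(flow_packets, sampling_rate, k, seed=0):
    """Fraction of the true top-k flows that sampling also ranks in its top k.

    flow_packets: {flow_id: list of packet sizes within one identification
    window}. Each packet is independently kept with probability sampling_rate,
    mimicking per-packet sampling from hardware to software.
    """
    rng = random.Random(seed)
    true_rates, sampled_rates = {}, {}
    for f, sizes in flow_packets.items():
        true_rates[f] = sum(sizes)
        sampled_rates[f] = sum(s for s in sizes if rng.random() < sampling_rate)
    return len(topk(true_rates, k) & topk(sampled_rates, k)) / k
```

Sweeping `sampling_rate` and the window length (i.e. how many packets land in `flow_packets`) with a harness like this is one way to reproduce the shape of the tradeoff in Fig. 8 on one's own traces.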
Based on these results, we set the sampling rate to 20% and the identification window size to 30 seconds, which achieves an identification accuracy of 90%.

Setting the replacement window size: As aforementioned, Elixir decouples the identification window and the replacement window. When setting the replacement window size for large flows, if a large window is used, some software flows which turn large cannot be timely offloaded, and the software-forwarded traffic may increase during the window, which causes more CPU resource consumption. If a small window is set, as shown in Fig. 6(a) and Fig. 6(b), the high replacement frequency may cause considerable forwarding performance degradation.
