
Increasing Data Center Network Visibility with Cisco NetFlow-Lite

Luca Deri
ntop / IIT-CNR, Pisa, Italy
deri@ntop.org

Ellie Chou, Zach Cherian, Kedar Karmarkar
Cisco Systems, San Jose, CA, USA
{wjchou, zcherian, kedark}@cisco.com

Mike Patterson
Plixer Inc, Sanford, ME, USA
mike@plixer.com

ABSTRACT
NetFlow is the de-facto protocol used to collect IP traffic information by categorizing packets into flows and obtaining important flow information such as IP addresses, TCP/UDP ports and byte counts. With the information obtained from NetFlow, IT managers can gain insight into the activities in the network. NetFlow has become a key tool for network troubleshooting, capacity planning and anomaly detection. Because it has to examine every packet, NetFlow is often implemented on expensive custom ASICs, or else packet forwarding suffers a major performance hit, which limits its adoption. NetFlow-Lite bridges this gap as a lower-cost solution, providing network visibility similar to that delivered by NetFlow.

This paper describes the architecture and implementation of NetFlow-Lite, and how it integrates with nProbe to provide a scalable and easy-to-adopt solution. The validation phase carried out on Catalyst 4948E switches has demonstrated that NetFlow-Lite can efficiently monitor high-speed networks and deliver results similar to those provided by NetFlow with satisfactory accuracy.

Categories and Subject Descriptors
C.2.2 [Computer-Communication Networks]: Network Protocols—DNS; C.2.3 [Network Operations]: Network monitoring.

General Terms
Measurement, Performance.

Keywords
NetFlow-Lite, Passive traffic monitoring.

1. INTRODUCTION AND MOTIVATION
1.1 Flow-based Network Monitoring
NetFlow [1] and IPFIX are two popular traffic monitoring protocols that classify traffic into flows. Within this context, a flow is defined [2] as a set of IP packets passing through an observation point during a certain time interval. Packets belonging to a flow share a set of common header properties, including source/destination IP address and port, VLAN, application protocol and TOS (Type of Service). In both NetFlow and IPFIX the flow probe, responsible for aggregating packets into flows, is usually embedded in the network device through which the traffic to be analyzed flows. When traffic analysis capabilities are missing from the network devices, it is also possible to export packets from the network device (e.g. using a span port or a network tap) to a PC and have them analyzed by a software probe [4] [5].

When flows expire, either due to a timeout or to their maximum duration, they are exported from the device to a flow collector over UDP/SCTP in NetFlow/IPFIX format. The flow collector usually runs on a PC, and it often dumps flows to a database after flow filtering and aggregation. Unlike SNMP [3], NetFlow/IPFIX are based on a push paradigm in which the probe sends flows to the collector, without allowing the collector to periodically read flows from the probe.

Flows are computed on IP packets, which limits NetFlow/IPFIX visibility to the IP protocol.
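To make the flow abstraction concrete, the sketch below shows a minimal flow key and record as a software probe might keep them in its cache; the field and type names are illustrative only (IPv4 case shown), not those of nProbe or any specific implementation.

```c
#include <stdint.h>

/* Illustrative flow key: packets sharing these header fields are
 * accounted to the same flow (IPv4 shown for brevity). */
typedef struct {
    uint32_t src_ip, dst_ip;      /* source/destination IPv4 address */
    uint16_t src_port, dst_port;  /* TCP/UDP ports */
    uint16_t vlan_id;             /* VLAN tag, if any */
    uint8_t  proto;               /* IP protocol (TCP = 6, UDP = 17, ...) */
    uint8_t  tos;                 /* Type of Service */
} flow_key_t;

/* Per-flow counters, updated for every packet matching the key and
 * exported when the flow expires (idle timeout or maximum duration). */
typedef struct {
    flow_key_t key;
    uint64_t   packets, bytes;
    uint32_t   first_seen, last_seen;  /* epoch seconds */
} flow_record_t;
```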
Although flow-based analysis is quite accurate, it is relatively heavy for the probe, as every packet needs to be decoded and the number of active flows grows with the traffic rate. In order to cope with high-speed traffic analysis while preventing NetFlow/IPFIX from taking over all the available resources on the monitoring device, sampling techniques are often used [10]. Sampling can happen both at the packet [6] and at the flow [7] level. In the former case, reducing the amount of traffic to be analyzed also reduces the load on the probe, but often not the number of flows being computed; in the latter case, reducing the number of exported flows decreases the load on the collector, with little relief on the probe side. Unfortunately the use of sampling leads to inaccuracy [8] [9], and thus network operators prefer to avoid it when possible.

Although on layer-three routers the use of sampling is not desirable, monitoring high-speed switches without sampling is not really feasible. This is because the total aggregate port traffic can easily exceed 100 Gbit (if not 1 Tbit), so either monitoring is restricted to a limited set of ports or some packet sampling technique has to be used. Furthermore, it is a common misconception that sampling reduces the accuracy of measurements [11].
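As a hedged illustration of the extrapolation step a probe or converter performs when 1-in-N packet sampling is enabled (the function and variable names below are ours, not from any particular implementation):

```c
#include <stdint.h>

/* With deterministic 1-in-N packet sampling each observed packet stands
 * for roughly N packets on the wire, so per-flow counters are inflated
 * by N before export.  Example: 10 sampled packets / 8,000 sampled
 * bytes at 1:32 extrapolate to about 320 packets / 256,000 bytes. */
static void scale_by_sampling_rate(uint64_t *packets, uint64_t *bytes,
                                   uint32_t n) {
    *packets *= n;
    *bytes   *= n;
}
```

The scaled values are only estimates; short flows in particular may be missed entirely or over-counted, which is the source of the inaccuracy discussed in [8] [9].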

1.2 Motivation
In today's complex network environment, applications with diverse purposes converge on a common network infrastructure, and users from different geographic locations connect to the same physical network through different methods. As a result, visibility into network activities and application traffic is critical to many IT managers.

For years people have been using NetFlow to gain insight into network traffic. However, NetFlow is not always an available option. In some places in the network, the networking gear is often not equipped with such capability because of the architecture design and cost structure needed to fit that specific market (for example, data center ToR switches).

Flexible NetFlow is an evolution of NetFlow. It uses the extensible format of NetFlow version 9 or IPFIX and can export not only the key fields seen in traditional NetFlow, but also new fields such as the packet section. Flexible NetFlow also introduces the concept of an immediate cache, which allows immediate export of flow information without hosting a local cache. NetFlow-Lite [13] is built upon the flexibility of Flexible NetFlow, combined with packet sampling, to offer visibility similar to that delivered by NetFlow at a lower price point, without the use of expensive custom ASICs and while maintaining packet forwarding performance.

Due to the pervasiveness of NetFlow in many parts of the network, the solution also needs to integrate easily with existing infrastructure that is already monitored through NetFlow. In addition, the solution needs to be scalable in order to accommodate the rapid growth of today's networks, especially in mega-scale data centers (MSDCs), where thousands of servers are connected to provide application services that scale with the business needs. One challenge that arises when monitoring networking devices with a centralized collector/analyzer is the extra traffic that is generated and traverses the network. Not only is valuable bandwidth taken up, but the centralized collector might not be able to scale up to meet the demands.

This is where a NetFlow-Lite converter, such as nProbe, fits in. It bridges the world between NetFlow-Lite and NetFlow: it parses the packet section exported in NetFlow version 9 or IPFIX format, extracts key information such as source/destination IP address, TCP/UDP port and packet length, builds a temporary flow cache, extrapolates flow statistics by correlating the sampling rate with the sampled packets, and exports the aggregated and extrapolated data to NetFlow collectors in standard IPFIX or NetFlow v5/v9 format. With this solution, valuable forwarding bandwidth is conserved by aggregating NetFlow-Lite data into a more bandwidth-efficient NetFlow export.

In a nutshell, NetFlow-Lite is a technology that provides visibility in the data center, as it enables network administrators to:
- Know what applications are consuming bandwidth, who is using them, when they are being used, and what activities are prevalent.
- Have visibility and control of the network.
- Gather data for network and capacity planning.
- Troubleshoot issues.
- Implement network forensics.

The rest of the paper is organized as follows. Section two describes the NetFlow-Lite architecture and flow format. Section three covers the NetFlow-Lite implementation both on the switch and on the collector side. Section four describes how the implementation has been validated against real traffic. Finally, open issues and future work are described in section five.
2. NETFLOW-LITE
2.1 Architecture
In essence, the NetFlow-Lite solution consists of three elements:
- The switches that support NetFlow-Lite functionality and churn out NetFlow-Lite data.
- The converter that aggregates the data into a format understandable by the NetFlow collectors in today's marketplace.
- The NetFlow collector that collects and analyzes not only information originated through NetFlow-Lite, but also NetFlow data gathered from different parts of the network, all through the standard IPFIX format (or NetFlow version 9).

The converter implements the flow cache by populating it with the sampled packets carried in the received flows, rather than doing a simple 1:1 flow format conversion. It then exports the flows in standard NetFlow v5/v9/IPFIX to a standard NetFlow collector. In a nutshell, the NetFlow-Lite converter acts as a flow collector with respect to the switch, as it collects NetFlow-Lite flows, and as a probe with respect to the flow collector.

Figure 1. NetFlow-Lite Architecture (a NetFlow-Lite switch exporting IPFIX/v9 to the NetFlow-Lite to NetFlow/IPFIX converter, which exports IPFIX/v9/v5 to a standard NetFlow/IPFIX collector).

In order to preserve bandwidth on the links between the switches and the converter, an option is provided to specify the number of bytes of the raw packet section that will be included in the export packet. In addition, it is preferable to locate the converter near the switch in order to avoid taking up extra forwarding bandwidth.

Figure 2. NetFlow-Lite Enabled Data Center Architecture (1-in-N packet sampling on the switches, NetFlow v9 or IPFIX export to the NetFlow-Lite converter, then on to any NetFlow collector).

The figure above shows a NetFlow-Lite enabled data center architecture, where NetFlow-Lite samples incoming traffic on the ToR (top of rack) switches. The converter sits between the NetFlow-Lite capable switches and the NetFlow collectors, extracting information such as IP addresses and TCP/UDP ports from the raw packet section and aggregating it into a local flow cache. The flow cache can be exported to any existing NetFlow collector for analysis and correlation.

With larger data centers, a zonal design is recommended. In that case, a converter is placed per "zone" and is responsible for aggregating and converting the NetFlow-Lite packets within that zone. Converters from different zones can feed the aggregated NetFlow data into a centralized NetFlow collector in order to achieve data center-wide network visibility.

2.2 Flow Format
A switch with NetFlow-Lite functionality observes ingress traffic and samples packets at a 1-in-N rate at the monitoring point, for example an interface on the switch. The sampled packets are exported in standard NetFlow version 9 or IPFIX format. IPFIX and NetFlow version 9 differ from previous versions in that they are template-based. Templates allow the design of extensible record formats. Figure 3 shows the NetFlow version 9 format.

Figure 3. NetFlow v9 Format

It consists of:
- Template FlowSet: a collection of one or more template records that have been grouped together in an export packet.
- Template record: used to define the format of subsequent data records that may be received in current or future export packets. It is important to note that a template record within an export packet does not necessarily indicate the format of data records within that same packet. A collector application must cache any template records received, and then parse any data records it encounters by locating the appropriate template record within the cache.
- Data FlowSet: a collection of one or more data records that have been grouped together in an export packet.
- Data record: provides information about an IP flow that exists on the device that produced the export packet. Each group of data records (that is, each data FlowSet) references a previously transmitted template ID, which can be used to parse the data contained within the records.
- Options template: a special type of template record used to communicate the format of data related to the NetFlow process.
- Options data record: a special type of data record (based on an options template) with a reserved template ID that provides information about the NetFlow process itself.

One of the capabilities of this extensible design is that raw packet sections can be exported in data records, which facilitates the export of NetFlow-Lite sampled packets. NetFlow-Lite enabled switches export three different templates:
- A data template that describes the structure of the sampled packets exported by the switch.
- An options template that describes the structure of the sampler configuration data.
- An options template that describes the structure of the interface index mapping data.

The options template describing the sampler configuration essentially exports the structure of the following pieces of information:
- An identifier for a given sampler configuration.
- The type of packet sampling algorithm that is employed (currently 1-in-N packet sampling).
- The length of the packet section extracted from the input sampled packet.
- The offset in the input sampled packet from where the packet section is extracted.

Templates are exported by default every 30 minutes, and they can be packed into a single export packet to reduce the number of transmitted packets.

Figure 4. NetFlow-Lite Sampled Flow Datagram (L2/L3/UDP headers, 42 bytes for IPv4 or 62 bytes for IPv6, followed by an 84-byte truncated sample).
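For readers unfamiliar with the template-based format described above, the fixed NetFlow v9 headers can be sketched as the C structures below; field meanings follow the NetFlow v9 specification (RFC 3954), while the struct names themselves are merely illustrative.

```c
#include <stdint.h>

#pragma pack(push, 1)
/* NetFlow v9 export packet header (RFC 3954); all fields big-endian. */
typedef struct {
    uint16_t version;     /* always 9 */
    uint16_t count;       /* total records (template + options + data) */
    uint32_t sys_uptime;  /* ms since the exporting device booted */
    uint32_t unix_secs;   /* export time, seconds since the epoch */
    uint32_t sequence;    /* incremented for every export packet */
    uint32_t source_id;   /* identifies the exporting observation domain */
} nf9_header_t;

/* Every FlowSet starts with this header: id 0 = template FlowSet,
 * id 1 = options template FlowSet, id >= 256 = data FlowSet whose
 * records follow the previously received template with that id. */
typedef struct {
    uint16_t flowset_id;
    uint16_t length;      /* length in bytes, this header included */
} nf9_flowset_header_t;

/* A template record: template id and field count, followed by
 * field_count (type, length) pairs describing each data-record field. */
typedef struct {
    uint16_t template_id; /* >= 256 */
    uint16_t field_count;
} nf9_template_header_t;
#pragma pack(pop)
```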

From the flow format point of view, NetFlow-Lite flows are standard v9/IPFIX flows defined using a template: they contain the packet section and other sampling parameters, but not traditional fields such as source/destination IP address. In order to bridge between NetFlow-Lite and NetFlow, and to integrate NetFlow-Lite into existing NetFlow solutions, a converter is necessary to turn the information contained in the packet section, such as source/destination IP address and TCP port, into a format understandable by the NetFlow collectors on the market today.

NetFlow-Lite switches can adapt the sampling rate per switch port. This means that network managers can provide precise monitoring of selected switch ports by disabling sampling (i.e. a 1-to-1 sampling rate), while using a higher sampling rate for all remaining ports.

The use of the standard v9/IPFIX format spares NetFlow-Lite converters from having to support a custom export protocol, while allowing them to be deployed anywhere in the network as long as they are reachable via IP. Another advantage is that future changes and extensions to the flow format do not require changes on the collector, as new fields can be accommodated in the exported flows simply by defining them in the exported template.

Flow conversion is transparent to existing NetFlow/IPFIX collectors and back-end tools. The use of sampling allows NetFlow-Lite to scale both in terms of the number of ports and the number of packets being monitored. The sampling rate can be adapted according to various parameters, such as the total number of packets collected by a converter and the number of switch exporters per converter.

3. IMPLEMENTATION
Due to its probe/converter architecture, supporting NetFlow-Lite has required both enhancing the switch and creating the converter. No changes have been necessary on the collector side, as the converter emits standard flows in v5, v9 and IPFIX format.

3.1 Switch Implementation
On the Cisco Catalyst 4948E switch, the rate at which input packets are sampled is based on user configuration. The switch supports very fine-grained sampling rates, which allows for high-quality traffic monitoring. Sampling and export are both done in hardware, which avoids putting heavy load on the control plane. Each sampled packet is exported as a separate NetFlow data record in NetFlow v9 or IPFIX format.

The switch implements a relatively inexpensive, largely stateless way of doing packet sampling and NetFlow export in hardware. The switch makes copies of the packets coming in and being forwarded through the switch, using appropriate rules in the classification engine that identify packets coming from monitored interfaces. The original packet undergoes normal forwarding and switching treatment through the device. The copies undergo a two-level sampling process.

At the first level, the copies of packets from the various monitored interfaces are generated and sent to a transmit queue where a credit rate limiting scheme is applied. This credit mechanism is called DBL (Dynamic Buffer Limiting) and is proprietary to the Cisco Catalyst switches. DBL is normally used as an active queue management mechanism on the switch, but in this case it is ingeniously used for the first-level selection of sampled packets. DBL credits are applied to a monitor and refreshed in a time-based fashion that allows enough packets from a monitored interface to be enqueued to the transmit queue to match the user-configured sampling rate. Whenever a packet from a monitor is enqueued to the transmit queue, the credits for that monitor are decremented. The credit lookup is done through a hashing scheme that can take as input various packet fields and the input port. This effectively provides the ability to sample packets as if on the input, before packets from the various monitors aggregate into the transmit queue.

The DBL credits and refresh frequency take into account the average packet size observed at a given monitor. Users may override the observed average packet size and configure an average packet size for a monitor via the CLI. The system will then use that average packet size in computing credits for traffic seen by that monitor. Traffic flows from each monitor are isolated from traffic on other monitors, because the DBL hash key masks are based only on the incoming interface or VLAN ID for port and VLAN monitors, respectively.

From the transmit queue the sampled packets are fed to an FPGA, which performs a final sampling of the packets from each monitor in order to eliminate extra samples. They are then exported in NetFlow version 9 or IPFIX format, assisted by the FPGA.

The combination of high sampling rates and user-configurable options provides highly accurate sampling for NetFlow-Lite. The hardware-assisted sampling and export offer a scalable solution with minimal impact on the control plane.
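The first-level selection can be pictured with the simplified sketch below. DBL itself is proprietary and implemented in Catalyst hardware, so the structure, field names and refresh policy shown here are purely illustrative assumptions meant only to convey the credit idea.

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-monitor credit state: credits are replenished on a timer so that,
 * on average, enough copies reach the transmit queue to honour the
 * configured 1-in-N rate; every enqueued copy consumes one credit. */
typedef struct {
    uint32_t credits;           /* copies still allowed in this interval */
    uint32_t credits_per_tick;  /* derived from rate N and avg packet size */
} monitor_state_t;

/* Called periodically: refill the monitor's credit budget. */
static void refresh_credits(monitor_state_t *m) {
    m->credits = m->credits_per_tick;
}

/* Called for each packet copy from the monitor: decide whether the copy
 * may be enqueued toward the second-level (FPGA) stage or dropped. */
static bool admit_copy(monitor_state_t *m) {
    if (m->credits == 0)
        return false;  /* budget for this interval exhausted */
    m->credits--;
    return true;
}
```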
3.2 NetFlow-Lite Converter Implementation
The NetFlow-Lite converter has been implemented as an extension to nProbe [4], an open-source NetFlow/IPFIX probe/collector developed by one of the authors and available for both Unix and Windows systems. As stated before, the flows emitted by the switch to the exporter follow the v9/IPFIX guidelines, so from the flow format point of view no changes have been necessary. The main changes in nProbe have been:
- The ability to interpret the received NetFlow-Lite flows.
- The extraction of the packet samples.
- The use of the samples to populate the flow cache.

In addition to the packet samples, the flows emitted by the switch contain additional information that is necessary to properly support NetFlow-Lite, including:
- The name and id (configured on the switch) of the sampler that sampled the packet.
- The sampling algorithm and the size of the sampling pool used by the sampler.
- The original packet length before the packet was cut to the specified snaplen.
- The packet offset of the received sample, as the switch can be configured to emit sampled packets starting from a specific offset (the default is 0) after the Ethernet header.
- The switch interface on which the packet has been sampled.

Switch samplers are responsible for selecting the packets to sample. A switch can define many samplers, and thus each switch port can potentially have a specific sampler. This allows, for instance, a per-port sampling rate, but it requires the converter to store this information, as the received samples need to be scaled based on the sampler that emitted them.

In order to enhance collection performance, it is possible to configure the switch to send flows to a pool of UDP ports rather than to a single one. The switch sends the flow templates to the first port of the pool, and the flow samples to the remaining ports. Currently the destination ports are selected in round-robin in order to balance the load on the collector side.

Figure 5. NetFlow-Lite Collection (a NetFlow-Lite switch exporting over a 1/10 Gbit link to multiple UDP ports on the NetFlow-Lite converter).

This has been an important change, as it has allowed the converter to boost its performance. In fact, NetFlow collectors are usually designed to handle a limited number of flows per second [14], which are often dumped to persistent storage after filtering and aggregation. In the case of the NetFlow-Lite converter, the number of received flows can be very high and exceed the rate of 1 million flows/sec, whereas a high-end NetFlow collector can seldom sustain rates of a couple of hundred flows/sec. The number of collected flows can be quite high if the switch is configured with a 1:1 sampler on a high-traffic port. Unfortunately, as all the templates are sent to a single UDP port, it is not possible to spawn multiple independent converters, one per UDP port, so that each could analyze a portion of the traffic. Furthermore, as the switch selects destination ports in round-robin, it can happen that two sampled packets belonging to the same flow are sent to different UDP ports. The use of 16 collection ports has allowed nProbe to successfully collect and convert up to 500K flows/sec per switch with a single-threaded instance. Unfortunately this performance has not been enough, and thus a different solution had to be developed.

Leveraging the experience of the PF_RING project [15], in order to further boost converter performance we decided to exploit multi-core computer architectures by developing a kernel module that expedites these operations. The idea is to perform in-kernel NetFlow-Lite collection driven by the user-space nProbe converter.

Figure 6. NetFlow-Lite PF_RING Plugin (a PF_RING-aware driver on a 10 Gbit NIC (Intel 82599) spreads packets across per-core RX queues via RSS (Receive Side Scaling) hardware per-flow balancing; the in-kernel NetFlow-Lite PF_RING plugin feeds the user-space nProbe).

nProbe sets a PF_RING kernel filter for the IPv4/v6 UDP ports on which flows will be received; the filter instructs PF_RING to divert such packets to the kernel plugin without letting them continue their journey to user-space. The PF_RING kernel plugin implements flow collection by maintaining information about the received templates in kernel memory. Sampled packets are extracted from the flows and sent to nProbe via a PF_RING socket. Along with the packet header and timestamp, PF_RING adds metadata, such as sampling information and interface id, that has been extracted from the received flows.
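The per-sample work just described (parse the embedded packet section, update a flow-cache entry, scale it by the rate of the sampler that emitted it) can be illustrated with the minimal sketch below. It assumes the packet section begins at the IPv4 header (i.e. the configured offset is 0 bytes past the Ethernet header), uses hypothetical names, and omits IPv6 and error handling; it is not nProbe's actual code.

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Hypothetical flow key extracted from a received packet section. */
struct sample_key {
    uint32_t src_ip, dst_ip;      /* network byte order */
    uint16_t src_port, dst_port;  /* host byte order */
    uint8_t  proto;
};

/* Parse an IPv4 packet section of 'len' bytes into a flow key.
 * Returns 0 on success, -1 if the section is too short or not IPv4. */
static int parse_section(const uint8_t *sec, size_t len,
                         struct sample_key *k) {
    if (len < 20 || (sec[0] >> 4) != 4)        /* need a full IPv4 header */
        return -1;
    size_t ihl = (size_t)(sec[0] & 0x0f) * 4;  /* IPv4 header length */
    memcpy(&k->src_ip, sec + 12, 4);
    memcpy(&k->dst_ip, sec + 16, 4);
    k->proto = sec[9];
    k->src_port = k->dst_port = 0;
    if ((k->proto == 6 || k->proto == 17) && len >= ihl + 4) { /* TCP/UDP */
        uint16_t p;
        memcpy(&p, sec + ihl, 2);     k->src_port = ntohs(p);
        memcpy(&p, sec + ihl + 2, 2); k->dst_port = ntohs(p);
    }
    return 0;
}
```

The flow-cache entry found (or created) for this key would then be updated with the original packet length carried in the flow record and scaled according to the rate of the sampler identified by the received sampler id, as described above.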
Modern multi-queue adapters such as the Intel 82599 allow the card to be partitioned into several RX queues, one per processor core. PF_RING exploits this feature by allowing each queue to work independently and to poll packets concurrently, one queue per core. By means of a PF_RING-aware driver that pushes packets to PF_RING without using the Linux kernel queueing mechanisms, packets are copied from the NIC buffers directly to the NetFlow-Lite plugin. As there is a single plugin instance, kernel locking has been carefully avoided where possible, so each queue extracts sampled packets without interference from the other queues. The only lock present in the plugin is used when templates are received and need to be copied into memory. As this information is shared across all queues, the lock is necessary to avoid a poller using a template while it is being updated. Nevertheless, as templates are received very seldom (by default every half an hour), we can assume that in practice no locking happens. An advantage of this solution, besides the increased processing speed, is that every PF_RING-aware network application can use the converted packet samples to implement monitoring. For instance, by means of libpcap-over-PF_RING, applications such as tcpdump and wireshark can analyze the received packets as if they were captured from a network interface, without being aware that they were received encapsulated in NetFlow-Lite flows.

The use of an external server-based converter could be detected by a flow collector, as flows are sent by nProbe and not by the switch. In order to make NetFlow-Lite totally transparent to applications, nProbe implements automatic packet spoofing based on the source IP:port on which the sampled flows have been received. Converted flows are thus sent not with the IP address of the server on which nProbe runs, but with the original IP:port of the switch that sent the NetFlow-Lite flows. This information is propagated by the PF_RING kernel module to nProbe as part of the metadata associated with each packet.

3.3 Collector Implementation
The collector receives and stores the NetFlow-Lite datagrams from the converter. The data is massaged and formatted, then made available to the reporting front end. The reports are in turn used to optimize network performance. As previously stated, no change has been necessary to support NetFlow-Lite on the collector side.

The collector listens for the NetFlow-Lite datagrams on specified UDP ports. The NetFlow-Lite converter leaves the original source IP address intact when a datagram is forwarded to the collector. This technique keeps the converter's involvement completely transparent to the collector. Since the collector could be receiving datagrams from more than one source, the datagrams from each switch are saved in separate tables in the MySQL backend. Before the datagrams can be saved, a template must be received from the exporter; without a template, the datagrams are dropped until one comes in. The templates tell the collector the contents that can be expected in the datagrams. Typically the template refresh rate is configured to once every 1-2 minutes; a less common method is to export the template every X datagrams. It is important to note that each switch could be exporting different details about the flows it is forwarding. Some may include fields in the template for MAC address or VLAN ID; others may not include these fields but may include metrics on VoIP jitter, packet loss or TCP connection latency. The templates are therefore critical to the collection process and also play a pivotal role when it is time to report on the data.

Reporting on NetFlow-Lite depends largely on the data received. In addition to traditional NetFlow v5 information, nProbe can report detailed information about MAC addresses, VLAN tags, and tunnels (e.g. GRE/GTP) that were part of the original packet sample. The collector makes certain reports available to the user based on the data available in the template. Before we discuss the reports, several routines must be run on the collected data. The most significant processes worth noting for this paper are as follows.

Application Identification. Here we identify the likely application or wellKnownPort of the flow. During this process, the collector looks at the source (e.g. 5555) and destination (e.g. 80) transport ports. The lower of the two ports is compared to a 'wellKnownPorts' table. If the lower port has an entry (e.g. 80 HTTP), the wellKnownPort is noted inside the saved flow. If the lower port is not defined, the higher port is compared. If it is also not defined, the lower number is saved with the flow in its native format (e.g. 80), as sketched below.
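A compact sketch of that port lookup logic follows; the tiny table and the function names are illustrative only, not the collector's actual implementation.

```c
#include <stddef.h>
#include <stdint.h>

struct well_known { uint16_t port; const char *name; };

/* Illustrative table only; a real wellKnownPorts table is much larger. */
static const struct well_known well_known_ports[] = {
    { 25, "SMTP" }, { 53, "DNS" }, { 80, "HTTP" }, { 443, "HTTPS" },
};

static const char *lookup_port(uint16_t port) {
    for (size_t i = 0;
         i < sizeof(well_known_ports) / sizeof(well_known_ports[0]); i++)
        if (well_known_ports[i].port == port)
            return well_known_ports[i].name;
    return NULL;
}

/* The lower port is checked first, then the higher one; NULL means
 * neither is known and the flow is simply tagged with the lower port
 * number (e.g. src 5555 / dst 80 yields "HTTP"). */
static const char *identify_application(uint16_t src_port, uint16_t dst_port) {
    uint16_t lo = src_port < dst_port ? src_port : dst_port;
    uint16_t hi = src_port < dst_port ? dst_port : src_port;
    const char *app = lookup_port(lo);
    return app ? app : lookup_port(hi);
}
```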
Interface throughput. When the collector recognizes a new device, it immediately queries the device via SNMP for the interface details [12]. The use of transparent IP spoofing on the nProbe converter allows collectors to query the switch rather than the converter, making them totally unaware that a conversion has taken place.

Rollups. The collector saves 100% of the data in raw format to per-router 1-minute Conversations tables. Every hour it creates a new 1-minute interval table per router. Every 5 minutes, it creates higher intervals using the smaller intervals; this process is called "rollups". When the roll-ups occur for the 5-minute, 30-minute, 2-hour, 12-hour, 1-day and 1-week intervals, two tables are created:
- Totals: the total in and out byte counts are saved per interface before the data for the Conversations table is calculated. This table allows the reporting front end to display accurate total throughput per interface over time, and allows the front end to operate with no dependency on SNMP while still providing accurate total utilization reporting.
- Conversations: all flows for the time period (e.g. 5 minutes) are aggregated together based on a tuple. Once all flows are aggregated, the top 10,000 (by default) flows by byte count are saved and the remaining flows are dropped (see the sketch after this list). Remember: the Totals tables ensure a record of the total in/out utilization per interface over time.

Data is usually aged out over time. Generally the more granular intervals are kept for shorter periods of time. The reporting front end determines the intervals displayed based on the amount of time requested.
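As a rough sketch of that roll-up step (illustrative structures only, not the collector's actual MySQL schema): per-interface totals are accumulated over every flow first, then conversations are ranked by byte count and only the top entries are kept.

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative conversation row; the real tuple has more fields. */
typedef struct {
    uint32_t if_index;  /* interface the flow was seen on */
    uint64_t bytes, packets;
} conversation_t;

static int by_bytes_desc(const void *a, const void *b) {
    uint64_t ba = ((const conversation_t *)a)->bytes;
    uint64_t bb = ((const conversation_t *)b)->bytes;
    return (ba < bb) - (ba > bb);  /* larger byte counts sort first */
}

/* Accumulate per-interface totals over ALL rows, then sort by byte
 * count and return how many rows (at most max_kept, 10,000 by default)
 * should be written to the coarser-interval Conversations table. */
static size_t rollup(conversation_t *rows, size_t n, size_t max_kept,
                     uint64_t *totals, size_t n_interfaces) {
    for (size_t i = 0; i < n; i++)
        if (rows[i].if_index < n_interfaces)
            totals[rows[i].if_index] += rows[i].bytes;

    qsort(rows, n, sizeof(rows[0]), by_bytes_desc);
    return n < max_kept ? n : max_kept;
}
```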

Figure 7. NetFlow-Lite Collector Report

When a report is run on an individual interface with 1-minute intervals, the Totals table is not needed, because the Conversations table contains 100% of the data. When a report is run on an individual interface with no filters at 5-minute or higher intervals, both the Conversations and Totals tables are used in the report. The Totals tables are used to display the total in and out utilization of the interface. The top 10 entries from the Conversations table are then subtracted from the total and added back in color. Reporting on NetFlow and NetFlow-Lite depends largely on the contents of the templates. As stated earlier, traditional NetFlow v5 contains the aforementioned fields from the flow. NetFlow v9/IPFIX allow the export of even greater detail (e.g. MAC address, VLAN, VoIP jitter, packet loss, TCP handshake latency, etc.). The front end of the collector displays only the reports that will work given the types of information available.
