1000BASE-T/10GBASE-T Time-To-Link And Some Implications For 2.5/5GBASE-T


1000BASE-T/10GBASE-T Time-To-Link and Some Implications for 2.5/5GBASE-T
IEEE P802.3bz 2.5G/5G BASE-T Task Force, Architecture ad hoc
Pete Cibula, Intel
May 5th, 2015
Version 2.4

Discussion Outline
- What is time-to-link, why is it important, and how can it be characterized?
- A few representative measurements
- Factors that influence time-to-link
- Observations and summary

What is Time-To-Link (TTL)?
- Time-To-Link (TTL): a system performance metric that characterizes and measures PHY behavior through autonegotiation and the 1G/10G BASE-T startup sequences
  - Autonegotiation: 802.3 Clause 28, "Physical Layer link signaling for Auto-Negotiation on twisted pair"
  - 1Gb: 802.3 Clause 40, Subclause 40.4.2.4, "PHY Control function"
  - 10Gb: 802.3 Clause 55, Subclause 55.4.2.5.14, "Startup sequence"
- One of two primary performance measures (along with BER) used to characterize BASE-T physical layer link interoperability
[Figure: autonegotiation followed by the 1Gb or 10Gb startup sequence]
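As a rough illustration of how TTL can be measured from the host side (not the presenter's methodology), the sketch below times the interval from an administrative link bounce to the kernel reporting carrier on a Linux system. The interface name eth0, the use of ip(8) to force the link-initiation event, and "carrier up" as the link criterion are all assumptions, not details from the presentation.

    # Minimal host-side time-to-link sketch for Linux (illustrative only).
    # Assumptions: the NIC appears as "eth0" (hypothetical name), the script
    # runs with permission to use `ip link`, and "link" means the kernel
    # reports carrier on the interface.
    import subprocess
    import time
    from pathlib import Path
    from typing import Optional

    IFACE = "eth0"
    CARRIER = Path(f"/sys/class/net/{IFACE}/carrier")

    def measure_ttl(timeout_s: float = 30.0, poll_s: float = 0.01) -> Optional[float]:
        """Seconds from link initiation to carrier-up, or None if no link in time."""
        # Link-initiation event: administratively bounce the interface,
        # which restarts autonegotiation and the PHY startup sequence.
        subprocess.run(["ip", "link", "set", IFACE, "down"], check=True)
        subprocess.run(["ip", "link", "set", IFACE, "up"], check=True)
        start = time.monotonic()
        while time.monotonic() - start < timeout_s:
            try:
                if CARRIER.read_text().strip() == "1":
                    return time.monotonic() - start
            except OSError:
                pass  # carrier attribute is unreadable while the link is down
            time.sleep(poll_s)
        return None

    if __name__ == "__main__":
        ttl = measure_ttl()
        print(f"TTL: {ttl:.2f} s" if ttl is not None else "no link within timeout")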

Why is it Important?
- Server networking drivers must meet 3rd-party certifications
- Example: Windows Hardware Quality Labs (WHQL) testing & certification, "devfund"
  - A series of "device fundamentals" tests that evaluate the compatibility, reliability, performance, security and availability of a device in the Windows OS
  - Includes many automated driver stress tests that execute multiple device resets
  - Long link times appear as a "failure" to these tests, which expect a link in 3s-4s based on 10Mb/100Mb/1Gb PHY performance
  - Source: Device Fundamentals Overview presentation, Ihv devfund.pptx
[Screenshot: server device fundamentals requirements]
Long TTLs (>6s) can lead to device certification failures!
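To make concrete why a slow-linking PHY shows up as a certification failure, here is a hypothetical reset-stress loop in the spirit of the devfund tests described above; it is not the actual WHQL procedure. The 4 s deadline reflects the 3s-4s expectation quoted on this slide, and measure_ttl is the hypothetical helper from the earlier sketch, passed in as a callable.

    # Illustrative reset-stress check (NOT the real WHQL/devfund test): any
    # link-up slower than the deadline counts as a failure, which is how a
    # test written around 3-4 s 10M/100M/1G link times treats a slow PHY.
    from typing import Callable, Optional

    def reset_stress(measure_ttl: Callable[[], Optional[float]],
                     cycles: int = 25, deadline_s: float = 4.0) -> int:
        """Run repeated link resets; return how many cycles missed the deadline."""
        failures = 0
        for cycle in range(cycles):
            ttl = measure_ttl()                    # device reset + wait for link
            if ttl is None or ttl > deadline_s:
                failures += 1
                print(f"cycle {cycle}: FAIL (ttl={ttl})")
            else:
                print(f"cycle {cycle}: ok ({ttl:.2f} s)")
        return failures

A PHY that routinely links in 6-7 s fails every cycle of such a loop even though it eventually achieves link.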

Link Interoperability Measurements
- Representative link interoperability metrics associated with TTL:
  - Time-To-Link (time to achieve link after a link initiation event)
  - # Link Attempts (number of attempts for each link)
  - # Link Drops (number of link drops observed after link is established)
  - Clock Recovery (Master/Slave resolution)
  - TTL Distribution (% of links by link time)
  - Speed Downshift/Downgrade (resolved speed if other than 10Gb/s)
- Variables that can affect TTL:
  - Channel (type, configuration, length)
  - Link initiation event on either endpoint: hardware reset, "soft" reset or MDIO PHY reset, autoneg restart, transmitter disable/enable, cable connect/disconnect
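One way to organize these metrics is a per-trial record such as the sketch below; the field names and layout are illustrative, not taken from the presentation or from any standard tool.

    # Illustrative per-trial record for the interoperability metrics above.
    from dataclasses import dataclass
    from typing import Literal, Optional

    @dataclass
    class LinkTrial:
        channel: str                            # e.g. "Cat6A, 4-connector, 100 m"
        init_event: str                         # hardware reset, MDIO PHY reset, AN restart, ...
        ttl_ms: Optional[int]                   # time to link; None if no link was achieved
        link_attempts: int                      # attempts before the link came up
        link_drops: int                         # drops observed after link was established
        clock_role: Literal["MASTER", "SLAVE"]  # resolved loop timing
        resolved_speed_mbps: int                # 10000, or a downshift target such as 1000

    # Example record with hypothetical values:
    trial = LinkTrial(channel="Cat6A, 4-connector, 100 m", init_event="autoneg restart",
                      ttl_ms=7600, link_attempts=2, link_drops=0,
                      clock_role="MASTER", resolved_speed_mbps=10000)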

Characterizing Time-To-Link Behavior: TTL as a Percentage of Total Trials
Total to remember: 1,550 link tests
- 1,050 out of 1,550 tests, or 68% of the total number of link tests, achieved a link state in 7s or less (green slice)
- 499 out of 1,550 tests, or 32% of the total, achieved a link state somewhere between 7s and 15s (yellow slice)
- 1 out of 1,550 tests, well under 1% of the total (0.06%), achieved a link state longer than 15s (16.4s; too small to see in the pie chart)
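These trial-count percentages follow directly from the bin counts; a short check using only the numbers quoted on this slide:

    # Trial-count distribution across the three TTL bins (1,550 trials total).
    bins = {"<=7s": 1050, "7s-15s": 499, ">15s": 1}
    total = sum(bins.values())                     # 1,550 trials
    for name, count in bins.items():
        print(f"{name}: {count}/{total} = {100 * count / total:.2f}%")
    # <=7s:   1050/1550 = 67.74%  (about 68%)
    # 7s-15s:  499/1550 = 32.19%  (about 32%)
    # >15s:      1/1550 =  0.06%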

Characterizing Time-To-Link Behavior: Cumulative % TTL
Cumulative % TTL is the distribution of measured link time as a percentage of the total measured link time.
Total to remember: total link time recorded for all 1,550 tests = 10,837,835 ms, or about 3h 0min 38s
- 1,050 tests with TTL ≤ 7s had a total link time of 6,843,118 ms (63.14% of the total measured link time)
- 499 tests with 7s < TTL ≤ 15s had a total link time of 3,978,317 ms (36.71% of the total measured link time)
- 1 test with TTL > 15s had a total link time of 16,400 ms (0.15% of the total measured link time)
Expressed as a cumulative percentage:
- Measured link time ≤ 7s: 63.14%
- Measured link time ≤ 15s: 99.85%
- Measured link time ≤ 16.4s (max): 100%

TTL bin           Total time in bin (ms)   % of total time   Cumulative %
TTL ≤ 7s          6,843,118                63.14%            63.14%
7s < TTL ≤ 15s    3,978,317                36.71%            99.85%
TTL > 15s         16,400                   0.15%             100.00%
All tests         10,837,835               100.00%
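The per-bin and cumulative percentages in the table can be reproduced from the per-bin totals alone; a short check using the millisecond values quoted on this slide:

    # Cumulative link-time distribution from the per-bin totals (ms).
    bin_time_ms = {"<=7s": 6_843_118, "7s-15s": 3_978_317, ">15s": 16_400}
    grand_total = sum(bin_time_ms.values())        # 10,837,835 ms (~3 h 0 min 38 s)
    cumulative = 0
    for name, t in bin_time_ms.items():
        cumulative += t
        print(f"{name}: {100 * t / grand_total:.2f}% of total time, "
              f"{100 * cumulative / grand_total:.2f}% cumulative")
    # <=7s:   63.14% of total time,  63.14% cumulative
    # 7s-15s: 36.71% of total time,  99.85% cumulative
    # >15s:    0.15% of total time, 100.00% cumulative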

Example: TTL Distribution and Master/Slave Resolution by Channel Length
- Example of 10GBASE-T TTL measured from 2m to 115m channels (9,790 links)
  - Stacked plot order, left to right, is 2m to 115m
- TTL across 2m-100m:
  - Average TTL: 7.6s
  - Average time in AN: 5s
  - Average time in training: 2.6s
- Note the apparent loop timing trend towards MASTER preference with increasing channel length
- Very long TTLs (>15s) at 100m channels are associated with downshifts to 1Gb link speed
Legend: Green: TTL ≤ 7s; Yellow: 7s < TTL ≤ 15s; Red: TTL > 15s. Cyan: MASTER; Blue: SLAVE.
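The kind of aggregation behind such a plot can be sketched as below (illustrative only, not the presenter's tooling): bucket trial results by channel length, then report the average TTL and the fraction of trials that resolved as loop-timing MASTER for each length.

    # Illustrative per-channel-length aggregation of link trials.
    from collections import defaultdict
    from statistics import mean

    def summarize_by_length(trials):
        """trials: iterable of (length_m, ttl_ms, clock_role) tuples."""
        buckets = defaultdict(list)
        for length_m, ttl_ms, clock_role in trials:
            buckets[length_m].append((ttl_ms, clock_role))
        for length_m in sorted(buckets):
            rows = buckets[length_m]
            avg_ttl_s = mean(ttl for ttl, _ in rows) / 1000.0
            master_pct = 100 * sum(role == "MASTER" for _, role in rows) / len(rows)
            print(f"{length_m:>4} m: avg TTL {avg_ttl_s:.1f} s, MASTER {master_pct:.0f}%")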

Other TTL Metrics (Same Dataset)
- # of Link Attempts (Green: 1-2; Yellow: 3-4; Crimson: 5-6; Red: >6)
- Link Drops (Green: no link drop; Red: link drop)
- Resolved link speed (Green: 10Gb/s; Yellow: 1Gb/s)
- Cumulative % Link Time (color: channel length)
Summary: Link interoperability measurements can clearly show differences in PHY autonegotiation and link state behavior as a function of channel characteristics.

Time-To-Link
[Figure: TTL decomposed into Clause 28 autonegotiation, "Retrain", and "Retry" paths]
- TTL is a combination of both autonegotiation and 1G/10G startup behavior
  - Three sources of variability: autonegotiation, "Retrain" (variability through 55.4.6.1) and "Retry" (return to 28.3.4)
  - Longest TTLs are typically driven by multiple passes through the Clause 28 Arbitration state diagram after failed training attempts
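As a back-of-the-envelope way to see how retries stretch TTL (an illustrative model, not a formula from Clause 28 or Clause 55): each failed training attempt adds roughly another training interval plus a return trip through autonegotiation before the final successful pass. The function and its timing assumptions below are hypothetical.

    # Simplified, illustrative TTL decomposition (not from the standard):
    # every failed training attempt costs one more training interval plus
    # another pass through Clause 28 arbitration ("retry") before the
    # final, successful AN + training pass brings the link up.
    def estimated_ttl_s(t_an_s, t_train_s, failed_attempts=0, t_train_fail_s=None):
        t_train_fail_s = t_train_s if t_train_fail_s is None else t_train_fail_s
        return (failed_attempts * (t_train_fail_s + t_an_s)  # retries via Clause 28
                + t_an_s + t_train_s)                        # final successful pass

    # With the representative 10GBASE-T averages (3.35 s AN, 3.0 s training):
    print(f"{estimated_ttl_s(3.35, 3.0):.2f} s")                     # 6.35 s, no retries
    print(f"{estimated_ttl_s(3.35, 3.0, failed_attempts=2):.2f} s")  # 19.05 s, two retries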

AN & Training Times: 1000BASE-T
Measured autonegotiation and training times from 1,550 1Gb links.
- 10GBASE-T device to 1000BASE-T link partner
- Both endpoints set to "full auto" (autonegotiate speed, mode, and loop timing)
Results:
- Autonegotiation: average 3.89s, range 3.25s to 5.50s
- Training: average 0.91s, range 0.575s to 1.175s

AN & Training Times: 10GBASE-T
Measured autonegotiation and training times from 1,550 10Gb links.
- 10GBASE-T device to 10GBASE-T link partner
- Both endpoints set to "full auto" (autonegotiate speed, mode, and loop timing)
Results:
- Autonegotiation: average 3.35s, range 2.46s to 9.24s
- Training: average 2.98s, range 2.22s to 3.66s

AN Comparison - 1Gb/10Gb
Current autonegotiation times for 1000BASE-T and 10GBASE-T are comparable.
From an end-user perspective, it is highly desirable that 2.5G/5GBASE-T autonegotiation times align with these technologies, and that total time-to-link be minimized.

Technology     Representative Average AN (ms)   Representative Average Training (ms)   Representative Average TTL (s)
1000BASE-T*    ~3890                            ~910                                   ~4.8
10GBASE-T      3346                             2898                                   6.24

*10GBASE-T to 1000BASE-T link partner. 1000BASE-T to 1000BASE-T is slightly faster.
(Representative average TTL is the sum of the average AN and training times; the 1000BASE-T values are derived from the averages on the preceding 1000BASE-T slide.)

Observations from 10GBASE-T
- Channel topologies significantly affect the channel solutions realized by PHY DSP systems
  - "Peaky" impairments (return loss, crosstalk) appear to be a factor in link-trial-to-link-trial variability in the system solution
  - Transition region between RL/crosstalk-driven and IL-driven solutions
  - Channel lengths near 10GBASE-T PBO transitions
- PHY-specific responses to channel characteristics drive variability in autonegotiation and training time
  - Loop timing/clock recovery resolution
  - Time spent in 10GBASE-T startup states
- May have implications for both system performance and end-user experience
  - Potential to affect product time-to-market and customer ease-of-use

Summary
- Time-to-link from the end-user perspective
  - User time-to-link experience with the installed base of Cat5e/Cat6 cabling and 1000BASE-T is between 3s and 4s
  - User time-to-link experience with 10GBASE-T is about 7s (and in some cases, longer)
  - Measured 1000BASE-T and 10GBASE-T autonegotiation times are comparable
- Considerations for P802.3bz and the Architecture ad hoc
  - Can 2.5/5GBASE-T autonegotiation and startup times be improved to be more aligned with end-user expectations* and/or requirements? (*Assume users will be looking through a 1000BASE-T lens)
  - Consider how time-to-link is affected when developing and evaluating 2.5/5GBASE-T autonegotiation proposals

Thank You!

Test Channels
- Focused channel selection using multiple cable types and lengths
  - 2m, 4m, 7m, 30m, 55m, 90m and 100m are "standard" channels for both TTL and BER
  - Other channel lengths (typically in 5m increments) are used to check for consistent link behavior over a range of PHY channel solutions (different PBOs, operating margin, delay/delay skew, etc.)
- Includes direct connection, 2-connector, and 4-connector topologies
- Test channel matrix will (of course) be modified for 40GBASE-T

Link Interoperability TTL Dashboards
Two different dashboard formats are used:
- "Single dashboards" summarize a single data view
- "Comparison dashboards" compare multiple single dashboards in a slightly different format
Single-dashboard pie charts are represented as stacked vertical bar charts in comparison dashboards.
[Example dashboard, 2013-03-06]

% TTL (Trials) and Cumulative % TTL (Time): 2m Example
- 68% of total 2m tests achieved link in ≤ 7s, accounting for 63.14% of total link time
- 32% of total 2m tests fell in 7s < TTL ≤ 15s, accounting for about 37% of total link time
- 99.85% of total link time is ≤ 15s; 100% is ≤ 16.4s (the maximum TTL)

% TTL (Trials) and Cumulative % TTL (Time): 100m Example
- 44% of total 100m tests achieved link in ≤ 7s, accounting for 29.64% of total link time
- 45% of total 100m tests fell in 7s < TTL ≤ 15s; 77.89% of total link time is ≤ 15s
- 11% of total 100m tests had TTL > 15s; 100% of total link time is ≤ 45.3s (the maximum TTL)

