ISP Design Using MikroTik CHR As An MPLS Router


ISP Design – Using MikroTik CHR as an MPLS Router
Presented by: Kevin Myers, Network Architect
www.iparchitechs.com

Profile: About Kevin Myers
Background:
- 20 years in networking
- Designed/built networks on 6 continents
- MikroTik Certified Trainer
- MikroTik, Cisco and Microsoft certified
Community involvement:
- Packet Pushers (podcast guest / blogger)
- Group contributor (RouterOS / WISP Talk and others)
- Delegate/roundtable contributor (NFD14)
- MT Forum (forum veteran – member since 2012)
- Network Collective (podcast guest)

Profile: About IP ArchiTechs
Expert networking: Whitebox | ISP | Data Center | Enterprise
- Global consulting
- Managed networks
- Monitoring
- Load testing
- Development
Locations in: US | Canada | South America
Call us at: 1 855-645-7684
E-mail: consulting@iparchitechs.com
Web: www.iparchitechs.com

Profile: About IP ArchiTechs – Now in Europe!
IPA opened an office in Nis, Serbia in 2018.

Design: Why use the CHR as an MPLS router?
Goal of this presentation: when the presentation is finished, hopefully you will have walked away with a few key concepts:
- Performance characteristics of the CHR as an MPLS router
- Best practices when deploying the CHR as an MPLS router
- Benefits of using the CHR vs. RouterBoard or CCR as an MPLS router

Design: CHR MPLS – which hypervisor?
Which hypervisor should I use?

Design: CHR MPLS – which hypervisor?
Which hypervisor should I use?
Hyper-V is the only hypervisor currently recommended for MPLS with the MikroTik CHR. MTU is handled differently in Hyper-V vs. ESXi and ProxMox (KVM): packets are not assembled into 64k buffers in Hyper-V. When packets are assembled into 64k buffers, it seems to create MTU issues for the CHR.

Design: CHR MPLS – which hypervisor?
Why not ESXi or ProxMox (KVM)?
ESXi and ProxMox (KVM) both have issues when running the CHR for MPLS. MTU is handled differently than in Hyper-V: packets are assembled into 64k buffers, which seems to create MTU issues for the CHR. This affects explicit null the most.

Design: CHR vs. hardware for MPLS?
- Which platform is better?
- Throughput capabilities?
- x86 CPU vs. ARM/Tilera?
- MTU/throughput concerns on different hypervisors

Design: CHR vs. Tilera/ARM for MPLS?

CPU (MPLS router CPU requirements depend on load and explicit/implicit null):
- x86: better for heavy computational work; higher power draw.
- Tilera: optimized for packet transfer; designed to be low power draw.
- ARM: in between x86 and Tilera for performance.

Throughput (at 1530 bytes (L2) and 8970 bytes (L2)):
- x86: more CPU and power is required to move data at the same speed as a CCR.
- Tilera: handles throughput at different frame sizes slightly better than x86.
- ARM: handles throughput at different frame sizes similarly to Tilera.

MTU handling:
- x86: x86 hardware and hypervisors can typically support up to 9000 MTU.
- Tilera: supports up to 10222.
- ARM: supports up to 9982.

Design: CHR MPLS testing – logical lab setup
[Topology diagram] An L2VPN (VPLS, VLANs 777/778) runs across an MPLS core of two CCR1036s (Lo0 100.127.1.1/32 and 100.127.1.2/32). PE routers: a Vengeance CHR MPLS PE (Lo0 100.127.2.1/32) and a CCR1072 MPLS PE (Lo0 100.127.2.2/32). Core links: 100.76.1.0/29 (VLAN 761), 100.76.2.0/29 (VLAN 762), 100.76.3.0/29 (VLAN 763).
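As a rough illustration of what one PE side of this lab looks like in RouterOS v6 configuration, the sketch below sets up a loopback, LDP, and a VPLS tunnel to the far PE. The interface names and the vpls-id are assumptions chosen to match the diagram's addressing; adapt them to the actual lab.

```
# Hedged sketch (RouterOS v6) - CHR PE side of the lab VPLS.
# Loopback bridge used as LDP LSR ID / transport address
/interface bridge add name=lo0
/ip address add address=100.127.2.1/32 interface=lo0
# Enable LDP and run it on the core-facing VLAN
/mpls ldp set enabled=yes lsr-id=100.127.2.1 transport-address=100.127.2.1
/mpls ldp interface add interface=vlan761
# VPLS tunnel to the far PE (the CCR1072, Lo0 100.127.2.2)
/interface vpls add name=vpls777 remote-peer=100.127.2.2 vpls-id=777:1 disabled=no
# Bridge the VPLS tunnel with the customer-facing VLAN
/interface bridge add name=br-vpls777
/interface bridge port add bridge=br-vpls777 interface=vpls777
/interface bridge port add bridge=br-vpls777 interface=vlan777
```

The far PE would mirror this with its own loopback and the same vpls-id so the two endpoints form one bridged L2 segment.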

Design: CHR MPLS – switch-centric lab setup
[Photo/diagram] Physical test network for the MPLS CHR: an HP 1U x86 server running the Baltic Networks Vengeance hypervisor hosts the CHR MPLS PE, connected to the other MPLS PE and the MPLS core.

Design: CHR testing lab setup
Hypervisor details – VM provisioning recommendations for Hyper-V:
- 2 vCPUs
- 4096 MB RAM (or more)
- Disable Hyper-Threading in the BIOS
- Use CPU reservation (100%)
- Disable all un-needed VM components (CD-ROM, SCSI controller, etc.)
- Increase MTU to maximum on the vSwitch/interfaces

Design: CHR performance on VMware ESXi
Concept of testing:
- Performance with VPLS
- Performance with implicit null vs. explicit null
- Performance at 1530-byte and 9000-byte MPLS MTU
Performance considerations:
- Dedicate the HV to the CHR – don't mix applications
- TSO/LSO – disable for best performance
- Clock speed – highest speed possible

Design: Benefits of MPLS on CHR
VPLS is easier to make highly available on the CHR than on independent routers, due to issues with Layer 2 looping; running the CHR on two HV hosts eliminates looping.
[Topology diagram] The lab topology from earlier, annotated with the loop potential between the two VPLS paths (VLANs 777/778).

Design: Benefits of MPLS on CHR
You can deploy multiple MPLS PE routers (multiple VMs) to isolate clients when needed.
[Topology diagram] The lab topology from earlier, with additional VPLS instances (VLANs 779/780) served by separate CHR VMs.

Design: Understanding explicit vs. implicit null
- Implicit null uses Penultimate Hop Popping to deliver the packet unlabeled to the last MPLS router before the packet is forwarded into a non-MPLS network.
- Explicit null keeps the packet labeled until it egresses an interface that isn't MPLS-capable, and then the label is stripped.
- Explicit null set in MPLS LDP will result in slightly higher CPU usage.
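In RouterOS the behavior above is a single LDP setting. A minimal sketch, assuming RouterOS v6 with LDP already enabled:

```
# Hedged sketch (RouterOS v6): switch LDP from implicit to explicit null.
# With use-explicit-null=yes the egress router advertises the explicit-null
# label instead of implicit-null, so packets stay labeled all the way to it
# (at the cost of the slightly higher CPU usage noted above).
/mpls ldp set use-explicit-null=yes
# Inspect which labels are now being advertised/bound
/mpls local-bindings print
```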

Design: Testing new multi-core bandwidth
Using the new multi-core bandwidth test MikroTik recently introduced for performance testing (RouterOS v6.44). It works very well!
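For reference, a test run between the two PEs might look like the following. The target address is taken from the lab topology; the credentials are placeholders for whatever account exists on the remote router:

```
# Hedged sketch (RouterOS v6.44+): multi-core bandwidth test from the CHR PE
# toward the CCR1072 PE, UDP, both directions.
/tool bandwidth-test address=100.127.2.2 protocol=udp direction=both \
    user=admin password=""
```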

Design: CHR performance on Hyper-V (Windows Server 2016)
Platform: Baltic Vengeance | Hypervisor: Hyper-V 2016 | CHR 6.44

Test (VPLS)      MPLS MTU   Throughput   Peak VM CPU
Implicit null    1530       4.4 Gbps     15%
Explicit null    1530       8.3 Gbps     20%
Implicit null    9000       9.9 Gbps     7%
Explicit null    9000       9.9 Gbps     16%

Questions?

