"ASR 9K + NX7K" DCI Solution Overview: MC-LAG + vPC
Dennis Cai and Sai Natarajan, May 2013
© 2012 Cisco and/or its affiliates. All rights reserved. Cisco Confidential


Carrier Cloud Network Reference Architecture

[Diagram: customers/clients reach SP and enterprise data centers across the SP NGN and the Internet; each data center comprises a WAN/DC gateway, DC core/aggregation, top-of-rack switches, and the VM, hypervisor, and storage layers.]


Carrier Cloud Business & Technical Requirements
• Share the physical network resource among as many tenants as possible → network virtualization + high scale
• Business agility: monetize the CAPEX spending, fully use the available computing resources → VM mobility, auto-provisioning
• Differentiate the cloud service from MSDC public cloud: leverage the customer VPN network footprint → IP NGN integration + service SLA
• Diverse customer applications (IP, non-IP, non-routed) → L2 adjacency


ASR 9000 Cloud Building Blocks


Agenda
• Introduction: Cloud Architecture, Requirements & Challenges
• ASR 9000 Cloud Building Blocks
– HW & SW Foundation
– DCI Technologies & Future Evolutions
– Virtualized Services Infrastructure
– Cisco ONE


ASR 9000 Chassis Overview
Common software image and architecture; identical software features across all chassis.
• 99xx series, >2 Tbps/slot*: 9904 (6 RU, 2 I/O slots), 9912 (30 RU, 10 I/O slots), 9922 (44 RU, 20 I/O slots) — 2HCY13
• 90xx series, 880 Gbps/slot*, 48 Tbps system: 9006 (10 RU, 4 I/O slots), 9010 (21 RU, 8 I/O slots)
• 9001 (2 RU, 120G) and 9001-S (2 RU, 60G), fixed configuration — 1HCY13
* Chassis capacity only; actual bandwidth also depends on the fabric and line cards.


ASR 9000 Ethernet Line Card Overview
• First-generation LC (Trident NPU), -L/-B/-E variants: A9K-40G, A9K-4T, A9K-8T/4, A9K-2T20G, A9K-8T, A9K-16T/8
• Second-generation LC (Typhoon NPU), -TR/-SE variants: A9K-MOD160, A9K-MOD80, A9K-24x10GE, A9K-36x10GE, A9K-2x100GE
• MPAs (for the MOD cards): 20x1GE, 2x10GE, 4x10GE, 8x10GE, 1x40GE, 2x40GE
-L: low queue, -B: medium queue, -E: large queue, -TR: transport optimized, -SE: service edge optimized


Fully Distributed OS Architecture
• IOS-XR: truly modular, fully distributed OS
• Fully distributed HW resources: the control plane is split between the RSP and the LC CPU, and each line card has the same type of CPU hardware as the RSP
• L2 protocols, BFD, CFM, and NetFlow run on the LC CPU for high scale
• Ultra-high multi-dimensional control-plane scale

Multi-dimensional scale example:
• MAC addresses: 2M, HW MAC learning at 4-5 Mpps, per-LC MAC learning (roadmap)
• L2 interfaces: 128K
• P2P EoMPLS: 128K
• Bridge-domains/VFIs: 64K
• VFI PWs: 128K
• L3 interfaces/VRFs: 20K/8K
• FIB: 4M, with per-LC VRF FIB table download


Flexible EVC Architecture: Any Port Any Service, Any VLAN to Any VLAN
• Industry's most flexible service deployment model: any L2 and any L3 service concurrently on the same physical port
• Flexible VLAN-based service classification: match any combination of up to two VLAN tags
• Flexible VLAN tag manipulation: translate VLANs as needed
• Supports all standards-based services: L2 P2P (local connect and EoMPLS), L2 multipoint (local bridging, H-VPLS and VPLS), regular L3 sub-interfaces, and integrated L2 and L3 (IRB)
[Diagram: L2 or L3 sub-interfaces (802.1q/QinQ/802.1ad) on one port feeding local bridging, routing, VPLS, EoMPLS PWs, and integrated routing and bridging (IRB).]
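As a sketch of this model in IOS-XR EVC configuration (interface names, VLAN IDs, and the address are illustrative): one port carries a QinQ-classified L2 EFP with tag translation, an L3 sub-interface, and a bridge-domain attachment concurrently.

```
interface GigabitEthernet0/0/0/1.100 l2transport
 encapsulation dot1q 100 second-dot1q 200      ! match a two-tag combination
 rewrite ingress tag translate 2-to-1 dot1q 300 symmetric  ! VLAN translation
!
interface GigabitEthernet0/0/0/1.500           ! L3 sub-interface, same port
 encapsulation dot1q 500
 ipv4 address 192.0.2.1 255.255.255.252
!
l2vpn
 bridge group DC
  bridge-domain BD300
   interface GigabitEthernet0/0/0/1.100        ! attach the EFP for bridging
```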


Scalable and Flexible L2 Foundation
• Scalable solution
– 64K bridge-domains/VFIs, 128K PWs
– 2M MAC addresses, HW MAC learning
– PBB-VPLS and PBB-EVPN* for high scale
– MAC routing for future evolution: EVPN, PBB-EVPN*
• Flexible solution
– Flexible VLAN tag classification and manipulation: any VLAN to any VLAN
– Flexible encapsulation: 802.1q/QinQ/802.1ad/802.1ah
– VPLS (both LDP and BGP signaling), per-VC or per-flow load balancing using FAT-PW
– Full Ethernet OAM
– MPLS/VPLS fast convergence
[Diagram: SP DC1, SP DC2, and enterprise DC3 interconnected with VPLS/EVPN across the Internet, with customers/clients attached.]
* PBB-EVPN FCS: Oct 2013; EVPN FCS: CY14


Scalable and Flexible L3 Foundation
• Scalable solution
– Ultra-high-density 10GE/100GE: up to 720 10GE ports per system
– 4M FIB, selective per-LC download
– 20K L3 user interfaces, 8K VRFs
– Very high TE midpoint scale: 100K+ TE midpoints
• Flexible solution
– Rich, mature IOS-XR BGP feature set
– Rich, mature IOS-XR RSVP-TE feature set: auto-bandwidth, auto-backup, 50 msec TE/FRR
– Fast IGP and BGP convergence: BGP PIC, IP FRR
– Flexible user CLI: configuration templates, apply-groups
[Diagram: DC1, DC2, and DC3 interconnected over L3/L3VPN/MPLS for inter-DC traffic; the intra-DC design may be L2 or L3.]


Agenda
• Introduction: Cloud Architecture, Requirements & Challenges
• ASR 9000 Cloud Building Blocks
– HW & SW Foundation
– DCI Technologies & Future Evolutions
– Virtualized Services Infrastructure
– Cisco ONE


Nexus + ASR 9000: Integrated End-to-End DC Design

Nexus
• ASIC-based forwarding performance
• Low price per port
• High port density
• DC software feature innovation
• Best DC aggregation platform in the industry

Nexus + ASR 9K
• Best SW feature set, port density, and price/port in the DC
• Full SP feature set and high scale for DCI/DC-WAN
• Flexible NP-based forwarding in the ASR9K is complementary to the N7K
• The ASR9K can work as an L2/L3 gateway to interconnect DCs using FabricPath, TRILL, 802.1ah, or classic 802.1q
• Best-of-both integration

ASR 9000
• Complete SP MPLS/BGP feature set on IOS-XR
• Carrier Ethernet L2VPN feature set
• High port density
• NP (microcode) based forwarding: easy to add features for time to market
• XL scale (2M MAC/LC, 128K EFP, 64K BD, 8K VRF, 4M FIB/LC, 256K ARP/LC)


Cisco Virtualized Multi-tenant Data Center Architecture
Validated reference architecture that includes Nexus in the Unified DC and the ASR9K as the DCI and the bridge to the Nexus DC:
• Reduced time to deployment
• Reduced risk
• Increased flexibility
• Improved operational efficiency
[Diagram: two sites, each with UCS 6100, Nexus 1000v, and an ASR9K, joined by an L3 IP/NGN core and an L2 VPLS extension.]
http://www.cisco.com/en/US/solutions/ns340/ns414/ns742/ns743/ns749/landing_dci_mpls.html


ASR 9000 DCI Solution – the Value Proposition
• Ultra-high scale: 2M MAC addresses, HW MAC learning at 4-5 Mpps, 64K BDs/VFIs, 128K EFPs/PWs, 4M FIB
• Full MPLS, L2VPN, and L3VPN feature set; 50 msec to sub-second convergence; FAT-PW
• vPC + MC-LAG: simple VPLS multi-homing, fast convergence
• Flexible EVC infrastructure: flexible service provisioning and flexible tag manipulation (any service on any port, any VLAN to any VLAN)
• H-QoS
[Diagram: a pair of ASR 9K N-PEs (N-PE2, N-PE4) running ICCP connect to Nexus 7Ks via M-LACP bundles (active/standby) over vPC trunks; the L2 DCI extension rides the L3 IP/NGN core.]


ASR 9000 DCI Solution – the Enhancements
• ICCP-based service multi-homing
• nV cluster
• Easy DCI: IOS-XR flexible CLI
• PBB-VPLS
• VPLS LSM: P2MP-TE
[Diagram: ASR9k N-PE pairs running ICCP over the L3 IP/NGN, M-LACP bundles (active/standby) to N7K vPC trunks, with UCS 6100 and Nexus 1000v below.]


ASR 9000 DCI Solution – nV Cluster
• Greatly reduces the number of PWs. Using 5 DCs (two PEs each) as an example:
– With nV cluster: 5*(5-1)/2 = 10 PWs
– Without nV cluster: 10*(10-1)/2 = 45 PWs
• Simplifies VPLS dual-homing with an active/active link bundle: per-flow and per-VLAN load balancing
• Sub-second to 50 msec fast convergence
[Diagram: a VPLS full mesh of VFIs across the DCs; each nV cluster pair faces its DC switches over a vPC.]
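The PW arithmetic above is the full-mesh formula n(n-1)/2 applied before and after clustering; a quick sketch in Python, using the slide's counts:

```python
def full_mesh_pws(nodes: int) -> int:
    """Pseudowires needed for a full mesh of n VPLS endpoints: n(n-1)/2."""
    return nodes * (nodes - 1) // 2

# 5 DCs, two PEs each:
# with nV cluster, each PE pair acts as one logical node -> 5 endpoints
print(full_mesh_pws(5))   # 10 PWs
# without clustering, every PE meshes individually -> 10 endpoints
print(full_mesh_pws(10))  # 45 PWs
```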


ASR 9K DCI Solution – ICCP-based VPLS Multi-homing
• Per-VLAN active/active load balancing
• The ASR9K PE controls blocking/forwarding based on ICCP, independent of the CE device
• The ASR9K PE sends an MVRP-Lite or STP TCN message to the DC switches during a network failure event to avoid black-holing packets
• Simple solution
– Doesn't require BGP
– ICCP runs between the local PE pair
– Independent of the CE switches
• Flexible solution
– Works for all services, L2 and L3, not limited to VPLS
– Works for any topology
[Diagram: a VPLS full mesh of VFIs across the DCs; the local PE pair runs ICCP and signals TCN messages toward the vPC-attached DC switches.]


ASR9K DCI Solution Enhancement – PBB-VPLS
One common backbone VPLS full mesh serves all bridge-domains:
• PW scale is independent of the number of bridge-domains: one backbone mesh, O(1) per bridge-domain, instead of O(n²) per-BD meshes
• One-time VFI/PW provisioning
• Aggregates all (or multiple) bridge-domains into one common backbone VFI
• SP DCI deployment example: 20 DC sites, 40 PEs, 8K bridge-domains/VFIs
– With full-mesh VPLS PWs, each PE requires (40-1)*8K ≈ 320K PWs
– With PBB-VPLS, each PE requires only 39 VPLS PWs
[Diagram: per-DC VFIs aggregated into a PBB-VPLS backbone mesh; nV cluster pairs face the DC switches over vPCs.]
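The per-PE savings quoted above follow directly from moving the mesh from per-bridge-domain to a single shared backbone; a sketch in Python with the slide's example numbers:

```python
def per_pe_pws_vpls(num_pes: int, num_bds: int) -> int:
    """Per-PE PWs with a separate full mesh for every bridge-domain."""
    return (num_pes - 1) * num_bds

def per_pe_pws_pbb_vpls(num_pes: int) -> int:
    """Per-PE PWs with one shared PBB-VPLS backbone mesh for all BDs."""
    return num_pes - 1

print(per_pe_pws_vpls(40, 8000))   # 312000 (~320K on the slide)
print(per_pe_pws_pbb_vpls(40))     # 39
```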


L2VPN Evolution with EVPN: MAC Routing

VPLS vs. EVPN:
• Packet isolation: VLAN/BD/VSI (VPLS) vs. VLAN/BD/EVI (EVPN)
• Packet forwarding: based on MAC address in both
• MAC learning: data-plane auto-learning (VPLS) vs. control-plane MAC learning and distribution using a routing protocol, just like L3 (EVPN)
• Multi-homing and paths: VPLS multi-homing with per-VLAN load balancing over a single path (VPLS) vs. A/A MC-LAG with per-flow load balancing over ECMPs (EVPN)

The combined advantages of both L2 and L3 forwarding:
• Per-flow load balancing, multiple paths
• Reduced L2 flooding (no unknown-unicast flooding, ARP proxy)
• Efficient multicast/broadcast distribution (LSM)
• No per-PW control-plane overhead: the PW is eliminated, so there is no per-PW control-plane signaling


EVPN – The Principle
Control plane: BGP for MAC distribution. Data plane: MPLS forwarding, like L3. Active/active MC-LAG with per-flow load balancing.
• Treat MACs as routable addresses and distribute them in BGP
• The receiving PE injects these MAC addresses into the forwarding table along with the associated adjacency, like an IP prefix
• When multiple PE nodes advertise the same MAC, multiple adjacencies are created for that MAC address in the forwarding table: multi-path
• When forwarding traffic for a given unicast MAC DA, a hashing algorithm over the L2/L3/L4 headers picks one of the adjacencies: per-flow load balancing
[Diagram: PE1 and PE3 each advertise C-MAC reachability via iBGP L2-NLRI and L3-NLRI with themselves as next-hop (e.g. C-MAC1 behind PE1, C-MAC3 behind PE3); the remote PE installs both adjacencies.]
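A minimal sketch of that per-flow adjacency selection, assuming a toy MAC table and a string flow key (neither reflects the actual ASR 9000 forwarding structures):

```python
import zlib

# One C-MAC advertised by two PEs -> two adjacencies installed (multi-path).
mac_table = {"00:aa:bb:cc:dd:01": ["PE1", "PE2"]}

def pick_adjacency(dst_mac: str, flow_key: str) -> str:
    """Hash L2/L3/L4 header fields to pick one adjacency for the flow."""
    adjacencies = mac_table[dst_mac]
    return adjacencies[zlib.crc32(flow_key.encode()) % len(adjacencies)]

# The same flow always hashes to the same PE; different flows spread out.
flow = "10.0.0.1|10.0.0.2|tcp|5000|80"
assert pick_adjacency("00:aa:bb:cc:dd:01", flow) == \
       pick_adjacency("00:aa:bb:cc:dd:01", flow)
```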


PBB-EVPN – for Simplicity and High Scale
EVPN:
• Is conceptually simple; in reality it is not. It needs complex BGP routes to handle split-horizon, DF selection, etc.
• Advertises each customer MAC address as a BGP route, so it is not a scalable solution for large-scale deployments.
PBB-EVPN:
• Leverages the PBB header to dramatically simplify EVPN operation. It is not merely "PBB + EVPN"; in fact, it is much simpler than EVPN itself.
• Only advertises B-MAC addresses via BGP: a highly scalable solution.
• Has many other advantages beyond scale and simplicity: see draft-ietf-l2vpn-pbb-evpn-03.


Deploying Large-Scale DCI with PBB-EVPN
Flexible and scalable DCI PE:
• High scale: 64K+ BDs
• VLANs are locally significant, with flexible VLAN translation; 128K VLANs; VLAN IDs can be re-used on the ToR switches
• Per-LC HW-based C-MAC learning
Inter-DC network (with PBB-EVPN):
• MAC routing: fast convergence, L3 ECMPs, optimized multicast forwarding, flexible policy control
• L2VPN auto-discovery
• No PWs required; scales to many sites
Intra-DC:
• 4K VLANs per ToR switch
• Limited MACs per ToR switch
• Scale out with more ToR switches
DC boundary:
• PBB-EVPN active/active MC-LAG with auto-provisioning


L2 DCI Technology Comparison

MAC bridging:
• VPLS: per-VLAN forwarding***, single path; resiliency via ICCP-SM, per-VLAN load balancing, geo-redundancy, and nV cluster; available now
• PBB-VPLS*: same forwarding and resiliency model as VPLS; available now

MAC routing:
• EVPN**: per-flow forwarding with ECMP and multiple paths; A/A MC-LAG with per-flow or per-VLAN load balancing, geo-redundant A/A with ECMPs; tight DC-NGN integration; CY14
• PBB-EVPN*: same forwarding and resiliency model as EVPN; tight DC-NGN integration; 2HCY13
• FP/TRILL-EVPN: on the radar

* Supports O(10 million) MAC addresses per DC through confinement of C-MAC learning
** BGP control-plane overhead; slow convergence during MAC movement
*** FAT-PW can support per-flow load balancing, but only with a single PE termination point


Beyond the DCI: the Evolution of Cloud Networking
Classic solution:
• L2 DCI: VPLS
• DC fabric: legacy L2
• L2/L3 boundary: aggregation switch
L2 DCI enhancements:
• Resiliency
• Scale
• Simple provisioning
Optimized L2 DCI:
• EVPN/PBB-EVPN
DC fabric evolution:
• FabricPath/TRILL
• IP fabric: VXLAN, NVGRE
• MPLS
Optimized L3 routing:
• Centralized vs. distributed PE
• Host-based vs. network-based routing


Agenda
• Introduction: Cloud Architecture, Requirements & Challenges
• ASR 9000 Cloud Building Blocks
– HW & SW Foundation
– DCI Technologies & Future Evolutions
– Virtualized Services Infrastructure
– Cisco ONE


Traditional Data Center Service Model
Today's network forces service isolation.
Traditional (centralized) services model:
• All services traffic is backhauled to a central site
• Scale and latency challenges grow as service popularity grows
[Diagram: users at work, at home, and on the move all reach the app through the network at a traditional (centralized) data center.]


Virtualized Cloud Services Infrastructure
The network becomes the services layer.
Virtualized services model:
• Compute capacity distributed into the network infrastructure
• Identical apps and hosting environments using VMs
Creatively link apps to network resources:
• Couple broadband subscribers to VoD streams
• Create service chains: BNG > DPI > CGv6
[Diagram: the ASR9k services edge hosts app VMs between the users (at work, at home, on the move) and the traditional centralized data center.]


Virtualized Cloud Service Use Case
• Host services offered by multiple SaaS vendors
• Multiple instances for each SaaS vendor
• Optimized scaling using distributed infrastructure
[Diagram: remote sites and offices of Enterprises A and B connect to distributed instances of SaaS App-1 and SaaS App-2.]


Agenda
• Introduction: Cloud Architecture, Requirements & Challenges
• ASR 9000 Cloud Building Blocks
– HW & SW Foundation
– DCI Technologies & Future Evolutions
– Virtualized Services Infrastructure
– Cisco ONE


Cisco ONE: Value Proposition
Monetization:
• Adaptive billing infrastructure
• B2B2C business models
• Premium resource allocation
• VIP customer treatment
Optimization:
• Real-time analysis of traffic profiles
• Finding unused capacity
• Mapping resources to SLAs
• Closed feedback loop
[Diagram: applications (Videoscape, Cloud, Collaboration, Community) sit above policy, orchestration, and analytics layers, with net-effect platform enablement over the network.]


Cisco ONE Use Case: Dynamic Bandwidth Allocation
1. The customer requests premium access to a cloud service.
2. The policy server pushes the customer policy to OnePK on the ASR 9K.
3. The SP policy server uses the OnePK API to program a higher-bandwidth QoS policy for the specific flow [customer IP, cloud service IP].
4. Customer traffic matching the policy is given premium QoS treatment.
Using the Cisco OnePK API, SPs can build custom apps that create differentiated, revenue-generating services.
[Diagram: CPE connects through an ingress PE and an egress PE (both ASR 9K with OnePK) across the SP network to the cloud service; the SP policy server programs both PEs.]
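The four steps can be sketched as a simple control loop. `QosPolicy` and `PolicyClient` below are hypothetical stand-ins for whatever the policy server actually uses (such as the OnePK API); they are not real Cisco interfaces.

```python
from dataclasses import dataclass

@dataclass
class QosPolicy:
    src_ip: str          # customer IP
    dst_ip: str          # cloud service IP
    bandwidth_mbps: int  # premium bandwidth for the matching flow

class PolicyClient:
    """Hypothetical session from the SP policy server to one PE router."""
    def __init__(self, pe_name: str):
        self.pe_name = pe_name
        self.policies = []

    def push(self, policy: QosPolicy) -> None:
        # Step 3: program the higher-bandwidth QoS policy for the flow.
        self.policies.append(policy)

# Steps 1-2: the customer request arrives; the server builds a flow policy.
premium = QosPolicy(src_ip="198.51.100.10", dst_ip="203.0.113.5",
                    bandwidth_mbps=500)

# Push to both PEs; step 4: matching traffic now gets premium treatment.
for pe in (PolicyClient("ingress-PE"), PolicyClient("egress-PE")):
    pe.push(premium)
```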


ASR 9000 as Universal Cloud Gateway


Thank you.
