Data Center LAN Migration Guide - Juniper Networks



Best Practices: Designing the Upgraded Aggregation/Core Layer

The insertion point for an enterprise seeking initially to retain its existing three-tier design may be at the current design's aggregation layer. In this case, the recommended Juniper aggregation switch, typically an EX8200 line or MX Series platform, should be provisioned in anticipation of it eventually becoming the collapsed core/aggregation layer switch.

If the upgrade is focused on the aggregation layer in a three-tier design (to approach transformation of the data center network architecture in an incremental way, as just described), the most typical scenario is for the aggregation switches to be installed as part of the L2 topology of the data center network, extending the size of the L2 domains within the data center and interfacing them to the organization's L3 routed infrastructure, which typically begins at the core tier, one tier "up" in the design from the aggregation tier.

In this case, the key design considerations for the upgraded aggregation tier include:

• Ensuring sufficient link capacity for the necessary uplinks from the access tier, resiliency between nodes in the aggregation tier, and any required uplinks between the aggregation and core tiers in the network

• Supporting the appropriate VLAN, LAG, and STP configurations within the L2 domains

• Incorporating the correct configurations for access to the L3 routed infrastructure at the core tier, especially for knowledge of the default gateway in a VRRP (Virtual Router Redundancy Protocol) environment

• Ensuring continuity in QoS and policy filter configurations appropriate to the applications and user groups supported
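To make the first two considerations concrete, the following is a minimal Junos-style sketch of the L2 side of an aggregation switch. All VLAN IDs, interface names, and LAG numbers are placeholders, and exact statements vary by platform and software release (for example, newer releases replace `port-mode` with `interface-mode`); treat this as an illustration rather than a tested configuration.

```
# Placeholder VLAN within the extended L2 domain
set vlans v100 vlan-id 100

# Link aggregation group (LAG) toward the access tier, with LACP enabled
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching port-mode trunk
set interfaces ae0 unit 0 family ethernet-switching vlan members v100

# Rapid Spanning Tree on the LAG within the L2 domain
set protocols rstp interface ae0.0
```

Equivalent uplink LAGs toward the core tier would be sized per the capacity considerations above; the default gateway itself remains at the core tier's VRRP routers in this three-tier scenario.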

At a later point of evolution (or perhaps at the initial installation, depending on your requirements), these nodes may perform an integrated core and aggregation function in a two-tier network design. This would be the case if it suited the organization's economic and operational needs, and could be accomplished in a well-managed insertion/upgrade.

In such a case, the new "consolidated core" of the network would most typically perform both L2 and L3 functions. The L2 portions would, at a minimum, include the functions described above. They could also extend or "stretch" the L2 domains in certain cases to accommodate functions such as application mirroring or live migration of virtual server workloads between areas of the data center, or even between data centers. We describe design considerations for this later in this guide.

In addition to L2 functions, this consolidated core will provide L3 routing capabilities in most cases. As a baseline, the L3 routing capabilities to be included are:

• Delivering a resilient interface of the routed infrastructure to the L2 access portion of the data center network. This is likely to include VRRP default gateway capabilities. It is also likely to include one form or another of an integrated routing/bridging interface in the nodes, such as routed VLAN interfaces (RVIs) or integrated routing and bridging (IRB) interfaces, to provide transition points between the L2 and L3 forwarding domains within the nodes.

• Resilient, HA interfaces to adjacent routing nodes, typically at the edge of the data center network. Such high availability functions can include nonstop active routing (NSR), graceful Routing Engine switchover (GRES), Bidirectional Forwarding Detection (BFD), and even MPLS fast reroute, depending on the functionality and configuration of the routing services in the site. For definitions of these terms, please refer to the section on Node-Link Resiliency. MPLS fast reroute is a local restoration network resiliency mechanism in which each path in MPLS is protected by a backup path originating at the node immediately upstream.

• Incorporation of the appropriate policy filters at the core tier for enforcement of QoS, routing area optimization, and security objectives for the organization. On the QoS level, this may involve the use of matching Differentiated Services code points (DSCPs) and MPLS traffic engineering designs with the rest of the routed infrastructure to which the core is adjacent at the edge, as well as matching priorities with the 802.1p settings being used in the L2 infrastructure in the access tier. On the security side, it may include stateless filters that forward selected traffic to security devices such as firewall/IDP platforms at the core of the data center to enforce appropriate protections for the applications and user groups supported by the data center (see the next section of the core tier best practices for a complementary discussion of the firewall/IDP part of the design).
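As an illustration of the RVI, VRRP, and policy-filter points above, the following is a hedged Junos-style sketch. Addresses, VLAN IDs, VRRP group numbers, and filter names are placeholders, and EX-style `vlan` RVI naming is assumed (MX Series platforms use `irb` interfaces instead); consult platform documentation before applying anything like it.

```
# RVI as the L2/L3 transition point, with a VRRP virtual gateway
set vlans v100 l3-interface vlan.100
set interfaces vlan unit 100 family inet address 10.1.100.2/24 vrrp-group 1 virtual-address 10.1.100.1
set interfaces vlan unit 100 family inet address 10.1.100.2/24 vrrp-group 1 priority 200

# Stateless filter classifying traffic by DSCP to align core QoS with access-tier 802.1p priorities
set firewall family inet filter core-qos term voice from dscp ef
set firewall family inet filter core-qos term voice then forwarding-class expedited-forwarding
set firewall family inet filter core-qos term voice then accept
set firewall family inet filter core-qos term default then accept
set interfaces vlan unit 100 family inet filter input core-qos
```

A production design would pair this with matching classifiers and schedulers under `class-of-service`, and with additional filter terms that steer selected flows toward the firewall/IDP platforms discussed in the next section.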

In some cases, the core design may include use of VPN technology—most likely VPLS and MPLS—to provide differentiated handling of traffic belonging to different applications and user communities, as well as to provide special networking functions between various data center areas, and between data centers and other parts of the organization's network.
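A VPLS instance of this kind might be sketched as below in Junos-style configuration. The instance name, interface, and community values are hypothetical, and the sketch presumes a working MPLS/BGP infrastructure (LSPs, BGP with the `l2vpn signaling` family) already in place between the participating nodes.

```
# Hypothetical BGP-signaled VPLS instance isolating one application community
set routing-instances app-tier instance-type vpls
set routing-instances app-tier interface ge-0/0/1.200
set routing-instances app-tier route-distinguisher 65000:200
set routing-instances app-tier vrf-target target:65000:200
set routing-instances app-tier protocols vpls site-range 8
set routing-instances app-tier protocols vpls site dc1 site-identifier 1
```

Each data center area (or remote data center) participating in the stretched domain would configure a matching instance with a unique site identifier.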

Copyright © 2012, Juniper Networks, Inc.
