
Why You Should Deploy Switched-FICON

David Lytle, BCAF
Global Solutions Architect
System z Technologies and Solutions
Brocade


Legal Disclaimer

All or some of the products detailed in this presentation may still be under development, and certain specifications, including but not limited to release dates, prices, and product features, may change. The products may not function as intended, and a production version of the products may never be released. Even if a production version is released, it may be materially different from the pre-release version discussed in this presentation.

NOTHING IN THIS PRESENTATION SHALL BE DEEMED TO CREATE A WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, STATUTORY OR OTHERWISE, INCLUDING BUT NOT LIMITED TO ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT OF THIRD-PARTY RIGHTS WITH RESPECT TO ANY PRODUCTS AND SERVICES REFERENCED HEREIN.

Brocade, Fabric OS, File Lifecycle Manager, MyView, and StorageX are registered trademarks, and the Brocade B-wing symbol, DCX, and SAN Health are trademarks, of Brocade Communications Systems, Inc. or its subsidiaries, in the United States and/or in other countries. All other brands, products, or service names are or may be trademarks or service marks of, and are used to identify, products or services of their respective owners.

© 2009-2010 Brocade Communications Systems, Inc. All Rights Reserved.



Switched-FICON is a Best Practice for System z

‣ Brocade FICON switching devices do not cause performance problems within a local data center
‣ Architected and deployed correctly, Brocade FICON switching devices do not cause performance problems even across very long distances
‣ In fact, Brocade switched-FICON and Brocade FCIP long-distance connectivity solutions can even enhance the performance of DASD replication and the effectiveness and performance of long-distance tape operations
‣ Switched-FICON is the only way to efficiently and effectively support Linux on System z connectivity
‣ Switched-FICON is the only way to take full advantage of the System z I/O subsystem
• The following slides explain why this is true



SM to MM Conversion and Vice Versa

[Diagram: with a point-to-point deployment of FICON, both ends of the link would have to use long-wave optics. With a FICON DCX in the path, the mainframe connects long-wave into the Director and the Director connects short-wave or long-wave out to storage, giving long-wave to short-wave conversion without having to modify storage optics.]

Most IBM FICON Express8 channel cards have long-wave optics
• IBM is pushing long-wave in the data center
• They know that 16 Gbps is on the near horizon

If your "on the floor" legacy storage currently has short-wave ports:
• End-to-end connections must be the same type: LW-to-LW or SW-to-SW
• You would have to be sure that storage ports were also long-wave – a potential budget hit for optics!
• Long-wave optics for storage and Directors are more expensive than short-wave optics

Switched FICON allows long-wave into the switch/Director and short-wave out of the switch/Director, as the sketch below illustrates
• Port-by-port basis in the switching device
• Just have to order the proper port cards
• Financially better FICON TCO
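The optics-matching rule above can be made concrete with a tiny check. A minimal sketch, assuming a hypothetical link_ok helper; only the LW/SW matching rule and the switch's per-port conversion behavior come from the slide:

```python
# Minimal sketch: point-to-point FICON needs matching optics end to end,
# while a switch/Director can terminate each side independently.
# The rule is from the slide; the helper itself is illustrative.

def link_ok(end_a: str, end_b: str, through_switch: bool) -> bool:
    """end_a / end_b are 'LW' (long-wave) or 'SW' (short-wave)."""
    if through_switch:
        # Each side is a separate link into the switching device,
        # so LW in and SW out is fine (just order the right port optics).
        return True
    # Direct attach: both ends of the single link must match.
    return end_a == end_b

print(link_ok("LW", "SW", through_switch=False))  # False - P-2-P mismatch
print(link_ok("LW", "SW", through_switch=True))   # True  - DCX converts per port
```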



Point-to-Point versus Switched-FICON

Point-to-point FICON is exactly the same as what the Open Systems world calls Direct Attached Storage (DAS)

It also suffers from exactly the same issues that caused companies to begin implementing SANs in their growing storage environments in the late 1990s

Switched-FICON provides very compelling benefits to the user in the areas of:
• Consolidation (dynamic connections to fewer storage boxes)
• High Availability (far beyond point-to-point, and over distance)
• Management (especially of fast or abrupt growth - planning)
• Performance (balanced and better use of all channel resources)
• Scalability (in many different ways, including Fan In-Fan Out)

Let's discuss this in detail on the following slides



Point-to-Point FICON is almost dead!

There are a LOT of reasons for not getting caught in the trap of doing direct-attached FICON from System z:
• Maximum point-to-point distances are going to shrink dramatically as bandwidth increases
• Reliability
• Scalability
• As of FICON Express8, buffer credits are also shrinking



End-to-End FICON/FCP Connectivity

Cabling considerations between System z, the fiber cable plant, and storage:
• Long-wave single-mode still works well
• 1/2/4/8/10 Gbps out to 10km with SM
• Short-wave multi-mode might be limiting
• 4G optics auto-negotiate back to 1 or 2G
• 8G optics auto-negotiate back to 2 or 4G (see the sketch below)
• 1G storage connectivity requires 2/4G SFPs

[Table of distances with multi-mode cables (meters) not reproduced here]
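The auto-negotiation bullets above reduce to "highest speed both ends support." A minimal sketch of that rule, assuming hypothetical optic labels; only the supported-speed sets come from the slide:

```python
# Illustrative sketch of the auto-negotiation rules listed above:
# 4G SFPs support 1/2/4G, 8G SFPs support 2/4/8G. The helper is
# hypothetical; only the supported-speed sets come from the slide.

SUPPORTED = {
    "4G_SFP": {1, 2, 4},
    "8G_SFP": {2, 4, 8},
    "1G_storage_port": {1},
}

def negotiated_speed(optic_a, optic_b):
    """Highest link rate (Gbps) both ends support, or None if no overlap."""
    common = SUPPORTED[optic_a] & SUPPORTED[optic_b]
    return max(common) if common else None

print(negotiated_speed("8G_SFP", "4G_SFP"))          # 4
print(negotiated_speed("8G_SFP", "1G_storage_port")) # None - needs a 2/4G SFP
print(negotiated_speed("4G_SFP", "1G_storage_port")) # 1
```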



End-to-End FICON/FCP Connectivity
System z cable distances at 8Gbps

[Diagram: unrepeated distances from a System z channel at 2, 4 and 8Gbps – OS1 9µm single-mode: 4 to 10 km (2.5 to 6.2 miles); OM3 50µm multi-mode: 150 meters (492 feet); OM2 50µm multi-mode: 50 meters (164 feet). Long-wave 8G on a z10 only comes in a 10km variant – there is no 4km variant.]

~80% of System z CHPIDs are long wave

But older OM2 cables are going to start requiring the mainframe's point-to-point devices to hover very close to the System z
• And the dB link loss budget on all cable types is reduced at 8G

What is your current cable distance to the farthest attachment point? Will it have to change? (A quick audit check is sketched below.)
• If not already, now is the time to deploy switched-FICON!
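The distances above fit in a small lookup, which is handy when auditing a cable plant before an 8G upgrade. A minimal sketch using only the figures quoted on this slide; the dictionary and function names are illustrative:

```python
# Max unrepeated distance in meters at 8Gbps, using only the figures
# quoted on this slide (OM2/OM3 multi-mode, OS1 single-mode long-wave).
MAX_DISTANCE_8G_M = {
    "OM2": 50,       # 50 micron multi-mode, 50 m / 164 ft
    "OM3": 150,      # 50 micron multi-mode, 150 m / 492 ft
    "OS1": 10_000,   # 9 micron single-mode, 10 km LX on a z10
}

def cable_ok_at_8g(cable_type: str, run_length_m: float) -> bool:
    """True if an existing cable run still fits within the 8G budget."""
    return run_length_m <= MAX_DISTANCE_8G_M[cable_type]

# A 120 m OM2 run that worked at lower speeds is out of budget at 8G:
print(cable_ok_at_8g("OM2", 120))  # False - re-cable to OM3/OS1 or add a switch
print(cable_ok_at_8g("OM3", 120))  # True
```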



End-to-End FICON/FCP Connectivity
Cable distances at 8Gbps

[Diagram: a 1,280ft, 80-story office tower with a printer on the 75th floor, about 1,200 ft from the mainframe – a 366m cable run using OM3 multi-mode cable.]

Today, using 4Gbps, you might need to connect a tape drive or printer at distances up to 380m (1,246 feet) using multi-mode cable (short-wave)

Now you want to deploy 8Gbps soon

But 8Gbps will only reach out to 150m (492 feet) …so…
• …that tape drive or printer might have to be relocated down to the 41st floor …or…
• …you might change interfaces and cables to run single-mode to regain the lost distance



End-to-End FICON/FCP Connectivity
FICON Director cable distances at 8Gbps

[Diagram: unrepeated distances from a FICON Director at 8Gbps – OS1 9µm single-mode with 4km transceivers: 4 km (2.5 miles); OS1 9µm single-mode with LWL (10km) or ELWL (40km) transceivers: 6.2 or 49.7 miles; OM3 50µm multi-mode: 150 meters (492 feet); OM2 50µm multi-mode: 50 meters (164 feet).]

The majority of local storage connections are still short wave

What is your current cable distance to the farthest storage attachment point?
• 4Gbps allowed it to be 1,246 feet (380m) away using OM3
• Will you have to relocate some storage closer in at 8G?
• Do you need to re-cable from OM2 to OM3, or up to OS1?



Provide for Greater Distance Connectivity

[Diagram: a point-to-point deployment of FICON reaches 10 KM native on long-wave (with only 40 buffer credits per CHPID) and 150 meters native on short-wave; a DCX with an FX8-24 FCIP blade provides distance extension beyond that.]

Long-wave connections can push a native, unrepeated FICON connection up to 10km
• This is from mainframe to storage port (LX)
• And be careful, as FICON Express8 has only 40 buffer credits per CHPID

In a switched-FICON environment, the switch/Director acts like a repeater, so you get the full distance on each side of the switching device
• 150m to 10km from mainframe to switch device
• 150m to 25km from switch device to storage

When designed with well-thought-out placement in mind, a FICON switching device can help alleviate some concerns about distance connectivity even in a local computing environment
• Locally, can send frames up to 25km (LX)
• Can use the 7800 / FX8-24 Blade for FCIP extension
• Over 1,300 buffer credits provide long distance (on 1 port of each port blade ASIC)
• Tape emulation and Global Mirror (XRC) emulation are supported for FICON



Mainframe Channel Cards

FICON Express2
• z10, z9, z990, z890
• Longwave (LX) to 10km
• Shortwave (SX)
• 1 or 2 Gbps link rate
• 270MBps FD max throughput (68% of 400MBps potential)
• 107 Buffer Credits per port
  – 108km @ 2G full frames / port
  – 54km @ 2G half frames / port
  – 43km @ 2G 819-byte payloads

FICON Express4
• z10, z9
• 4km & 10km LX
• Shortwave (SX)
• 1, 2 or 4 Gbps link rate
• 520MBps FD max throughput (65% of 800MBps potential)
• 200 Buffer Credits per port
  – 101km @ 4G full frames / port
  – 51km @ 4G half frames / port
  – 40km @ 4G 819-byte payloads
FICON Express4 provides the last native 1Gbps CHPID support

FICON Express8
• z10
• 10km LX
• Shortwave (SX)
• 2, 4 or 8 Gbps link rate
• 740MBps FD max throughput (46% of 1600MBps potential)
• 40 Buffer Credits per port
  – 10km @ 8G full frames / port
  – 5km @ 8G half frames / port
  – 4km @ 8G 819-byte payloads
FICON switching devices will provide BCs for long distances (see the sketch below)
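The buffer-credit distances listed above follow a simple frames-in-flight relationship: a full (~2KB) frame spans roughly 2 km of fibre at 1Gbps, so reach shrinks as link speed rises and as frames get smaller. A rough sketch of that arithmetic, using constants that are only approximations consistent with the slide's numbers, not an official sizing formula:

```python
# Rough frames-in-flight arithmetic behind the distances listed above.
# Rule of thumb consistent with the slide: one full (~2KB) frame spans
# about 2 km of fibre at 1Gbps; the span shrinks proportionally with
# higher link rates and smaller payloads. Approximation only.

FULL_FRAME_KM_AT_1G = 2.0
FULL_PAYLOAD_BYTES = 2048

def supported_distance_km(buffer_credits: int, speed_gbps: float,
                          payload_bytes: int = FULL_PAYLOAD_BYTES) -> float:
    """Distance that can be kept 'full' with the given credits."""
    km_per_credit = FULL_FRAME_KM_AT_1G / speed_gbps
    km_per_credit *= payload_bytes / FULL_PAYLOAD_BYTES
    return buffer_credits * km_per_credit

print(supported_distance_km(40, 8))        # ~10 km  (FICON Express8, full frames)
print(supported_distance_km(40, 8, 1024))  # ~5 km   (half frames)
print(supported_distance_km(40, 8, 819))   # ~4 km   (819-byte payloads)
print(supported_distance_km(107, 2))       # ~107 km (FICON Express2, full frames)
```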



Switched-FICON Provides Additional BCs

• To the left is just an example of the limitations of the buffer credits provided on mainframe CHPIDs:

FICON Express8
• 40 Buffer Credits per port
  – 10km @ 8G full frames / port
  – 5km @ 8G half frames / port
  – 4km @ 8G 819-byte payloads

• FICON switching devices can provide as many as 1,300 buffer credits on a single port

• If a dark fiber connection, or any long-distance connection, requires more than 5-10km of distance, then switched-FICON can provide connectivity ports that reach far further, at full path utilization, than a CHPID can



Availability After A Component Failure

[Diagram: in a point-to-point deployment of FICON, a cable or optic failure takes down the entire CHPID path – both sides are down. In a switched deployment, complete path failures are avoided and the storage port remains available.]

A failure of an FE8 card …or… FE8 channel port …or… the P-2-P cable …or… the storage port optic …or… the storage adapter causes:
• The FE8 port to become unavailable AND
• The storage port to become unavailable for everyone!

A failure anywhere affects both the mainframe connection and the storage connection
• Lose an SFP and you lose the host + storage connection
• The WORST possible reliability and availability is provided by a P-2-P topology!
• FC optics are the most likely element to fail in the channel path – same as in a SAN
• FC cables are the second most likely failure in a fabric

In a switched-FICON environment, only a segment is rendered unavailable:
• The non-failing side remains available
• If the storage has not failed, its port is still available to be used by other CHPIDs



Fan In–Fan Out For Better Efficiency

It is a common practice in a storage network to share one port on the storage subsystem among multiple CHPIDs (HBAs) from multiple servers (partitions).
• Fan In: the common practice of attaching many underutilized server-connected switch ports to a single storage subsystem port, as long as additional bandwidth remains and the FC switched fabric is non-blocking
• Fan Out: the common practice of oversubscribing storage-connected switch ports by connecting each of them to many server ports, making the best use of that critical resource
• The SAN fan-out ratio of storage ports typically ranges from 4:1 to 12:1 server-to-storage subsystem ports – bandwidth dependent (a sizing check is sketched below)
• The intent is to fully utilize available storage port bandwidth while enabling the maximum throughput of each HBA (CHPID) to achieve near-wire-rate throughput at a given time
• The ratio also implies that the server (partition) ports are being underused (5-50% of possible bandwidth) most of the time
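The 4:1 to 12:1 guideline lends itself to a simple sizing check: count the CHPIDs sharing a storage port and compare their combined average demand with what the port can deliver. A hedged sketch of that check; the demand figures and helper are hypothetical, and only the ratio guideline and its bandwidth dependence come from the slide:

```python
# Illustrative fan-out sizing check. The 4:1-12:1 guideline and the idea
# that the ratio is bandwidth dependent come from the slide; the sample
# demand figures and helper below are hypothetical.

def fan_out_ok(chpid_demands_mbps, storage_port_mbps, max_ratio=12):
    """True if one shared storage port can absorb the attached CHPIDs."""
    ratio_ok = len(chpid_demands_mbps) <= max_ratio
    bandwidth_ok = sum(chpid_demands_mbps) <= storage_port_mbps
    return ratio_ok and bandwidth_ok

# Six lightly used CHPIDs (hypothetical averages) sharing one 8G storage
# port able to deliver roughly 740 MBps (the 8G figure quoted earlier):
print(fan_out_ok([60, 80, 40, 120, 90, 70], storage_port_mbps=740))  # True
```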



FI-FO To Overcome System Bottlenecks

[Diagram: System z channels connect through cascaded FICON Directors to DASD. Per CHPID (transmit and receive): 135-740 MBps @ 2/4/8Gbps. Per link (transmit and receive): 380 MBps @ 2Gbps, 760 MBps @ 4Gbps, 1520 MBps @ 8Gbps, 1900 MBps @ 10Gbps. Example Fan In: 12 storage adapters to one CHPID (trying to keep the CHPID busy at 70-740 MBps). Example Fan Out: from 12 storage adapters.]

The total FICON path usually does not support full speed
• Must utilize Fan In – Fan Out to use the CHPID port wisely
• Multiple I/O flows are funneled over a single channel path
• Direct-attached FICON is really not a best practice



DASD Fan In–Fan Out

[Diagram, left panel – Maximize CHPID Capacity Utilization: a Storage-to-Mainframe 1:5 fan-in (~510 MBps possible) drives one CHPID at 460MBps from storage flows of 110, 120, 50, 80, and 100 MBps; from the storage side this is a Storage-to-Mainframe 1:5 fan-out (~1800 MBps possible).]

[Diagram, right panel – Maximize Storage Port Capacity Utilization: a Mainframe-to-Storage 1:5 fan-in (~380 MBps possible) drives one storage port at 360MBps from CHPID flows of 75, 100, 20, 65, and 100 MBps; from the mainframe side this is a Mainframe-to-Storage 1:5 fan-out (~2550 MBps possible).]

NOTE: Remember that at 8Gbps a Command Mode z/OS CHPID can do about 510MBps maximum
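The figures in the left panel are just sums: five storage flows of 110, 120, 50, 80, and 100 MBps aggregate to 460 MBps on one CHPID, comfortably under the ~510 MBps noted above. A small sketch that reproduces the check (the helper name is illustrative):

```python
# Reproduces the arithmetic in the diagram above: aggregate per-flow
# demand against the ~510 MBps a Command Mode z/OS CHPID can do at 8G.

CHPID_MAX_MBPS = 510  # from the NOTE on this slide

def headroom_mbps(flows_mbps):
    """Remaining CHPID capacity after summing the fanned-in flows."""
    return CHPID_MAX_MBPS - sum(flows_mbps)

dasd_flows = [110, 120, 50, 80, 100]  # the 1:5 fan-in shown above
print(sum(dasd_flows))                # 460 MBps on the CHPID
print(headroom_mbps(dasd_flows))      # ~50 MBps of headroom left
```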



Fan In–Fan Out for Tape

[Diagram, left panel – Maximize CHPID Capacity Utilization: Tape-to-Mainframe 1:1 fan-in (~510 MBps possible) with a CHPID at 320MBps fed by a tape port at 320MBps; Tape-to-Mainframe 1:1 fan-out (~320 MBps possible).]

[Diagram, right panel – Maximize Tape Port Capacity Utilization: Mainframe-to-Tape 1:1 fan-in (~160 MBps possible) with a storage port at 320MBps fed by a CHPID at 320MBps; Mainframe-to-Tape 1:1 fan-out (~160 MBps possible).]

NOTE: Remember that at 8Gbps a Command Mode z/OS CHPID can do about 510MBps maximum



Fan In–Fan Out On A System Basis Too

[Diagram: 12 paths of fan-in from System z hosts into the FICON Directors, and 8 paths of fan-out from the Directors to storage.]

ESCON had performance constraints that effectively circumvented its ability to provide Fan In-Fan Out



Minimize Mainframe Channel Card Costs

[Diagram: for example, a point-to-point deployment of FICON ties 16 CHPIDs to the same 16 storage ports – one CHPID per storage port, which is expensive. With Fan In – Fan Out, you can use only 8 of the CHPIDs to contain cost and still reach a maximum of 16 storage ports, as long as bandwidth requirements are satisfied.]

Each storage port in a P-2-P connection requires its very own physical port connection on the mainframe
• FICON Express8 cards are fairly expensive
• By model, there is a finite limit to the number of FICON channels available
• Deploying many P-2-P connections can get pretty expensive
• A high TCO is usually attributed to P-2-P

How do you continue to scale when you run out of either mainframe or storage ports?
• Buy a new mainframe or storage chassis

In a switched-FICON environment, Fan In – Fan Out ratios solve this problem just as they solve other connectivity problems
• Director/switch ports are less expensive than mainframe FICON Express8 cards
• If you run out of FICON CHPIDs, then simply continue to Fan-Out to more storage ports
• Or simply use fewer FICON Express8 channel cards on your Fan-In into storage





Robust General Scalability

[Diagram: a core-to-edge fabric with DCX and DCX-4S Directors at the core and 6140 and B5300 switches at the edge. FICON switching allows for dynamic connectivity in a local or remote environment.]

Point-to-Point does not allow for easy, dynamic growth and scalability
• One mainframe port is tied to one storage port

In a switched-FICON environment, you can provide dynamic connectivity
• Better use of all channel resources
• Better use of all storage resources (fan in-fan out)

In a switched-FICON environment, you can provide dynamic scalability if you implement FICON cascading
• Better use of all channel resources
• Better use of all storage resources
• Fan in-Fan out
• Efficient utilization of all resources
• Quick response to new connectivity demands
• Proven Core-to-Edge connectivity



Scalability Beyond System z CHPID Limits

Mainframes have a limited number of FICON CHPIDs:
• z800: 32 FICON Express
• z900: 96 FICON Express
• z890: 40 FICON Express
• z890: 80 FICON Express2
• z990: 120 FICON Express2
• z990: 240 FICON Express2
• z9BC: 112 FICON Express4
• z9EC: 336 FICON Express4
• z10BC: 112 FICON Express8
• z10EC: 336 FICON Express8

[Diagram: ICL'd DCX maximums – attach to many more devices than is possible with P-2-P. DCX ICLs and FICON Cascading can act as a CHPID multiplier for obtaining access to storage devices.]

Each storage port in a P-2-P connection requires its own physical port connection on the mainframe
• There is a finite limit to the number of FICON channels – it depends upon the model of mainframe you are using
• What happens when you need more FICON connectivity than you have CHPIDs?

How do you continue to scale when you run out of mainframe CHPIDs?
• Even if you make really good use of Fan-In, this could eventually happen

In a switched-FICON environment, Fan In – Fan Out ratios solve this problem just as they solve many other connectivity and scalability problems (a rough sketch follows)
• If you run out of FICON CHPIDs, then simply continue to Fan-Out to more storage ports
• Or simply use fewer FICON channel cards on your Fan-In into storage, depending upon your bandwidth requirements
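The CHPID ceilings listed above make the scaling argument easy to quantify: with point-to-point, the number of reachable storage ports equals the number of CHPIDs; with switched-FICON, a fan-out ratio multiplies that reach. A minimal sketch, where the CHPID maximums come from this slide and the fan-out ratio is an illustrative input:

```python
# Storage ports reachable with P-2-P versus switched-FICON fan-out.
# CHPID maximums are the figures listed on this slide; the fan-out
# ratio is an illustrative input, not a fixed rule.

MAX_FICON_CHPIDS = {
    "z9BC": 112, "z9EC": 336,
    "z10BC": 112, "z10EC": 336,
}

def reachable_storage_ports(model: str, fan_out_ratio: int = 1) -> int:
    """fan_out_ratio=1 is point-to-point; >1 models switched fan-out."""
    return MAX_FICON_CHPIDS[model] * fan_out_ratio

print(reachable_storage_ports("z10EC"))                   # 336 ports - the P-2-P ceiling
print(reachable_storage_ports("z10EC", fan_out_ratio=4))  # 1344 ports via switching
```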



Balance Workload Across All Storage Ports

[Diagram: in a point-to-point deployment of FICON, an 8-path group of CHPIDs runs at roughly 20% path busy – overall low utilization, but no way to balance the workload. Through a FICON DCX, using only 8 of the CHPIDs to contain cost, Fan In – Fan Out helps distribute workload evenly across the storage ports, perhaps buying only 10 of 16 storage ports while making best use of high-capacity storage arrays.]

If CHPIDs 10 through 17 (an 8-path group) are consistently pushing low amounts of data, there is really no opportunity to make better use of either those channel ports or those storage ports
• FICON does not do channel disconnect
• With P-2-P, no sharing of ports is possible
• The capability of the storage device may be underutilized as a result of these P-2-P connections

In a switched-FICON environment, Fan In to Fan Out ratios help evenly distribute storage workload across all ports
• Fewer channel ports can often actually push more bandwidth across fewer storage ports
• Can typically put more capacity inside of a DASD array when using Fan In – Fan Out rather than P-2-P, while using equal or fewer total storage connections



Mainframe and Linux Resource Sharing

[Diagram: with point-to-point deployment, FICON and FCP each need their own direct links – you would need unique CHPIDs, possibly unique storage ports (some for Linux and some for FICON), and possibly even different storage arrays. With FICON and Linux intermix through a switch, both share the fabric.]

Many customers are adopting Linux on System z
• Use mainframe functionality to manage up to several thousand Linux clients running on your System z
• Use N_Port ID Virtualization (NPIV) to interleave I/O across physical channels
• IFL engines keep software costs down

In a switched-FICON environment, the switch/Director accepts both FICON and Linux FC connections and can then maximize the use of storage ports via reasonable Fan In – Fan Out ratios
• Better scalability and access to storage ports
• Lower TCO for both FICON and Linux
• Easier manageability of both FICON and Linux

You might want to have FICON using some storage arrays and Linux using other storage arrays
• Open systems maintenance might be on a different schedule than FICON maintenance



A Simplified Schematic
Linux on a System z without NPIV

NPIV is ONLY available in a switched-FICON fabric!

[Diagram: Linux guests A, B, C … n running under z/VM, each driving I/O through its own dedicated FCP CHPID into M-Series and B-Series switches.]

One FCP CHPID per Linux guest

Linux on System z can run in its own LPAR(s), but usually it is deployed as guests under VM

No parallelism, so it is very difficult to drive I/O for lots of Linux images with only 256 CHPIDs

Probably very little I/O bandwidth utilization per CHPID and switch port



Linux on System z using NPIV

NPIV works only when using switched-FICON

[Diagram: Linux guests A, B, C … n running under z/VM share one FCP channel into NPIV-enabled M-Series and B-Series switches.]

One FCP channel for many Linux guests
• Lots of parallelism
• Fewer switch ports required!
• Much better I/O bandwidth utilization per path
• 8 Gbps is great for NPIV!
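A minimal sketch of what NPIV changes relative to the previous slide: without it, each Linux guest consumes its own FCP CHPID; with it, many guests share one physical channel, each logging in with its own virtual N_Port identity. The guest names, CHPID labels, and WWPN strings below are made-up placeholders, not real addressing:

```python
# Conceptual sketch only: many Linux guests sharing one FCP channel via
# NPIV, each with its own (made-up) virtual WWPN. Without NPIV, the same
# guests would each need a dedicated FCP CHPID and its own switch port.

guests = ["linux_a", "linux_b", "linux_c", "linux_n"]

# Without NPIV: one FCP CHPID per guest.
without_npiv = {guest: f"CHPID_{i:02X}" for i, guest in enumerate(guests)}

# With NPIV: one shared physical channel, one virtual identity per guest.
shared_chpid = "CHPID_50"  # hypothetical label
with_npiv = {
    guest: {"chpid": shared_chpid, "virtual_wwpn": f"virtual-wwpn-{i}"}
    for i, guest in enumerate(guests)
}

print(len(set(without_npiv.values())))                # 4 channels consumed
print(len({g["chpid"] for g in with_npiv.values()}))  # 1 channel, shared
```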



How Are Directors and Switches Different?

[Diagram: B5100 and B5300 SAN switches; M6140 and Mi10K Directors; DCX-4S and DCX. M-Series can run at up to 400MBps and B-Series at up to 800MBps on a port-by-port basis.]

SAN Switches
• Good availability, up to 99.99%
• Based upon a motherboard design
• Some redundant components, like power supplies and fans
• 24-80 Fibre Channel ports
• Good fabric scalability (100's of ports)
• Online microcode activation
• Online health monitoring
• Online error detection
• Online fault isolation checking

SAN Directors
• Superb availability, up to 99.999%
• Based on discrete, redundant parts
• Redundancy and hot-swap FRUs throughout the architecture
• Highest port counts – up to 384 ports
• Superior fabric scalability (1,000s of ports)
• Online microcode activation
• Online health monitoring
• Online error detection
• Online fault isolation checking
• Online error recovery (non-disruptive failover)
• Online repair of the error (hot swap)

It is not when it is working, but rather when a problem occurs, that truly differentiates a Director from a Switch!



How Are Directors and Switches Different?

‣ Since switches are motherboard-based, they are engineered to run at the then-current line rate
• Today each port of a B5100 and B5300 can run at 8Gbps
• Failing SFPs can be hot-swapped, but physical ports cannot be replaced
• A switch must be completely replaced to repair a failed physical port

‣ Directors have discrete, redundant components that are engineered to run at the then-current line rate
• Today each port of a DCX and DCX-4S can run at 8Gbps
• Failing SFPs can be hot-swap replaced (and so can fans and power supplies…)
• New blades can replace blades that have failing or failed physical ports

‣ Brocade expects to have 16Gbps blades for the DCX and DCX-4S within the next 12 months
• The next generation mainframe will be engineered for 16G CHPIDs
• A DCX and DCX-4S should be upgradable in the future to 16Gbps non-disruptively (swap out old 8G blades, swap in new 16G blades)
• But 8G switches will have to be completely swapped out and replaced with 16G-capable switches to achieve 16G fabrics



The DCX/DCX-4S is a GREAT Chassis for FICON!
Highlights

• The DCX/DCX-4S has an internal cycle time of ~1 microsecond
• The DCX/DCX-4S has 32 times more chassis bandwidth than previous, popular Brocade FICON Directors
• The DCX/DCX-4S scales up to as many as 2,304 ports per fabric through use of our ingenious, unique inter-chassis links
• The DCX/DCX-4S was built for 16G link speeds, while currently shipping with only 8G-capable blades and SFPs
• The DCX/DCX-4S has the best energy efficiency, for FICON Directors, in the world!



The DCX/DCX-4S is a GREAT Chassis for FICON!
Technical Description Overview

• 512Gb aggregate bandwidth per slot (256Gb transmit & receive)
  • 16 ports/blade = 256Gb / 16p = 16.0Gb/port potential (gated by 8G SFPs)
  • 32 ports/blade = 256Gb / 32p = 8.0Gb/port potential (1:1)
  • 48 ports/blade = 256Gb / 48p = 5.3Gb/port (get to 1:1 via Local Switching)
• A full, real 2Tb of bandwidth for internal frame handling on the DCX; the DCX-4S has 1Tb
• A full, real 3Tb of bandwidth when local switching is used on the DCX; the DCX-4S has 1.5Tb
• Chassis-to-chassis bandwidth of 0.5Tb when using ICLs
  • DCX: ICLs allow FICON scaling of up to 2,304 ports per fabric!
  • DCX-4S: ICLs allow FICON scaling of up to 1,152 ports per fabric!
• The DCX can pass up to 5.088 BILLION frames per second (the per-port arithmetic is sketched below)
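The per-port figures above are straight division of the 256Gb usable slot bandwidth across the ports on each blade option. A quick sketch that reproduces that arithmetic with the numbers quoted on this slide:

```python
# Reproduces the per-port arithmetic quoted above: 256Gb of usable
# slot bandwidth divided across the ports on each blade option.

SLOT_BANDWIDTH_GB = 256  # transmit & receive share of the 512Gb slot figure

for ports_per_blade in (16, 32, 48):
    per_port = SLOT_BANDWIDTH_GB / ports_per_blade
    print(f"{ports_per_blade}-port blade: {per_port:.1f} Gb per port")

# 16-port blade: 16.0 Gb per port (gated by today's 8G SFPs)
# 32-port blade:  8.0 Gb per port (1:1 at 8G)
# 48-port blade:  5.3 Gb per port (1:1 via local switching)
```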



Just Say NO to Direct-Attached FICON!

[Diagram: a point-to-point deployment of FICON.]

As a Best Practice, never direct attach FICON!



THE END

I hope this information was useful to you!
