Best Practices for SAP BI using DB2 9 for z/OS

Because the input files for the InfoCube load resided on this storage system, it was critical to get the maximum bandwidth to the storage system.

Connection between System p and System z

There were eight 10 Gigabit Ethernet (GbE) lines between System p and System z. We could have used fewer than eight connections; however, the precise data rate requirement was not known at the beginning of the project, and it was safer to over-configure than to under-configure to ensure that there were no network constraints during the InfoCube load and BI Accelerator index creation phases. The data rate was not expected to come close to the aggregated bandwidth of these eight connections. Multiple connections mitigated (but did not totally prevent) delays due to latency from channel busy conditions, the hardware implementation, and the physical media.

To simplify the environment, our System p was configured as a single system image. Because multiple SAP application servers can run in one partition, the Jupiter project employed multiple instances to model a more realistic customer environment. This affected our network implementation, because each SAP application server can use only one SAPDBHOST connection, which is usually associated with one physical network connection. To make use of all eight connections, we combined them into one virtual connection using a virtual IP address (VIPA).
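For illustration, the application server instances reach the bundled connections simply because SAPDBHOST in the SAP default profile names a host that resolves to the VIPA. The following is only a sketch with a hypothetical host name, not the project's actual profile values:

   # DEFAULT.PFL (host name is an assumption; it resolves to the VIPA address)
   SAPDBHOST = jupdbv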

This VIPA approach relies on network trunking, such as EtherChannel or MultiPath. EtherChannel is supported on System p but not on System z, so we instead used a MultiPath network connection, which provides functionality similar to EtherChannel. Example D-1 on page 326 shows our setup definition of static multipath. Each connection is configured with its own subnet on a separate VLAN. This ensures that traffic is sent and returned over the same physical connection, and it also makes performance data easier to analyze and monitor for each connection. In addition, a jumbo frame size was configured for the large block sizes transmitted.
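Example D-1 contains the definitions that were actually used in the project. As a rough sketch of what a static VIPA with multipath routing can look like in a z/OS TCPIP profile (the device, link, interface, and address names below are illustrative assumptions, not values from the project, and the DEVICE/LINK or INTERFACE statements for the OSA ports are omitted):

   IPCONFIG MULTIPATH PERCONNECTION   ; spread traffic across equal-cost routes
   DEVICE JUPVIPA  VIRTUAL 0          ; static VIPA device
   LINK   JUPVIPAL VIRTUAL 0 JUPVIPA  ; VIPA link that the SAPDBHOST name resolves to
   HOME
     10.10.0.1   JUPVIPAL             ; virtual IP address
     10.10.1.1   OSA10GE1             ; first of the eight 10 GbE interfaces; each
                                      ; interface is on its own subnet/VLAN with a
                                      ; jumbo-frame MTU

With MULTIPATH PERCONNECTION, each TCP connection stays on one physical interface while different connections are spread across all of them; the PERPACKET option distributes traffic at the packet level instead.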

For load balancing, an algorithm distributes traffic across all of the physical connections when several are combined into one virtual connection. This method also provides higher availability: if one of the physical connections goes down, another available physical connection associated with the virtual connection can be used. For details and guidance on a high availability network implementation, refer to SAP on IBM System z: High Availability for SAP on IBM System z Using Autonomic Computing Technologies, SC33-8206.
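On the System p (AIX) side, the per-interface layout described above is typically established with standard adapter and interface settings. The following is only a sketch; the adapter and interface names, addresses, and MTU values are assumptions rather than the project's definitions:

   # enable jumbo frames on a physical adapter (illustrative device name)
   chdev -l ent0 -a jumbo_frames=yes
   # put the matching interface on its own subnet with a jumbo MTU
   chdev -l en0 -a netaddr=10.10.1.100 -a netmask=255.255.255.0 -a mtu=9000 -a state=up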

Connection between System p and blade cluster

There was one 10 GbE connection from System p to our network switch, and an EtherChannel of four GbE connections from the switch to each blade chassis. Within the chassis, there was one GbE interface for each of the 14 blades within
