
the chassis for blade-to-blade connections. Data traffic between System p and the BI Accelerator cluster of blades was mainly through Remote Function Calls (RFCs). A standard Maximum Transmission Unit (MTU) size of 1500 was used because the data package traffic was small.

Connection between blade cluster and DS8300<br />

A GPFS file system was mounted and accessible to all 140 blades. Generally, a Storage Area Network (SAN) is effective with 64 blades or fewer. As the number of blades scales up, a GPFS topology with its cluster of network shared disk (NSD) server nodes performs better. In our large environment with 10 BladeCenter® chassis and 140 blades in total, 10 GPFS NSD servers were employed. They were System x3650 machines (two dual-core CPUs at 3.0 GHz with 4 GB RAM) running Linux SLES9 SP3.
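
To put the fan-out in perspective, the following minimal Python sketch (illustrative only; the one-NSD-server-per-chassis mapping is our reading of the figures, not an explicit statement in the text) works out how many GPFS clients each NSD server fronts:

   # Fan-out of the GPFS NSD topology described above, using only the
   # figures quoted in the text.
   TOTAL_BLADES = 140
   CHASSIS_COUNT = 10
   NSD_SERVER_COUNT = 10

   blades_per_chassis = TOTAL_BLADES // CHASSIS_COUNT        # 14 blades per BladeCenter
   blades_per_nsd_server = TOTAL_BLADES // NSD_SERVER_COUNT  # 14 GPFS clients per NSD server

   print(f"Blades per chassis:    {blades_per_chassis}")
   print(f"Blades per NSD server: {blades_per_nsd_server}")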

Since InfiniBand (IB) has better bandwidth and latency than GbE, we chose IB for the network between the blades and the GPFS NSD servers. Each configured HS21 blade chassis had two embedded Cisco SDR IB switches. Blades 1 through 7 were configured to use the first IB switch, and blades 8 through 14 were on the second one, to provide a non-blocking configuration. Each SDR IB daughter card on each blade was rated at 10 Gbps, or 4x, where x = 2.5 Gbps. Each SDR IB switch had two external 12x ports and two external 4x ports, with an aggregate bandwidth of 80 Gbps. Each GPFS NSD server had a 4x (10 Gbps) IB connection.
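
The 80 Gbps figure follows from the external port counts and the SDR lane rate; a minimal sketch of the arithmetic (a Python illustration, not part of the original setup):

   # External bandwidth of one embedded SDR IB switch: two 12x ports plus
   # two 4x ports, at the SDR rate of 2.5 Gbps per lane.
   GBPS_PER_SDR_LANE = 2.5

   external_lanes = 2 * 12 + 2 * 4                      # 32 lanes in total
   aggregate_gbps = external_lanes * GBPS_PER_SDR_LANE  # 32 * 2.5 = 80 Gbps

   print(f"External bandwidth per SDR IB switch: {aggregate_gbps:.0f} Gbps")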

On the other end, the 10 GPFS NSD servers were connected to GPFS storage (DS8300) via twenty 4 Gbps Fibre Channel connections, two connections per GPFS NSD server.
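
Worked out the same way (the observation that this balances the 80 Gbps IB side is ours, not a statement from the original text):

   # Aggregate Fibre Channel bandwidth from the 10 NSD servers to the DS8300,
   # at two 4 Gbps links per server.
   NSD_SERVER_COUNT = 10
   FC_LINKS_PER_SERVER = 2
   GBPS_PER_FC_LINK = 4

   fc_links = NSD_SERVER_COUNT * FC_LINKS_PER_SERVER  # 20 connections
   fc_aggregate_gbps = fc_links * GBPS_PER_FC_LINK    # 80 Gbps toward the DS8300

   print(f"{fc_links} FC links, {fc_aggregate_gbps} Gbps aggregate to storage")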

Since each blade had GbE and IB interfaces, blade-to-blade communication could use GbE or IB. In our case, IB was mainly used for reading and writing to the GPFS file system, and GbE was used for blade-to-blade communication.

11.3 SAP BI database and InfoCube load

One of the major challenges for project Jupiter was to build a large, complex (25 TB) BI database within a very short period of time. According to the plan, a minimum InfoCube data load rate of 1 TB per day was needed in order to meet the aggressive schedules. In the following sections, we describe the database structures and the InfoCube load techniques that we implemented to meet the project requirements.
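
As a quick sanity check on that requirement, a minimal sketch of the arithmetic behind the 1 TB per day target (illustrative only):

   # Minimum load window implied by the figures above: a 25 TB database
   # loaded at a sustained minimum rate of 1 TB per day.
   DATABASE_SIZE_TB = 25
   MIN_LOAD_RATE_TB_PER_DAY = 1

   min_load_days = DATABASE_SIZE_TB / MIN_LOAD_RATE_TB_PER_DAY
   print(f"Minimum load window: {min_load_days:.0f} days")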

