
IBM Systems Group

Power Storage Solid State Drives

Sept 22, 2009
Dan Braden  dbraden@us.ibm.com
AIX Advanced Technical Support
http://w3.ibm.com/support/americas/pseries

© 2009 IBM Corporation


Agenda

SSD technology overview
SSD performance and sizing
Positioning
SSD options for AIX
SSD in DS8000
Configuring SSD and configuration options
Choosing data to place on SSDs



SSD technology

Uses SLC (single level cell) flash memory technology
Prices are coming down fast, so expect to see more SSDs in the future
Very fast IOs, but limited life
Approximately 1,000,000 writes to a cell maximum
Over-provisioned – 69 GB available out of 128 GB of flash
Wear leveling – writes are spread around
Bad block relocation
ECC
Low power requirement ~ 90% less
Space savings (one SSD replacing many HDDs)
High cost
About 22X more per GB vs. HDD
One SSD costs about the same as 10.25 HDDs
Costs include adapter, enclosure, maintenance and power

Chart – Power consumption: watts required for 135K IOPS of performance, SSD vs. HDD



SSD performance and sizing

Workload                        | PAWS (single SSD) | ndisk (single SSD)      | Adapter maximums
100% random 4 KB reads          | 29,000 IOPS       | 26,234 IOPS (102 MB/s)  | 60,837 IOPS (237 MB/s)
100% random 4 KB writes         | 21,000 IOPS       | 16,669 IOPS             | 58,068 IOPS
Mixed reads+writes, 4 KB        | 14,000 IOPS       | 16,110 IOPS (20% reads) | 59,359 IOPS (70% read)
Sequential reads, large block   | 240 MB/s          | 240 MB/s                | 666 MB/s
Sequential writes, large block  | 125 MB/s          | 124.5 MB/s              | 302 MB/s

Internal SAS adapter for ndisk, adapter with cache for PAWS
Adapters can easily become a bottleneck
Use of write cache on the adapter usually reduces adapter bandwidth
Consider dedicating an adapter to SSDs
Use of RAID 1/5/6/10 or LVM mirroring affects performance
A HDD can do about 200 IOPS max
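At roughly 200 IOPS per HDD, the random-IO numbers above imply that one SSD can do the random-IO work of well over 100 disk drives (for example, 26,234 / 200 ≈ 131), which is the basis for the positioning on the next slide.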



SSD positioning

High IOPS applications
Existing customers using many physical disks, often with lots of unused storage capacity
One SSD can potentially replace 100+ HDDs
High %iowait or inadequate application performance
Potential to improve application throughput/response time
Mostly reads
Writes get good IO service times with write cache
Random IO
HDDs provide relatively good sequential performance
High access density (IOPS/GB) data
Data placement is critical
Boot support



There are tradeoffs

SAS adapter write cache
IOPS bandwidth usually greater with write cache turned off
Adapter processor has more work to do handling cache and/or RAID
Write IO latency slightly better with write cache turned on
Adapters in HA configurations require writing data to cache on both adapters
RAID 5 write penalty increases write IO latency with adapter cache turned off
Sequential write threads allow the adapter to perform full stripe writes, but sequential IO should usually be on HDDs

Chart – 4 KB/op read response time (ms): IOA cache hit 0.1, SSD 0.33, 15k RPM HDD short seek 3.9, 15k RPM HDD long seek 8



Some SSD performance data

Measured IOPS and response time (RT, ms) by number of processes (procs); values are IOPS / RT.

OLTP1 (60% read, 4 KB I/Os)
procs | 1 Drive RAID0, SSD cache off | 6 Drive RAID0, SSD cache on | 6 Drive RAID0, SSD cache off | 6 Drive RAID5, SSD cache on | 6 Drive RAID5, SSD cache off
1     | 2,641 / 0.37                 | 3,320 / 0.30                | 3,474 / 0.28                 | 2,957 / 0.33                | 1,619 / 0.61
4     | 6,260 / 0.64                 | 10,157 / 0.39               | 10,567 / 0.37                | 8,384 / 0.47                | 5,009 / 0.79
8     | 8,873 / 0.90                 | 16,095 / 0.49               | 17,211 / 0.46                | 12,262 / 0.65               | 8,105 / 0.98
24    | 12,399 / 1.93                | 21,810 / 1.10               | 34,012 / 0.70                | 14,064 / 1.70               | 15,704 / 1.52
48    | 12,488 / 3.84                | 21,875 / 2.19               | 47,820 / 1.00                | 13,913 / 3.45               | 21,551 / 2.22
96    | 12,635 / 7.59                | 21,740 / 4.38               | 54,314 / 1.76                | 13,893 / 6.91               | 23,264 / 4.12

OLTP2 (90% read, 8 KB I/Os)
procs | 1 Drive RAID0, SSD cache off | 6 Drive RAID0, SSD cache on | 6 Drive RAID0, SSD cache off | 6 Drive RAID5, SSD cache on | 6 Drive RAID5, SSD cache off
1     | 2,496 / 0.40                 | 2,995 / 0.33                | 2,793 / 0.35                 | 2,427 / 0.41                | 2,181 / 0.45
4     | 6,381 / 0.62                 | 9,018 / 0.44                | 9,113 / 0.43                 | 6,889 / 0.58                | 6,689 / 0.59
8     | 8,472 / 0.94                 | 14,851 / 0.53               | 15,222 / 0.52                | 10,737 / 0.74               | 10,869 / 0.73
24    | 10,661 / 2.25                | 27,960 / 0.85               | 29,923 / 0.80                | 19,763 / 1.21               | 20,780 / 1.15
48    | 10,654 / 4.50                | 33,649 / 1.43               | 38,897 / 1.23                | 24,324 / 1.97               | 27,626 / 1.73
96    | 10,364 / 9.26                | 35,071 / 2.73               | 43,903 / 2.18                | 25,245 / 3.80               | 30,941 / 3.10

OLTP3 (70% read, 4 KB I/Os, 50% read cache hit, 66% write cache hit)
procs | 1 Drive RAID0, SSD cache off | 6 Drive RAID0, SSD cache on | 6 Drive RAID0, SSD cache off | 6 Drive RAID5, SSD cache on | 6 Drive RAID5, SSD cache off
1     | 4,015 / 0.25                 | 4,841 / 0.20                | 4,943 / 0.20                 | 4,379 / 0.23                | 2,851 / 0.35
4     | 10,598 / 0.37                | 15,719 / 0.25               | 16,467 / 0.24                | 12,918 / 0.31               | 9,126 / 0.43
8     | 15,376 / 0.52                | 24,372 / 0.32               | 27,603 / 0.29                | 18,861 / 0.42               | 14,877 / 0.53
24    | 21,068 / 1.14                | 28,920 / 0.83               | 52,638 / 0.45                | 20,712 / 1.15               | 26,062 / 0.92
48    | 21,471 / 2.23                | 28,586 / 1.68               | 59,359 / 0.80                | 20,557 / 2.33               | 27,234 / 1.76
96    | 21,013 / 4.56                | 28,517 / 3.36               | 59,005 / 1.62                | 20,553 / 4.67               | 26,743 / 3.59



Some SSD performance data

Read miss and write miss workloads: IOPS and response time (RT, ms) by number of processes (procs); values are IOPS / RT.

Read Miss 4 KB
procs | 1 Drive RAID0, SSD cache off | 6 Drive RAID0, SSD cache on | 6 Drive RAID0, SSD cache off | 6 Drive RAID5, SSD cache on | 6 Drive RAID5, SSD cache off
1     | 3,035 / 0.33                 | 3,027 / 0.33                | 3,030 / 0.33                 | 2,976 / 0.33                | 2,994 / 0.33
4     | 11,862 / 0.33                | 11,972 / 0.33               | 11,987 / 0.33                | 11,744 / 0.34               | 11,843 / 0.33
8     | 22,022 / 0.36                | 23,420 / 0.34               | 23,433 / 0.34                | 22,726 / 0.35               | 22,919 / 0.35
24    | 27,467 / 0.87                | 57,643 / 0.41               | 59,650 / 0.41                | 41,004 / 0.58               | 43,393 / 0.55
48    | 27,455 / 1.74                | 58,679 / 0.81               | 60,583 / 0.79                | 40,724 / 1.18               | 43,239 / 1.11
96    | 27,469 / 3.49                | 58,463 / 1.64               | 60,320 / 1.59                | 40,608 / 2.36               | 42,912 / 2.23

Write Miss 4 KB
procs | 1 Drive RAID0, SSD cache off | 6 Drive RAID0, SSD cache on | 6 Drive RAID0, SSD cache off | 6 Drive RAID5, SSD cache on | 6 Drive RAID5, SSD cache off
1     | 7,849 / 0.12                 | 6,340 / 0.15                | 7,931 / 0.12                 | 5,597 / 0.17                | 970 / 1.03
4     | 19,847 / 0.20                | 13,466 / 0.29               | 28,055 / 0.14                | 7,024 / 0.57                | 2,907 / 1.37
8     | 20,187 / 0.39                | 13,271 / 0.60               | 45,340 / 0.17                | 6,988 / 1.14                | 4,600 / 1.74
24    | 20,599 / 1.16                | 13,251 / 1.81               | 58,219 / 0.41                | 6,938 / 3.46                | 8,802 / 2.72
48    | 20,458 / 2.34                | 13,190 / 3.64               | 58,004 / 0.82                | 6,960 / 6.89                | 11,845 / 4.05
96    | 20,371 / 4.71                | 13,268 / 7.23               | 57,818 / 1.66                | 6,987 / 13.74               | 11,865 / 8.09



SSD RAID sizing theory

Use of RAID 1/5/6/10: application IOPS do not equal physical IOPS
Use these formulas to determine potential application IOPS
N = number of SSDs
R = proportion of IOs that are reads
W = 1 - R = proportion of IOs that are writes
D = IOPS for a single SSD (use 16,000 to be conservative unless all reads or all writes)
JBOD or RAID 0: IOPS bandwidth = N x D
RAID 1 or RAID 10: IOPS bandwidth = N x D / (R + 2W)
RAID 5: IOPS bandwidth = N x D / (R + 4W)
RAID 6: IOPS bandwidth = N x D / (R + 6W)
Take into account adapter bandwidths
This does not take into account adapter processor bottlenecks
6-disk RAID 5 write IOPS bandwidth is only 11,865 IOPS
The processor must issue 4 IOs and calculate parity for each application write



SSD RAID sizing theory

Example 1: 3 SSDs in a RAID 5 configuration, 90% reads
IOPS bandwidth = N x D / (R + 4W) = 3 x 16,000 / (0.9 + 4 x 0.1) = 36,923 IOPS
This is less than the adapter bandwidth of 59,359 IOPS
Example 2: 3 SSDs in a RAID 5 configuration, 10% reads
IOPS bandwidth = N x D / (R + 4W) = 3 x 16,000 / (0.1 + 4 x 0.9) = 12,973 IOPS
Example 3: 6 SSDs in a RAID 5 configuration, 80% reads
IOPS bandwidth = 6 x 16,000 / (0.8 + 4 x 0.2) = 60,000 IOPS
This is more than the adapter bandwidth of 59,359 IOPS
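The same arithmetic is easy to run on the system with bc; this is only a convenience sketch using Example 1's values (3 SSDs, RAID 5, 90% reads):
# echo "n=3; d=16000; r=0.9; w=1-r; n*d/(r+4*w)" | bc -l
Prints approximately 36923, matching Example 1; change n, r and the RAID factor (2, 4 or 6) to model RAID 10 or RAID 6 instead.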



SSD options for AIX

66,550 MB SAS disk
3.5 inch and Small Form Factor (SFF) 2.5 inch SAS form factors
EXP 12S SAS disk drawer (FC #5886)
Systems: p560 & p570, new p520 & p550, new blades JS23 / JS43



SSD configuration guidelines

RAID 0/1/5/6/10 only; RAID 5 is probably best for availability
Integrated controller: mixing of HDD and SSD devices supported
6 SSDs max per controller on the 560/570
Internal RAID: no mixing of HDD and SSD devices
Split backplane: no mixing of HDD and SSD within a group
EXP 12S SAS drawer: 8 SSDs max; no mixing of SSDs and HDDs (four bays not used)



Supported SSD adapters

PCIe SAS RAID Adapter
FC #5903, CCIN 574E
380 MB cache, dual x4 ports
Used in pairs – affects performance with write cache turned on (recommend off for SSD)

PCI-X SAS RAID Adapter
FC #5904 (i), 5906 or 5908, CCIN 572F
1.5 GB cache, 4 ports
Uses 2 slots
8 SSDs max

Integrated SAS Controller, CCIN 572C
Recommended cache daughter card FC 5679
Systems: 520, 550, 560 & 570
Blades: JS23 & JS43



SAS adapter details

Planar integrated SAS adapters support RAID 0
With the RAID enablement card, RAID 0, 1, 5, 6 and 10 are supported, and with cache
Non-RAID SAS adapters are advertised as supporting RAID 0 and 10
Two-disk RAID 10 = RAID 1
One can create RAID arrays, including RAID 5/6, with these adapters, but it’s not advertised due to the performance impact of the RAID write penalty without write cache
Some adapters must be purchased in pairs and used in HA mode
Single-system HA or two-system HA
CCIN 572B and 574E
When HA configs don’t allow JBOD, one can use RAID 0 arrays instead

© 2009 <strong>IBM</strong> Corporation


SAS adapter details

CCIN      | Code name | FC               | Description                               | Form factor               | RAID levels | Write cache | HA two sys RAID         | HA two sys JBOD         | HA one sys RAID         | Requires HA config
572A      | Cadet     | 5900             | PCI-X266 Ext Dual-x4 3Gb SAS Adapter      | Low profile 64-bit PCI-X  | 0, 10       | None        | No                      | No                      | No                      | No
572A      | Cadet     | 5912             | PCI-X266 Ext Dual-x4 3Gb SAS Adapter      | Low profile 64-bit PCI-X  | 0, 10       | None        | Yes                     | Yes                     | Yes                     | No
572B      | Squib     | 5902             | PCI-X266 Ext Dual-x4 3Gb SAS RAID Adapter | Long 64-bit PCI-X         | 0, 5, 6, 10 | 175 MB      | Yes                     | No                      | Yes                     | Yes
572C      | Star      | Planar integrated| PCI-X266 Planar 3Gb SAS Adapter           | Planar integrated         | 0           | None        | No                      | No                      | No                      | No
57B7      | Vortex    | 5679             | Planar RAID daughter card with cache      | Planar auxiliary cache    | 0, 5, 6, 10 | 175 MB      | No                      | No                      | No                      | No
57B8      | Dagger    | 5679             | PCI-X266 Planar 3Gb SAS RAID Adapter      | Planar RAID enablement    | 0, 5, 6, 10 | 175 MB      | No                      | No                      | No                      | No
57B3      | Cadet-E   | 5901             | PCIe x8 Dual-x4 3Gb SAS RAID Adapter      | PCIe x8                   | 0, 10       | None        | Yes                     | Yes                     | Yes                     | No
57B9      | Cadet-EL  | 5909             | PCIe x8 Ext Dual-x4 3Gb SAS Adapter       | PCIe x8                   | 0, 10       | None        | No                      | No                      | No                      | No
57BA      | Cadet-EL2 | 5911             | PCIe x8 Ext Dual-x4 3Gb SAS Adapter       | PCIe x8                   | 0, 10       | None        | Yes                     | Yes                     | Yes                     | No
574E      | Squib-E   | 5903             | PCIe Dual-x4 3Gb SAS RAID                 | Short PCIe                | 0, 5, 6, 10 | 380 MB      | Yes                     | No                      | Yes                     | Yes
572F/575C | Knorr     | 5904, 5906, 5908 | PCI-X DDR 1.5 GB Cache SAS RAID           | Long 2-slot PCI-X         | 0, 5, 6, 10 | 1.5 GB      | Not presently supported | Not presently supported | Not presently supported | No


OS level pre-requisites for SSD

AIX 5.3 with the 5300-07 Technology Level and Service Pack 9
AIX 5.3 with the 5300-08 Technology Level and Service Pack 7
AIX 5.3 with the 5300-09 Technology Level and Service Pack 4
AIX 5.3 with the 5300-10 Technology Level
AIX 6.1 with the 6100-00 Technology Level and Service Pack 9
AIX 6.1 with the 6100-01 Technology Level and Service Pack 5
AIX 6.1 with the 6100-02 Technology Level and Service Pack 4
AIX 6.1 with the 6100-03 Technology Level
IBM i 6.1 or later [FC #3587 & #1909] or V5R4M5
SUSE Linux Enterprise Server 10, Service Pack 2 or later
Red Hat Enterprise Linux version 4.7 or later
Red Hat Enterprise Linux version 5.2 or later
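To confirm which of the AIX levels above a system is already running, oslevel reports the installed technology level and service pack directly (a generic check, not tied to any one level):
# oslevel -s
The output has the form base level, technology level, service pack and build (for example, a system at AIX 6.1 TL 6100-02 SP4 reports a level beginning 6100-02-04), which can be compared against the list above.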



Other pre-requisites for SSD

System firmware - http://www-933.ibm.com/support/fixcentral/
Blades - 01FA340_072_039
520 - 01FL340_072_039
550 - 01FL340_072_039
570 - 01FM340_071_039
575 - 01FS340_072_042
595 - 01FH340_072_039
HMC - V7 R3.4.0 Service Pack 2 update package
http://www14.software.ibm.com/webapp/set2/sas/f/hmc/power6.html
Check SAS adapter, SAS disk and SAS enclosure (ses) microcode
invscout is a good tool for this
http://www14.software.ibm.com/webapp/set2/mds/fetch?page=mds.html
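Before updating, it can help to see what microcode is already installed so it can be compared with the invscout/MDS survey results; lsmcode is the standard AIX command for this (a minimal check, no SSD-specific options implied):
# lsmcode -A
Lists the current firmware/microcode levels for the system and all supported devices, including SAS adapters and disks.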



SSD in DS8000

Announced 2/10/2009
Also announced: full disk encryption, SATA, intelligent write caching, remote pair flash copy
73 GB and 146 GB
RAID 5 only
128 SSDs max
16 max per device adapter (DA) pair
No intermix of SSDs and HDDs on a DA pair
Use of SSD reduces max drives allowed
Plant installation only
Must be ordered with RPQ 8S1027
Requires Licensed Machine Code (LMC) R4.2 (bundle version 64.20.13x.0)



SSD configuration

Devices are initially configured as pdisks
# lsdev -Cc pdisk
pdisk0 Available 02-08-00 Physical SAS Disk Drive
pdisk1 Available 02-08-00 Physical SAS Disk Drive
smitty devices -> Disk Array -> IBM SAS Disk Array -> IBM SAS Disk Array Manager
Create an Array Candidate pdisk and Format to 528 Byte Sectors
Create a SAS RAID array
RAID 0, 5, 6 or 10 (2-disk RAID 10 = RAID 1)
16, 64 or 256 KB stripe (aka strip) size
RAID 5 will be popular
An hdisk appears
# lsdev -Cc disk | grep "SAS RAID"
hdisk3 Available 02-08-00 SAS RAID 0 Disk Array
hdisk5 Available 02-08-00 SAS RAID 0 Disk Array
Choose whether or not to turn on write cache for the adapter
Probably do not create hot spares
Proceed to LVM configuration (a minimal example follows below)
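As a minimal sketch of that LVM step, assuming the new array appeared as hdisk3 and using illustrative names (ssdvg, ssdlv, /ssddata); sizes and options should be adjusted for the real environment:
# mkvg -y ssdvg hdisk3
Creates a volume group containing only the SSD array
# mklv -y ssdlv -t jfs2 ssdvg 100
Creates a logical volume of 100 logical partitions (the count here is arbitrary)
# crfs -v jfs2 -d ssdlv -m /ssddata -A yes
# mount /ssddata
Builds and mounts a JFS2 file system on the SSD-backed logical volume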





SSD configuration

From the Disk Array Manager menu -> Diagnostics and Recovery Options -> Change/Show SAS RAID Controller
Adapter Cache can be set to Disabled
For adapters in HA configurations with SSDs, disabled cache probably is best



Choosing data to place on SSDs

iostat – identify IOPS (tps) and R/W ratio for each PV
High tps and a high R/W ratio suggest a good candidate
Investigate further with lvmstat and # lspv -l <hdisk#>
lvmstat – identify IOPS (iocnt) and R/W ratio for LVs
Turn on lvmstat for a VG with # lvmstat -e -v <vgname> (a worked sequence follows this list)
# lvmstat -v newvg2
Logical Volume iocnt Kb_read Kb_wrtn Kbps
…
LVs with a high iocnt and a high R/W ratio are good candidates
Also reports IOPS on PPs – useful when the LV is relatively large
filemon – identify IO sizes and sequentiality for PVs and LVs
Trace for only a few seconds at a time
Some applications have tools for identifying hot data
DB2 snapshot monitoring tool
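Pulling the lvmstat steps above into one pass; datavg is a stand-in for the real volume group name and the interval and count values are arbitrary:
# lvmstat -e -v datavg
Enables LVM statistics collection for the volume group; let the workload run for a representative period
# lvmstat -v datavg
Reports iocnt, Kb_read and Kb_wrtn per LV; the LVs with the highest iocnt and read ratio are the SSD candidates
# lvmstat -l <lvname> 60 5
Looks at one busy LV in more detail: five reports at 60-second intervals, including per-PP activity
# lvmstat -d -v datavg
Turns statistics collection off again when finished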



Moving data to SSDs

Add the SSD hdisk to the VG with
# extendvg <vgname> <ssd hdisk#>
Migrate an LV to the hdisk dynamically with
# migratepv -l <lvname> <source hdisk> <ssd hdisk#>
Repeat with the other hdisks the LV resides on
Or create a new VG for just the SSD data
# mkvg -y ssdvg -s 32 -S <ssd hdisk#>
Offers smaller PP sizes to waste less space
Stop the application and copy the LV to the new VG with
# cplv -v ssdvg -y ssdlv <lvname>
Or backup/restore/copy the data to a new file system
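For example, to move one hot LV onto a new SSD array using the first approach; every name here (datavg, datalv01, hdisk1, hdisk3) is hypothetical:
# extendvg datavg hdisk3
Adds the SSD array hdisk to the existing volume group
# migratepv -l datalv01 hdisk1 hdisk3
Moves datalv01's partitions from hdisk1 to the SSD while the LV stays online
# lspv -l hdisk3
Confirms that datalv01 now resides on the SSD hdisk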



Reference material

April 24 HW Deep Dive (Patrick O’Rourke and Mark Olsen)
https://w3-03.sso.ibm.com/sales/support/ShowDoc.wss?docid=M118105V51587O66&infotype=SK&infosubtype=W0&node=&appName=popular
DS8000 SSD redpiece
http://www.redbooks.ibm.com/redpieces/abstracts/redp4522.html?Open
DS8000 2/10 announcement materials on Systems Sales
https://w3-03.sso.ibm.com/sales/support/ShowDoc.wss?docid=J177640U44767L57&infotype=SK&infosubtype=N0&node=brands,B5000&ftext=Solid%20state%20disk&sort=recommended&showDetails=true&hitsize=25&offset=0&campaign=
White paper examining SSD value and implementation with AIX, SAP and DB2
ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/pow03025usen/POW03025USEN.PDF
Whitepaper on SSD
ftp://submit.boulder.ibm.com/sales/ssi/sa/wh/n/tsw03044usen/TSW03044USEN_HR.PDF
Whitepaper: SAP with SSD/DS8000s on z
https://w3-03.sso.ibm.com/sales/support/ShowDoc.wss?docid=ZSP03162USEN&infotype=SA&infosubtype=PS&node=&ftext=solid%20state%20disk&showDetails=true&sort=date&hitsize=50&offset=0
SSD movie by Nigel Griffiths
https://www.ibm.com/developerworks/wikis/display/WikiPtype/Movies



Reference material

Installing and configuring SSDs
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphal/iphalssdconfig.htm&resultof=%22%53%53%44%22%20&searchQuery=%53%53%44&searchRank=%31&pageDepth=%30
Considerations for Solid-State Drives (SSD)
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/arebj/arebjsolidstatedrives.htm&resultof=%22%53%53%44%22%20&searchQuery=%53%53%44&searchRank=%30&pageDepth=%30
Performance Impacts of NAND Flash SSDs Upon IBM Power System Servers
To be published
SSD Wiki
https://www.ibm.com/developerworks/wikis/display/WikiPtype/Solid+State+Drives



Notes on benchmarks and values

The IBM benchmark results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark consortium or benchmark vendor.

IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.

All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX Version 4.3, AIX 5L or AIX 6 were used. All other systems used previous versions of AIX. The SPEC CPU2006, SPEC2000, LINPACK, and Technical Computing benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto's BLAS Library for Linux were also used in some benchmarks.

For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.

TPC http://www.tpc.org
SPEC http://www.spec.org
LINPACK http://www.netlib.org/benchmark/performance.pdf
Pro/E http://www.proe.com
GPC http://www.spec.org/gpc
NotesBench http://www.notesbench.org
VolanoMark http://www.volano.com
STREAM http://www.cs.virginia.edu/stream/
SAP http://www.sap.com/benchmark/
Oracle Applications http://www.oracle.com/apps_benchmark/
PeopleSoft - To get information on PeopleSoft benchmarks, contact PeopleSoft directly
Siebel http://www.siebel.com/crm/performance_benchmark/index.shtm
Baan http://www.ssaglobal.com
Microsoft Exchange http://www.microsoft.com/exchange/evaluation/performance/default.asp
Veritest http://www.veritest.com/clients/reports
Fluent http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
Ideas International http://www.ideasinternational.com/benchmark/bench.html
Storage Performance Council http://www.storageperformance.org/results



Notes on performance estimates

All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM. Buyers should consult other sources of information, including system benchmarks and application sizing guides, to evaluate the performance of a system they are considering buying.
