HP BladeSystem Family - Critical Facilities Round Table

The Case for BladeSystem Servers
November 2006

Ken Baker
BladeSystem Infrastructure Technologist
mrblade@hp.com

© 2006 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.


Proprietary non-disclosure reminder

• The information contained in this presentation is proprietary to Hewlett-Packard Company and is offered in confidence, subject to the terms and conditions of a Non-Disclosure Agreement.
• HP makes no warranties regarding the accuracy of this information. HP does not warrant or represent that it will introduce any product to which the information relates. It is presented for evaluation by the recipient and to assist HP in defining product direction.


Hardware Density Directions

• Trending up
− Some relief in the near term
− Long-range forecast (+5 yrs) still shows significant increases


Power Density and its Impact

• Power density is increasing on average 15-20% per year within the datacenter (a compounding example follows this list)
− Individual server power density
− Recent IT refresh activity
− Server sprawl
• Methods of measuring loads within the DC are outdated
− Watts per square foot or meter are no longer useful in deciding server deployment strategies
• Increasing power density drives:
− Server count down in static environments
− Additional investments in infrastructure for dynamic environments
• Key to success is balancing infrastructure investments with IT goals
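A quick compounding check, using the 15% per year figure above, shows why this matters over a typical facility planning horizon: growth at that rate roughly doubles power density in five years.

\[
P_{5} = P_{0}\,(1.15)^{5} \approx 2.0\,P_{0}
\]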


Processors

• Today and near future
− 55 W DC to 135 W DC
• 2007 and beyond
− 40 W DC to ~140 W DC
• Power consumption rises will ease over the next two years
− More choices to fit performance needs into a supportable power envelope at the datacenter level
• Power management technologies will dramatically improve
− Dynamically control consumption based upon workload demands, rather than system inadequacies


Intel/AMD Processor Power Consumption

[Chart: processor TDP (watts) by year, 2003-2007; front-side bus moving from 533 MHz to 800 MHz]
• Prestonia (512K L2, 130 nm, 2.8 GHz)
• Prestonia (2M L3, 130 nm, 3.2 GHz): Tcase 71 °C, TDP 94 W
• Nocona (1M, 90 nm, 3.6 GHz): Tcase 71 °C, TDP 103 W
• Paxville (DP dual-core, 65 nm): Tcase ?, TDP 130 W
• Dempsey (DP dual-core, 65 nm): Tcase ?, TDP 130 W
• Woodcrest (4M, "mobile dual", 65 nm): TDP 70 W
• Clovertown (quad-core, 65 nm): TDP 130 W and TDP 55 W parts
• AMD dual-core F-series = 95 W DC


Long Term Server Power Density

[Chart: Watts/U (roughly 350 to 800 W/U) versus year, 2003-2017, showing manufacturing and customer-use lifecycles for the Prestonia (94 W), Nocona (104 W), Dempsey (135 W) and Woodcrest (70 W) generations]
• Typical server manufacturing lifecycle is 2 years
• Customer production lifecycle averages 3-5 years
• Technology refreshes result in gradual power density increases of 15% per year


Modern Datacenter Design

• Raised floor, forced-air cooling through perforated panels
• Power and network wiring may be under the floor or overhead
• Designed for 5-10 year lifecycles
• A significant percentage of data center costs go to power and cooling
• Cooling efficiencies average between 40-50%


Datacenter Inefficiencies

• Unmanaged openings
− Cable openings
− Perforated tiles in the wrong place
− Wrong type of tile used
• Cooling efficiency gains of up to 15% can be achieved by managing openings


Improving Efficiencies Further

• Ceiling return-air plenum
• Critically placed return grilles
• Rack blanking panels
• Together these add another 15% improvement in efficiency


Addressing High Density Zones

• Down-flow cooling
− Efficient
− Cost effective
− Tuned for spot loads
− Dedicated resource for high-density areas
• Examples
− Liebert XDV and XDO
− Ducted systems


Informational Resources

• Must-have documentation
− Uptime Institute whitepapers
  • 2005-2010 Heat Density Trends in Data Processing, Computer Systems, and Telecommunications
  • Industry Standard Tier Classifications Define Site Infrastructure Performance
  • Dollars per kW plus Dollars per Square Foot Are a Better Data Center Cost Model than Dollars per Square Foot Alone
  • Reducing Bypass Airflow Is Essential for Eliminating Computer Room Hot Spots
− HP whitepapers
  • Optimizing data centers for high-density computing, technology brief, 2nd edition
− ASHRAE datacenter guidelines
  • Thermal Guidelines for Data Processing Environments
  • High Density Cooling of Data Centers and Telecom Facilities, Parts 1 and 2
  • Datacom Equipment Power Trends and Cooling Applications


Problem: Limited power budget and growing power demand

More performance and density draws more power, generates more heat, and requires more cooling, which eventually impacts performance, reliability, and cost.


It's a racked, stacked and wired world
The root cause of datacenter pain

The functionality of today's datacenters is constrained by the form of their building blocks and the processes required to manage them:
• Inflexible: static and hardwired
• Manually coordinated: change requires too many people and steps
• Over-provisioned: wasting power, cooling, space, people and money
• Managed 1 by 1: processes are unique, with unique tools and inconsistency
• Expensive: more expensive to own than to build

Because of conventional IT's limited form and processes, the potential to improve operational efficiency, cost and flexibility is limited.


The HP BladeSystem approach to simplify infrastructure

Consolidate, virtualize and automate across servers, storage, connectivity (LAN and SAN), power & cooling, facilities, and policy and task management:
• Modularize and integrate components
• Surround with intelligence
• Manage as one
• Create logical, abstracted connections to LAN/SAN
• Pool and share server, storage, network, and power
• Simplify routine tasks and processes to save time
• Keep control

Benefits: reduce the time and cost to buy, build and maintain; greater resource efficiency and flexibility; free IT resources for revenue-bearing projects.


The Bladed World

A time-smart, change-ready and cost-savvy system to give you the greatest control, most flexibility and best savings for the business.
• Provisioned JIT: pre-provisioned and wired once. Ready for change.
• Automated coordination: domains and people are isolated from the upheavals of change.
• Virtual: devices and connections managed as pools of resources.
• Lights-out, '1 to n' management: group management; processes are reduced and streamlined.
• Most efficient: less expensive to own and buy than conventional IT.


c7000 Enclosure
Front View

• 10U enclosure holding 8-16 blades
• Server blades: 2x the features, 2x the density
• Storage blades: a new paradigm for "bladed" storage solutions
• Onboard Administrator with HP Insight Display: simple setup delivered out of the box
• Integrated power: simplified configuration and greater efficiency, with the same flexibility, capacity and redundancy


c7000 Enclosure
Rear View

• Active Cool fans: adaptive flow for maximum power efficiency, air movement & acoustics
• Interconnect bays: 8 bays; up to 4 redundant I/O fabrics; up to 94% reduction in cables; Ethernet, Fibre Channel, iSCSI, SAS, IB
• Onboard Administrator: remote administration view; robust, multi-enclosure control
• PARSEC architecture: parallel, redundant and scalable cooling and airflow design
• Power management: choice of single-phase or 3-phase enclosures and N+N or N+1 redundancy; best performance per watt


Processor Naming Decoder - AMD

AMD-based processors

Processor Number | Processor Type | Speed | Cores | Front Side Bus | Wattage | Cache | Where used
4P Platforms
8220 | Opteron MP | 2.8 GHz | Dual | 1 GHz | 68W, 95W | 2M | DL585, BL45p, BL685c
8218 | Opteron MP | 2.6 GHz | Dual | 1 GHz | 68W, 95W | 2M | DL585, BL45p, BL685c
8216 | Opteron MP | 2.4 GHz | Dual | 1 GHz | 68W, 95W | 2M | DL585, BL45p, BL685c
8214 | Opteron MP | 2.2 GHz | Dual | 1 GHz | 68W, 95W | 2M | DL585, BL45p, BL685c
8212 | Opteron MP | 2.0 GHz | Dual | 1 GHz | 68W, 95W | 2M | DL585, BL45p, BL685c
880 | Opteron MP | 2.4 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | DL585, BL45p
875 | Opteron MP | 2.2 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | DL585, BL45p
870 | Opteron MP | 2.0 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | DL585, BL45p
865 | Opteron MP | 1.8 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | DL585, BL45p
856 | Opteron MP | 3.0 GHz | Single | 1 GHz | - | 1M | DL585, BL45p
854 | Opteron MP | 2.8 GHz | Single | 1 GHz | 68W | 1M | DL585, BL45p
852 | Opteron MP | 2.6 GHz | Single | 1 GHz | 68W, 95W | 1M | DL585, BL45p
850 | Opteron MP | 2.4 GHz | Single | 1 GHz | 95W | 1M | DL585
848 | Opteron MP | 2.2 GHz | Single | 800 MHz | 95W | 1M | DL585
846 | Opteron MP | 2.0 GHz | Single | 800 MHz | 95W | 1M | DL585
844 | Opteron MP | 1.8 GHz | Single | 800 MHz | 95W | 1M | DL585
842 | Opteron MP | 1.6 GHz | Single | 800 MHz | 95W | 1M | DL585
2P Platforms
2220 | Opteron DP | 2.8 GHz | Dual | 1 GHz | 68W, 95W | 2M | BL25p, DL365, DL385, DL145, BL465c
2218 | Opteron DP | 2.6 GHz | Dual | 1 GHz | 68W, 95W | 2M | BL25p, DL365, DL385, DL145, BL465c
2216 | Opteron DP | 2.4 GHz | Dual | 1 GHz | 68W, 95W | 2M | BL25p, DL365, DL385, DL145, BL465c
2214 | Opteron DP | 2.2 GHz | Dual | 1 GHz | 68W, 95W | 2M | BL25p, DL365, DL385, DL145, BL465c
2212 | Opteron DP | 2.0 GHz | Dual | 1 GHz | 68W, 95W | 2M | BL25p, DL365, DL385, DL145, BL465c
2210 | Opteron DP | 1.8 GHz | Dual | 1 GHz | 68W, 95W | 2M | BL25p, DL365, DL385, DL145, BL465c
285 | Opteron DP | 2.6 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | BL25p, DL385
280 | Opteron DP | 2.4 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | BL25p, BL35p, DL145, DL385
275 | Opteron DP | 2.2 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | BL25p, BL35p, DL145, DL385
270 | Opteron DP | 2.0 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | BL25p, BL35p, DL145, DL385
265 | Opteron DP | 1.8 GHz | Dual | 1 GHz | 68W, 85W, 95W | 2M | BL25p, BL35p, DL145, DL385
256 | Opteron DP | 3.0 GHz | Single | 1 GHz | - | 1M | BL25p, BL35p, DL145
254 | Opteron DP | 2.8 GHz | Single | 1 GHz | 68W | 1M | BL25p, DL145, DL385
252 | Opteron DP | 2.6 GHz | Single | 1 GHz | 68W, 95W | 1M | BL25p, DL145, DL385
250 | Opteron DP | 2.4 GHz | Single | 1 GHz | 95W | 1M | BL25p, BL35p, DL145, DL385
248 | Opteron DP | 2.2 GHz | Single | 800 MHz | 95W | 1M | BL35p, DL145
246 | Opteron DP | 2.0 GHz | Single | 800 MHz | 95W | 1M | BL35p, DL145
244 | Opteron DP | 1.8 GHz | Single | 800 MHz | 95W | 1M | BL35p, DL145
242 | Opteron DP | 1.6 GHz | Single | 800 MHz | 95W | 1M | BL35p, DL145


Processor Naming Decoder - Intel

Intel-based processors

Processor Number | Processor Type | Speed | Cores | Front Side Bus (MHz) | Wattage | Cache | Where used
4P Platforms
7140M | Tulsa | 3.40 GHz | Dual | 800 | 150W | 2x8MB | DL580 & ML570
7130M | Tulsa | 3.20 GHz | Dual | 800 | 150W | 2x4MB | DL580 & ML570
7120M | Tulsa | 3.00 GHz | Dual | 800 | 150W | 2x2MB | DL580 & ML570
7110M | Tulsa | 2.60 GHz | Dual | 800 | 150W | 2x2MB | DL580 & ML570
7041 | Paxville MP | 3.00 GHz | Dual | 800 | 145W | 2x2MB | DL580 & ML570
7040 | Paxville MP | 3.00 GHz | Dual | 667 | 145W | 2x2MB | DL580 & ML570
7030 | Paxville MP | 2.80 GHz | Dual | 800 | 145W | 2x1MB | DL580 & ML570
7020 | Paxville MP | 2.66 GHz | Dual | 667 | 145W | 2x1MB | DL580 & ML570
2P Platforms
5160 | Woodcrest | 3.00 GHz | Dual | 1333 | 65W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5150 | Woodcrest | 2.66 GHz | Dual | 1333 | 65W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5148 | Woodcrest | 2.33 GHz | Dual | 1333 | 65W | 2x2MB | DL380, DL360
5140 | Woodcrest | 2.33 GHz | Dual | 1333 | 65W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5130 | Woodcrest | 2.00 GHz | Dual | 1333 | 65W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5120 | Woodcrest | 1.86 GHz | Dual | 1066 | 65W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5110 | Woodcrest | 1.60 GHz | Dual | 1066 | 65W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5080 | Dempsey | 3.73 GHz | Dual | 1066 | 130W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5063 | Dempsey | 3.20 GHz (MV) | Dual | 1066 | 130W | 2x2MB | BL20
5060 | Dempsey | 3.20 GHz | Dual | 1066 | 130W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
5050 | Dempsey | 3.00 GHz | Dual | 667 | 130W | 2x2MB | DL380, ML370, DL360, ML350, BL20, ML150
1P Platforms
X6800 | Conroe XE | 2.93 GHz | Dual | 800 | 130W | 4MB | Not currently used
E6700 / 3070 | Conroe | 2.67 GHz | Dual | 800 | 130W | 4MB | Not currently used
E6600 / 3060 | Conroe | 2.40 GHz | Dual | 800 | 130W | 4MB | ML310, ML110, DL320
E6400 / 3050 | Conroe | 2.13 GHz | Dual | 800 | 130W | 2MB | ML310, ML110, DL320
E6300 / 3040 | Conroe | 1.86 GHz | Dual | 800 | 130W | 2MB | ML310, ML110, DL320
p960 | Presler | 3.64 GHz | Dual | 800 | 130W | 2x2MB | Not currently used
p950 | Presler | 3.40 GHz | Dual | 800 | 130W | 2x2MB | Not currently used
p940 | Presler | 3.20 GHz | Dual | 800 | 130W | 2x2MB | ML310, ML110, DL320
p930 | Presler | 3.00 GHz | Dual | 800 | 130W | 2x2MB | ML310, ML110, DL320
p925 | Presler | 3.00 GHz | Dual | 800 | 130W | 2x2MB | ML310, ML110
p920 | Presler | 2.80 GHz | Dual | 800 | 130W | 2x2MB | ML310, ML110, DL320
p915 | Presler | 2.80 GHz | Dual | 800 | 130W | 2x2MB | Not currently used
p840 | Smithfield | 3.20 GHz | Dual | 800 | 130W | 2x1MB | ML310, ML110, DL320
p830 | Smithfield | 3.00 GHz | Dual | 800 | 130W | 2x1MB | ML310, ML110, DL320
p820 | Smithfield | 2.80 GHz | Dual | 800 | 130W | 2x1MB | ML310, ML110, DL320
cm631 | Cedar Mill | 3.00 GHz | Single | 800 | 84W | 2MB | ML310, ML110
p670 | Prescott T | 3.80 GHz | Single | 800 | 115W | 2MB | Not currently used
p660 | Prescott T | 3.60 GHz | Single | 800 | 115W | 2MB | Not currently used
p650 | Prescott T | 3.40 GHz | Single | 800 | 115W | 2MB | ML310, ML110
p640 | Prescott T | 3.20 GHz | Single | 800 | 115W | 2MB | ML310, ML110
p630 | Prescott T | 3.00 GHz | Single | 800 | 115W | 2MB | ML310, ML110
c341 | Celeron | 2.93 GHz | Single | 400 | 84W | 256K | ML310, DL320
c331 | Celeron | 2.66 GHz | Single | 400 | 84W | 256K | ML310


The Impact of New Memory Styles

• Fully Buffered DIMMs (FBDIMMs)
− Necessary only for Intel platforms today
− AMD will move to FBDIMM by 2008
• Serialized I/O to save pins
• Increased power consumption
− Rises from 5-6 watts to 12-14 watts per DIMM (see the worked example below)
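Taking midpoints of the per-DIMM ranges above and assuming, purely for illustration, a server populated with 8 FBDIMMs (the DIMM count is not from the slide), the memory subsystem alone adds roughly 60 W per server:

\[
8 \times (13\,\text{W} - 5.5\,\text{W}) = 8 \times 7.5\,\text{W} = 60\,\text{W}
\]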


Enclosure Power

• (1) Single-phase enclosure, available worldwide, for use with in-rack PDUs that accept C19-C20 power cords (C19, 16 A)
• (2) Three-phase enclosure with a pair of US/Japan power cords with NEMA L15-30P power connectors (NA/JPN L15-30P)
• (3) Three-phase enclosure with a pair of international power cords with IEC 309 16 A power connectors (Intl IEC 309 5-pin, 6h, 16 A)


Using 32A 3Ø PDU – Intl

• S332 - AF917A
• 22 kVA N+N redundant power
• 2 x 32 A three-phase connections
• 4 c-Class blade server enclosures
• Each power enclosure contains six 2250 W power supplies
• Four 8 kVA enclosures supported by 4 feeds (see the sanity check below)
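As a rough sanity check, the 22 kVA figure is consistent with a 32 A three-phase feed if we assume a nominal 400 V line-to-line international supply (the voltage is an assumption, not stated on the slide):

\[
S = \sqrt{3}\,V_{LL}\,I = \sqrt{3} \times 400\,\text{V} \times 32\,\text{A} \approx 22.2\,\text{kVA}
\]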


Power Supply Conversion Efficiency

• Power supply conversion efficiency is a function of load and cost
− Higher load = better efficiency
  • 0-50% load = roughly 60% efficient
  • 50-80% load = 89-92% efficient
− Higher cost = better efficiency
  • Above 92% efficiency requires roughly a 50% increase in cost
• Pushing native power supply efficiency higher is neither cost-effective nor necessarily possible
• The key to better thermal efficiency in power subsystems lies in how the power supplies are loaded (a worked comparison follows below)
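To see why loading matters more than native efficiency, consider a hypothetical 3000 W enclosure load served by 2250 W supplies, using the efficiency bands quoted above (the load figure is illustrative, not from the slide):

\[
\text{6 active supplies: } \tfrac{3000}{6 \times 2250} \approx 22\%\ \text{load each} \Rightarrow \sim\!60\%\ \text{efficient} \Rightarrow \tfrac{3000}{0.60} = 5000\ \text{W drawn}
\]
\[
\text{2 active supplies: } \tfrac{3000}{2 \times 2250} \approx 67\%\ \text{load each} \Rightarrow \sim\!90\%\ \text{efficient} \Rightarrow \tfrac{3000}{0.90} \approx 3330\ \text{W drawn}
\]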


Greater efficiency and cost savings with HP's BladeSystem

[Chart: power supply efficiency (0-100%) versus output load (0-100%), comparing a typical rack-server PSU curve (eight typical rack servers) against the blade PSU curve (eight BL20p blades); Dynamic Power Saver keeps the blade power supplies operating in their high-efficiency region across the load range.]


Power Supply Conversion Efficiency

• Rack mount vs. blades: a 32-server example
− DL360 G5 vs. BL460c, 2.33 GHz LV, 2 x 72 GB, 8 GB (1 GB DIMMs)
• DL360 G5 rack servers: 32 x = 7424 consumed watts @ 40% utilization
• BL460c blades: 32 x = 6664 consumed watts @ 40% utilization (per-server arithmetic below)
• This load level ensures Dynamic Power Saver will be on, saving energy
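Per-server arithmetic for the comparison above:

\[
\frac{7424\,\text{W}}{32} = 232\,\text{W per DL360 G5}, \qquad
\frac{6664\,\text{W}}{32} \approx 208\,\text{W per BL460c}, \qquad
\frac{7424 - 6664}{7424} \approx 10\%\ \text{savings}
\]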


Dynamic Power Saver Operation

• In a switch-mode power supply, the input stages are energized as soon as power is applied to the circuitry; reservoir capacitors account for the power supply inrush current.
• Dynamic Power Saver activates additional supply outputs when the load on the previously activated power supplies rises above the full-load point for the power supply's rating (a simplified sketch of this staging logic follows below).
• Because all of the input circuitry remains energized, no additional inrush occurs. Further, the low rate of switching eliminates any concern for system reliability.
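A minimal Python sketch of the staging behaviour described above; the 90% activation threshold and the simple redundancy handling are assumptions for illustration, not HP firmware logic:

```python
def supplies_to_activate(load_watts, rating_watts=2250, redundancy=1,
                         activate_threshold=0.9):
    """Return how many supply outputs to enable for a given enclosure load.

    Outputs are added only when the already-active supplies would run above
    `activate_threshold` of their rating, so the active supplies stay in
    their high-efficiency band. `redundancy` extra supplies are kept enabled
    on top of that (assumption: simple N+1-style handling).
    """
    active = 1
    while load_watts > active * rating_watts * activate_threshold:
        active += 1
    return active + redundancy

# Example: a 3.8 kW enclosure load needs 2 supplies for capacity, plus 1 redundant.
print(supplies_to_activate(3800))  # -> 3
```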


Dynamic Power Saver
3+3 Power Supply Efficiency

[Chart: power supply efficiency versus power supply output power (0-6500 W), comparing 2+2 not managed, 3+3 not managed, and 3+3 managed configurations against a typical power supply curve.]
• Dynamic Power Saver stages capacity: 1+1 up to 2250 W, 2+2 up to 4500 W, 3+3 up to 6750 W
• High efficiency maintained from 1150 W to 6750 W


ProLiant Power Regulator: advanced power management

• Monitor and manage individual servers and groups of servers by physical or logical location (power domain)
• Monitor vital power information
− power usage in watts
− BTU/hr output
− ambient air temperature
• Policy-based management (a hypothetical policy sketch follows below)
− Power cap policy: set a maximum BTU/hr or wattage threshold (capped on a server-by-server basis)
− Temporary conservation policy: set a time of day to drop selected lower-priority servers into a lower power state
− Severe facilities issue: drop lower-priority servers into a lower power state when a severe facilities issue occurs
− Energy efficiency policy: set all servers in a power domain to dynamic power regulation
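The four policy types above could be captured as simple declarative entries; the structure and field names below are purely a hypothetical illustration of the data involved, not the product's actual syntax:

```python
# Hypothetical policy records mirroring the four policy types on the slide.
power_policies = [
    {"type": "power_cap", "scope": "per_server", "limit_watts": 350},
    {"type": "temporary_conservation", "window": "22:00-06:00",
     "action": "drop_low_priority_to_lower_power_state"},
    {"type": "severe_facilities_issue",
     "action": "drop_low_priority_to_lower_power_state"},
    {"type": "energy_efficiency", "scope": "power_domain",
     "action": "set_dynamic_power_regulation"},
]
```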


Power Regulator for ProLiant
Improve system energy efficiency

Give CPUs full power for applications when they need it. Save when they don't.
• Server-level, policy-based power management
− Dynamic & static control of processor power states
− Unique OS independence
− Scalable iLO scripting
• Benefits
− Save up to 18% on power & cooling costs with no performance loss
− Increase facility compute capacity


Power Regulator for ProLiant Overview

• Uses newly exposed Intel and AMD CPU performance states (P-states)
• Modes
− Static Low Power: Pmin always
− Dynamic Power Savings: Pmin-Pmax switching based on application load as monitored by the ROM (a sketch of this switching logic follows below)
− Disabled: Pmax, or OS-based power management
• Application load = % of total CPU time spent processing application data

P-states for a Xeon 3.6 GHz/800 MHz CPU:
P-state | CPU frequency | Approx. CPU voltage
Pmax | 3.6 GHz | 1.4 V
Pmin | 2.8 GHz | 1.2 V

[Diagram: the ROM monitors application load and selects the P-state; Static Low Power holds the CPU at Pmin (minimum power), while Dynamic Power Savings moves between Pmin and Pmax (maximum power) as load changes.]
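A minimal Python sketch of the Dynamic Power Savings mode described above; the hysteresis thresholds and the sampling approach are assumptions for illustration, not the actual ROM algorithm:

```python
# P-state table from the slide (Xeon 3.6 GHz/800 MHz example).
P_STATES = {"Pmin": {"freq_ghz": 2.8, "volts": 1.2},
            "Pmax": {"freq_ghz": 3.6, "volts": 1.4}}

def select_p_state(app_load, current="Pmin", up=0.75, down=0.40):
    """Pick a P-state from application load (fraction of CPU time spent on
    application work). The up/down thresholds add hysteresis so the CPU does
    not thrash between states; both values are illustrative assumptions."""
    if app_load >= up:
        return "Pmax"
    if app_load <= down:
        return "Pmin"
    return current

# Example: a burst of work steps the CPU up to Pmax, idling steps it back down.
state = "Pmin"
for load in (0.20, 0.55, 0.85, 0.30):
    state = select_p_state(load, state)
    print(f"load={load:.2f} -> {state} ({P_STATES[state]['freq_ghz']} GHz)")
```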


Power Regulator for ProLiant
HP Performance Bench Tests

• In constant Power Saver Mode, a DL380 G4 experienced
− no performance loss up to 80% CPU utilization, and
− 18% system power savings
• Most customer systems operate well below 80% CPU utilization

[Chart: Impact of Power Savings Mode on system power and performance, DL380 G4 - power reduction (%) and performance loss (%) versus CPU utilization (0-100%), with a minimum-power zone at low utilization.]

Performance and power impact is dependent on configuration, application and load.


ProLiant Power Management
ROM or Operating System Flexibility

ROM power management (no OS dependency; foundation for value-add functionality):
• Intel models: Power Regulator - dynamic power (selected models); static low power (all models except low bin)
• AMD models: Power Regulator - static low power (all models except low bin)

OS power management (heterogeneous deployment):
• Intel models: Demand Based Switching - dynamic power (selected models)
• AMD models: PowerNow - dynamic power (selected models)


New Thermal Logic cooling technologies

Active Cool fans (20 patents pending)
• Control algorithm to optimize for any configuration based on customer parameters of:
− Air flow
− Acoustics
− Power
− Performance

PARSEC architecture
• Parallel, redundant and scalable airflow design
− All blades in parallel
− Cooled by all fans in parallel
− Air distribution manifold
− Back-flow preventers on all fans
− Shut-off doors on all servers


c7000 PARSEC Architecture

• Hybrid model
− Advantages of both local and central cooling
− Blades are divided into 4 zones
− Fans in each zone provide
  • cooling for the blades in that zone
  • plus redundant cooling for the other blades
• Centralized cooling done right

[Diagram: ten fans (Fan 1-Fan 10) arranged across four blade zones, each zone holding 2 full-height or 4 half-height blades, backed by six power supplies (PS 1-PS 6).]


Existing Fan Technology

• DL360/BL20p-style fan
− Develops high pressure
− Poor at generating high CFM
− Need lots of fans
• IBM-style blower
− Good at generating CFM
− Moderate at developing pressure
− Sensitive to inlet geometry
− High power requirement


Fans

• Fan laws (written out after this list)
− Volume of air flow varies as (fan diameter)³ and as rpm
− Pressure developed varies as (fan diameter)² and as (rpm)²
− Power absorbed by the fan varies as (fan diameter)⁵ and as (rpm)³
− Sound pressure varies as (air speed)⁶
• Fan concepts
− Pressure
− Airflow
• Significant tradeoffs have to be made
− Larger fans move more air but take more power
− Smaller fans need higher rpm to move the same amount of air
− Higher rpm means higher noise for a given size fan
− There are physical limits on how fast a fan can go
− More fans = more power and more cost
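The fan laws above in compact form, with d the fan diameter and n the rotational speed (Q = volumetric flow, Δp = pressure, W = shaft power):

\[
Q \propto d^{3} n, \qquad \Delta p \propto d^{2} n^{2}, \qquad W \propto d^{5} n^{3}
\]

So, at a fixed diameter, doubling the airflow requires doubling the speed, which costs roughly \(2^{3} = 8\) times the fan power.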


What's needed

• A new fan technology
− High CFM
− High pressure
− Best-in-class acoustics
− Best-in-class power consumption
• A new cooling architecture
− Think beyond just a server blade: a bladed system
− Combine the best of both worlds
  • Centralized cooling that scales as you grow
  • "Centralized cooling done right"


HP Active Cool Fan

• Custom fan design delivering better-than-industry performance
• High air flow
• High pressure
• Best-in-class reliability
• Superior acoustics across the entire operating range
• Cool facts
− 4 Active Cool fans could cool an IBM BladeCenter with N+1 redundancy
− 1 Active Cool fan could cool 5 DL360 G4s


Fan Qty versus Power

[Chart: fan power versus CFM for 6-, 8- and 10-fan configurations.]

Acoustics: 6 fans are 3.7 dB louder than 8 fans. For sounds with similar frequency content, most people consider a 3 dB change in sound pressure a noticeable difference in sound. (A worked decibel conversion follows below.)
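For reference, interpreting the 3.7 dB figure as a sound level difference in the usual decibel sense gives the corresponding acoustic power ratio:

\[
\Delta L = 10 \log_{10}\!\frac{W_{6\,\text{fans}}}{W_{8\,\text{fans}}} = 3.7\,\text{dB}
\;\Rightarrow\;
\frac{W_{6\,\text{fans}}}{W_{8\,\text{fans}}} = 10^{0.37} \approx 2.3
\]

That is, the 6-fan configuration radiates roughly 2.3 times the acoustic power of the 8-fan configuration, which is why the difference is clearly audible.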


Enclosure Management

• Enclosure inflow and outflow temperatures
• Rack-level BTU/hour
• Actual power usage
• Maximum power available


HP Onboard Administrator
Remote BladeSystem Management

The intelligence of the c7000 BladeSystem:
• Real-time power and cooling control
• Device health and configuration (enclosure, blade, switch)
• Integrated iLO 2 with each server blade, providing single sign-on for easy access


HP Insight Display
Local BladeSystem Management

Simple and easy to learn:
• Simplifies installation and setup
• Visual indicators of faults
• Visual information on ambient inflow temperatures
• Monitors device status
• Graphical instructions on fixing configuration issues
• Secure local management


Thermal Logic Summarized

• Instant thermal monitoring
− Real-time heat, power and cooling data
• Active Cool fans (20 patents pending)
− Control algorithm to optimize airflow, acoustics, power and performance
• Dynamic Power Saver
− Power load shifting for maximum efficiency and reliability
• Power Regulator
− ROM-controlled speed stepping
• Power workload balancing
− Saves power while maximizing performance per watt
• Pooled power
− N+N power redundancy


Cooling Strategies: How is the market served today?
No easy solution for customer pain points

Today's point solutions
• Partitioning
− Liquid- or refrigerant-based solutions; air containment and baffling (HP, Liebert, Verari)
• "Encapsulated best practices" like APC InfrastruXure
• Element efficiency (Intel/AMD CPU performance per watt, HP c-Class Thermal Logic)
• Facilities plays outside the data center (air-side and water-side optimizers)
• Computational Fluid Dynamics (CFD) modeling (HP Static Smart Cooling)

Gap in the current industry approach
• No one has yet married provisioning logic with thermodynamics, end-to-end from servers to data center management
− HP is working to convert energy into a variable cost and tie it into policy-based logic
− HP introduced server resource provisioning tools in the 1990s
− HP has researched heat transfer for the last 20 years
− HP is the first to bridge facilities and IT for Adaptive Infrastructure


Customers are over-provisioning cooling capacity
HP's holistic approach bridges the gap between facilities and IT

• Cooling represents upwards of 60-70% of data center power spend
• Approximately 85% of the world's data centers are over-provisioned by more than double
• The IT industry focuses on server and storage efficiency
• The facilities industry focuses on actuator/generator efficiency
• HP focuses on the ensemble of IT + facilities together
• US$10B data center cooling spend in 2005

[Diagram: data center power split across servers and storage, AC power conversion, and cooling.]

Sources: Preliminary assessment from Uptime Institute; IDC Data Center of the Future US server power spend for 2005 as a baseline ($6B); applied a cooling factor of 1; applied a 0.6 multiplier to US data for the WW amount; Belady, C., Malone, C., "Data Center Power Projection to 2014", 2006 ITHERM, San Diego, CA (June 2006).


Industry needs to benchmark data centers
Efficient components are necessary, but not sufficient

HP introduces "Power Usage Effectiveness" (PUE) for the data center
• Look at the ratio of building load to IT load as a measure of efficiency (a worked example follows below)

PUE = Building Load / IT Load

Industry numbers suggest:
PUE = 1.6 | Ideal | 0%
PUE = 2.0 | Target | 5%
PUE = 2.4 | Average | 10%
PUE = 3.0+ | Poor | 85%

• Building load: demand from the grid - power (switch gear, UPS, battery backup, and so on) and cooling (chillers, CRACs, and so on)
• IT load: demand from servers, storage, telco equipment, and so on

Source: Belady, C., Malone, C., "Data Center Power Projection to 2014", 2006 ITHERM, San Diego, CA (June 2006)
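A worked example with illustrative numbers (not from the slide): a facility drawing 1000 kW from the grid to deliver 500 kW to IT equipment has

\[
\text{PUE} = \frac{\text{Building load}}{\text{IT load}} = \frac{1000\,\text{kW}}{500\,\text{kW}} = 2.0
\]

so for every watt that reaches servers, storage and network gear, another watt is spent on power conversion and cooling; such a facility would sit at the "Target" line in the table above.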


Robust Solution Requires a Holistic Approach

• Complex problem
• Multi-layered challenge
• Interdependencies - standards-based approach
• Energy - a finite planet resource

[Diagram: a 2nd-law-based (exergy) tool from chip scale to data center scale. Energy flow and temperature are tracked from chip to system to data center, accounting for non-uniform power, flow irreversibility, thermodynamic irreversibility and other non-ideal effects; exergy is the available work relative to the ambient ground state.]

Source: HP Labs and UC Berkeley


It's now one conversation

An energy-efficient data center (lowest TCO) brings together:
• Cooling solutions
• Infrastructure products
• Chip design
• System design
• Data center services
• Server & storage consolidation
• Industry standards
• Virtualization & automation technologies
• Power & cooling management
• Business continuity & availability


The Provisioning Dilemma

"If too much site infrastructure capacity is installed, those making the investment recommendations will be criticized for the resulting low site-equipment utilization and poor efficiency. If too little capacity is installed, a company's IT strategy may be constrained…"*

* Reprinted with permission of The Uptime Institute from a white paper titled Heat Density Trends in Data Processing, Computer Systems, and Telecommunications Equipment, Version 1.0.

HP Energy Provisioning Strategy
• Can we make facilities modular?
• Can we provision energy like IT?
• Can we make energy a variable cost?
• Can we tie that variable cost to business and application priorities?


Introducing HP Dynamic Smart Cooling

HP-unique innovation: over 1,000 patents in cooling
Provisioning energy for data center efficiency

• The industry's first intelligent cooling management system
− Pervasive thermal sensing grid down to the rack level
− HP intelligent management software delivers continuous, real-time Computational Fluid Dynamics (CFD)
− Adaptive control of variable-flow devices (VFDs) in Computer Room Air Conditioners (CRACs)
• Standard interfaces to air-conditioning and building management systems
• Easy to retrofit or to specify for new construction
• Agnostic to the IT equipment in the racks

"Dynamic Smart Cooling is the most remarkable development for data center critical support systems."
- Peter Gross, CEO and CTO, EYP Mission Critical Facilities Inc.


Only cool where and when you need it
Automated energy provisioning for cooling applications

DSC features:
• Pervasive sensing grid with intelligent, adaptive control of air conditioners
• A network of sensors deployed on racks feeds thermal information to management software in real time
• Management software continually allocates airflow into fluidic partitions corresponding to the highest-efficiency cooling zones, which cooperate in the event of environmental events
• Thermodynamic controller software continually optimizes the thermal environment through low-latency adjustments to CRAC fan speed and temperature set points (a conceptual sketch follows below)
• Incorporated into the overall data center management solution
• Automated provisioning: as customers add or remove equipment, DSC automatically reconfigures the fluidic partitions
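A conceptual Python sketch of the kind of control loop described above, where rack sensor readings drive CRAC fan speed and setpoint adjustments. Everything here - the names, the target temperature, the proportional gain and step sizes - is an illustrative assumption, not HP's actual controller:

```python
from statistics import mean

TARGET_INLET_C = 25.0  # assumed rack-inlet target temperature

def adjust_crac(rack_inlet_temps_c, fan_speed_pct, setpoint_c,
                gain=2.0, min_fan=30.0, max_fan=100.0):
    """One pass of a simple proportional controller for one cooling zone.

    rack_inlet_temps_c: readings from the sensors mounted on racks in the zone.
    Returns an updated (fan_speed_pct, setpoint_c) for the zone's CRAC unit.
    """
    error = mean(rack_inlet_temps_c) - TARGET_INLET_C
    # More airflow when the zone runs hot, less when it runs cold.
    fan_speed_pct = max(min_fan, min(max_fan, fan_speed_pct + gain * error))
    if error > 1.0:        # zone is hot: supply slightly colder air
        setpoint_c -= 0.5
    elif error < -1.0:     # zone is over-cooled: relax the supply setpoint
        setpoint_c += 0.5
    return fan_speed_pct, setpoint_c

# Example: a warm zone gets more airflow and a slightly lower supply setpoint.
print(adjust_crac([26.5, 27.0, 26.0], fan_speed_pct=50.0, setpoint_c=18.0))
```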


HP Data Center Solution Builder Program
Charting the path forward to Adaptive Infrastructure

• Partnering is in our DNA
• HP is creating a data center design partner ecosystem for industry leaders
− Open to architecture & engineering (A&E) firms, equipment manufacturers, mechanical contractors, utility companies, software companies, service providers and real estate specialists
− Accelerate and drive adoption of energy-efficient data center solutions
• Co-designing next-generation data centers with key customers & partners
− Around the world, in every industry vertical
− First customer was HP IT


Key takeaways

• Dedicated, holistic approach across the data center and delivery of the Adaptive Infrastructure
− A strong product and services portfolio to address customers' current and future power and cooling challenges
− Best positioned to leverage the "power" of the portfolio and provide unique power and cooling products, solutions and services available today
• Experts on hand to help customers evaluate their data center environment
− Data center service portfolio for assessment, design and ongoing life cycle support
• Today's announcement is another proof point of HP innovation in this space


Thank you!

Ken Baker
BladeSystem Infrastructure Technologist
mrblade@hp.com

© 2006 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
