
Data Center Technology: Physical Infrastructure

IT Trends Affecting New Technologies and Energy Efficiency Imperatives in the Data Center

Hisham Elzahhar
Regional Enterprise & System Manager, Schneider Electric IT business EMEA, Dubai


Keystrokes and Kilowatts

Electricity IN, Heat OUT


US Electrical Energy Sources, 2006 (Source: US EIA)

Coal: 50%
Natural Gas: 20%
Nuclear: 19%
Hydro-Electric: 7%
Petroleum: 2%
Other Renewables: 2%


Prime Electrical Source


WHICH infrastructure?

BUILDING infrastructure ("building systems"): HVAC, electrical system, fire suppression, lighting, security, BMS
DATA CENTER infrastructure: power, cooling, racks, management, lighting, fire suppression, physical security
IT infrastructure ("IT assets"): servers, storage, hypervisors, NMS
NETWORK infrastructure: switches, cabling, routers


Focus of this discussion: the DATA CENTER infrastructure layer.


Data center planning and operation is under increasing pressure

● Increasing availability expectations
● Rapid changes in IT technology
● Uncertain long-term plans for capacity or density
● Energy and service cost control pressure
● High-density blade server power/heat
● Dynamic power variation
● Regulatory requirements
● Server consolidation

In response, the world will need to change the way it designs, installs, operates, manages, and maintains data centers.


The increasing power density of data centers

Management challenge: HIGH DENSITY

Power density of IT devices is leveling off, but power density of data centers continued to increase from 2000 to 2009 due to "packing" of high-density devices into a smaller floor footprint.

kW per rack continues to increase, raising the need for management to keep things under control.


High density is stressing power and cooling systems

Management challenge: HIGH DENSITY

IT is getting boxed in by limitations of power and cooling infrastructure:

● High density increases the risk of unpredictable cooling
● Capacity is "tight" in some places, unused and unusable ("stranded") in others
● High density requires informed and efficient allocation of your expensive power/cooling resources
● High density increases the need to know where new devices can be "squeezed in" to available capacities


The Newest Challenge: EFFICIENCY

Efficiency goal: provide power and cooling in the amount needed, when needed, and where needed – but no more than what is required for redundancy and safety margins.

But we can't manage what we can't measure.


Datacenter Efficiency – DCiE

Power entering the data center splits along three paths: through the POWER system to the IT equipment, to the COOLING system, and to secondary support. The power and cooling systems together form the physical infrastructure.*

DCiE (Data Center infrastructure Efficiency) = (Power to IT / Power to data center) × 100%

*To simplify the analysis, subsystems consuming a small amount of power are not included in this discussion: cabling, switches, lights, physical security, generator, switchgear. See White Paper 113.
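As a worked illustration of the DCiE definition above (a minimal sketch; the example wattages are assumptions chosen to land on the industry baseline cited later in this deck):

```python
def dcie(power_to_it_kw: float, power_to_data_center_kw: float) -> float:
    """Data Center infrastructure Efficiency, as a percentage:
    DCiE = power delivered to the IT loads / total power entering
    the data center * 100%."""
    return 100.0 * power_to_it_kw / power_to_data_center_kw

# Example: 1,000 kW reaches the IT loads out of 2,128 kW drawn
# from the utility -> roughly the 47% industry baseline.
print(f"DCiE = {dcie(1000, 2128):.1f}%")   # DCiE = 47.0%
```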


Datacenter Efficiency

Data center physical infrastructure (the POWER and COOLING systems) surrounds and supports the IT load.


Power Chain Losses

A 1 MW data center (≈45 racks @ 10 kW) running at DCiE 47% accounts for, per MW per year:

4,930 barrels of oil
6,539 tons CO2
47 tons SO2
16 tons N2O
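Reading the slide's numbers together (a back-of-envelope sketch; interpreting the 1 MW as the facility's total utility draw is an assumption, but it is what makes the 45-rack figure consistent):

```python
# Assumption: the slide's "1 MW" is the total utility draw, so that
# 45 racks * 10 kW ~ 47% of 1 MW.
total_input_kw = 1000
dcie = 0.47

it_load_kw = total_input_kw * dcie            # ~470 kW -> ~45 racks @ 10 kW
overhead_kw = total_input_kw - it_load_kw     # ~530 kW lost in power/cooling
print(f"IT load: {it_load_kw:.0f} kW, overhead: {overhead_kw:.0f} kW")
```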


Inefficiencies Create Consumption

● Computing inefficiencies → more servers
● Server inefficiencies → more power and cooling
● Power and cooling inefficiencies → more power consumption

Inefficiencies drive both power consumption and material consumption.


Primary drivers of inefficiency

● Oversizing of power and cooling equipment
● Pushing cooling systems to cool densities higher than they were designed for
● Ineffective room layout
● Ineffective airflow patterns
● Redundancy (for availability)
● Inefficient power and cooling equipment
● Inefficient operating settings of cooling equipment
● Clogged air or water filters
● Disabled or malfunctioning cooling economizer modes
● Raised floor clogged with wires


Efficiency: key reference points

● More than 50% of the power going into a typical data center goes to the power and cooling systems – NOT to the IT loads
● The typical 1 MW (IT load) data center continuously wastes about 400 kW, or 2,000 tons of coal per year, due to poor design (DCiE = 50% instead of the best-practice 70%)
● Every kW saved in a data center saves about $1,000 per year
● Every kW saved in a data center reduces carbon dioxide emissions by 5 tons per year
● Every kW saved in a data center has a carbon reduction equivalent to eliminating about 1 car from the road
● A 1% improvement in data center infrastructure efficiency (DCiE) corresponds to approximately a 2% reduction in electrical bills

Reference: APC White Paper 66
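Two of these rules of thumb can be checked with quick arithmetic (a sketch; the electricity tariff is an assumption chosen to match the ~$1,000/kW-year figure, not a number from the slide):

```python
HOURS_PER_YEAR = 24 * 365   # 8,760 h

# "Every kW saved ... saves about $1,000 per year": holds at a tariff
# of roughly $0.12/kWh (assumed here).
price_per_kwh = 0.12
print(f"1 kW saved: ${HOURS_PER_YEAR * price_per_kwh:,.0f}/year")   # ~$1,051

# "1% DCiE improvement ~ 2% lower electrical bill": the bill scales as
# IT_load / DCiE, so near DCiE = 50% one percentage point is 1/50 = 2%.
it_kw = 1000
bill_50 = it_kw / 0.50      # 2,000 kW drawn
bill_51 = it_kw / 0.51      # ~1,961 kW drawn
print(f"Relative reduction: {1 - bill_51 / bill_50:.1%}")           # ~2.0%
```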


Power tools for The Efficient Enterprise™


Power tools – the "Four Cs"

1. Components – MODULAR and SCALABLE, with best-in-class EFFICIENCY
2. Close-coupled cooling – placement of cooling units near the heat source
3. Containment – thermal containment of airflow in high-density zones
4. Capacity management – instrumented intelligence to optimize use of power and cooling capacity


1. Components with the "right stuff"

● Efficient: best-in-class component EFFICIENCY
● Agile and scalable: MODULAR, SCALABLE component design, with both external and internal modularity


Problem: Underloading

Low loading = low efficiency. In a traditional data center, over half the power consumption of the power/cooling infrastructure is fixed and does not go down when the IT load goes down. Efficiency degrades as the IT load declines, and underloading is a primary contributor to inefficiency.

[Chart: data center efficiency (0–100%) vs. % IT load. Efficiency degrades sharply at low loads; the typical load range sits on the degraded part of the curve.]
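A minimal model reproduces the shape of that curve from the fixed-loss behavior described above (the loss coefficients are illustrative assumptions, not measured values):

```python
def infrastructure_efficiency(load_fraction: float,
                              fixed_loss: float = 0.25,
                              proportional_loss: float = 0.10) -> float:
    """Efficiency vs. IT load for a plant with a fixed no-load loss plus
    a loss proportional to the served load, both expressed as fractions
    of rated IT capacity."""
    it = load_fraction
    return it / (it + fixed_loss + proportional_loss * it)

for pct in (10, 30, 50, 100):
    eff = infrastructure_efficiency(pct / 100)
    print(f"{pct:>3}% IT load -> efficiency {eff:.0%}")   # 28%, 52%, 62%, 74%
```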


Solution: Right-sizing

Efficiency gain through modular, scalable buildout, which avoids oversizing and underloading.

[Chart: data center efficiency (0–100%) vs. % IT load for different power and cooling installation methods. A modular buildout keeps installed capacity matched to the load, staying on the efficient part of the curve.]
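Extending the toy model from the previous slide shows why right-sizing helps: fixed losses scale with the capacity you install, so installing only what the load needs keeps loading, and hence efficiency, high. All numbers remain illustrative assumptions:

```python
def efficiency(it_kw: float, installed_kw: float,
               fixed: float = 0.25, prop: float = 0.10) -> float:
    """Same toy model, but fixed losses now scale with the INSTALLED
    power/cooling capacity rather than with rated IT capacity."""
    return it_kw / (it_kw + fixed * installed_kw + prop * it_kw)

it_kw = 200   # year-one IT load of a room ultimately planned for 1,000 kW

day_one_full_build = efficiency(it_kw, installed_kw=1000)   # oversized
one_module         = efficiency(it_kw, installed_kw=250)    # right-sized
print(f"Oversized: {day_one_full_build:.0%}, modular: {one_module:.0%}")
# Oversized: 43%, modular: 71%
```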


Modular scalable design

Reduce power consumption by up to 30% by "right-sizing" the power and cooling infrastructure.

● Avoid underloading – run more efficiently
● Pay only for what you need, when you need it

(P = Power, C = Cooling, R = Racks)


500 kW of scalable, high-efficiency power protection

[Figure: installed capacity grows in 25 kW increments, from 100 kW up to 500 kW.]


2. Close-coupled cooling

Reduce power consumption by up to 20% with the InRow® architecture.

● Closely couples cooling with the heat load, preventing exhaust air recirculation
● Less fan power than a traditional raised-floor system
● Varying equipment temperatures are constantly held to set-point conditions
● Lowers operating cost by monitoring inlet temperatures to modulate cooling capacity based on the cooling demand

Fan speed adjusts to follow the changing IT heat load.


Close-coupled cooling

[Diagram: an InRow® air conditioner sits in the row. Hot-aisle air enters from the rear, preventing mixing; heat is captured and rejected to chilled water; cold air is supplied to the cold aisle. The unit can operate on a hard floor or a raised floor.]


Efficiency comparison

[Chart: cooling efficiency (40–100%) vs. % IT load.]

Cooling efficiency = useful cooling power / (power consumed + useful cooling power)
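The metric under the chart is straightforward to compute directly (a minimal sketch; the example wattages are placeholders, not slide data):

```python
def cooling_efficiency(useful_cooling_kw: float,
                       power_consumed_kw: float) -> float:
    """Cooling efficiency exactly as defined under the chart:
    useful cooling power / (power consumed + useful cooling power)."""
    return useful_cooling_kw / (power_consumed_kw + useful_cooling_kw)

# e.g. 500 kW of heat removed while the cooling plant draws 150 kW
print(f"{cooling_efficiency(500, 150):.0%}")   # 77%
```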


3. Containment

Eliminate expensive temperature cross-contamination with thermal containment options:

● Simplifies analysis and understanding of the thermal environment
● Increases predictability of the cooling system
● Increases cooling EFFICIENCY and cooling CAPACITY by returning the warmest possible air to the cooling units
● Ensures proper air distribution by separating the supply and return air paths

Options include Hot Aisle Containment (HAC) and Rack Air Containment (RAC).


Rack Air Containment

● Rear containment prevents hot exhaust air from escaping
● All exhaust air is returned to the InRow® cooling unit
● Optional front containment directs cool air to the front of the servers
● Allows up to 60 kW per rack (30 kW with N+1 redundancy)

[Diagram, top-down view: a NetShelter SX rack flanked by InRow cooling units, with rear and optional front containment.]


Hot aisle containment vs. traditional room cooling

● Inherently higher power density capability than room designs
● Fan power is reduced by 50%
● Needless dehumidification / re-humidification is eliminated
● The need for high-bay areas and raised floors is reduced or eliminated (particularly for small installations)
● Cooling capacity can "follow" IT loads that move due to virtualization and server power management

[Chart: cooling efficiency (40–100%) vs. % IT load.]
Cooling efficiency = useful cooling power / (power consumed + useful cooling power)


Hot Aisle Containment areas can be added as needed.


4. Capacity Management

Increase IT staff efficiency with predictable capacity management:

● Identify over- and under-utilized areas of your data center
● Minimize waste and human error via predictable software monitoring, sensing, and environmental control
● Quickly adapt to change with real-time data on what to power and where to cool


Capacity Manager

Physical equipment provisioning – quickly locate the optimum spot for the next server based on space, cooling, and power needs.

Rack elevations – an easy-to-use front view gives an accurate, detailed representation of the equipment layout.

Airflow analysis – locate new devices without overheating new or existing equipment by simulating changes in supply temperature, airflow, and the number of cooling units.

Available capacity – understand available capacity by calculating actual space, power, and cooling consumption against data center architecture constraints.

Capacity grouping – specify architecture capabilities to match IT equipment with availability needs and avoid stranded space, power, and cooling capacity.

Design analysis – model the effects of, and compare, alternative layouts through detailed design analysis.


Capacity and energy management

● Poor utilization of capacity is a primary cause of inefficiency
● Software can identify available capacity (even by rack) and help prevent the creation of stranded capacity
● A side effect: you can fit more IT equipment within the power and cooling "envelope" of the data center
● Energy management can identify efficiency-improvement opportunities

(Infrastructure Central software with Capacity Manager)


Power consumption compared to the IT load

[Chart: power consumption as a % of the IT load (0–120%), broken down by device category: IT load, aux devices, lights, humidifier, chiller, pumps, heat rejection, CRAC, distribution wiring, switchgear, generator, PDU, UPS. Data for a typical Tier 4 data center operating at 30% of rated load.]

Improving efficiency means working to reduce power consumption (increase efficiency) for each of these device categories.

Reference: APC White Paper 114
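White Paper 114 builds DCiE from exactly this kind of per-category breakdown. A sketch of the calculation with illustrative, made-up percentages (the real per-category figures are in the white paper, not reproduced here):

```python
# Hypothetical overheads, each as a fraction of the IT load.
overhead_vs_it = {
    "CRAC": 0.25, "chiller": 0.20, "UPS": 0.15, "humidifier": 0.05,
    "PDU": 0.05, "pumps": 0.04, "heat rejection": 0.04, "lights": 0.02,
    "aux devices": 0.02, "switchgear": 0.01, "wiring": 0.01,
    "generator": 0.01,
}

it = 1.0                                   # IT load, normalized to 1
total = it + sum(overhead_vs_it.values())  # total facility draw
print(f"DCiE = {it / total:.0%}")          # ~54% with these made-up numbers
```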


Drivers of infrastructure efficiency gains
(Baseline: average of the existing installed base)

IMPROVEMENT                                                   Device Gain   DCiE Gain   $ saved over 15 years in a 1 MW data center**
Move from room cooling to dynamic row/rack cooling            70%           8%          $5,900,000
Cooling economizers                                           38%           4%          $2,500,000
Right-sizing through modular power and cooling equipment      4%            4%          $2,400,000
Higher UPS efficiency                                         8%            4%          $1,900,000
415/240 V transformerless power distribution (NAM)*           4%            2.5%        $1,500,000
Dynamic control of cooling plant (VFD fans, pumps, chillers)  25%           2.5%        $1,200,000
TOTAL to get the industry from 47% to 72% DCiE                              25%         $14,700,000

*No benefit outside of NAM; transformer-based PDUs are typical in NAM only.
**$ values based on a $0.15/kWh electricity cost, a starting DCiE of 47%, and an average density of 8 kW/rack.
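The dollar column can be roughly reproduced from the footnote's assumptions. For the row/rack-cooling row (an 8-point DCiE gain from the 47% baseline), a sketch assuming a 1 MW IT load lands near the quoted $5,900,000:

```python
it_kw = 1000                # assumed 1 MW IT load
price = 0.15                # $/kWh, from the table footnote
years, hours = 15, 24 * 365

def utility_kw(dcie: float) -> float:
    """Total utility draw implied by the DCiE definition."""
    return it_kw / dcie

saved_kw = utility_kw(0.47) - utility_kw(0.55)   # the 8-point DCiE gain
savings = saved_kw * hours * price * years
print(f"{saved_kw:.0f} kW saved -> ${savings:,.0f} over 15 years")
# ~309 kW saved -> ~$6.1M, the same ballpark as the table's $5.9M
```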


Power Chain Losses – Could Be

Per MW per year:

At DCiE 47%: 4,930 barrels, 6,539 tons CO2, 47 tons SO2, 16 tons N2O
At DCiE 70%: 1,971 barrels, 2,615 tons CO2, 19 tons SO2, 6 tons N2O

For the same 1 MW installation, moving to DCiE 70% cuts the draw by about 400 kW.


Visit us

Hall 4, booth 4405 (next to the BP Carbon Theater)

Tour the booth and be entered into our daily lottery. The lucky winner receives a brand-new Amazon Kindle e-reading device right away!


Questions?
