Forecast: Mostly Cloudy - Ziti - Ruprecht-Karls-Universität Heidelberg
(Weather) <strong>Forecast</strong>: <strong>Mostly</strong> <strong>Cloudy</strong><br />
Dr. Holger Fröning<br />
Junior Professor for Computer Engineering (Technische Informatik)<br />
<strong>Ruprecht</strong>-<strong>Karls</strong>-Universität <strong>Heidelberg</strong>
Popular Web Services<br />
<br />
Holger Fröning, Informatiktag der Universität <strong>Heidelberg</strong>, 22.06.2012 2
Cloud Computing – What is it?<br />
• John McCarthy (MIT/Stanford), MIT Centennial, 1961:<br />
"If computers of the kind I have advocated become the computers of<br />
the future, then computing may someday be organized as a public<br />
utility just as the telephone system is a public utility... The computer<br />
utility could become the basis of a new and important industry."<br />
• Larry Ellison (Oracle), Wall Street Journal, 2008:<br />
"The interesting thing about Cloud Computing is that we've redefined<br />
Cloud Computing to include everything that we already do..."<br />
• Andy Isherwood (HP), ZDNet News, 2008:<br />
"... I have not heard two people say the same thing about it [the<br />
Cloud]."<br />
• Richard Stallman (Free Software Advocate), 2008:<br />
"It's a marketing hype campaign..."<br />
[M. Armbrust et al. 2009. Above the Clouds: A Berkeley View of Cloud Computing, Technical Report No. UCB/EECS-2009-28, University of California, Berkeley]<br />
Cloud Computing<br />
• Pervasive internet access<br />
enables utility computing<br />
• Computing is provided as a<br />
service<br />
• Elasticity: no need to<br />
overprovision resources<br />
• Cloud computing: applications & hardware &<br />
system software delivered as a service<br />
• Cloud: datacenter hardware and software<br />
[Diagram: "Software as a Service"; Cloud Computing at the overlap of Grid Computing, Distributed Systems and Utility Computing, enabled by Connectivity and Virtualization]<br />
Analogies<br />
• Electricity Supply<br />
• Past: companies produced their own electricity (steam, water)<br />
• Today: electricity as a service, enabled by the electrical grid<br />
• Chip Manufacturing<br />
• Past: most big companies had their own fabs<br />
• Today: fab-less manufacturing, only few fabs left (Intel, Samsung, TSMC)<br />
• Cloud Computing<br />
• Past/today: run your own computing facilities<br />
• Today/tomorrow: rely on cloud computing<br />
Different Levels of Service<br />
• Software as a Service<br />
• Targets: end users<br />
• Examples: Web search, E-Business,<br />
Web Mail<br />
• Platform as a Service<br />
• Targets: application developers<br />
• Examples: Google AppEngine,<br />
Microsoft Azure<br />
• Infrastructure as a Service<br />
• Targets: network architects<br />
• Examples: Amazon Elastic<br />
Compute Cloud (EC2), GoGrid<br />
Typical Questions<br />
• Who needs it? Can it really be successful?<br />
• Definitely, look at the analogies<br />
• Disruptive technology<br />
• Is it faster than a high-performance cluster?<br />
• Edward Walker (TACC): NO (absolute performance)<br />
• Ian Foster (ANL): YES (response time)<br />
• How many datacenters are there?<br />
• About 500k in total, but only a fraction publicly available<br />
• Major players: Google, Microsoft, Facebook, Amazon, Apple, ...<br />
• Is it interesting for research?<br />
• David Patterson, 2008: "[...] the most interesting computers of the future are at<br />
the extremes in scale: the {datacenter, cell phone} is the computer"<br />
• Power wall<br />
Hardware Perspectives of Cloud Computing<br />
• Illusion of infinite on-demand computing resources<br />
• No need for users to plan ahead for provisioning<br />
• No up-front commitment by cloud users, use on a short-term<br />
basis as needed<br />
• Allowing companies to start small and increase resources only if<br />
required<br />
• Keys behind these perspectives:<br />
Extreme scalability & cost-effectiveness<br />
Rest of this Talk<br />
• Backbone of Cloud Computing: Datacenters<br />
• Short Analysis<br />
• Research in our labs<br />
• Conclusion<br />
Backbone of Cloud Computing: Datacenters<br />
Google Datacenters<br />
First Production<br />
System (1999)<br />
Google Datacenter<br />
today<br />
[Flickr.com]<br />
[Wikipedia.org]<br />
Microsoft Datacenters<br />
[blogs.msdn.com]<br />
Inside a Datacenter<br />
[CNet News, 2009]<br />
[A. Fox, Science 331, 406 (2011)]<br />
Facebook Datacenter<br />
• Facebook's<br />
OpenCompute Initiative<br />
[building43.com]<br />
Apple Datacenter<br />
• One of three Apple DCs<br />
• Used for iCloud and Siri<br />
[EarthTechling.com, 2012]<br />
Datacenter Architecture<br />
[D. Dyer, Current trends/challenges in datacenter<br />
thermal management, ITHERM, San Diego, CA, 2006.]<br />
• 10,000 – 50,000+ servers (320k – 1.6M+ cores)<br />
• Powerful server CPUs<br />
• 10-30 MW<br />
• 60,000 sqm<br />
Energy Consumption<br />
• Cloud computing allows for (almost) any location<br />
+ Cooling advantages, lower electricity costs<br />
- Costs for data movement, response latency increases<br />
• Greenpeace: “Dirty data triangle”<br />
• Datacenter Hub: Apple, Google and Facebook in North Carolina<br />
• "Facebook will receive about $17 million in local subsidies and tax<br />
breaks over 10 years" [NY Times, Nov. 11, 2010]<br />
• Main reasons for NC are cheap energy and tax savings<br />
• Typical electricity mix: coal and nuclear<br />
• Increasing interest to use renewable sources like solar, wind and<br />
water<br />
Energy Consumption<br />
• Power Usage Effectiveness (PUE)<br />
• PUE = total power for facility / IT equipment power<br />
• Losses due to cooling and<br />
power supply<br />
• Facebook reported a PUE<br />
of 1.07<br />
• PUE says nothing about total<br />
energy consumption<br />
• Fact: about 1% of<br />
world-wide electricity<br />
is consumed by<br />
datacenters<br />
[J. G. Koomey. Worldwide electricity used in data centers.<br />
Environmental Research Letters, 3(3):034008, 2008]<br />
[Hennessy/Patterson, Computer Architecture: A quantitative<br />
approach, Morgan Kaufmann, 5. Ed., 2011]<br />
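The PUE formula above is a simple ratio; a quick worked example with the reported Facebook value (the helper function and the 10 MW figures are illustrative only):

```python
def pue(total_facility_power_kw, it_equipment_power_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_power_kw / it_equipment_power_kw

# A PUE of 1.07 means only ~7% overhead for cooling and power supply:
# e.g., drawing 10.7 MW from the grid to power 10 MW of IT equipment.
print(pue(10700, 10000))  # → 1.07
```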
Industrialization of IT<br />
• Huge efforts to<br />
minimize PUE<br />
• Cooling and power supply<br />
• Warehouse-Scale<br />
Computers (WSC)<br />
• Composed of commodity<br />
parts<br />
• Is this the solution?<br />
[Hennessy/Patterson, Computer Architecture: A<br />
quantitative approach, Morgan Kaufmann, 5. Ed., 2011]<br />
Short Analysis<br />
New Degree of Parallelism<br />
Instruction-level parallelism (ILP)<br />
• One instruction stream, many dependencies<br />
• Limited amount (2-6)<br />
• Pipelined architectures<br />
• Superscalar architectures<br />
Thread-level parallelism (TLP)<br />
• Multiple instruction streams, fewer dependencies<br />
• Requires functional decomposition, still limited<br />
• Multi-Core<br />
• Multi-Threading<br />
Data-level parallelism (DLP)<br />
• Even fewer dependencies<br />
• Depends almost completely on problem size<br />
• GPUs and Clusters<br />
• MMX/SSE/AVX<br />
New: Request-level parallelism (RLP)<br />
• Vast amount of users, no dependencies!<br />
• Allows for relaxed consistency models<br />
• Datacenters<br />
System Architecture<br />
• Oversubscription of the network<br />
• Local to Rack to Array<br />
• Huge locality effects<br />
[Hennessy/Patterson, Computer Architecture: A<br />
quantitative approach, Morgan Kaufmann, 5. Ed., 2011]<br />
Access Cost Disparities<br />
                Local         Rack          Array<br />
DRAM Latency    0.1 usec      100 usec      500 usec<br />
Disk Latency    10,000 usec   10,000 usec   10,000 usec<br />
DRAM Bandwidth  20,000 MB/s   100 MB/s      10 MB/s<br />
Disk Bandwidth  200 MB/s      100 MB/s      10 MB/s<br />
DRAM Capacity   16 GB         1,040 GB      31,200 GB<br />
Disk Capacity   2,000 GB      160,000 GB    4,800,000 GB<br />
[Barroso and Hölzle 2009]<br />
• Performance degradation, but huge amount of aggregated<br />
resources<br />
• Data movement is expensive<br />
• MapReduce<br />
• Static Resource Partitioning – no shared use!<br />
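How expensive data movement gets can be seen directly from the DRAM bandwidth row of the table (a back-of-the-envelope sketch; the helper function is illustrative only):

```python
# DRAM bandwidth per locality level, taken from the table above (MB/s).
dram_bandwidth_mbs = {"local": 20_000, "rack": 100, "array": 10}

def transfer_seconds(size_mb, level):
    """Idealized time to move size_mb of data at the given locality level."""
    return size_mb / dram_bandwidth_mbs[level]

for level in ("local", "rack", "array"):
    print(level, transfer_seconds(1_000, level), "s")
# Moving 1 GB: 0.05 s locally, 10 s within a rack, 100 s across the array —
# three orders of magnitude, which is why frameworks like MapReduce move
# computation to the data rather than the other way around.
```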
CPU Utilization and (non-)Energy-proportional Computing<br />
[Barroso and Hölzle, The Case for<br />
Energy-Proportional Computing, IEEE<br />
Computer, 2007]<br />
[Hennessy/Patterson, Computer Architecture: A<br />
quantitative approach, Morgan Kaufmann, 5. Ed., 2011]<br />
Datacenters compared to HPC<br />
• Datacenter<br />
• Highly dynamic use<br />
• No structured execution of online tasks<br />
• Energy consumption (costs for energy exceed acquisition costs)<br />
• Low-power CPUs likely not an option<br />
• High Performance Computing (HPC)<br />
• E.g.: clusters, massively parallel processors<br />
• Extremely optimized for selected use models<br />
• Similar energy consumption problems<br />
• Low-power CPUs likely not an option<br />
• Lessons learned in HPC<br />
• Interconnection Networks<br />
• Accelerators – absolute Performance and Performance/Watt<br />
• Commodity parts + custom parts<br />
Research in our labs<br />
One Example: Memory as a Scarce Resource<br />
• Performance disparity for<br />
different technologies<br />
• Memory hierarchy only helps if<br />
enough locality is present<br />
• Effective memory capacity<br />
• Memory capacity per core is<br />
not keeping pace with the<br />
core count increase<br />
[Figure: memory capacity per computing core for 4P servers at highest memory speed; 500x annotation]<br />
Current Computer Architectures<br />
Shared Memory Computers (shared-everything)<br />
• Single address space<br />
• Up to 64 Cores and 2 TB RAM<br />
• Shared Memory programming model<br />
• Global coherency among all processors<br />
• Limited scalability!<br />
• Resource aggregation<br />
Message Passing Systems (shared-nothing)<br />
• Multiple address spaces<br />
• Over-provisioning of single nodes<br />
• Message Passing programming model<br />
• No coherency among nodes<br />
• Unlimited scalability<br />
• Resource partitioning<br />
[Diagram: shared-memory system with one address space spanning all main memory and processors/caches; message-passing system with separate address spaces 1-3, each node having its own main memory and processors/caches]<br />
A New Approach: MEMSCALE<br />
• Dynamic, selective aggregation of resources<br />
• Shared use of scalable resources (memory)<br />
• Exclusive use of resources with limited scalability (cores/caches)<br />
• Memory regions can expand to other nodes<br />
• Overhead of global coherency is avoided<br />
Spanning global address spaces<br />
Decoupled resource aggregation, no resource partitioning<br />
[Diagram: memory regions 1-5 mapped across the main memories of several nodes, forming global address spaces, while each node keeps exclusive use of its own processors/caches]<br />
Proof of Concept – In-Memory-Database<br />
MySQL Cluster<br />
• Cluster-level execution<br />
• MEMSCALE vs. MySQL cluster<br />
• MySQL cluster<br />
• 16 nodes, Gigabit Ethernet<br />
• 450 queries per second<br />
• Saturation starts at 20-30 flows<br />
• MEMSCALE<br />
• 16 nodes, EXTOLL R1 with SME<br />
• 128GB memory pool<br />
• 35k queries per second, 77x<br />
• Linear scalability up to 4/5 flows per<br />
node (64-80 total), then saturation<br />
• Limited by<br />
• Number of outstanding loads<br />
• Access latency<br />
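The 77x figure follows directly from the two throughput numbers on this slide (a one-line sanity check):

```python
mysql_qps = 450        # MySQL Cluster, 16 nodes, Gigabit Ethernet
memscale_qps = 35_000  # MEMSCALE, 16 nodes, EXTOLL with SME

speedup = memscale_qps / mysql_qps
print(f"{speedup:.1f}x")  # → 77.8x, quoted as 77x on the slide
```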
2012/04/11 - CERCS Systems Seminar 30
Impact of Dynamic Resource Aggregation<br />
• Dynamic Aggregation and<br />
Disaggregation of<br />
Resources<br />
• Shared and exclusive use<br />
models<br />
• Overcome capacity<br />
limitations<br />
• Scarce resources like memory<br />
• No need for overprovisioning<br />
• Provision for average case, not<br />
for worst case<br />
• Maximize utilization<br />
[P. Ranganathan and N. Jouppi, Enterprise IT Trends and<br />
Implications for Architecture Research, HPCA2005]<br />
Future Project: MEERkAT<br />
• MEERkAT – Improving<br />
datacenter utilization<br />
and energy efficiency<br />
• System-level virtualization<br />
• Dynamic resource<br />
aggregation of<br />
heterogeneous resources<br />
• Various migration levels<br />
• Analogy – meerkats are<br />
highly social<br />
• Compatible with big cats<br />
Red Fox and Ocelot Project at Georgia Tech<br />
General research methodology<br />
• Optimize the common case<br />
• Effective specialized hardware<br />
components, complemented by<br />
software stacks<br />
• Parallel Computing and<br />
Computer Architecture<br />
• Parallel programming, communication<br />
libraries, synchronization methods<br />
• High Performance Computing,<br />
Accelerated Computing<br />
• Interconnection Networks<br />
Prototype cluster, Valencia<br />
• Collaborations<br />
• Technical University of Valencia, Spain – Prof.<br />
Jose Duato<br />
• Georgia Tech, US – Prof. Sudhakar Yalamanchili<br />
• Simula Labs, Norway – Prof. Olav Lysne<br />
• EXTOLL & Computer Architecture Group – Prof.<br />
Ulrich Brüning<br />
Conclusion<br />
• Cloud Computing is the future of IT, based on datacenters<br />
• Limited by power consumption with economic, ecologic and technical<br />
implications<br />
• Utilization far from optimum<br />
• Sole use of commodity parts is<br />
hitting a wall - Learn from HPC!<br />
• Interconnection Networks<br />
• Accelerated Computing<br />
• Dynamic resource aggregation<br />
• Only aggregate selected resources<br />
• Avoid over-provisioning<br />
• Overcome the static partitioning<br />
Thank you for your attention!<br />
froening@uni-hd.de<br />
http://ce.uni-hd.de
Backup Slides<br />
Motivation – In-Memory Computing<br />
• DRAM for storage: In-Memory Databases<br />
• Jim Gray: "Memory is the new disk and disk is the new tape"<br />
• HDD bad for random access, but pretty good for linear access<br />
• Natural fit for logging and journaling of an in-memory database<br />
• Google, Yahoo!: entire indices are stored in DRAM<br />
• Bigtable, Memcached<br />
• Little or no locality for many new applications<br />
• “[…] new Web applications such as Facebook appear to have little or<br />
no locality, due to complex linkages between data (e.g., friendships in<br />
Facebook). As of August 2009 about 25% of all the online data for<br />
Facebook is kept in main memory on memcached servers at any given<br />
point in time, providing a hit rate of 96.5%”. [Ousterhout2009]<br />
• Typical computer: 2-8MB cache, 2-8GB DRAM. At most 0.1% of DRAM<br />
capacity can be held in caches<br />
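The 0.1% figure is just the cache-to-DRAM ratio at the upper end of the ranges given above (a quick check):

```python
cache_bytes = 8 * 1024**2   # 8 MB cache (upper end of the 2-8 MB range)
dram_bytes = 8 * 1024**3    # 8 GB DRAM (upper end of the 2-8 GB range)

# 8 MB / 8 GB = 1/1024 ≈ 0.1% of DRAM contents can be cached at once.
print(f"{cache_bytes / dram_bytes:.1%}")  # → 0.1%
```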
Setting up global address spaces<br />
• Distributed shared memory<br />
• Local address space<br />
is split up:<br />
1. Private partition<br />
2. Shared partition<br />
3. Mapping to global<br />
address space<br />
• Virtually unlimited<br />
• Only limited by physical<br />
address sizes<br />
• Currently: 2^48 bytes or 256 TB<br />
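One way such a mapping can work is to carry a node ID in the upper bits of the global address (a minimal sketch; the actual MEMSCALE/EXTOLL bit layout is not given on these slides, so the field widths here are assumptions):

```python
# Assumed field layout for illustration only: upper bits select the node,
# lower LOCAL_BITS address the shared partition on that node.
NODE_BITS = 16
LOCAL_BITS = 40

def split_global_address(gaddr):
    """Split a global address into (node_id, local_offset)."""
    node_id = (gaddr >> LOCAL_BITS) & ((1 << NODE_BITS) - 1)
    local_offset = gaddr & ((1 << LOCAL_BITS) - 1)
    return node_id, local_offset

# Offset 0x1000 inside node 3's shared partition:
gaddr = (3 << LOCAL_BITS) | 0x1000
print(split_global_address(gaddr))  # → (3, 4096)
```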
Remote memory access<br />
• Software-transparent access<br />
to remote memory<br />
• Loads/stores to local mapping<br />
of the global address space<br />
• Parts of the global address<br />
identify the target node<br />
• Forward request over the<br />
network<br />
• Request hits shared local<br />
partition on target node<br />
• If appropriate, send response<br />
back<br />
• Direct, low-latency path to remote memory<br />
• Shared Memory Engine (SME)<br />
[Figure: origin and target address spaces (0 ... 2^40 ... 2^64-1), each split into private local, shared local and global parts; the global address space concatenates the shared partitions of nodes 0 ... n-1; translation oAddr -> gAddr -> tAddr]<br />
Remote memory access in detail<br />
• Node #0 (source): issues loads/stores on remote memory<br />
• Node #1 (target): serves as memory host<br />
• Remote load latency:<br />
• 1.89 usec (R1, Virtex-4)<br />
• 1.44 usec (R2, Virtex-6)<br />
• Address handling along the path:<br />
• Source-local address: target node determination, address calculation<br />
• Global address: loss-less and in-order forwarding as an EXTOLL packet<br />
• Target-local address: source tag management, address calculation<br />
[Diagram: two nodes, each a coherency domain with cores, memory controllers (MCT), DRAM controller and an SME attached via HyperTransport (HT)]<br />
Excursion: EXTOLL<br />
• High performance interconnection network<br />
• Designed from scratch for HPC demands<br />
• Optimized for low latency and high message rate<br />
• Virtex-4 (156 MHz, HT400, 6.24 Gbps)<br />
• 40% faster for WRF (compared to InfiniBand DDR)<br />
[Block diagram: host interface (HT3 or PCIe) with ATU; on-chip network connecting VELO, RMA and control & status units; network ports and link ports attached to the EXTOLL network switch]<br />
Behind a Google Web Search<br />
• Data Acquisition – Web Crawling<br />
• Offline processing<br />
• MapReduce<br />
• Developed and optimized for distributed, loosely-coupled systems<br />
like datacenters<br />
• Online processing (Googling)<br />
• Serving user requests<br />
• Delivery of previously prepared results<br />
• Goal: minimal request latency, exploit the vast amount of RLP<br />
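The offline MapReduce step can be sketched in a few lines (a toy word count; the real framework distributes map and reduce tasks across thousands of loosely coupled datacenter nodes):

```python
from collections import defaultdict

def map_phase(documents):
    # map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    # shuffle + reduce: group pairs by key, then sum the counts per word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the cloud is the computer", "the datacenter is the computer"]
print(reduce_phase(map_phase(docs)))
# → {'the': 4, 'cloud': 1, 'is': 2, 'computer': 2, 'datacenter': 1}
```

Because every (word, 1) pair is independent, the map phase parallelizes trivially, which is exactly what makes the model a good fit for datacenters.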
Abstract<br />
Cloud computing denotes the provisioning of computing and storage capacity over the Internet as a<br />
service, with highly diverse user profiles ranging from private users and small and medium-sized<br />
businesses to large international enterprises. The current trend is a strong increase in the use of<br />
cloud computing, and it is foreseeable that this will continue in the future. While cloud computing<br />
itself is fairly well known, the true core of this usage model is less so: the so-called datacenters,<br />
which concentrate large amounts of computing power in one place and are considered the backbone<br />
of the Internet. All modern Internet services such as web search, content delivery and social<br />
networks use these datacenters to offer users the shortest possible response times. As the use of<br />
these services grows immensely, the demands on the datacenters rise rapidly as well. This talk<br />
explains the architecture and operation of datacenters, in particular how current datacenters are<br />
constructed and organized and how services are realized on top of them. Although there is strong<br />
demand for ever larger installations, scalability is limited by technical, economic and ecological<br />
factors, in particular the immense power consumption of these systems. Among other things, the<br />
exclusive use of commodity technologies will sooner or later become a fundamental problem. These<br />
limiting factors, however, give research the opportunity to actively contribute to the evolution of<br />
datacenters with novel methods and architectures.<br />