C++ for Scientists - Technische Universität Dresden


110 CHAPTER 4. GENERIC PROGRAMMING

    T data[Size];
};

If you compare this implementation with the implementation in Section 4.3 on page 95, you will realize that there are not many differences.

The essential difference is that the size is now part of the type and that the compiler knows it.

Let us start with the latter. The compiler can use its knowledge for optimization. For instance, if we create a variable

fsize_vector v(w);

the compiler can decide that the generated code for the copy constructor is not performed in a loop but as a sequence of independent operations like:

fsize_vector( const self& that )
{
    data[0]= that.data[0];
    data[1]= that.data[1];
    data[2]= that.data[2];
}

This saves the incrementation of the counter and the test for the loop end. In some sense, this test is already performed at compile time. As a rule of thumb: the more is known during compilation, the more potential for optimization exists. We will come back to this in more detail in Section 8.2 and Chapter ??.

Which optimizations are induced by additional compile-time information is of course compiler-dependent. One can only find out which transformation is actually performed by reading the generated assembler code (which is not that easy, especially with high optimization; with low optimization the effect will probably not be there) or indirectly by observing performance and comparing it with other implementations. In the example above, the compiler will probably unroll the loop as shown for small sizes like 3 and keep the loop for larger sizes, say 100. You see why these compile-time sizes are particularly interesting for small matrices and vectors, e.g. three-dimensional coordinates or rotations.

Another benefit of knowing the size at compile time is that we can store the values in an array and even inside the class. Then the values of temporary objects are stored on the stack and not on the heap. 19 Creation and destruction are much less expensive because only the change of the stack pointer at function begin and end needs to be adapted to the object's size, compared to dynamic memory allocation on the heap, which involves the management of lists to keep track of allocated and free memory blocks. 20 To make a long story short, keeping the data in small arrays is much less expensive than dynamic allocation.

We said that the size becomes part of the type. The careful reader might have realized that we omitted the checks whether the vectors have the same size. We do not need them anymore: if an argument has the class type, it implicitly has the same size. Consider the following program snippet:

fsize_vector v;

fsize_vector w;

19 TODO: Picture.

20 TODO: Need easier or longer explication, or citation.
