Design of an Automatic Control Algorithm for Energy-Efficient ...
6 The optimiser: an evolutionary algorithm approach
For a population far away from the optimum, a high value is advantageous. This would be the case after a parameter or environment change. In a steady state, or close to the optimum, a lower spread is more effective.

The common strategy is to decrease σ with the number of generations. To obtain a short execution time, however, each run can only include a low number of generations. For the incremental algorithm, it is therefore advantageous to adapt the exploration factor for each run, not for each generation. A better way would be to base the mutation on environmental variations. These values, such as temperatures, are measured directly, and changes can be detected very quickly. The problem is that a relative change would be needed, so the measurements would have to be compared with a reference value. Since such a reference would have to be specified for each monitored parameter, an easier approach is taken.
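As a sketch of the conventional schedule argued against here, the spread can simply be decayed with the generation count. The initial value and the decay rate below are illustrative assumptions, not values from this work:

```python
import math

def sigma_per_generation(sigma0, generation, rate=0.1):
    """Common strategy: exponentially shrink the spread sigma with
    each generation of a single run (illustrative decay rate)."""
    return sigma0 * math.exp(-rate * generation)
```

With only a few generations per run, such a schedule has little effect, which is one reason the adaptation is moved to the per-run level instead.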
The idea of this so-called hypermutation [35] is to adapt the standard deviation with regard to the fitness (the objective results). If the average fitness deteriorates, a change is occurring and σ has to be increased to allow the search for better solutions. While Cobb and Grefenstette used a switching exploration factor, this is not done here. The algorithm is not supposed to find an optimal solution in one run, but incrementally; therefore, the adaptation is also done step-wise. Instead of the average fitness, which is not a good indicator when random individuals are introduced in each run, the fitness f_control of the chosen control signal is taken. Here the weighted sum of the objective results serves as the fitness. This value is replaced by the parameter f_max if no solution is found that does not exceed a limit.
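The fitness described above, a weighted sum of the objective results that falls back to f_max when a limit is violated, might be sketched as follows. The function name, the per-objective limit check, and the infinite default penalty are assumptions for illustration, not part of the thesis:

```python
def control_fitness(objectives, weights, limits, f_max=float("inf")):
    """Weighted sum of the objective results (lower is better).
    Returns the penalty value f_max if any objective exceeds its
    limit, i.e. the solution is not acceptable."""
    if any(o > lim for o, lim in zip(objectives, limits)):
        return f_max
    return sum(w * o for w, o in zip(weights, objectives))
```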
\[
\sigma(t) =
\begin{cases}
\sigma_{max} & \text{for } f_{control}(t-1) = f_{max}\\
\sigma(t-1) + \Delta\sigma & \text{for } f_{control}(t-1) > f_{control}(t-2) \text{ and } \sigma(t-1) + \Delta\sigma \leq \sigma_{max}\\
\sigma(t-1) - \Delta\sigma & \text{for } f_{control}(t-1) < f_{control}(t-2) \text{ and } \sigma(t-1) - \Delta\sigma \geq \sigma_{min}\\
\sigma(t-1) & \text{else}
\end{cases}
\tag{6.6}
\]
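The case distinction of equation (6.6) can be sketched as a small update function. The concrete bounds and the increment are illustrative assumptions, and a lower fitness is taken to be better, consistent with f_max acting as a penalty:

```python
SIGMA_MIN = 0.05       # assumed lower bound of the exploration factor
SIGMA_MAX = 1.0        # assumed upper bound, used after a failed run
DELTA_SIGMA = 0.1      # assumed step-wise increment
F_MAX = float("inf")   # penalty fitness when no acceptable solution exists

def update_sigma(sigma_prev, f_prev, f_prev2):
    """Return sigma(t) from sigma(t-1) and the control fitness of the
    last two runs, following the case distinction of equation (6.6)."""
    if f_prev == F_MAX:
        return SIGMA_MAX                 # no acceptable solution: explore maximally
    if f_prev > f_prev2 and sigma_prev + DELTA_SIGMA <= SIGMA_MAX:
        return sigma_prev + DELTA_SIGMA  # fitness worsened: widen the search
    if f_prev < f_prev2 and sigma_prev - DELTA_SIGMA >= SIGMA_MIN:
        return sigma_prev - DELTA_SIGMA  # fitness improved: narrow the search
    return sigma_prev                    # otherwise keep sigma unchanged
```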
In the time step t, the exploration factor can be set to a maximum if no acceptable solution has been found in the last run. Otherwise it is decreased or increased by an increment Δσ, depending on whether the fitness of the controlling individual, f_control, is lowered or