
Algorithm 10 Final version of the distributed algorithm for tuning the parameter εi

1: δi(h−τi) = yi(h−τi)/zi(h−τi) − yi(h−τi−τ̄i)/zi(h−τi−τ̄i)
2: δi(h) = yi(h)/zi(h) − yi(h−τi)/zi(h−τi)
3: if δi(h−τi) > δi(h) then
4:     if εi was previously increased then
5:         ε̄i = εi
6:         if εi < 0.5 then
7:             εi = 1.5 εi
8:         else
9:             εi = εi + (1−εi)/2.5
10:        end if
11:    else
12:        ∆εi = |ε̄i − εi|
13:        ε̄i = εi
14:        εi = εi + ∆εi/2
15:    end if
16:    τi = τi/2
17:    if τi < 1 then
18:        τi = 1
19:    end if
20: else
21:    if εi was previously increased then
22:        ∆εi = |ε̄i − εi|
23:        ε̄i = εi
24:        εi = εi + ∆εi/2
25:    else
26:        εi = εi/2
27:        ε̄i = εi
28:    end if
29:    τi = 2 τi
30:    if τi > τmax then ⊲ do not exceed the maximum period
31:        τi = τmax
32:    end if
33: end if
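For concreteness, a minimal Python sketch of this update rule for a single node is given below. The NodeState container, the field names (eps, eps_bar, eps_was_increased, tau, tau_old), the default values, and the final bookkeeping of the "εi was previously increased" flag and of τ̄i are assumptions introduced for the example; Algorithm 10 above only specifies the numbered steps.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class NodeState:
    """Assumed local state of node i (names are illustrative, not from the thesis)."""
    y: List[float]                 # history of the local variable yi(.)
    z: List[float]                 # history of the local variable zi(.)
    eps: float = 0.1               # current step size eps_i
    eps_bar: float = 0.1           # last stored step size (eps-bar_i)
    eps_was_increased: bool = False
    tau: int = 4                   # current observation window tau_i
    tau_old: int = 4               # window used in the previous run (tau-bar_i)


def tune_epsilon(node: NodeState, h: int, tau_max: int = 64) -> None:
    """One run of the eps_i tuning rule of Algorithm 10 on node i at time h."""
    y, z = node.y, node.z
    tau, tau_old = node.tau, node.tau_old
    eps_before = node.eps

    # Steps 1-2: variation of the local estimate x_i = y_i / z_i over the two
    # most recent observation windows.
    delta_prev = y[h - tau] / z[h - tau] - y[h - tau - tau_old] / z[h - tau - tau_old]
    delta_curr = y[h] / z[h] - y[h - tau] / z[h - tau]

    if delta_prev > delta_curr:
        # Steps 4-19: the estimate is settling, so push eps_i up and shorten tau_i.
        if node.eps_was_increased:
            node.eps_bar = node.eps
            if node.eps < 0.5:
                node.eps = 1.5 * node.eps
            else:
                node.eps = node.eps + (1.0 - node.eps) / 2.5
        else:
            delta_eps = abs(node.eps_bar - node.eps)
            node.eps_bar = node.eps
            node.eps = node.eps + delta_eps / 2.0
        node.tau = max(1, tau // 2)
    else:
        # Steps 21-32: back off eps_i and lengthen tau_i, capped at tau_max.
        if node.eps_was_increased:
            delta_eps = abs(node.eps_bar - node.eps)
            node.eps_bar = node.eps
            node.eps = node.eps + delta_eps / 2.0
        else:
            node.eps = node.eps / 2.0
            node.eps_bar = node.eps
        node.tau = min(2 * tau, tau_max)

    # Bookkeeping not spelled out in Algorithm 10 (assumed here): remember
    # whether eps_i grew in this run and which window length was just used.
    node.eps_was_increased = node.eps > eps_before
    node.tau_old = tau
```

In this reading, each node would call tune_epsilon(node, h) once every τi consensus iterations, with tau_max playing the role of τmax in the pseudocode.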

