
Welcome to Adaptive Signal Processing! 1

From Merriam-Webster's Collegiate Dictionary:

Main Entry: ad·ap·ta·tion
Pronunciation: "a-"dap-'tA-sh&n, -d&p-
Function: noun
Date: 1610

1 : the act or process of adapting : the state of being adapted
2 : adjustment to environmental conditions: as
  a : adjustment of a sense organ to the intensity or quality of stimulation
  b : modification of an organism or its parts that makes it more fit for existence under the conditions of its environment
3 : something that is adapted; specifically : a composition rewritten into a new form
- ad·ap·ta·tion·al /-shn&l, -sh&-n&l/ adjective
- ad·ap·ta·tion·al·ly adverb

Lectures and exercises 2

Lectures: Tuesdays 8.15-10.00 in room E:2311
Exercises: Wednesdays 13.15-15.00 in room E:3319 (not every week)
Computer exercises: Thursdays 8.15-10.00 in room E:4115, or Fridays 8.15-10.00 in room E:4115
Lab sessions: Lab I: Adaptive channel equalizer in room E:4115
              Lab II: Adaptive filter on a DSP in room E:4115
Sign up on the lists on the web page from Monday Nov 4.


Course literature 3

Book: Simon Haykin, Adaptive Filter Theory, 5th edition, Pearson, 2014.
ISBN-10: 0-273-76408-X

Chapters (5th ed.): Backgr., (2), 4, 5, 6, 7, 8, 9, 10, 12.1-2, 14.1-2
4th ed.: Backgr., (2), 4, 5, 6, 7, 8, 9, 13.2, 14.1

Exercise material: Exercise compendium (course home page)

Other material:
Computer exercises (course home page)
Lab material (course home page)
Lecture notes (course home page)
Matlab code (course home page)

Contents - References in the 5th edition 4

Week 1: Recap of OSB (Hayes, or chap. 2), the method of Steepest Descent (chap. 4)
Week 2: The LMS algorithm (chap. 5-6)
Week 3: Modified LMS algorithms (chap. 7)
Week 4: Frequency-domain adaptive filters (chap. 8)
Week 5: The RLS algorithm (chap. 9-10)
Week 6: Tracking and implementation aspects (chap. 12.2, 13.1)
Week 7: Summary


Contents - References in the 4th edition 5

Week 1: Recap of OSB (Hayes, or chap. 2), the method of Steepest Descent (chap. 4)
Week 2: The LMS algorithm (chap. 5)
Week 3: Modified LMS algorithms (chap. 6)
Week 4: Frequency-domain adaptive filters (chap. 7)
Week 5: The RLS algorithm (chap. 8-9)
Week 6: Tracking and implementation aspects (chap. 13.2, 14.1)
Week 7: Summary

Contents - References in the 3rd edition 6

Week 1: Recap of OSB (Hayes, or chap. 5), the method of Steepest Descent (chap. 8)
Week 2: The LMS algorithm (chap. 9)
Week 3: Modified LMS algorithms (chap. 9)
Week 4: Frequency-domain adaptive filters (chap. 10, 1)
Week 5: The RLS algorithm (chap. 11)
Week 6: Tracking and implementation aspects (chap. 16.1, 17.2)
Week 7: Summary

Lecture 1 7

This lecture deals with
• Recap of the course Optimal signal processing (OSB)
• The method of Steepest Descent

Recap of Optimal signal processing (OSB) 8

The following problems were treated in OSB:
• Signal modeling - either a model with both poles and zeros, or a model with only poles (vocal tract) or only zeros (lips).
• Inverse filtering of FIR type - deconvolution or equalization of a channel.
• Wiener filtering - filtering, equalization, prediction and deconvolution.


Optimal Linear Filtering 9

[Block diagram: the input signal u(n) is passed through the filter w to give the output signal y(n); the estimation error e(n) = d(n) − y(n) is formed by subtracting y(n) from the desired signal d(n).]

We seek the filter w = [w_0 w_1 w_2 ...]^T that minimizes the estimation error e(n), so that the output signal y(n) resembles the desired signal d(n) as closely as possible.

Optimal Linear Filtering 10

In order to determine the optimal filter, a cost function J is introduced that penalizes the deviation e(n): the larger e(n), the higher the cost.

From OSB you know several different strategies (a small numerical sketch follows the list), e.g.,

• The total squared error (LS) - deterministic description of the signal:
  J = Σ_{n=n_1}^{n_2} e^2(n)
• The mean squared error (MS) - stochastic description of the signal:
  J = E{|e(n)|^2}
• The mean squared error with an extra constraint:
  J = E{|e(n)|^2} + λ|u(n)|^2
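A tiny numerical sketch (not part of the original slides) of how the first two cost measures are evaluated for a candidate filter; the data model, filter value and summation limits are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    u = rng.standard_normal(1000)                    # input signal u(n)
    d = 0.9 * u + 0.1 * rng.standard_normal(1000)    # desired signal d(n) (assumed model)
    w = np.array([0.8])                              # a candidate one-tap filter

    y = w[0] * u                                     # output signal y(n)
    e = d - y                                        # estimation error e(n)

    J_LS = np.sum(e[100:200] ** 2)                   # total squared error over n1=100..n2=199 (LS)
    J_MSE = np.mean(np.abs(e) ** 2)                  # sample estimate of E{|e(n)|^2} (MSE)
    print(J_LS, J_MSE)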

Optimal Linear Filtering 11

The cost function J(n) = E{|e(n)|^p} can be used for any p ≥ 1, but most often p = 2 is chosen. This choice gives a convex cost function, which is referred to as the Mean Squared Error (MSE):

J = E{e(n) e*(n)} = E{|e(n)|^2}

Optimal Linear Filtering 12

In order to find the optimal filter coefficients, J is minimized with respect to them. This is done by differentiating J with respect to w_0, w_1, ..., and then setting the derivatives to zero. Here it is important that the cost function is convex, i.e., that there is a global minimum.

The minimization is expressed in terms of the gradient operator ∇,

∇J = 0

where ∇J is called the gradient vector.

In particular, the choice of the squared cost function (Mean Squared Error) leads to the Wiener-Hopf system of equations.


Optimal Linear Filtering 13

In matrix form, the cost function J = E{|e(n)|^2} can be written

J(w) = E{[d(n) − w^H u(n)][d(n) − w^H u(n)]*}
     = σ_d^2 − w^H p − p^H w + w^H R w

where

w = [w_0 w_1 ... w_{M−1}]^T                                 (M × 1)
u(n) = [u(n) u(n−1) ... u(n−M+1)]^T                         (M × 1)

R = E{u(n) u^H(n)} =
    [ r(0)       r(1)       ...  r(M−1) ]
    [ r*(1)      r(0)       ...  r(M−2) ]
    [ ...        ...        ...  ...    ]
    [ r*(M−1)    r*(M−2)    ...  r(0)   ]

p = E{u(n) d*(n)} = [p(0) p(−1) ... p(−(M−1))]^T            (M × 1)

σ_d^2 = E{d(n) d*(n)}

Optimal Linear Filtering 14

Applying the gradient operator yields

∇J(w) = 2 ∂J(w)/∂w* = 2 ∂/∂w* (σ_d^2 − w^H p − p^H w + w^H R w) = −2p + 2Rw

Setting the gradient vector to zero gives the Wiener-Hopf system of equations

R w_o = p          (Wiener-Hopf)

whose solution is the Wiener filter

w_o = R^{-1} p     (Wiener filter)

In other words, the Wiener filter is optimal when the cost is measured by the MSE.
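To make the matrix quantities concrete, here is a minimal sketch (not part of the original slides) that estimates R and p from synthetic data and solves the Wiener-Hopf equations; the signal model, the filter length M and all variable names are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    M = 4                                    # assumed filter length
    N = 50000                                # assumed number of samples
    u = rng.standard_normal(N)               # input signal u(n) (white, for illustration)
    h = np.array([0.8, -0.5, 0.3, 0.1])      # hypothetical "true" system
    d = np.convolve(u, h)[:N] + 0.01 * rng.standard_normal(N)   # desired signal d(n)

    # Rows are u(n) = [u(n) u(n-1) ... u(n-M+1)]^T for n = M-1, ..., N-1
    U = np.stack([u[M - 1 - k:N - k] for k in range(M)], axis=1)
    d_vec = d[M - 1:N]

    # Sample estimates of R = E{u(n) u^H(n)} and p = E{u(n) d*(n)}
    # (real-valued data here, so the conjugations are immaterial)
    R = (U.conj().T @ U) / U.shape[0]
    p = (U.conj().T @ d_vec) / U.shape[0]

    w_o = np.linalg.solve(R, p)              # Wiener filter: solves R w_o = p
    sigma_d2 = np.mean(np.abs(d_vec) ** 2)
    J_min = sigma_d2 - p.conj() @ w_o        # MMSE = sigma_d^2 - p^H R^{-1} p
    print(w_o, J_min)                        # w_o approaches h, J_min approaches the noise power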

Optimal Linear Filtering 15

The cost function's dependence on the filter coefficients w becomes clear when it is written in canonical form:

J(w) = σ_d^2 − w^H p − p^H w + w^H R w
     = σ_d^2 − p^H R^{-1} p + (w − w_o)^H R (w − w_o)

Here the Wiener-Hopf equations and the expression for the Wiener filter have been used, together with the decomposition

w^H R w = (w − w_o)^H R (w − w_o) − w_o^H R w_o + w_o^H R w + w^H R w_o

With the optimal filter w = w_o the minimal error J_min is attained:

J_min ≡ J(w_o) = σ_d^2 − p^H R^{-1} p          (MMSE)

Optimal Linear Filtering 16

Error-performance surface for an FIR filter with two coefficients, w = [w_0, w_1]^T.

[Figure: the quadratic surface J(w) plotted over (w_0, w_1), with its minimum at w = w_o.]

p = [0.5272  −0.4458]^T
R = [1.1  0.5; 0.5  1.1]
w_o = [0.8360  −0.7853]^T
σ_d^2 = 0.9486,  J_min = 0.1579
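A quick numerical check of this example (a minimal sketch, not part of the original slides; only R, p and σ_d^2 above come from the slides):

    import numpy as np

    R = np.array([[1.1, 0.5], [0.5, 1.1]])
    p = np.array([0.5272, -0.4458])
    sigma_d2 = 0.9486

    w_o = np.linalg.solve(R, p)        # Wiener filter: R w_o = p
    J_min = sigma_d2 - p @ w_o         # MMSE = sigma_d^2 - p^H R^{-1} p
    print(w_o)                         # approx [ 0.836  -0.785]
    print(J_min)                       # approx 0.158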


Steepest Descent 17

The method of Steepest Descent is a recursive method for finding the Wiener filter when the statistics of the signals are known.

The method of Steepest Descent is not an adaptive filter, but it serves as the basis for the LMS algorithm, which is presented in Lecture 2.

Steepest Descent 18

The method of Steepest Descent is a recursive method that leads to the Wiener-Hopf equations. The statistics (R, p) are known. The purpose is to avoid inverting R (which saves computations).

1. Set start values for the filter coefficients, w(0) (n = 0).
2. Determine the gradient ∇J(n), which points in the direction in which the cost function increases the most: ∇J(n) = −2p + 2Rw(n).
3. Adjust w(n+1) in the direction opposite to the gradient, but scale the adjustment with the step-size parameter µ:
   w(n+1) = w(n) + (1/2) µ [−∇J(n)]
4. Repeat steps 2 and 3 (see the sketch below).
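A minimal sketch of the recursion (not from the slides), using the R, p and w_o of the two-coefficient example above; the number of iterations and the step size are illustrative assumptions:

    import numpy as np

    R = np.array([[1.1, 0.5], [0.5, 1.1]])
    p = np.array([0.5272, -0.4458])

    mu = 0.5                        # assumed step size (must satisfy 0 < mu < 2/lambda_max)
    w = np.zeros(2)                 # start value w(0) = [0, 0]^T
    for n in range(100):            # assumed number of iterations
        grad = -2 * p + 2 * R @ w   # gradient vector of the MSE cost
        w = w + 0.5 * mu * (-grad)  # w(n+1) = w(n) + (1/2) mu [-grad J(n)]

    print(w)                        # converges toward w_o = [0.8360, -0.7853]^T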

Convergence, filter coefficients 19

Since the method of Steepest Descent contains feedback, there is a risk that the algorithm diverges. This limits the choice of the step-size parameter µ. An example of the critical choice of µ is given below. The statistics are the same as in the previous example.

[Figure: the filter coefficients w(n) plotted against the iteration index n (0 to 100) for µ = 0.1, 1.0, 1.25 and 1.5, illustrating convergence toward w_o,0 and w_o,1 for small enough µ and divergence for too large µ.]

p = [0.5272  −0.4458]^T,  R = [1.1  0.5; 0.5  1.1],  w_o = [0.8360  −0.7853]^T,  w(0) = [0  0]^T

Convergence, error surface 20

The influence of the step-size parameter on the convergence can be seen by analyzing J(w). The example below illustrates the convergence toward J_min for different choices of µ.

[Figure: contour plot of the error surface J(w) over (w_1, w_0), showing the trajectories from w(0) toward w_o for µ = 0.1, 0.5 and 1.0.]

p = [0.5272  −0.4458]^T,  R = [1.1  0.5; 0.5  1.1],  w_o = [0.8360  −0.7853]^T,  w(0) = [1  1.7]^T


Convergence analysis 21

How should µ be chosen? A small value gives slow convergence, while a large value brings a risk of divergence.

Perform an eigenvalue decomposition of R in the expression for J(w(n)):

J(n) = J_min + (w(n) − w_o)^H R (w(n) − w_o)
     = J_min + (w(n) − w_o)^H Q Λ Q^H (w(n) − w_o)
     = J_min + ν^H(n) Λ ν(n) = J_min + Σ_k λ_k |ν_k(n)|^2

The convergence of the cost function thus depends on ν(n), i.e., on the convergence of w(n), through the relationship ν(n) = Q^H (w(n) − w_o).

Convergence analysis 22

With the observation that w(n) = Q ν(n) + w_o, the update of the cost function can be derived:

w(n+1) = w(n) + µ [p − R w(n)]
Q ν(n+1) + w_o = Q ν(n) + w_o + µ [p − R Q ν(n) − R w_o]
ν(n+1) = ν(n) − µ Q^H R Q ν(n) = (I − µΛ) ν(n)
ν_k(n+1) = (1 − µλ_k) ν_k(n)          (element k of ν(n))

The last line is a first-order difference equation, with the solution

ν_k(n) = (1 − µλ_k)^n ν_k(0)

For this equation to converge it is required that |1 − µλ_k| < 1, which leads to the stability criterion of the method of Steepest Descent:

0 < µ < 2/λ_max
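As a small numerical illustration (not from the slides), the bound and the per-mode factors for the example statistics can be checked as follows; the chosen µ values mirror the convergence figures above:

    import numpy as np

    R = np.array([[1.1, 0.5], [0.5, 1.1]])
    lam = np.linalg.eigvalsh(R)           # eigenvalues of R: [0.6, 1.6]
    print("stability bound 2/lambda_max =", 2 / lam.max())   # 1.25

    for mu in (0.1, 0.5, 1.0, 1.25, 1.5):
        factors = 1 - mu * lam            # per-mode factors (1 - mu*lambda_k)
        ok = np.all(np.abs(factors) < 1)  # all modes must decay for convergence
        print(mu, factors, "converges" if ok else "does not converge")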


Summary Lecture 1 25

Recap of OSB
• Quadratic cost function, J = E{|e(n)|^2}
• Definition of u(n), w, R and p
• The correlations R and p are assumed to be known in advance.
• The gradient vector ∇J
• The optimal filter coefficients are given by w_o = R^{-1} p
• The optimal (minimal) cost J_min ≠ 0

Summary Lecture 1 26

Summary of the method of Steepest Descent
• A recursive solution for the Wiener filter
• The statistics (R and p) are assumed to be known
• The gradient vector ∇J(n) is time-dependent but deterministic, and points in the direction in which the cost function increases the most.
• Recursion of the filter weights: w(n+1) = w(n) + (1/2) µ [−∇J(n)]
• The cost function J(n) → J_min as n → ∞, i.e., w(n) → w_o as n → ∞
• For convergence it is required that the step size satisfies 0 < µ < 2/λ_max


Example: Inverse modeling 29

[Block diagram: the exciting signal drives both a delay z^−∆, producing the desired signal d(n), and the system under investigation, whose output u(n) is fed through an adaptive FIR filter; the filter output d̂(n) is subtracted from d(n) to form the error e(n) that drives the adaptive algorithm.]

In inverse modeling, the adaptive filter is connected in cascade with the system under investigation. If the system has only poles, an adaptive filter of the corresponding order can be used.

Example: Modeling/Identification 30

[Block diagram: the input u(n) is fed both to the system under investigation, whose output is the desired signal d(n), and to an adaptive FIR filter, whose output d̂(n) is subtracted from d(n) to form the error e(n) that drives the adaptive algorithm.]

In modeling/identification, the adaptive filter is connected in parallel with the system under investigation. If the system under investigation has only zeros, it is suitable to use a filter of the corresponding length. If the system has both poles and zeros, a long adaptive FIR filter is generally required.
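As an illustration of the modeling/identification setup, here is a minimal sketch (not from the slides) that identifies a zeros-only (FIR) system; the LMS update used as the adaptive algorithm is only a preview of Lecture 2, and the unknown system, filter length and step size are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(1)
    h = np.array([0.6, -0.4, 0.2])    # hypothetical unknown FIR system (zeros only)
    M = 3                             # adaptive filter length matching the system
    mu = 0.05                         # assumed step size
    N = 5000

    u = rng.standard_normal(N)        # excitation u(n)
    d = np.convolve(u, h)[:N]         # desired signal = output of the unknown system

    w = np.zeros(M)                   # adaptive FIR filter, connected in parallel
    for n in range(M - 1, N):
        u_vec = u[n - M + 1:n + 1][::-1]   # u(n) = [u(n) u(n-1) ... u(n-M+1)]^T
        e = d[n] - w @ u_vec               # error e(n) = d(n) - d_hat(n)
        w = w + mu * e * u_vec             # LMS update (preview of Lecture 2)

    print(w)                          # approaches h = [0.6, -0.4, 0.2]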

Example: Echo canceller I 31

[Block diagram: the speech u(n) from Speaker 1 is fed to an adaptive FIR filter whose output d̂(n) is subtracted from the signal d(n) returned from the telephone hybrid on Speaker 2's side; the error e(n) drives the adaptive algorithm and is sent back to Speaker 1.]

In telephony, the speech from Speaker 1 leaks through at the hybrid. Speaker 1 will then hear their own voice as an echo, which is what one wants to remove. Normally the adaptation is stopped while Speaker 2 is talking, but the filtering runs the whole time.

Example: Echo canceller II 32

[Block diagram: the far-end speech u(n) from Speaker 1 is played by a loudspeaker and picked up again by the microphone through the echo path (its impulse response), together with Speaker 2's speech; an adaptive FIR filter driven by u(n) produces d̂(n), which is subtracted from the microphone signal d(n) to form the error e(n) used by the adaptive algorithm.]

This structure is applicable to speakerphones, video-conferencing systems and the like. Just as in the telephony case, Speaker 1 hears themselves as an echo. Here the effect is more pronounced, however, since the microphone picks up what is emitted by the loudspeaker. The adaptation is normally stopped while Speaker 2 is talking, but the filtering runs the whole time.


Example: Adaptive Line Enhancer 33

[Block diagram: the periodic signal s(n) plus the colored disturbance v(n) forms d(n); a delayed copy u(n) = d(n−∆) is fed through an adaptive FIR filter (a linear predictor) whose output d̂(n) is subtracted from d(n) to form the error e(n) that drives the adaptive algorithm.]

Information in the form of a periodic signal is disturbed by colored noise, which is correlated with itself only within a certain time frame. By delaying the signal enough that the noise components (in u(n) and d(n), respectively) become uncorrelated, the noise can be suppressed by linear prediction.

Example: Channel equalizer (Equalizer) 34

[Block diagram: the training signal p(n) is sent through the channel C(z) and disturbed by noise v(n), giving the received signal u(n); u(n) is fed through an adaptive FIR filter whose output d̂(n) is compared with a delayed copy of p(n), the desired signal d(n), to form the error e(n) that drives the adaptive algorithm; the connection to d(n) is closed only during "training".]

A known pseudo-noise sequence is used to estimate an inverse model of the channel ("training"). After that the adaptation is stopped, but the filter continues to operate on the transmitted signal. The purpose is to remove the channel's influence on the transmitted signal.
