
time step, one can relax the equality in equation (11.21) and make $\log P(I' \mid \theta_i^t) \propto F(\theta_i) I(t)$ for some linear filter $F(\theta_i) = \epsilon w_i$. This avoids the problem of calculating the normalization factor for $P(I' \mid \theta_i^t)$, which can be especially hard when $I'$ takes on continuous values, such as in an image. A more challenging problem is to pick recurrent weights $U_{ij}$ such that equation (11.22) holds true, which requires approximating a log-sum with a sum-of-logs. One approach is to generate a set of random probabilities $x_j(t)$ for $t = 1, \ldots, T$ and find a set of weights $U_{ij}$ that satisfy:

$$\sum_j U_{ij} \log x_j(t) \;\approx\; \log\left[\sum_j P(\theta_i^t \mid \theta_j^{t-1})\, x_j(t)\right] \qquad (11.24)$$

for all $i$ and $t$. This can be done by minimizing the squared error in equation (11.24) with respect to the recurrent weights $U_{ij}$. This empirical approach, followed in [30], is used in some of the experiments below. An alternative approach is to exploit the nonlinear properties of dendrites, as suggested in the following section.
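As a hedged illustration of the empirical approach just described, the following Python sketch fits recurrent weights $U_{ij}$ by least squares so that a sum of logs approximates the log of a sum, as in equation (11.24). The state-space size, the transition matrix, and the distribution used to draw the random probabilities $x_j(t)$ are assumptions chosen for illustration; they are not taken from [30].

```python
import numpy as np

# Minimal sketch of the empirical approach described above (assumed toy
# values, not the procedure of [30]): fit recurrent weights U so that a
# sum of logs approximates the log of a sum, as in equation (11.24).

rng = np.random.default_rng(0)
N = 10      # number of states theta_i (illustrative)
T = 500     # number of random probability vectors x(t)

# Hypothetical transition probabilities P(theta_i^t | theta_j^{t-1});
# entry [i, j] is the probability of moving from state j to state i.
P_trans = rng.random((N, N))
P_trans /= P_trans.sum(axis=0, keepdims=True)

# Random probability vectors x(t); row t holds x_1(t), ..., x_N(t).
X = rng.dirichlet(np.ones(N), size=T)               # shape (T, N)

# Right-hand side of (11.24): log[ sum_j P(theta_i | theta_j) x_j(t) ].
targets = np.log(X @ P_trans.T)                     # shape (T, N)

# Left-hand side uses log x_j(t); solve the least-squares problem
# (log X) @ U^T ~= targets for the weight matrix U.
A = np.log(X)
W, *_ = np.linalg.lstsq(A, targets, rcond=None)
U = W.T                                             # U[i, j] multiplies log x_j(t)

# Report the quality of the sum-of-logs approximation.
mse = np.mean((A @ U.T - targets) ** 2)
print("mean squared error of equation (11.24):", mse)
```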

11.3.2 Exact Inference in Nonlinear Networks<br />

A firing rate model that takes into account some of the effects of nonlinear filtering in dendrites can be obtained by generalizing equation (11.18) as follows:

$$v_i(t+1) = f\left(w_i I(t)\right) + g\left(\sum_j U_{ij} v_j(t)\right), \qquad (11.25)$$

where $f$ and $g$ model nonlinear dendritic filtering functions for the feedforward and recurrent inputs, respectively. Comparing this equation with the belief propagation equation in the log domain (equation (11.19)), it can be seen that the former can implement the latter if:

$$v_i(t+1) = \log m_i^t \qquad (11.26)$$

$$f\left(w_i I(t)\right) = \log P(I' \mid \theta_i^t) \qquad (11.27)$$

$$g\left(\sum_j U_{ij} v_j(t)\right) = \log \sum_j P(\theta_i^t \mid \theta_j^{t-1})\, m_j^{t-1,t} \qquad (11.28)$$

In this model (figure 11.2B), $N$ neurons represent $\log m_i^t$ ($i = 1, \ldots, N$) in their firing rates. The dendritic filtering functions $f$ and $g$ approximate the logarithm function, the feedforward weights $w_i$ act as a linear filter on the input to yield the likelihood $P(I' \mid \theta_i^t)$, and the recurrent synaptic weights $U_{ij}$ directly encode the transition probabilities $P(\theta_i^t \mid \theta_j^{t-1})$. The normalization step is computed as in equation (11.23) using a separate group of neurons that represent the log posterior probabilities $\log m_i^{t,t+1}$ and that convey these probabilities for use in equation (11.28) by the neurons computing $\log m_i^{t+1}$.
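To make the correspondence concrete, here is a minimal Python sketch (an illustration under assumed toy values, not the chapter's implementation). It updates firing rates $v_i(t) = \log m_i^t$ by substituting the exact right-hand sides of equations (11.27) and (11.28) for the dendritic nonlinearities $f$ and $g$, and verifies that exponentiating the firing rates recovers the belief propagation messages. The network size, transition matrix, and likelihood values are hypothetical.

```python
import numpy as np

# Hedged sketch, not the chapter's implementation: update the firing rates
# v_i(t) = log m_i^t of equation (11.26) by computing the exact right-hand
# sides of equations (11.27) and (11.28), and check against belief
# propagation in the probability domain. All values below are toy values.

rng = np.random.default_rng(1)
N, T_steps = 5, 20

# Hypothetical transition probabilities P(theta_i^t | theta_j^{t-1}),
# directly encoded in the recurrent weights U.
U = rng.random((N, N))
U /= U.sum(axis=0, keepdims=True)

# Hypothetical likelihoods P(I'(t) | theta_i^t) for each time step.
likelihoods = rng.random((T_steps, N)) + 0.1

m = np.full(N, 1.0 / N)     # messages m_i, initialized to uniform
v = np.log(m)               # firing rates v_i = log m_i

for t in range(T_steps):
    # Belief propagation in the probability domain, with normalization
    # as in equation (11.23).
    m = likelihoods[t] * (U @ m)
    m /= m.sum()

    # Log-domain update: the term of equation (11.27) plus the term of
    # equation (11.28), followed by the same normalization in the log domain.
    v = np.log(likelihoods[t]) + np.log(U @ np.exp(v))
    v -= np.log(np.exp(v).sum())

    assert np.allclose(np.exp(v), m)  # firing rates encode the log messages

print("final log messages:", v)
```

In the model itself, $f$ and $g$ only approximate these logarithmic quantities through dendritic filtering, so the recovered messages would be approximate rather than exact.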
