
11 Neural Models of Bayesian Belief Propagation (Rajesh P. N. Rao)

Inference in Spiking Networks

By comparing the membrane potential equation (11.32) with the belief propagation equation in the log domain (equation (11.19)), we can postulate the following correspondences:

    v_i(t+1) = \log m_i^t                                                                  (11.34)

    f\Big(\sum_j w_{ij} I_j(t)\Big) = \log P(I' \,|\, \theta_i^t)                          (11.35)

    g\Big(\sum_j U_{ij} v'_j(t)\Big) = \log \sum_j P(\theta_i^t \,|\, \theta_j^{t-1})\, m_j^{t-1,t}   (11.36)

The dendritic filtering functions f and g approximate the logarithm function, the synaptic currents I_j(t) and v'_j(t) are approximated by the corresponding instantaneous firing rates, and the recurrent synaptic weights U_{ij} encode the transition probabilities P(\theta_i^t \,|\, \theta_j^{t-1}).
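Under these correspondences, a single recurrent update implements one log-domain belief propagation step. The following is a minimal NumPy sketch, assuming the dendritic filters f and g are exactly the logarithm (the text says they only approximate it) and collecting the transition probabilities into a matrix `T_mat` standing in for the recurrent weights U_{ij}:

```python
import numpy as np

def log_domain_update(v_prev, log_lik, T_mat):
    """One belief propagation step in the log domain (cf. eqs. 11.34-11.36).

    v_prev : (N,) previous log-messages, log m_j^{t-1,t}
    log_lik: (N,) feedforward log-likelihoods, log P(I' | theta_i^t)
    T_mat  : (N, N) transition probabilities P(theta_i | theta_j),
             encoded by the recurrent weights U_ij
    """
    # Recurrent term of eq. 11.36: log sum_j P(theta_i|theta_j) m_j,
    # computed with log-sum-exp for numerical stability.
    recur = np.logaddexp.reduce(np.log(T_mat) + v_prev[None, :], axis=1)
    v = log_lik + recur                # membrane potentials v_i(t+1) = log m_i^t
    return v - np.logaddexp.reduce(v)  # normalize so the messages sum to 1
```

Exponentiating the returned potentials recovers the same posterior that direct probability-domain filtering would produce.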

Since the membrane potential v_i(t+1) is assumed to be equal to \log m_i^t (equation (11.34)), we can use equation (11.33) to calculate the probability of spiking for each neuron i as:

    P(\text{neuron } i \text{ spikes at time } t+1) \propto e^{(v_i(t+1) - T)}   (11.37)

                                                   \propto e^{(\log m_i^t - T)} (11.38)

                                                   \propto m_i^t                (11.39)

Thus, the probability of spiking (or, equivalently, the instantaneous firing rate) for neuron i in the recurrent network is directly proportional to the message m_i^t, which is the posterior probability of the neuron's preferred state and current input given past inputs. Similarly, the instantaneous firing rate of the group of neurons representing \log m_i^{t,t+1} is proportional to m_i^{t,t+1}, which is precisely the input required by equation (11.36).
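The proportionality in equation (11.39) can be checked directly: exponentiating membrane potentials equal to log-messages and normalizing across the population recovers the messages themselves, independent of the threshold T. A sketch (the explicit population normalization is an added assumption, standing in for whatever gain control makes the proportionality an equality):

```python
import numpy as np

def spike_probs(v, threshold):
    """Spiking probabilities from membrane potentials (cf. eqs. 11.37-11.39):
    P(neuron i spikes) is proportional to exp(v_i - threshold)."""
    p = np.exp(v - threshold)
    return p / p.sum()  # normalize across the population
```

With v_i = \log m_i^t, the threshold cancels in the normalization and the spiking probabilities equal the messages m_i^t exactly.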

11.4 Results<br />

11.4.1 Example 1: Detecting Visual Motion<br />

We first illustrate the application of the linear firing rate-based model (section 11.3.1) to the problem of detecting visual motion. A prominent property of visual cortical cells in areas such as V1 and MT is selectivity to the direction of visual motion. We show how the activity of such cells can be interpreted as representing the posterior probability of stimulus motion in a particular direction, given a series of input images. For simplicity, we focus on the case of 1D motion in an image consisting of X pixels with two possible motion directions: leftward (L) or rightward (R).
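This setup can be made concrete with a small simulation. The sketch below is illustrative only: the state ordering, transition probabilities, and likelihood values are assumptions, not numbers from the text. It filters a single bright pixel drifting rightward and reads out the total posterior mass on the rightward states:

```python
import numpy as np

# Assumed encoding: X pixel locations; state index i is "leftward at
# location i", index X + i is "rightward at location i".
X = 8
N = 2 * X

# Assumed transition model: a rightward state at location i most likely
# follows a rightward state at i-1; leftward states shift the other way,
# with a small chance of switching direction. Columns sum to 1.
T = np.zeros((N, N))
for i in range(X):
    T[i, (i + 1) % X] = 0.9          # L at i follows L at i+1
    T[i, X + (i + 1) % X] = 0.1      # direction switch R -> L
    T[X + i, X + (i - 1) % X] = 0.9  # R at i follows R at i-1
    T[X + i, (i - 1) % X] = 0.1      # direction switch L -> R

def filter_step(m_prev, lik):
    """Probability-domain counterpart of eq. 11.36:
    m_i proportional to P(I|theta_i) * sum_j P(theta_i|theta_j) m_j."""
    m = lik * (T @ m_prev)
    return m / m.sum()

# A single bright pixel moving rightward across the image.
m = np.ones(N) / N
for t in range(3):
    image = np.zeros(X)
    image[2 + t] = 1.0
    # Likelihood peaked at the lit pixel, identical for both direction
    # groups (the direction is disambiguated only by the dynamics).
    lik = np.concatenate([image, image]) + 0.05
    m = filter_step(m, lik)

p_right = m[X:].sum()  # posterior probability of rightward motion
```

Because the likelihood treats both directions identically at every frame, any preference for rightward motion in `p_right` comes entirely from the transition dynamics, which is the point of the example.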

Let the state θ_{ij} represent a motion direction j ∈ {L, R} at spatial location i. Consider a network of N neurons, each representing a particular state θ_{ij} (figure 11.4A). The feedforward weights are assumed to be Gaussians, i.e., F(θ_{iR}) = F(θ_{iL}) = F(θ_i) = Gaussian centered at location i with a standard
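The Gaussian feedforward weights F(θ_i) can be sketched as a matrix over the X pixel locations, one row per location; the standard deviation used here is an illustrative assumption:

```python
import numpy as np

def gaussian_feedforward_weights(X, sigma=1.5):
    """F(theta_iR) = F(theta_iL) = F(theta_i): a Gaussian over the X image
    pixels, centered at location i and shared by the L and R states there.
    sigma = 1.5 is an illustrative choice, not a value from the text."""
    locs = np.arange(X)
    F = np.exp(-((locs[None, :] - locs[:, None]) ** 2) / (2.0 * sigma ** 2))
    return F / F.sum(axis=1, keepdims=True)  # row i: weights for location i
```

Each row peaks at its own location and falls off smoothly, so nearby pixels contribute to the log-likelihood term f(Σ_j w_{ij} I_j(t)) of equation (11.35).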
