Intelligence, Surveillance, and Reconnaissance - Spawar


where f_i = 1 / (1 + exp(−x_i)) is the output of the i-th hidden node, x_i is the dot-product sum of the previous input layer's output with the connecting weights of the hidden layer, NN_m is the m-th output of the neural network, w_im is the m-th output weight connected to the i-th hidden node, w_ki is the k-th input weight connected to the i-th hidden node, and I_k is the k-th input feeding the neural network.
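Under the notation above, a single-hidden-layer network can be sketched as follows; the function names and array layouts are assumptions for illustration, not the article's implementation.

```python
import numpy as np

def sigmoid(x):
    # f_i = 1 / (1 + exp(-x_i)): the squashing function defined above
    return 1.0 / (1.0 + np.exp(-x))

def nn_output(I, w_in, w_out):
    # I     : inputs I_k, shape (K,)
    # w_in  : input weights w_ki, shape (K, H)   (layout assumed)
    # w_out : output weights w_im, shape (H, M)  (layout assumed)
    x = I @ w_in      # x_i: dot-product sums feeding the hidden nodes
    f = sigmoid(x)    # f_i: hidden-node outputs
    return f @ w_out  # NN_m: the m-th network output
```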

Neural Extended Kalman Filter

The NEKF developed by Stubberud [1] is based on the Singhal and Wu EKF neural-network trainer. The algorithm uses a Kalman filter to estimate the states with a linear system model while simultaneously using the same Kalman filter to train a neural network that accounts for the nonlinearities, mismodeled dynamics, higher-order modes, and other facets of the system. The system states are thus estimated without the nonlinearities having to be modeled a priori, as they must be in the extended Kalman filter. The inputs to the neural network are the updated states of the filter. The inputs are passed through an input layer, a hidden layer with nonlinear squashing functions, and an output layer. The outputs of the neural network are nonlinear corrections to the linear predicted state of the underlying Kalman filter.
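A minimal sketch of the prediction step just described, assuming a hypothetical `nn_correction` callable that stands in for the trained network; the actual NEKF also carries the network weights in its filter state and trains them with the same Kalman update, which this sketch omits.

```python
import numpy as np

def nekf_predict(x_upd, P_upd, F, Q, nn_correction):
    # The linear model F propagates the updated state; the online-trained
    # neural network adds a nonlinear correction to that linear prediction.
    x_pred = F @ x_upd + nn_correction(x_upd)
    # Standard covariance propagation for the linear part only
    # (state/weight coupling omitted in this sketch).
    P_pred = F @ P_upd @ F.T + Q
    return x_pred, P_pred
```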

NEKF IMM ALGORITHM

The NEKF IMM algorithm was developed under this ILIR project. The IMM architecture allows Kalman-filter models of different state dimensions to be mixed together appropriately. Care had to be taken in mixing the NEKF's state vector and covariance matrix with those of the other two linear Kalman-filter models. To keep the covariance matrix of the neural-network model positive definite, the mixing probability of the interacting multiple-model architecture had to be applied to the artificial-neural-network weights and covariance matrix as well. At first glance, it was not obvious that the weight vector and weight covariance matrix of the NEKF should be probabilistically weighted by the mode probability. Once the probability weighting was incorporated and had stabilized the neural-network covariance matrix, a different sigmoid squashing function had to be implemented. Weighting the neural network by its mixing probability drove the weight vector toward zero, and with the hyperbolic tangent squashing function, which varies over [−1, 1], zero weights produce zero hidden-node outputs and therefore zero training derivatives, leaving the weights untrainable. A new sigmoid varying over [0, 1] outputs 0.5 at a zero input, so the derivative of the network output with respect to each output weight remains 0.5 even when the weights equal 0.0. This kept the weights trainable when they were initialized to zero or tended toward zero. Once this was incorporated, the neural network could be trained inside an IMM architecture.
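The squashing-function argument can be checked numerically: when the weights are zero, every hidden-node input is zero, and the derivative of the network output with respect to an output weight is the hidden-node output itself.

```python
import numpy as np

x = 0.0  # hidden-node input when all weights are zero
tanh_out = np.tanh(x)                    # 0.0 -> zero training derivative, untrainable
logistic_out = 1.0 / (1.0 + np.exp(-x))  # 0.5 -> nonzero training derivative, trainable
print(tanh_out, logistic_out)
```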

RESULTS

The scenario consisted of a single target moving in a circular motion for 500 seconds. The speed of the target was 50 meters per second, and the angular rate of the turn was −10 degrees per second. The sampling rate of the radar was one sample per second. There were 500 samples for each Monte Carlo run. Figures 1 through 5 and Table 1 are the 100 Monte
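The target trajectory can be reproduced from the stated scenario parameters; the origin, initial heading, and coordinate convention below are assumptions for illustration.

```python
import numpy as np

speed = 50.0                   # target speed, m/s
omega = np.deg2rad(-10.0)      # turn rate, rad/s (negative: clockwise turn)
dt = 1.0                       # radar sampling interval, s
t = np.arange(0.0, 500.0, dt)  # 500 samples per Monte Carlo run

radius = speed / abs(omega)    # turn radius implied by speed and turn rate
heading = omega * t
x = radius * np.sin(heading)   # target position (start point and phase assumed)
y = radius * (1.0 - np.cos(heading))
```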
