International Journal Of Computational Engineering Research (ijceronline.com) Vol. 2 Issue. 8

Still, this work has many prospects in multilevel inverters, where better operating algorithms can be proposed as the number of inverter levels increases. The number of redundant states is larger, and they should be exploited in future work. The system then behaves as if there were no fault in the system at all.

4. NEURAL NETWORK MODEL DESCRIPTION

There are various hardware implementations based on ASICs, DSPs, and FPGAs. A DSP-based implementation is sequential and hence does not preserve the parallel architecture of the neurons in a layer. ASIC implementations do not offer reconfigurability by the user. An FPGA is suitable hardware for neural network implementation, as it preserves the parallel architecture of the neurons in a layer and offers flexibility in reconfiguration. FPGA realization of ANNs with a large number of neurons is a challenging task. Selecting weight precision is one of the important choices when implementing ANNs on FPGAs. Weight precision trades off the capabilities of the realized ANNs against the implementation cost. Higher weight precision means fewer quantization errors in the final implementation, while lower precision leads to simpler designs, greater speed, and reductions in area requirements and power consumption. One way of resolving the trade-off is to determine the "minimum precision" required. In this work we consider the weights as 8-bit fixed-point values.

Direct implementation of the non-linear sigmoid transfer function is very expensive. As the excitation function is highly nonlinear, we adopt the Look-Up Table (LUT) method in order to simplify its computation. The LUT is implemented using the RAM built into the FPGA IC.

Fig 1. Neuron RTL Block Diagram (blocks: Input, ROM (weights), LUT Multiplier + Accumulator, activation LUT, Output Register, Output)

The use of LUTs reduces the resource requirement and improves the speed.
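The two design decisions above, 8-bit fixed-point weights and a block-RAM sigmoid table, can be sketched in software as follows. This is a minimal behavioral model, not the paper's VHDL: the Q1.7 weight format, the 256-entry table size, and the input range [-8, 8) are illustrative assumptions, since the paper does not state the exact word formats.

```python
import numpy as np

# Assumed format: 8-bit signed fixed point, Q1.7 (7 fractional bits).
WEIGHT_BITS = 8
FRAC_BITS = 7

def quantize_weight(w):
    """Quantize a real-valued weight to 8-bit fixed point, with saturation."""
    scaled = int(round(w * (1 << FRAC_BITS)))
    lo = -(1 << (WEIGHT_BITS - 1))          # -128
    hi = (1 << (WEIGHT_BITS - 1)) - 1       #  127
    return max(lo, min(hi, scaled))

def dequantize_weight(q):
    """Recover the real value represented by a quantized weight."""
    return q / (1 << FRAC_BITS)

# Precompute the sigmoid table once, mimicking the inbuilt-RAM LUT.
# 256 entries over [-8, 8) is an assumed, not stated, choice.
LUT_SIZE = 256
LUT_LO, LUT_HI = -8.0, 8.0
_xs = np.linspace(LUT_LO, LUT_HI, LUT_SIZE, endpoint=False)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-_xs))

def sigmoid_lut(x):
    """Approximate sigmoid(x) by addressing the precomputed table."""
    idx = int((x - LUT_LO) / (LUT_HI - LUT_LO) * LUT_SIZE)
    idx = max(0, min(LUT_SIZE - 1, idx))    # saturate out-of-range inputs
    return SIGMOID_LUT[idx]
```

Note how the quantizer makes the precision trade-off concrete: a weight of 0.5 is stored exactly (code 64), while anything outside [-1, 1) saturates, which is the kind of error a "minimum precision" study must bound.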
Also, the LUT implementation needs no external RAM, since the inbuilt memory is sufficient to implement the excitation function. Fig. 1 shows the basic structure of the functional unit (neuron) that implements the calculations associated with a neuron. Each neuron has a ROM, which stores the weights of the connection links between the particular neuron and the neurons of the previous layer. The multiplier performs high-speed multiplication of input signals with weights from the ROM, and is itself implemented as an LUT multiplier. Such an implementation requires one of the operands to be constant; the other operand then addresses the LUT, where the result of the multiplication has been stored in advance. Given two operands A and B with n and m bits respectively, with B constant, their multiplication can be implemented in an LUT of 2^n entries. Since both the multiplier and the activation function (sigmoid) are implemented using LUTs, the cost of implementation is greatly reduced. In the conclusion section we compare the model of reference [15] with our model with respect to implementation cost. A sixteen-bit register is used to hold the weights from the ROM and the input signal from the previous layer.

The whole MLP implementation is shown in Fig. 2. The network mainly consists of input registers, a control unit, neurons, and an output register. To provide one neuron output to the next stage at each clock cycle, a multiplexer and a counter are used. The training of the network is done in software and the results are loaded into hardware. Weights are updated during the training process but remain constant during the detection process. The Register Transfer Level (RTL) design of the system has been carried out using standard VHDL as the hardware description language. This language allows three different levels of description; we have chosen RTL to implement this system. The entire design process was done using the ISE development tool from Xilinx.
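The constant-coefficient LUT multiplier described above can be modeled in a few lines. Because the weight B is fixed after training, the product A*B for every n-bit input A is precomputed into a 2^n-entry table, and run-time "multiplication" is just an address lookup; a neuron's multiply-accumulate then reduces to one lookup and one addition per connection. This is a behavioral sketch under assumed unsigned 8-bit inputs, not the paper's VHDL.

```python
def build_mult_lut(weight, n_bits=8):
    """Precompute A * weight for every unsigned n-bit value of A.

    This table plays the role of the LUT multiplier ROM: one table
    per (constant) weight, 2**n_bits entries."""
    return [a * weight for a in range(1 << n_bits)]

def lut_multiply(lut, a):
    """'Multiply' by addressing the precomputed table with the input."""
    return lut[a]

def neuron_mac(inputs, weight_luts):
    """Model the neuron's multiply-accumulate: one LUT per connection,
    products summed into the accumulator."""
    acc = 0
    for x, lut in zip(inputs, weight_luts):
        acc += lut_multiply(lut, x)
    return acc
```

For example, with weights 2 and 5 and inputs 1 and 2, the accumulator holds 1*2 + 2*5 = 12 before being passed to the activation LUT. The design choice is clear here: the table costs 2^n words of ROM per weight, but removes the hardware multiplier entirely.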
The system was physically implemented on a Spartan-3 XC3S4000 Xilinx FPGA device.

ISSN 2250-3005 (online) December 2012 Page 23
