
Using Theano, we can define many types of functions working on scalars, arrays, and matrices, as well as other mathematical expressions. For instance, we can create a function that computes the length of the hypotenuse of a right-angled triangle:

import theano
from theano import tensor as T

First, we define the two inputs, a and b. These are simple numerical values, so we define them as scalars:

a = T.dscalar()
b = T.dscalar()

Then, we define the output, c. This is an expression based on the values of a and b:

c = T.sqrt(a ** 2 + b ** 2)

Note that c isn't a function or a value here; it is simply an expression, given a and b. Note also that a and b don't have actual values yet: this is an algebraic expression, not a concrete computation. To compute with this expression, we define a function:

f = theano.function([a, b], c)

This tells Theano to create a function that takes values for a and b as inputs and returns c as an output, computed on the values given. For example, f(3, 4) returns 5.

While this simple example may not seem much more powerful than what we can already do with Python, we can now reuse the mathematical expression c in other parts of our code and in other compiled functions. In addition, although we defined c before the function was defined, no actual computation was done until we called the function.
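The snippets above can be combined into a single runnable script. In the following sketch, the second compiled function, g, is a hypothetical addition (it is not part of the original example): it illustrates reusing the same expression c in more than one function, and that nothing is computed until a compiled function is actually called:

import theano
from theano import tensor as T

# Symbolic inputs and the hypotenuse expression; no computation happens here
a = T.dscalar()
b = T.dscalar()
c = T.sqrt(a ** 2 + b ** 2)

# Compile two different functions from the same expression c
f = theano.function([a, b], c)      # the hypotenuse itself
g = theano.function([a, b], c * 2)  # hypothetical: twice the hypotenuse

print(f(3, 4))  # prints 5.0
print(g(3, 4))  # prints 10.0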

An introduction to Lasagne

Theano isn't a library to build neural networks. In the same way, NumPy isn't a library to perform machine learning; it just does the heavy lifting and is generally used from another library. Lasagne is such a library, designed specifically for building neural networks, using Theano to perform the computation.

Lasagne implements a number of modern types of neural network layers, and the building blocks for building them. These include the following:

• Network-in-network layers: These are small neural networks that are easier to interpret than traditional neural network layers.
• Dropout layers: These randomly drop units during training, preventing overfitting, which is a major problem in neural networks.
• Noise layers: These introduce noise into the neurons, again addressing the overfitting problem.

In this chapter, we will use convolution layers (layers that are organized to mimic the way in which human vision works). They use small collections of connected neurons that each analyze only a segment of the input values, in this case a segment of an image. This allows the network to deal with standard alterations of its input; in vision-based experiments, for example, convolution layers can handle translations of the image. In contrast, a traditional neural network is often fully connected, with all neurons from one layer connecting to all neurons in the next layer. Convolutional networks are implemented in the lasagne.layers.Conv1DLayer and lasagne.layers.Conv2DLayer classes.

At the time of writing, Lasagne hasn't had a formal release and is not on pip. You can install it from GitHub. In a new folder, download the source code repository using the following:

git clone https://github.com/Lasagne/Lasagne.git

From within the created Lasagne folder, you can then install the library using the following:

sudo python3 setup.py install

See http://lasagne.readthedocs.org/en/latest/user/installation.html for installation instructions.

Neural networks that use convolutional layers are generally just called Convolutional Neural Networks. These networks also use pooling layers, which take the maximum output for a certain region. This reduces noise caused by small variations in the image, and reduces (or down-samples) the amount of information. This has the added benefit of reducing the amount of work needed in later layers. Lasagne also implements these pooling layers, for example in the lasagne.layers.MaxPool2DLayer class. Together with the convolution layers, we have all the tools needed to build a convolutional neural network.
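As a minimal sketch of how these pieces fit together, the following stacks the layer types discussed above. The input shape, filter count, dropout rate, and the final dense layer are illustrative assumptions, not values from this chapter:

import lasagne

# Input layer: batches of single-channel 28x28 images (an assumed example shape)
network = lasagne.layers.InputLayer(shape=(None, 1, 28, 28))

# Convolution layer: 32 filters, each a 3x3 window over a segment of the image
network = lasagne.layers.Conv2DLayer(network, num_filters=32,
                                     filter_size=(3, 3))

# Pooling layer: keep the maximum output of each 2x2 region (down-sampling)
network = lasagne.layers.MaxPool2DLayer(network, pool_size=(2, 2))

# Dropout layer: randomly drop half the units during training
network = lasagne.layers.DropoutLayer(network, p=0.5)

# Dense output layer, assuming a 10-class problem
network = lasagne.layers.DenseLayer(network, num_units=10,
                                    nonlinearity=lasagne.nonlinearities.softmax)

Each layer constructor takes the previous layer as its first argument, so a network is defined by chaining layers from the input through to the output.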
