Chapter 11

Using Theano, we can define many types of functions working on scalars, arrays, and matrices, as well as other mathematical expressions. For instance, we can create a function that computes the length of the hypotenuse of a right-angled triangle:

import theano
from theano import tensor as T

First, we define the two inputs, a and b. These are simple numerical values, so we define them as scalars:

a = T.dscalar()
b = T.dscalar()

Then, we define the output, c. This is an expression based on the values of a and b:

c = T.sqrt(a ** 2 + b ** 2)

Note that c isn't a function or a value here; it is simply an expression, given a and b. Note also that a and b don't have actual values yet: this is an algebraic expression, not an absolute one. To compute with this expression, we define a function:

f = theano.function([a, b], c)

This tells Theano to create a function that takes values for a and b as inputs and returns c as an output, computed on the values given. For example, f(3, 4) returns 5.

While this simple example may not seem much more powerful than what we can already do with Python, we can now use our function, or the mathematical expression c, in other parts of our code. In addition, although we defined c before the function was defined, no actual computation was done until we called the function.

An introduction to Lasagne

Theano isn't a library to build neural networks, in much the same way that NumPy isn't a library to perform machine learning; each does the heavy lifting and is generally used from another library. Lasagne is such a library, designed specifically for building neural networks, using Theano to perform the computation.

Lasagne implements a number of modern types of neural network layers, and the building blocks for constructing them.
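For comparison, the same computation can be written eagerly in plain NumPy. This is a sketch, not from the book, and the function name hypotenuse is our own; unlike the Theano version, NumPy evaluates the expression immediately rather than building a symbolic graph first, but the same expression does work for scalars and arrays alike:

```python
import numpy as np

# Eager NumPy equivalent of the Theano expression c = sqrt(a**2 + b**2).
# There is no separate "compile" step: the result is computed as soon as
# the function is called.
def hypotenuse(a, b):
    return np.sqrt(a ** 2 + b ** 2)

print(hypotenuse(3, 4))                                          # scalar inputs
print(hypotenuse(np.array([3.0, 5.0]), np.array([4.0, 12.0])))   # vector inputs
```

The deferred, compiled evaluation is what Theano adds on top of this: the symbolic graph can be optimized and run on a GPU before any values are supplied.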
Classifying Objects in Images Using Deep Learning

These include the following:

• Network-in-network layers: These are small neural networks that are easier to interpret than traditional neural network layers.
• Dropout layers: These randomly drop units during training, preventing overfitting, which is a major problem in neural networks.
• Noise layers: These introduce noise into the neurons, again addressing the overfitting problem.

In this chapter, we will use convolution layers: layers that are organized to mimic the way human vision works. They use small collections of connected neurons that each analyze only a segment of the input values (in this case, a patch of an image). This allows the network to deal with standard alterations of the input; in vision-based experiments, for example, convolution layers handle translations of the image. In contrast, a traditional neural network is often densely connected: all neurons in one layer connect to all neurons in the next layer. Convolutional layers are implemented in the lasagne.layers.Conv1DLayer and lasagne.layers.Conv2DLayer classes.

At the time of writing, Lasagne hasn't had a formal release and is not on pip. You can install it from GitHub. In a new folder, download the source code repository using the following:

git clone https://github.com/Lasagne/Lasagne.git

From within the created Lasagne folder, you can then install the library using the following:

sudo python3 setup.py install

See http://lasagne.readthedocs.org/en/latest/user/installation.html for installation instructions.

Neural networks that use convolutional layers (generally called just Convolutional Neural Networks) also typically use pooling layers, which take the maximum output for a certain region of the input. This reduces noise caused by small variations in the image, and reduces (or down-samples) the amount of information.
This has the added benefit of reducing the amount of work needed in later layers. Lasagne also implements these pooling layers, for example in the lasagne.layers.MaxPool2DLayer class. Together with the convolution layers, we have all the tools needed to build a convolutional neural network.
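To make these two operations concrete, here is a minimal NumPy sketch of what a single convolution filter and a 2x2 max pooling step compute. This is illustrative only (the helper names conv2d_valid and max_pool2d are our own); Lasagne's Conv2DLayer and MaxPool2DLayer implement optimized versions with trainable weights, strides, and padding:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Naive 2D cross-correlation ("valid" mode): each output value is
    # computed from only a small patch of the input, unlike a fully
    # connected layer, where every input feeds every output.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(x, size=2):
    # Non-overlapping max pooling: keep the maximum of each size x size
    # region, down-sampling the input and smoothing small variations.
    h, w = x.shape[0] - x.shape[0] % size, x.shape[1] - x.shape[1] % size
    blocks = x[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2)) / 4.0            # a simple 2x2 averaging filter
features = conv2d_valid(image, kernel)    # shape (3, 3)
pooled = max_pool2d(features)             # shape (1, 1) after 2x2 pooling
print(features)
print(pooled)
```

Note how the pooled output keeps only one value per region: this is the down-sampling that reduces the amount of work for later layers.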