• method='nelder-mead': This is used to select the Nelder-Mead optimization routine (SciPy supports quite a number of different options)
• args=(friends,): This passes the friends dictionary to the function that is being minimized
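Putting these arguments together, the call looks roughly like the minimal sketch below. Here, invert and friends are stand-ins for the chapter's real scoring function and data; the toy curve simply has its minimum at the threshold discussed next, so the example runs on its own:

from scipy.optimize import minimize

def invert(threshold, friends):
    # minimize() passes the parameter as a length-1 array.
    t = threshold[0]
    # Stand-in for the chapter's real scoring function, which builds the
    # graph at this threshold and returns the *negated* Silhouette
    # Coefficient (minimize() searches for low values, so a score we
    # want to maximize gets negated). This toy curve keeps the example
    # self-contained.
    return (t - 0.135) ** 2 - 0.192

friends = {}  # placeholder for the real friends dictionary
result = minimize(invert, 0.1, args=(friends,),
                  method='nelder-mead', options={'maxiter': 10})
print(result.x)    # the best threshold found
print(result.fun)  # the (negated) score at that threshold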
This function will take quite a while to run. Our graph creation function isn't that fast, nor is the function that computes the Silhouette Coefficient. Decreasing the maxiter value will result in fewer iterations being performed, but we run the risk of finding a suboptimal solution.
Running this function, I got a threshold of 0.135 that returns 10 components. The score returned by the minimize function was -0.192. However, we must remember that we negated this value, so our score was actually 0.192. The value is positive, which indicates that the clusters tend to be more separated than not (a good thing). We could run other models and check whether they result in a better score, which would mean that the clusters are better separated.
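If you want to sanity-check a clustering's score directly, scikit-learn exposes the same metric as silhouette_score. A minimal sketch with made-up points and labels (the chapter's real features and component labels would go in their place):

import numpy as np
from sklearn.metrics import silhouette_score

# Two tight, well-separated toy clusters. Scores lie in [-1, 1], and
# positive values mean samples sit closer to their own cluster than to
# the next-nearest one.
X = np.array([[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]])
labels = np.array([0, 0, 1, 1])
print(silhouette_score(X, labels))  # close to 1.0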
We could use this result to recommend users—if a user is in a connected component, then we can recommend other users in that component. This recommendation follows our use of the Jaccard Similarity to find good connections between users, our use of connected components to split them up into clusters, and our use of the optimization technique to find the best model in this setting.
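A hedged sketch of that recommendation step with NetworkX; the tiny graph here stands in for the thresholded similarity graph built earlier in the chapter:

import networkx as nx

# Stand-in graph; in the chapter, edges connect users whose Jaccard
# Similarity exceeds the optimized threshold.
G = nx.Graph()
G.add_edges_from([("alice", "bob"), ("bob", "carol"), ("dave", "erin")])

def recommend(user, graph):
    # Recommend every other user in the same connected component.
    component = nx.node_connected_component(graph, user)
    return sorted(component - {user})

print(recommend("alice", G))  # ['bob', 'carol']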
However, a large number of users may not be connected at all, so we will use a different algorithm to find clusters for them.

Summary
In this chapter, we looked at graphs from social networks and how to do cluster analysis on them. We also looked at saving and loading models from scikit-learn by using the classification model we created in Chapter 6, Social Media Insight Using Naive Bayes.
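As a reminder of what that persistence looks like, here is a minimal sketch using joblib; the filename and the untrained stand-in model are illustrative, and the chapter's exact save/load code may differ:

import joblib
from sklearn.naive_bayes import BernoulliNB

model = BernoulliNB()  # stand-in for the Chapter 6 classifier
# In real use the model would be fit on training data first.
joblib.dump(model, "model.joblib")      # save the model to disk
restored = joblib.load("model.joblib")  # load it back later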
We created a graph of friends from a social network, in this case Twitter. We then examined how similar two users were, based on their friends. Users with more friends in common were considered more similar, although we normalized this by considering the overall number of friends they have. This is a commonly used way to infer knowledge (such as age or general topic of discussion) based on similar users. We can use this logic for recommending users to others—if they follow user X and user Y is similar to user X, they will probably like user Y. This is, in many ways, similar to our transaction-led similarity of previous chapters.
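That normalization is just the Jaccard Similarity used throughout the chapter; as a quick, self-contained sketch:

def jaccard_similarity(friends_a, friends_b):
    # Friends in common, normalized by the total number of distinct
    # friends across both users; ranges from 0 (nothing shared) to 1.
    union = friends_a | friends_b
    if not union:
        return 0.0
    return len(friends_a & friends_b) / len(union)

print(jaccard_similarity({"x", "y", "z"}, {"y", "z", "w"}))  # 0.5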
The aim of this analysis was to recommend users, and our use of cluster analysis allowed us to find clusters of similar users. To do this, we found connected components on a weighted graph we created based on this similarity metric. We used the NetworkX package for creating our graphs, working with them, and finding these connected components.

We then used the Silhouette Coefficient, which is a metric that evaluates how good a clustering solution is. Higher scores indicate a better clustering, according to the concepts of intra-cluster and inter-cluster distance. SciPy's optimize module was used to find the solution that maximizes this value.

In this chapter, we compared a few opposites too. Similarity is a measure between two objects, where higher values indicate more similarity between those objects. In contrast, distance is a measure where lower values indicate more similarity. Another contrast we saw was a loss function, where lower scores are considered better (that is, we lost less). Its opposite is the score function, where higher scores are considered better.

In the next chapter, we will see how to extract features from another new type of data: images. We will discuss how to use neural networks to identify numbers in images and develop a program to automatically beat CAPTCHA images.