Learning Data Mining with Python

• method='nelder-mead': This selects the Nelder-Mead optimization routine (SciPy supports quite a number of different options)
• args=(friends,): This passes the friends dictionary to the function that is being minimized

This function will take quite a while to run. Our graph creation function isn't that fast, nor is the function that computes the Silhouette Coefficient. Decreasing the maxiter value will result in fewer iterations being performed, but we run the risk of finding a suboptimal solution.

Running this function, I got a threshold of 0.135 that returns 10 components. The score returned by the minimize function was -0.192. However, we must remember that we negated this value, so our score was actually 0.192. The value is positive, which indicates that the clusters tend to be more separated than not (a good thing). We could run other models and check whether they result in a better score, which would mean that the clusters are better separated.

We could use this result to recommend users: if a user is in a connected component, then we can recommend other users in that component. This recommendation follows our use of the Jaccard Similarity to find good connections between users, our use of connected components to split them into clusters, and our use of the optimization technique to find the best model in this setting. However, a large number of users may not be connected at all, so we will use a different algorithm to find clusters for them.

Summary

In this chapter, we looked at graphs from social networks and how to perform cluster analysis on them. We also looked at saving and loading models from scikit-learn, using the classification model we created in Chapter 6, Social Media Insight Using Naive Bayes.

We created a graph of friends from a social network, in this case Twitter. We then examined how similar two users were, based on their friends.
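The calling convention described above can be sketched as follows. The scoring function here is a stand-in: a simple quadratic whose best value happens to sit at 0.135, matching the numbers quoted in the text purely for illustration. The real function builds the graph at the given threshold and computes the Silhouette Coefficient; the friends dictionary contents below are invented.

```python
from scipy.optimize import minimize

# minimize() minimizes, so the function returns the *negated* score:
# this way, minimizing it maximises the underlying score.
def invert_score(threshold, friends):
    best = friends["best_score"]          # hypothetical data passed via args
    return (threshold[0] - 0.135) ** 2 - best

friends = {"best_score": 0.192}           # hypothetical stand-in dictionary
result = minimize(invert_score, 0.1, method='nelder-mead',
                  args=(friends,), options={'maxiter': 100})
print(round(result.x[0], 3))              # best threshold, near 0.135
print(round(-result.fun, 3))              # negate back: score near 0.192
```

Note the final negation of result.fun, which is why the value reported by minimize is -0.192 while the actual score is 0.192.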
Users with more friends in common were considered more similar, although we normalized this by considering the overall number of friends they have. This is a commonly used way to infer knowledge (such as age or general topic of discussion) from similar users. We can use this logic for recommending users to others: if someone follows user X, and user Y is similar to user X, they will probably like user Y too. This is, in many ways, similar to our transaction-led similarity from previous chapters.

[ 159 ]
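This friends-in-common measure, normalized by the total number of friends, is the Jaccard Similarity. A minimal sketch, with made-up user sets for illustration:

```python
def jaccard_similarity(friends_a, friends_b):
    """Friends in common, divided by all distinct friends between the two users."""
    union = len(friends_a | friends_b)
    return len(friends_a & friends_b) / union if union else 0.0

# Two hypothetical users who share two of four distinct friends:
print(jaccard_similarity({"amy", "bob", "cat"}, {"bob", "cat", "dan"}))  # 0.5
```

The result always lies between 0 (no friends in common) and 1 (identical friend sets), which makes it convenient to use directly as an edge weight.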

Discovering Accounts to Follow Using Graph Mining

The aim of this analysis was to recommend users, and our use of cluster analysis allowed us to find clusters of similar users. To do this, we found connected components on a weighted graph that we created based on this similarity metric. We used the NetworkX package for creating graphs, working with them, and finding these connected components.

We then used the Silhouette Coefficient, which is a metric that evaluates how good a clustering solution is. Higher scores indicate a better clustering, according to the concepts of intra-cluster and inter-cluster distance. SciPy's optimize module was used to find the solution that maximizes this value.

In this chapter, we also compared a few opposites. Similarity is a measure between two objects, where higher values indicate greater similarity. In contrast, distance is a measure where lower values indicate greater similarity. Another contrast was between a loss function, where lower scores are considered better (that is, we lost less), and its opposite, the score function, where higher scores are considered better.

In the next chapter, we will see how to extract features from another new type of data: images. We will discuss how to use neural networks to identify numbers in images and develop a program to automatically beat CAPTCHA images.

[ 160 ]
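The pipeline summarized above (weighted graph, threshold, connected components as clusters) can be sketched with NetworkX as follows. The users, weights, and the fixed threshold of 0.5 are all invented for illustration; the chapter's actual code scores each candidate threshold with the Silhouette Coefficient instead of hardcoding one.

```python
import networkx as nx

# A small weighted friendship graph (made-up users and similarities)
G = nx.Graph()
G.add_edge("alice", "bob", weight=0.6)
G.add_edge("bob", "carol", weight=0.4)
G.add_edge("dave", "erin", weight=0.7)

# Keep only edges at or above the threshold, then take the connected
# components of what remains as our clusters.
threshold = 0.5
H = nx.Graph()
H.add_nodes_from(G)  # users who lose all their edges become singleton clusters
H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True)
                 if d["weight"] >= threshold)
components = list(nx.connected_components(H))
print(len(components))  # 3: {alice, bob}, {dave, erin}, and {carol} alone
```

Copying all nodes into the filtered graph before adding edges matters: otherwise users whose every connection falls below the threshold would silently vanish instead of forming their own components.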

