Chapter 9

The use of function words is defined less by the content of the document and more by the decisions made by the author. This makes them good candidates for separating the authorship traits of different writers. For instance, while many Americans are particular about the difference in usage between that and which in a sentence, people from other countries, such as Australia, are less particular about this. This means that some Australians will lean towards using almost exclusively one of the two words, while others may use which much more. This difference, combined with thousands of other nuanced differences, makes a model of authorship.

Counting function words

We can count function words using the CountVectorizer class we used in Chapter 6, Social Media Insight Using Naive Bayes. This class can be passed a vocabulary, which is the set of words it will look for. If a vocabulary is not passed (we didn't pass one in the code of Chapter 6), then it will learn this vocabulary from the dataset: the vocabulary will be all of the words that appear in the training set of documents (depending on the other parameters, of course).

First, we set up our vocabulary of function words, which is just a list containing each of them. Exactly which words are function words and which are not is up for debate.
I've found this list, from published research, to be quite good:

function_words = ["a", "able", "aboard", "about", "above", "absent",
    "according", "accordingly", "across", "after", "against", "ahead",
    "albeit", "all", "along", "alongside", "although", "am", "amid",
    "amidst", "among", "amongst", "amount", "an", "and", "another",
    "anti", "any", "anybody", "anyone", "anything", "are", "around",
    "as", "aside", "astraddle", "astride", "at", "away", "bar",
    "barring", "be", "because", "been", "before", "behind", "being",
    "below", "beneath", "beside", "besides", "better", "between",
    "beyond", "bit", "both", "but", "by", "can", "certain", "circa",
    "close", "concerning", "consequently", "considering", "could",
    "couple", "dare", "deal", "despite", "down", "due", "during",
    "each", "eight", "eighth", "either", "enough", "every",
    "everybody", "everyone", "everything", "except", "excepting",
    "excluding", "failing", "few", "fewer", "fifth", "first", "five",
    "following", "for", "four", "fourth", "from", "front", "given",
    "good", "great", "had", "half", "have", "he", "heaps", "hence",
    "her", "hers", "herself", "him", "himself", "his", "however",
    "i", "if", "in", "including", "inside", "instead", "into", "is",
    "it", "its", "itself", "keeping", "lack", "less", "like",
    "little", "loads", "lots", "majority", "many", "masses", "may",
    "me", "might", "mine", "minority", "minus", "more", "most",
    "much", "must", "my", "myself", "near", "need", "neither",
    "nevertheless", "next", "nine", "ninth", "no", "nobody", "none",
    "nor", "nothing", "notwithstanding", "number", "numbers", "of",
    "off", "on", "once", "one", "onto", "opposite", "or", "other",
    "ought", "our", "ours", "ourselves", "out", "outside", "over",
    "part", "past", "pending", "per", "pertaining", "place",
    "plenty", "plethora", "plus", "quantities", "quantity",
    "quarter", "regarding", "remainder", "respecting", "rest",
    "round", "save", "saving", "second", "seven", "seventh",
    "several", "shall", "she", "should", "similar", "since", "six",
    "sixth", "so", "some", "somebody", "someone", "something",
    "spite", "such", "ten", "tenth", "than", "thanks", "that",
    "the", "their", "theirs", "them", "themselves", "then",
    "thence", "therefore", "these", "they", "third", "this",
    "those", "though", "three", "through", "throughout", "thru",
    "thus", "till", "time", "to", "tons", "top", "toward",
    "towards", "two", "under", "underneath", "unless", "unlike",
    "until", "unto", "up", "upon", "us", "used", "various",
    "versus", "via", "view", "wanting", "was", "we", "were",
    "what", "whatever", "when", "whenever", "where", "whereas",
    "wherever", "whether", "which", "whichever", "while", "whilst",
    "who", "whoever", "whole", "whom", "whomever", "whose", "will",
    "with", "within", "without", "would", "yet", "you", "your",
    "yours", "yourself", "yourselves"]

Now, we can set up an extractor to get the counts of these function words. We will fit this using a pipeline later:

from sklearn.feature_extraction.text import CountVectorizer
extractor = CountVectorizer(vocabulary=function_words)
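Since the extractor will be fitted inside a pipeline, here is a minimal sketch of what that might look like. The choice of SVC as the classifier, the abbreviated vocabulary, and the toy documents with made-up author labels are all placeholders for illustration, not the chapter's actual dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Abbreviated vocabulary standing in for the full function_words list
function_words = ["the", "that", "which", "of", "on"]

pipeline = Pipeline([
    ("feature_extraction", CountVectorizer(vocabulary=function_words)),
    ("classifier", SVC(kernel="linear")),
])

# Toy documents with invented author labels
documents = [
    "the cat that sat on the mat",
    "a dog which barked and ran",
    "the end of the long road",
    "which way did he go then",
]
authors = [0, 1, 0, 1]

# Fitting the pipeline counts the function words, then trains the classifier
pipeline.fit(documents, authors)
print(pipeline.predict(["that which remains"]))
```

The advantage of wrapping the extractor in a pipeline is that the same feature extraction is applied consistently at both training and prediction time.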