You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
In this assignment you will explore text message data and build models to predict whether a message is spam or not.
import pandas as pd
import numpy as np
# Load the SMS dataset and encode the label: 1 = spam, 0 = not spam
spam_data = pd.read_csv('spam.csv')
spam_data['target'] = np.where(spam_data['target'] == 'spam', 1, 0)

spam_data.head(10)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(spam_data['text'],
                                                    spam_data['target'],
                                                    random_state=0)
What percentage of the documents in spam_data are spam?
This function should return a float, the percent value (i.e. $ratio \times 100$).
def answer_one():
    spam = sum(spam_data['target']) * 100 / len(spam_data)
    return spam

answer_one()
Fit the training data X_train using a Count Vectorizer with default parameters.
What is the longest token in the vocabulary?
This function should return a string.
from sklearn.feature_extraction.text import CountVectorizer
def answer_two():
    vectorizer = CountVectorizer()
    vectorizer.fit(X_train)

    # track the longest token seen so far while scanning the vocabulary
    length = float("-inf")
    for word in vectorizer.vocabulary_.keys():
        if len(word) > length:
            longest = word
            length = len(word)

    return longest

answer_two()
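An equivalent, more concise way to get the same answer is to take the vocabulary key with the greatest length; a minimal sketch reusing the same fitted vocabulary idea:

# Sketch: the longest token via max() with a key function
cv = CountVectorizer().fit(X_train)
max(cv.vocabulary_, key=len)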
Fit and transform the training data X_train using a Count Vectorizer with default parameters.
Next, fit a multinomial Naive Bayes classifier model with smoothing alpha=0.1. Find the area under the curve (AUC) score using the transformed test data.
This function should return the AUC score as a float.
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score
def answer_three():
    vectorizer = CountVectorizer()
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)

    clf = MultinomialNB(alpha=0.1)
    clf.fit(vector_train, y_train)

    # AUC computed from the hard 0/1 predictions on the test set
    y_predict = clf.predict(vector_test)
    score = roc_auc_score(y_test, y_predict)

    return score

answer_three()
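Note that the AUC above is computed from the hard 0/1 predictions, which evaluates a single operating point of the classifier. A hedged variant (not required by the assignment) that ranks documents by the predicted spam probability instead:

# Sketch: AUC from predicted probabilities rather than hard labels
def answer_three_proba():
    vectorizer = CountVectorizer()
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)
    clf = MultinomialNB(alpha=0.1).fit(vector_train, y_train)
    proba = clf.predict_proba(vector_test)[:, 1]   # probability of the spam class
    return roc_auc_score(y_test, proba)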
Fit and transform the training data X_train using a Tfidf Vectorizer with default parameters.
What 20 features have the smallest tf-idf and what 20 have the largest tf-idf?
Put these features in two series, where each series is sorted by tf-idf value and then alphabetically by feature name. The index of the series should be the feature name, and the data should be the tf-idf.
The series of 20 features with the smallest tf-idfs should be sorted smallest tf-idf first; the series of 20 features with the largest tf-idfs should be sorted largest first.
This function should return a tuple of two series (smallest tf-idfs series, largest tf-idfs series).
from sklearn.feature_extraction.text import TfidfVectorizer
def answer_four():
    vectorizer = TfidfVectorizer().fit(X_train)
    X_train_vectorized = vectorizer.transform(X_train)

    # largest tf-idf value each feature attains across the training documents
    # (on scikit-learn >= 1.0, use vectorizer.get_feature_names_out())
    data = {'Features': vectorizer.get_feature_names(),
            'Tfidf': X_train_vectorized.max(0).toarray()[0]}
    df = pd.DataFrame(data)

    # smallest 20: ascending tf-idf, ties broken alphabetically
    smallest = (df.sort_values(by=['Tfidf', 'Features'], ascending=[True, True])
                  .set_index('Features')['Tfidf'][:20])
    # largest 20: descending tf-idf, ties broken alphabetically
    largest = (df.sort_values(by=['Tfidf', 'Features'], ascending=[False, True])
                 .set_index('Features')['Tfidf'][:20])

    for s in (smallest, largest):
        s.name = None
        s.index.name = None

    return (smallest, largest)

answer_four()
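For reference, the .max(0) above takes the column-wise maximum of the sparse matrix, i.e. the largest tf-idf value each term reaches in any training document; the vectorizer also exposes the idf component on its own. A small sketch of both quantities:

# Sketch: per-term maximum tf-idf (used above) versus the idf weights alone
tfidf = TfidfVectorizer().fit(X_train)
X_tr = tfidf.transform(X_train)
max_tfidf_per_term = X_tr.max(axis=0).toarray().ravel()   # column-wise maximum
idf_per_term = tfidf.idf_                                  # inverse document frequency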
Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 3.
Then fit a multinomial Naive Bayes classifier model with smoothing alpha=0.1 and compute the area under the curve (AUC) score using the transformed test data.
This function should return the AUC score as a float.
def answer_five():
    vectorizer = TfidfVectorizer(min_df=3)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)

    clf = MultinomialNB(alpha=0.1)
    clf.fit(vector_train, y_train)

    y_predict = clf.predict(vector_test)
    score = roc_auc_score(y_test, y_predict)

    return score

answer_five()
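min_df=3 simply drops every term that appears in fewer than 3 training documents. A quick sanity check of how much the vocabulary shrinks (exact sizes depend on the data and split, so none are assumed here):

# Sketch: vocabulary size with and without the min_df=3 cutoff
full_vocab = len(TfidfVectorizer().fit(X_train).vocabulary_)
pruned_vocab = len(TfidfVectorizer(min_df=3).fit(X_train).vocabulary_)
(full_vocab, pruned_vocab)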
What is the average length of documents (number of characters) for not spam and spam documents?
This function should return a tuple (average length not spam, average length spam).
def answer_six():
    spam_data['length'] = spam_data['text'].str.len()

    spam_0 = spam_data[spam_data['target'] == 0]
    spam_1 = spam_data[spam_data['target'] == 1]

    not_spam = spam_0['length'].sum() / len(spam_0)
    spam = spam_1['length'].sum() / len(spam_1)

    return (not_spam, spam)

answer_six()
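The same two averages can be computed in one step with a groupby; a minimal equivalent sketch:

# Sketch: average document length per class via groupby (0 = not spam, 1 = spam)
avg_len = spam_data['text'].str.len().groupby(spam_data['target']).mean()
(avg_len[0], avg_len[1])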
The following function has been provided to help you combine new features into the training data:
def add_feature(X, feature_to_add):
    """
    Returns sparse feature matrix with added feature.
    feature_to_add can also be a list of features.
    """
    from scipy.sparse import csr_matrix, hstack
    return hstack([X, csr_matrix(feature_to_add).T], 'csr')
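To see what add_feature does, here is a tiny hedged example on a made-up 2x2 sparse matrix: each extra feature becomes one new column appended on the right.

# Sketch: appending one extra column (e.g. document lengths) to a sparse matrix
from scipy.sparse import csr_matrix
demo = csr_matrix([[1, 0], [0, 2]])        # pretend 2 documents x 2 terms
add_feature(demo, [3, 7]).toarray()        # -> array([[1, 0, 3], [0, 2, 7]])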
Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 5.
Using this document-term matrix and an additional feature, the length of document (number of characters), fit a Support Vector Classification model with regularization C=10000. Then compute the area under the curve (AUC) score using the transformed test data.
This function should return the AUC score as a float.
from sklearn.svm import SVC
def answer_seven():
    vectorizer = TfidfVectorizer(min_df=5)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)

    # append the document length (number of characters) as an extra column
    vector_train = add_feature(vector_train, X_train.str.len())
    vector_test = add_feature(vector_test, X_test.str.len())

    svm = SVC(C=1e4).fit(vector_train, y_train)
    y_predicted = svm.predict(vector_test)
    score = roc_auc_score(y_test, y_predicted)

    return score

answer_seven()
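As in question 3, the AUC above is taken over hard predictions. SVC does not give probabilities unless probability=True, but its decision_function produces real-valued scores that can be used to rank documents; a hedged variant for comparison (not what the assignment asks for):

# Sketch: AUC from SVC decision scores rather than hard labels
def answer_seven_scores():
    vectorizer = TfidfVectorizer(min_df=5)
    vector_train = add_feature(vectorizer.fit_transform(X_train), X_train.str.len())
    vector_test = add_feature(vectorizer.transform(X_test), X_test.str.len())
    svm = SVC(C=1e4).fit(vector_train, y_train)
    return roc_auc_score(y_test, svm.decision_function(vector_test))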
What is the average number of digits per document for not spam and spam documents?
This function should return a tuple (average # digits not spam, average # digits spam).
def answer_eight():
    spam_data['digits'] = spam_data['text'].str.count('[0-9]')

    spam_0 = spam_data[spam_data['target'] == 0]
    spam_1 = spam_data[spam_data['target'] == 1]

    spam_0_len = sum(spam_0['digits']) / len(spam_0)
    spam_1_len = sum(spam_1['digits']) / len(spam_1)

    return (spam_0_len, spam_1_len)

answer_eight()
Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 5 and using word n-grams from n=1 to n=3 (unigrams, bigrams, and trigrams).
Using this document-term matrix and the following additional features:
the length of document (number of characters)
number of digits per document
fit a Logistic Regression model with regularization C=100. Then compute the area under the curve (AUC) score using the transformed test data.
This function should return the AUC score as a float.
from sklearn.linear_model import LogisticRegression
def answer_nine():
    vectorizer = TfidfVectorizer(ngram_range=(1, 3), min_df=5)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)

    # append document length and digit count as extra columns
    vector_train = add_feature(vector_train, [X_train.str.len(), X_train.str.count('[0-9]')])
    vector_test = add_feature(vector_test, [X_test.str.len(), X_test.str.count('[0-9]')])

    LogReg = LogisticRegression(C=100, max_iter=1000).fit(vector_train, y_train)
    y_predicted = LogReg.predict(vector_test)
    score = roc_auc_score(y_test, y_predicted)

    return score

answer_nine()
What is the average number of non-word characters (anything other than a letter, digit or underscore) per document for not spam and spam documents?
Hint: use the \w and \W character classes.
This function should return a tuple (average # non-word characters not spam, average # non-word characters spam).
def answer_ten():
    spam_data['nonword'] = spam_data['text'].str.count(r'\W')

    spam_0 = spam_data[spam_data['target'] == 0]
    spam_1 = spam_data[spam_data['target'] == 1]

    spam_0_len = sum(spam_0['nonword']) / len(spam_0)
    spam_1_len = sum(spam_1['nonword']) / len(spam_1)

    return (spam_0_len, spam_1_len)

answer_ten()
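To make the hint concrete: \w matches a letter, digit, or underscore, and \W matches any other character, so spaces and punctuation all count as non-word characters. A small illustrative example on a made-up message:

# Sketch: counting non-word characters in one string with \W
import re
sample = "Call 555-1234 now!!"
len(re.findall(r'\W', sample))   # 2 spaces + '-' + 2 '!' -> 5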
Fit and transform the training data X_train using a Count Vectorizer ignoring terms that have a document frequency strictly lower than 5 and using character n-grams from n=2 to n=5.
To tell Count Vectorizer to use character n-grams, pass in analyzer='char_wb', which creates character n-grams only from text inside word boundaries. This should make the model more robust to spelling mistakes.
Using this document-term matrix and the following additional features:
the length of document (number of characters)
number of digits per document
number of non-word characters (anything other than a letter, digit or underscore)
fit a Logistic Regression model with regularization C=100. Then compute the area under the curve (AUC) score using the transformed test data.
Also find the 10 smallest and 10 largest coefficients from the model and return them along with the AUC score in a tuple.
The list of 10 smallest coefficients should be sorted smallest first; the list of 10 largest coefficients should be sorted largest first.
The three features that were added to the document-term matrix should have the following names, should they appear in the list of coefficients: ['length_of_doc', 'digit_count', 'non_word_char_count']
This function should return a tuple (AUC score as a float, smallest coefs list, largest coefs list).
def answer_eleven():
    vectorizer = CountVectorizer(analyzer='char_wb',
                                 ngram_range=(2, 5),
                                 min_df=5)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)

    # append document length, digit count, and non-word-character count
    vector_train = add_feature(vector_train, [X_train.str.len(),
                                              X_train.str.count('[0-9]'),
                                              X_train.str.len() - X_train.str.count('[a-zA-Z0-9_]')])
    vector_test = add_feature(vector_test, [X_test.str.len(),
                                            X_test.str.count('[0-9]'),
                                            X_test.str.len() - X_test.str.count('[a-zA-Z0-9_]')])

    LogReg = LogisticRegression(C=100, max_iter=1000).fit(vector_train, y_train)
    y_predict = LogReg.predict(vector_test)
    score = roc_auc_score(y_test, y_predict)

    # name the three added columns to match the assignment
    # (on scikit-learn >= 1.0, use vectorizer.get_feature_names_out())
    features = vectorizer.get_feature_names()
    features.extend(['length_of_doc', 'digit_count', 'non_word_char_count'])

    data = {'Features': features,
            'Weights': LogReg.coef_[0]}
    df = pd.DataFrame(data)

    # 10 smallest coefficients, smallest first; 10 largest, largest first
    smallest = (df.sort_values(by=['Weights', 'Features'], ascending=[True, True])
                  .set_index('Features')['Weights'][:10])
    largest = (df.sort_values(by=['Weights', 'Features'], ascending=[False, True])
                 .set_index('Features')['Weights'][:10])

    for s in (smallest, largest):
        s.name = None
        s.index.name = None

    return (score, smallest, largest)
answer_eleven()
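For intuition about analyzer='char_wb': each word is padded with a space on either side and character n-grams are taken only inside those padded words, which is what makes the features tolerant of small spelling variations. A toy sketch (made-up string, not the assignment data):

# Sketch: character 2-5 grams produced by analyzer='char_wb' for a short text
cv = CountVectorizer(analyzer='char_wb', ngram_range=(2, 5)).fit(['free txt'])
sorted(cv.vocabulary_)   # n-grams such as ' f', 'fr', 'free', ' txt ', ...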