You are currently looking at version 1.1 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.


Assignment 3

In this assignment you will explore text message data and create models to predict if a message is spam or not.

In [1]:
import pandas as pd
import numpy as np

spam_data = pd.read_csv('spam.csv')

spam_data['target'] = np.where(spam_data['target']=='spam',1,0)
spam_data.head(10)
Out[1]:
                                                text  target
0  Go until jurong point, crazy.. Available only ...       0
1                      Ok lar... Joking wif u oni...       0
2  Free entry in 2 a wkly comp to win FA Cup fina...       1
3  U dun say so early hor... U c already then say...       0
4  Nah I don't think he goes to usf, he lives aro...       0
5  FreeMsg Hey there darling it's been 3 week's n...       1
6  Even my brother is not like to speak with me. ...       0
7  As per your request 'Melle Melle (Oru Minnamin...       0
8  WINNER!! As a valued network customer you have...       1
9  Had your mobile 11 months or more? U R entitle...       1
In [2]:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(spam_data['text'], 
                                                    spam_data['target'], 
                                                    random_state=0)
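
train_test_split uses its default test_size of 0.25 here, so roughly 75% of the messages land in the training set. A quick sanity check on the split (the counts below assume the full 5,572-row dataset):

print(X_train.shape, X_test.shape)  # (4179,) (1393,)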

Question 1

What percentage of the documents in spam_data are spam?

This function should return a float, the percent value (i.e. $ratio \times 100$).

In [3]:
def answer_one():
    
    # target is 1 for spam, so its sum counts the spam messages
    spam = sum(spam_data['target']) * 100 / len(spam_data)
    
    return spam
In [4]:
answer_one()
Out[4]:
13.406317300789663
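
Equivalently, because the target column is already 0/1, its mean is the spam ratio; this one-liner should return the same value:

spam_data['target'].mean() * 100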

Question 2

Fit the training data X_train using a Count Vectorizer with default parameters.

What is the longest token in the vocabulary?

This function should return a string.

In [5]:
from sklearn.feature_extraction.text import CountVectorizer

def answer_two():
    
    vectorizer = CountVectorizer()
    vectorizer.fit(X_train)
    # Scan the vocabulary, keeping the longest token seen so far
    length = float("-inf")
    for word in vectorizer.vocabulary_.keys():
        if len(word) > length:
            longest = word
            length = len(word)
    return longest
In [6]:
answer_two()
Out[6]:
'com1win150ppmx3age16subscription'
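
The loop can also be condensed with max and a key function; a one-line sketch on a freshly fitted vectorizer (ties, if any, would be broken by dictionary order rather than deterministically):

max(CountVectorizer().fit(X_train).vocabulary_, key=len)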

Question 3

Fit and transform the training data X_train using a Count Vectorizer with default parameters.

Next, fit a multinomial Naive Bayes classifier model with smoothing alpha=0.1. Find the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [7]:
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score

def answer_three():
    
    vectorizer = CountVectorizer()
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)
    clf = MultinomialNB(alpha=0.1)
    clf.fit(vector_train, y_train)
    # AUC computed from hard 0/1 predictions (a single-threshold evaluation)
    y_predict = clf.predict(vector_test)
    score = roc_auc_score(y_test, y_predict)
    return score
In [8]:
answer_three()
Out[8]:
0.9720812182741116
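
Note that the AUC above is computed from hard 0/1 predictions, which evaluates the classifier at a single decision threshold. A threshold-free AUC would instead rank the test messages by the predicted probability of the spam class; a sketch reusing the names from answer_three (the resulting score would differ from the value above):

probs = clf.predict_proba(vector_test)[:, 1]  # P(spam) for each test message
roc_auc_score(y_test, probs)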

Question 4

Fit and transform the training data X_train using a Tfidf Vectorizer with default parameters.

What 20 features have the smallest tf-idf and what 20 have the largest tf-idf?

Put these features in two series, each sorted first by tf-idf value and then alphabetically by feature name. The index of each series should be the feature name, and the data should be the tf-idf.

The series of 20 features with the smallest tf-idfs should be sorted smallest first; the series of 20 features with the largest tf-idfs should be sorted largest first.

This function should return a tuple of two series (smallest tf-idfs series, largest tf-idfs series).

In [9]:
from sklearn.feature_extraction.text import TfidfVectorizer

def answer_four():
    
    vectorizer = TfidfVectorizer().fit(X_train)
    X_train_vectorized = vectorizer.transform(X_train)
    # Highest tf-idf each feature attains in any single training document
    tfidf = X_train_vectorized.max(0).toarray()[0]
    df = pd.DataFrame({'Features': vectorizer.get_feature_names(),
                       'TFIDF': tfidf})
    # Sort by tf-idf value, breaking ties alphabetically by feature name;
    # largest must come out largest first, per the question
    smallest = (df.sort_values(by=['TFIDF', 'Features'])
                  .set_index('Features')['TFIDF'][:20])
    largest = (df.sort_values(by=['TFIDF', 'Features'],
                              ascending=[False, True])
                 .set_index('Features')['TFIDF'][:20])
    smallest.name = smallest.index.name = None
    largest.name = largest.index.name = None
    return (smallest, largest)
    
In [10]:
answer_four()
Out[10]:
(aaniye          0.074475
 athletic        0.074475
 chef            0.074475
 companion       0.074475
 courageous      0.074475
 dependable      0.074475
 determined      0.074475
 exterminator    0.074475
 healer          0.074475
 listener        0.074475
 organizer       0.074475
 pest            0.074475
 psychiatrist    0.074475
 psychologist    0.074475
 pudunga         0.074475
 stylist         0.074475
 sympathetic     0.074475
 venaam          0.074475
 afternoons      0.091250
 approaching     0.091250
 dtype: float64,
 146tf150p    1.000000
 645          1.000000
 anything     1.000000
 anytime      1.000000
 beerage      1.000000
 done         1.000000
 er           1.000000
 havent       1.000000
 home         1.000000
 lei          1.000000
 nite         1.000000
 ok           1.000000
 okie         1.000000
 thank        1.000000
 thanx        1.000000
 too          1.000000
 where        1.000000
 yup          1.000000
 tick         0.980166
 blank        0.932702
 dtype: float64)
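
A maximum tf-idf of exactly 1.000000 means the feature is the only in-vocabulary term in at least one training document, so its L2-normalized weight there is 1 (e.g. a message consisting of just "Ok" or "Yup"). The two-key sort can also be written with a single Series and a stable sort, which preserves the alphabetical order within tf-idf ties; a sketch reusing the fitted vectorizer and matrix from answer_four:

tfidf = pd.Series(X_train_vectorized.max(0).toarray()[0],
                  index=vectorizer.get_feature_names())
smallest = tfidf.sort_index().sort_values(kind='mergesort')[:20]
largest = tfidf.sort_index().sort_values(kind='mergesort', ascending=False)[:20]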

Question 5

Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 3.

Then fit a multinomial Naive Bayes classifier model with smoothing alpha=0.1 and compute the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [11]:
def answer_five():
    
    # min_df=3 ignores terms appearing in fewer than 3 training documents
    vectorizer = TfidfVectorizer(min_df=3)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)
    clf = MultinomialNB(alpha=0.1)
    clf.fit(vector_train, y_train)
    y_predict = clf.predict(vector_test)
    score = roc_auc_score(y_test, y_predict)
    return score
In [12]:
answer_five()
Out[12]:
0.9416243654822335
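
min_df=3 discards every term that appears in fewer than three training documents, which removes a large share of the vocabulary built in Question 3. A quick comparison of vocabulary sizes (the exact counts depend on the train/test split):

len(TfidfVectorizer().fit(X_train).vocabulary_)          # full vocabulary
len(TfidfVectorizer(min_df=3).fit(X_train).vocabulary_)  # rare terms removed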

Question 6

What is the average length of documents (number of characters) for not spam and spam documents?

This function should return a tuple (average length not spam, average length spam).

In [13]:
def answer_six():
    
    # Average character count per class (0 = not spam, 1 = spam)
    spam_data['length'] = spam_data['text'].str.len()
    not_spam = spam_data.loc[spam_data['target'] == 0, 'length'].mean()
    spam = spam_data.loc[spam_data['target'] == 1, 'length'].mean()
    return (not_spam, spam)
In [14]:
answer_six()
Out[14]:
(71.02362694300518, 138.8661311914324)
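
The same averages fall out of a single groupby; this sketch should reproduce the tuple above as a Series indexed by target:

spam_data.groupby('target')['text'].apply(lambda s: s.str.len().mean())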



The following function has been provided to help you combine new features into the training data:

In [15]:
def add_feature(X, feature_to_add):
    """
    Returns sparse feature matrix with added feature.
    feature_to_add can also be a list of features.
    """
    from scipy.sparse import csr_matrix, hstack
    return hstack([X, csr_matrix(feature_to_add).T], 'csr')
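
add_feature appends each new feature as one extra column on the right-hand side of the sparse matrix, so the engineered features always occupy the last positions. A usage sketch with hypothetical names (X_vec, a fitted document-term matrix, and docs, the matching text Series):

X_plus = add_feature(X_vec, docs.str.len())            # one extra column
X_plus = add_feature(X_vec, [docs.str.len(),           # or several at once
                             docs.str.count('[0-9]')])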

Question 7

Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 5.

Using this document-term matrix and an additional feature, the length of document (number of characters), fit a Support Vector Classification model with regularization C=10000. Then compute the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [16]:
from sklearn.svm import SVC

def answer_seven():
    
    vectorizer = TfidfVectorizer(min_df=5)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)
    # Append the character length of each message as an extra feature
    vector_train = add_feature(vector_train, X_train.str.len())
    vector_test = add_feature(vector_test, X_test.str.len())
    svm = SVC(C=1e4).fit(vector_train, y_train)
    y_predicted = svm.predict(vector_test)
    score = roc_auc_score(y_test, y_predicted)
    return score
In [17]:
answer_seven()
Out[17]:
0.9661689557407943
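
As in Question 3, this AUC is based on hard predictions. SVC does not expose predict_proba unless fitted with probability=True, but its decision_function already provides a continuous score suitable for a threshold-free AUC; a sketch reusing the names from answer_seven (the score would differ from the value above):

scores = svm.decision_function(vector_test)
roc_auc_score(y_test, scores)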

Question 8

What is the average number of digits per document for not spam and spam documents?

This function should return a tuple (average # digits not spam, average # digits spam).

In [18]:
def answer_eight():
    
    # Count digit characters with a regex, then average per class
    spam_data['digits'] = spam_data['text'].str.count(r'[0-9]')
    not_spam = spam_data.loc[spam_data['target'] == 0, 'digits'].mean()
    spam = spam_data.loc[spam_data['target'] == 1, 'digits'].mean()
    return (not_spam, spam)
In [19]:
answer_eight()
Out[19]:
(0.2992746113989637, 15.759036144578314)

Question 9

Fit and transform the training data X_train using a Tfidf Vectorizer ignoring terms that have a document frequency strictly lower than 5 and using word n-grams from n=1 to n=3 (unigrams, bigrams, and trigrams).

Using this document-term matrix and the following additional features:

  • the length of document (number of characters)
  • number of digits per document

fit a Logistic Regression model with regularization C=100. Then compute the area under the curve (AUC) score using the transformed test data.

This function should return the AUC score as a float.

In [20]:
from sklearn.linear_model import LogisticRegression

def answer_nine():
    
    vectorizer = TfidfVectorizer(ngram_range=(1, 3), min_df=5)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)
    # Append document length and digit count as extra columns
    vector_train = add_feature(vector_train, [X_train.str.len(),
                                              X_train.str.count(r'[0-9]')])
    vector_test = add_feature(vector_test, [X_test.str.len(),
                                            X_test.str.count(r'[0-9]')])
    LogReg = LogisticRegression(C=100, max_iter=1000).fit(vector_train, y_train)
    y_predicted = LogReg.predict(vector_test)
    score = roc_auc_score(y_test, y_predicted)

    return score
In [21]:
answer_nine()
Out[21]:
0.9674528462047772

Question 10

What is the average number of non-word characters (anything other than a letter, digit or underscore) per document for not spam and spam documents?

Hint: Use \w and \W character classes

This function should return a tuple (average # non-word characters not spam, average # non-word characters spam).

In [22]:
def answer_ten():
    
    # \W matches any character that is not a letter, digit, or underscore
    spam_data['nonword'] = spam_data['text'].str.count(r'\W')
    not_spam = spam_data.loc[spam_data['target'] == 0, 'nonword'].mean()
    spam = spam_data.loc[spam_data['target'] == 1, 'nonword'].mean()
    return (not_spam, spam)
In [23]:
answer_ten()
Out[23]:
(17.29181347150259, 29.041499330655956)
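
Since every character matches exactly one of \w and \W, the non-word count always equals the document length minus the \w count; a quick identity check:

assert (spam_data['text'].str.count(r'\W') ==
        spam_data['text'].str.len() - spam_data['text'].str.count(r'\w')).all()

answer_eleven below uses the explicit ASCII class [a-zA-Z0-9_] instead of \w, which can differ on accented letters, so its non-word counts are not guaranteed to match these.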

Question 11

Fit and transform the training data X_train using a Count Vectorizer ignoring terms that have a document frequency strictly lower than 5 and using character n-grams from n=2 to n=5.

To tell Count Vectorizer to use character n-grams pass in analyzer='char_wb' which creates character n-grams only from text inside word boundaries. This should make the model more robust to spelling mistakes.

Using this document-term matrix and the following additional features:

  • the length of document (number of characters)
  • number of digits per document
  • number of non-word characters (anything other than a letter, digit or underscore)

fit a Logistic Regression model with regularization C=100. Then compute the area under the curve (AUC) score using the transformed test data.

Also find the 10 smallest and 10 largest coefficients from the model and return them along with the AUC score in a tuple.

The list of 10 smallest coefficients should be sorted smallest first, the list of 10 largest coefficients should be sorted largest first.

The three features that were added to the document-term matrix should have the following names, should they appear in the list of coefficients: ['length_of_doc', 'digit_count', 'non_word_char_count']

This function should return a tuple (AUC score as a float, smallest coefs list, largest coefs list).

In [24]:
def answer_eleven():

    vectorizer = CountVectorizer(analyzer='char_wb',
                                 ngram_range=(2, 5),
                                 min_df=5)
    vector_train = vectorizer.fit_transform(X_train)
    vector_test = vectorizer.transform(X_test)
    # Append document length, digit count, and non-word-character count
    # (computed as length minus ASCII word characters)
    vector_train = add_feature(vector_train, [X_train.str.len(),
                                              X_train.str.count(r'[0-9]'),
                                              X_train.str.len() - X_train.str.count(r'[a-zA-Z0-9_]')])
    vector_test = add_feature(vector_test, [X_test.str.len(),
                                            X_test.str.count(r'[0-9]'),
                                            X_test.str.len() - X_test.str.count(r'[a-zA-Z0-9_]')])
    LogReg = LogisticRegression(C=100, max_iter=1000).fit(vector_train, y_train)
    y_predict = LogReg.predict(vector_test)
    score = roc_auc_score(y_test, y_predict)
    
    # Name the three engineered columns appended after the n-gram features
    features = vectorizer.get_feature_names()
    features.extend(['length_of_doc', 'digit_count', 'non_word_char_count'])
    coefs = pd.Series(LogReg.coef_[0], index=features)
    # 10 smallest (most negative first) and 10 largest (most positive first)
    smallest = coefs.sort_values()[:10]
    largest = coefs.sort_values(ascending=False)[:10]
    return (score, smallest, largest)
In [25]:
answer_eleven()
Out[25]:
(0.9788593110707434,
 ca    -0.620363
  i    -0.609761
 .     -0.541314
 ..    -0.538522
 pe    -0.513154
  m    -0.505422
  go   -0.505281
 if    -0.490769
 us    -0.475787
 go    -0.467111
 dtype: float64,
 digit_count    1.483487
 ia             0.619224
  r             0.584542
 xt             0.573584
 ne             0.570836
 co             0.563659
  x             0.542743
  ba            0.541616
 ian            0.535336
 46             0.530568
 dtype: float64)
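
The coefficient ranking can be cross-checked directly with numpy, without building a Series; a sketch reusing LogReg and features from answer_eleven:

order = np.argsort(LogReg.coef_[0])      # indices sorted by ascending weight
[features[i] for i in order[:10]]        # strongest not-spam indicators
[features[i] for i in order[::-1][:10]]  # strongest spam indicators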