You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.


Assignment 2 - Introduction to NLTK

In part 1 of this assignment you will use nltk to explore the Herman Melville novel Moby Dick. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling.

Part 1 - Analyzing Moby Dick

In [1]:
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.util import ngrams  
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('averaged_perceptron_tagger')
import numpy as np
import operator
import pandas as pd

# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
    moby_raw = f.read()
    
# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
[nltk_data] Downloading package punkt to
[nltk_data]     C:\Users\Rohan\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping tokenizers\punkt.zip.
[nltk_data] Downloading package wordnet to
[nltk_data]     C:\Users\Rohan\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\wordnet.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     C:\Users\Rohan\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping taggers\averaged_perceptron_tagger.zip.

Example 1

How many tokens (words and punctuation symbols) are in text1?

This function should return an integer.

In [2]:
def example_one():
    
    return len(nltk.word_tokenize(moby_raw)) # or alternatively len(text1)

example_one()
Out[2]:
255038

Example 2

How many unique tokens (unique words and punctuation) does text1 have?

This function should return an integer.

In [3]:
def example_two():
    
    return len(set(nltk.word_tokenize(moby_raw))) # or alternatively len(set(text1))

example_two()
Out[3]:
20742

Example 3

After lemmatizing the verbs, how many unique tokens does text1 have?

This function should return an integer.

In [4]:
def example_three():

    lemmatizer = WordNetLemmatizer()
    lemmatized = [lemmatizer.lemmatize(w,'v') for w in text1]

    return len(set(lemmatized))

example_three()
Out[4]:
16887
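
As a quick illustration of what passing pos='v' does (a sketch on made-up sample words, not part of the graded answer), the lemmatizer maps inflected verb forms back to their base form:

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('running', 'v'))  # 'run'
print(lemmatizer.lemmatize('swam', 'v'))     # 'swim'
print(lemmatizer.lemmatize('is', 'v'))       # 'be'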

Question 1

What is the lexical diversity of the given text input? (i.e., the ratio of unique tokens to the total number of tokens)

This function should return a float.

In [5]:
def answer_one():
        
    return (example_two() / example_one())

answer_one()
Out[5]:
0.08132905684643073
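
Equivalently, the ratio can be computed directly from text1 in one line (a minimal sketch):

len(set(text1)) / len(text1)  # unique tokens divided by total tokens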

Question 2

What percentage of tokens is 'whale' or 'Whale'?

This function should return a float.

In [6]:
def answer_two():
    
    dist = nltk.FreqDist(text1)
    
    return (dist['whale'] + dist['Whale']) * 100 / example_one()

answer_two()
Out[6]:
0.4124875508747716
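
If you also wanted to match other casings such as 'WHALE', a case-insensitive variant would look like this (a sketch, not the graded answer; it may count slightly more tokens than the two exact forms above):

def answer_two_ci():  # hypothetical variant, not part of the assignment
    # lowercase every token so 'whale', 'Whale', and 'WHALE' all match
    count = sum(1 for w in text1 if w.lower() == 'whale')
    return count * 100 / len(text1)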

Question 3

What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?

This function should return a list of 20 tuples where each tuple is of the form (token, frequency). The list should be sorted in descending order of frequency.

In [7]:
def answer_three():
    
    dist = nltk.FreqDist(text1)
    dist_2 = sorted(dist.items(), key = operator.itemgetter(1), reverse = True)
  
    return dist_2[:20]

answer_three()
Out[7]:
[(',', 19204),
 ('the', 13715),
 ('.', 7306),
 ('of', 6513),
 ('and', 6010),
 ('a', 4545),
 ('to', 4515),
 (';', 4173),
 ('in', 3908),
 ('that', 2978),
 ('his', 2459),
 ('it', 2196),
 ('I', 2113),
 ('!', 1767),
 ('is', 1722),
 ('--', 1713),
 ('with', 1659),
 ('he', 1658),
 ('was', 1639),
 ('as', 1620)]
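
FreqDist also provides a built-in helper that gives the same result without manual sorting (a minimal sketch):

dist = nltk.FreqDist(text1)
dist.most_common(20)  # equivalent to sorting dist.items() by count, descending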

Question 4

What tokens have a length of greater than 5 and frequency of more than 150?

This function should return an alphabetically sorted list of the tokens that match the above constraints. To sort your list, use sorted()

In [8]:
def answer_four():
    
    dist = nltk.FreqDist(text1)
    vocab = dist.keys()
    freqwords = sorted([w for w in vocab if len(w) > 5 and dist[w] > 150])
    
    return freqwords

answer_four()
Out[8]:
['Captain',
 'Pequod',
 'Queequeg',
 'Starbuck',
 'almost',
 'before',
 'himself',
 'little',
 'seemed',
 'should',
 'though',
 'through',
 'whales',
 'without']
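
The same filter can also be written directly over dist.items(), avoiding the separate vocab lookup (a minimal sketch giving the same list):

dist = nltk.FreqDist(text1)
sorted(w for w, count in dist.items() if len(w) > 5 and count > 150)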

Question 5

Find the longest word in text1 and that word's length.

This function should return a tuple (longest_word, length).

In [9]:
def answer_five():
    
    dist = nltk.FreqDist(text1)
    vocab = list(dist.keys())
    
    length = float("-inf")
    word = None
    for w in vocab :
        if len(w) > length :
            length = len(w)
            word = w
            
    return (word, length)

answer_five()
Out[9]:
("twelve-o'clock-at-night", 23)

Question 6

What unique words have a frequency of more than 2000? What is their frequency?

"Hint: you may want to use isalpha() to check if the token is a word and not punctuation."

This function should return a list of tuples of the form (frequency, word) sorted in descending order of frequency.

In [10]:
def answer_six():
    
    dist = nltk.FreqDist(text1)
    # collect alphabetic tokens (isalpha() filters out punctuation)
    # with a frequency above 2000, as (frequency, word) tuples;
    # scanning the full distribution avoids relying on the top-20 list
    words_2000 = [(freq, w) for w, freq in dist.items() if w.isalpha() and freq > 2000]
    
    return sorted(words_2000, reverse = True)

answer_six()
Out[10]:
[(13715, 'the'),
 (6513, 'of'),
 (6010, 'and'),
 (4545, 'a'),
 (4515, 'to'),
 (3908, 'in'),
 (2978, 'that'),
 (2459, 'his'),
 (2196, 'it'),
 (2113, 'I')]

Question 7

What is the average number of tokens per sentence?

This function should return a float.

In [11]:
def answer_seven():
    
    sentences = nltk.sent_tokenize(moby_raw)
    
    return (example_one() / len(sentences))

answer_seven()
Out[11]:
25.886926512383273
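
sent_tokenize splits the raw string on sentence boundaries; a tiny sketch of its behavior on a made-up sample:

nltk.sent_tokenize("Call me Ishmael. Some years ago, I went to sea.")
# ['Call me Ishmael.', 'Some years ago, I went to sea.']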

Question 8

What are the 5 most frequent parts of speech in this text? What is their frequency?

This function should return a list of tuples of the form (part_of_speech, frequency) sorted in descending order of frequency.

In [12]:
def answer_eight():    
    
    POS = nltk.pos_tag(text1)
    PosFreq = {}
    for pair in POS :
        if pair[1] in PosFreq :
            PosFreq[pair[1]] += 1
        else :
            PosFreq[pair[1]] = 1
    
    PosFreq = sorted(PosFreq.items(), key = operator.itemgetter(1), reverse = True)
    return PosFreq[:5]

answer_eight()
Out[12]:
[('NN', 32729), ('IN', 28663), ('DT', 25879), (',', 19204), ('JJ', 17613)]
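
For reference, NN, IN, DT, and JJ are the Penn Treebank tags for singular nouns, prepositions, determiners, and adjectives. The manual dictionary counting above can also be expressed with collections.Counter (a hypothetical alternative producing the same result):

from collections import Counter

def answer_eight_counter():  # sketch, not part of the assignment
    tag_counts = Counter(tag for word, tag in nltk.pos_tag(text1))
    return tag_counts.most_common(5)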

Part 2 - Spelling Recommender

For this part of the assignment you will create three different spelling recommenders, each of which takes a list of misspelled words and recommends a correctly spelled word for every word in the list.

For every misspelled word, the recommender should find the word in correct_spellings that has the shortest distance* to the misspelled word and starts with the same letter, and return that word as a recommendation.

*Each of the three different recommenders will use a different distance measure (outlined below).

Each of the recommenders should provide recommendations for the three default words provided: ['cormulent', 'incendenece', 'validrate'].

In [13]:
from nltk.corpus import words
nltk.download('words')

correct_spellings = words.words()
[nltk_data] Downloading package words to
[nltk_data]     C:\Users\Rohan\AppData\Roaming\nltk_data...
[nltk_data]   Unzipping corpora\words.zip.

Question 9

For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:

Jaccard distance on the trigrams of the two words.

This function should return a list of length three: ['cormulent_recommendation', 'incendenece_recommendation', 'validrate_recommendation'].

In [14]:
def answer_nine(entries=['cormulent', 'incendenece', 'validrate']):
    
    answers = []
    for entry in entries :
        entry_trigrams = set(nltk.ngrams(entry, n = 3))
        best_dist = float("inf")
        for w in correct_spellings :
            # only consider candidates starting with the same letter
            if entry[0] == w[0] :
                word_trigrams = set(nltk.ngrams(w, n = 3))
                jd = nltk.jaccard_distance(entry_trigrams, word_trigrams)
                # keep the candidate with the smallest Jaccard distance
                if jd < best_dist :
                    best_dist = jd
                    answer = w
        
        answers.append(answer)
 
    return answers
    
answer_nine()
Out[14]:
['corpulent', 'indecence', 'validate']
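
As a sanity check on the metric, here is what the trigram sets and their Jaccard distance look like for one of the entries (a sketch; the counts were worked out by hand):

t1 = set(nltk.ngrams('validrate', n = 3))  # 7 trigrams: val, ali, lid, idr, dra, rat, ate
t2 = set(nltk.ngrams('validate', n = 3))   # 6 trigrams: val, ali, lid, ida, dat, ate
nltk.jaccard_distance(t1, t2)              # 5/9, since 4 trigrams are shared out of 9 in the union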

Question 10

For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:

Jaccard distance on the 4-grams of the two words.

This function should return a list of length three: ['cormulent_recommendation', 'incendenece_recommendation', 'validrate_recommendation'].

In [15]:
def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
    
    answers = []
    for entry in entries :
        entry_ngrams = set(nltk.ngrams(entry, n = 4))
        best_dist = float("inf")
        for w in correct_spellings :
            # only consider candidates starting with the same letter
            if entry[0] == w[0] :
                word_ngrams = set(nltk.ngrams(w, n = 4))
                jd = nltk.jaccard_distance(entry_ngrams, word_ngrams)
                # keep the candidate with the smallest Jaccard distance
                if jd < best_dist :
                    best_dist = jd
                    answer = w
        
        answers.append(answer)
 
    return answers
    
answer_ten()
Out[15]:
['cormus', 'incendiary', 'valid']

Question 11

For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:

Edit distance on the two words with transpositions.

This function should return a list of length three: ['cormulent_recommendation', 'incendenece_recommendation', 'validrate_recommendation'].

In [16]:
from nltk.metrics import edit_distance

def answer_eleven(entries=['cormulent', 'incendenece', 'validrate']):
    
    answers = []
    for entry in entries :
        dist = float("inf")
        for w in correct_spellings :
            # only consider candidates starting with the same letter
            if entry[0] == w[0] :
                # transpositions=True counts a swap of two adjacent
                # characters as a single edit (Damerau-Levenshtein)
                edit = edit_distance(entry, w, transpositions = True)
                if edit < dist :
                    dist = edit
                    answer = w
        answers.append(answer)
 
    return answers
    
answer_eleven()
Out[16]:
['corpulent', 'intendence', 'validate']
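
For intuition, the winning candidates above sit a single edit away from their misspellings (a quick check):

edit_distance('cormulent', 'corpulent', transpositions = True)  # 1: substitute 'm' -> 'p'
edit_distance('validrate', 'validate', transpositions = True)   # 1: delete the extra 'r'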