You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
In part 1 of this assignment you will use nltk to explore the Herman Melville novel Moby Dick. Then in part 2 you will create a spelling recommender function that uses nltk to find words similar to the misspelling.
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.util import ngrams
nltk.download('punkt')                        # tokenizer models for word_tokenize / sent_tokenize
nltk.download('wordnet')                      # WordNet data for the lemmatizer
nltk.download('averaged_perceptron_tagger')   # model for nltk.pos_tag
import numpy as np
import operator
import pandas as pd
# If you would like to work with the raw text you can use 'moby_raw'
with open('moby.txt', 'r') as f:
    moby_raw = f.read()
# If you would like to work with the novel in nltk.Text format you can use 'text1'
moby_tokens = nltk.word_tokenize(moby_raw)
text1 = nltk.Text(moby_tokens)
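As a quick sanity check (optional, and not part of the graded answers) you can confirm the text loaded and tokenized correctly:
# Optional sanity check: total token count and a peek at the first tokens
print(len(moby_tokens))
print(moby_tokens[:10])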
How many tokens (words and punctuation symbols) are in text1?
This function should return an integer.
def example_one():
    return len(nltk.word_tokenize(moby_raw))  # or alternatively len(text1)
example_one()
How many unique tokens (unique words and punctuation) does text1 have?
This function should return an integer.
def example_two():
    return len(set(nltk.word_tokenize(moby_raw)))  # or alternatively len(set(text1))
example_two()
After lemmatizing the verbs, how many unique tokens does text1 have?
This function should return an integer.
def example_three():
    lemmatizer = WordNetLemmatizer()
    lemmatized = [lemmatizer.lemmatize(w, 'v') for w in text1]
    return len(set(lemmatized))
example_three()
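To see what verb lemmatization is doing here, a tiny standalone illustration (my own example, not part of the assignment):
# Verb lemmatization maps inflected forms back to the base verb
lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('running', 'v'))  # 'run'
print(lemmatizer.lemmatize('was', 'v'))      # 'be'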
What is the lexical diversity of the given text input? (i.e. ratio of unique tokens to the total number of tokens)
This function should return a float.
def answer_one():
    return example_two() / example_one()
answer_one()
What percentage of tokens is 'whale' or 'Whale'?
This function should return a float.
def answer_two():
    dist = nltk.FreqDist(text1)
    return (dist['whale'] + dist['Whale']) * 100 / example_one()
answer_two()
What are the 20 most frequently occurring (unique) tokens in the text? What is their frequency?
This function should return a list of 20 tuples where each tuple is of the form (token, frequency). The list should be sorted in descending order of frequency.
def answer_three():
    dist = nltk.FreqDist(text1)
    dist_sorted = sorted(dist.items(), key=operator.itemgetter(1), reverse=True)
    return dist_sorted[:20]
answer_three()
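Note that nltk.FreqDist inherits from collections.Counter, so its most_common() method should give an equivalent, more concise solution:
# Idiomatic alternative: FreqDist.most_common handles the sorting
nltk.FreqDist(text1).most_common(20)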
What tokens have a length of greater than 5 and frequency of more than 150?
This function should return an alphabetically sorted list of the tokens that match the above constraints. To sort your list, use sorted().
def answer_four():
    dist = nltk.FreqDist(text1)
    vocab = dist.keys()
    freqwords = sorted([w for w in vocab if len(w) > 5 and dist[w] > 150])
    return freqwords
answer_four()
Find the longest word in text1 and that word's length.
This function should return a tuple (longest_word, length).
def answer_five():
    dist = nltk.FreqDist(text1)
    # max with key=len returns the first token of maximal length
    longest = max(dist.keys(), key=len)
    return (longest, len(longest))
answer_five()
What unique words have a frequency of more than 2000? What is their frequency?
"Hint: you may want to use isalpha()
to check if the token is a word and not punctuation."
This function should return a list of tuples of the form (frequency, word) sorted in descending order of frequency.
def answer_six():
    # Relies on the fact that, for this text, every token with frequency
    # greater than 2000 appears among the top 20 returned by answer_three().
    words_2000 = []
    for token, freq in answer_three():
        if token.isalpha() and freq > 2000:
            words_2000.append((freq, token))
    return words_2000
answer_six()
What is the average number of tokens per sentence?
This function should return a float.
def answer_seven():
    sentences = nltk.sent_tokenize(moby_raw)
    return example_one() / len(sentences)
answer_seven()
What are the 5 most frequent parts of speech in this text? What is their frequency?
This function should return a list of tuples of the form (part_of_speech, frequency) sorted in descending order of frequency.
def answer_eight():
    pos_tags = nltk.pos_tag(text1)
    pos_freq = {}
    for word, tag in pos_tags:
        if tag in pos_freq:
            pos_freq[tag] += 1
        else:
            pos_freq[tag] = 1
    pos_sorted = sorted(pos_freq.items(), key=operator.itemgetter(1), reverse=True)
    return pos_sorted[:5]
answer_eight()
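For reference, the counting loop above can also be written with collections.Counter; this sketch should produce the same top five:
from collections import Counter

def answer_eight_counter():
    # Count each POS tag from the (word, tag) pairs and take the five most common
    tag_counts = Counter(tag for word, tag in nltk.pos_tag(text1))
    return tag_counts.most_common(5)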
For this part of the assignment you will create three different spelling recommenders, each of which takes a list of misspelled words and recommends a correctly spelled word for every word in the list.
For every misspelled word, the recommender should find the word in correct_spellings
that has the shortest distance* and starts with the same letter as the misspelled word, and return that word as a recommendation.
*Each of the three different recommenders will use a different distance measure (outlined below).
Each of the recommenders should provide recommendations for the three default words provided: ['cormulent', 'incendenece', 'validrate'].
from nltk.corpus import words
nltk.download('words')
correct_spellings = words.words()
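Since every recommender below only compares the misspelling against words sharing its first letter, one optional speed-up (my own sketch, not required by the assignment) is to bucket correct_spellings by first letter once up front:
from collections import defaultdict

# Hypothetical helper: group the word list by first letter so each query
# scans only one bucket instead of the whole list
spellings_by_letter = defaultdict(list)
for w in correct_spellings:
    spellings_by_letter[w[0]].append(w)
# e.g. the recommenders could then loop over spellings_by_letter['c']
# when handling 'cormulent'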
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
Jaccard distance on the trigrams of the two words.
This function should return a list of length three: ['cormulent_recommendation', 'incendenece_recommendation', 'validrate_recommendation'].
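Note that nltk.ngrams applied to a string yields character n-grams, which is exactly what is needed here; a quick illustration (not part of the graded answer):
# 'cormulent' has 9 characters, so it yields 7 character trigrams
set(nltk.ngrams('cormulent', n=3))
# {('c','o','r'), ('o','r','m'), ('r','m','u'), ('m','u','l'),
#  ('u','l','e'), ('l','e','n'), ('e','n','t')}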
def answer_nine(entries=['cormulent', 'incendenece', 'validrate']):
    answers = []
    for entry in entries:
        entry_trigrams = set(nltk.ngrams(entry, n=3))
        best_dist = float("inf")
        answer = None
        for w in correct_spellings:
            if entry[0] == w[0]:  # only consider words with the same first letter
                word_trigrams = set(nltk.ngrams(w, n=3))
                jd = nltk.jaccard_distance(entry_trigrams, word_trigrams)
                if jd < best_dist:
                    best_dist = jd
                    answer = w
        answers.append(answer)
    return answers
answer_nine()
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
Jaccard distance on the 4-grams of the two words.
This function should return a list of length three: ['cormulent_recommendation', 'incendenece_recommendation', 'validrate_recommendation'].
def answer_ten(entries=['cormulent', 'incendenece', 'validrate']):
    answers = []
    for entry in entries:
        entry_fourgrams = set(nltk.ngrams(entry, n=4))
        best_dist = float("inf")
        answer = None
        for w in correct_spellings:
            if entry[0] == w[0]:  # only consider words with the same first letter
                word_fourgrams = set(nltk.ngrams(w, n=4))
                jd = nltk.jaccard_distance(entry_fourgrams, word_fourgrams)
                if jd < best_dist:
                    best_dist = jd
                    answer = w
        answers.append(answer)
    return answers
answer_ten()
For this recommender, your function should provide recommendations for the three default words provided above using the following distance metric:
Edit distance on the two words with transpositions.
This function should return a list of length three: ['cormulent_recommendation', 'incendenece_recommendation', 'validrate_recommendation'].
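With transpositions=True, edit_distance counts a swap of two adjacent characters as a single operation (Damerau-Levenshtein style); a small illustration:
from nltk.metrics import edit_distance

print(edit_distance('ac', 'ca'))                        # 2: two substitutions
print(edit_distance('ac', 'ca', transpositions=True))   # 1: one adjacent swap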
from nltk.metrics import edit_distance
def answer_eleven(entries=['cormulent', 'incendenece', 'validrate']):
    answers = []
    for entry in entries:
        best_dist = float("inf")
        answer = None
        for w in correct_spellings:
            if entry[0] == w[0]:  # only consider words with the same first letter
                d = edit_distance(entry, w, transpositions=True)
                if d < best_dist:
                    best_dist = d
                    answer = w
        answers.append(answer)
    return answers
answer_eleven()