Amazon, Yelp and IMDB Review Sentiment Classification using spaCy

What is NLP?

Natural Language Processing (NLP) is the field of Artificial Intelligence concerned with processing and understanding human language. Since its inception in the 1950s, machine understanding of language has played a pivotal role in machine translation, topic modeling, document indexing, information retrieval, and information extraction.

Applications of NLP

  • Text Classification
  • Spam Filters
  • Voice text messaging
  • Sentiment analysis
  • Spell or grammar check
  • Chat bot
  • Search Suggestion
  • Search Autocorrect
  • Automatic Review Analysis system
  • Machine translation
  • And so much more
In [ ]:
# One-time setup (uncomment to run):
# !pip install scikit-learn
In [ ]:
# !pip install -U spacy
In [ ]:
# The bare "en" shortcut is deprecated in recent spaCy releases; prefer the full model name
# !python -m spacy download en
In [ ]:
# !python -m spacy download en_core_web_sm

Data Cleaning Options

  • Case Normalization
  • Removing Stop Words
  • Removing Punctuation or Special Symbols
  • Lemmatization or Stemming
  • Part-of-Speech Tagging
  • Entity Detection
  • Bag of Words
  • TF-IDF
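Most of these options are demonstrated one by one in the cells below. As a minimal sketch (assuming en_core_web_sm is installed), several of them can also be chained into a single helper; the quick_clean name and the sample sentence are only for illustration:

import spacy

nlp = spacy.load("en_core_web_sm")

def quick_clean(text):
    # Case normalization + lemmatization, dropping stop words and punctuation
    doc = nlp(text.lower())
    return [token.lemma_ for token in doc if not token.is_stop and not token.is_punct]

print(quick_clean("The selection on the menu was great and so were the prices."))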

Bag of Words - The Simplest Word Embedding Technique

This is one of the simplest methods of embedding text into numerical vectors. It is rarely used in practice because it oversimplifies language, but it is often the first embedding technique taught in a classroom setting.

doc1 = "I am high"
doc2 = "Yes I am high"
doc3 = "I am kidding"

(figure: the bag-of-words count matrix for doc1, doc2 and doc3)
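As a minimal sketch of how that matrix is produced (not part of the original notebook), scikit-learn's CountVectorizer builds the bag-of-words counts for the three documents above; on scikit-learn older than 1.0, get_feature_names_out() is get_feature_names():

from sklearn.feature_extraction.text import CountVectorizer

docs = ["I am high", "Yes I am high", "I am kidding"]

# Keep one-character tokens such as "I" (the default pattern drops them)
vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
bow = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(bow.toarray())                       # one row of word counts per document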

Bag of Words and Tf-idf

https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html

tf–idf stands for "Term Frequency times Inverse Document Frequency".

(figures: the scikit-learn tf–idf formulas — tf-idf(t, d) = tf(t, d) × idf(t), where, with the default smooth_idf=True, idf(t) = ln((1 + n) / (1 + df(t))) + 1, and each document vector is then normalized to unit length)
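A small sketch of the same computation with scikit-learn, reusing the three documents above (again, get_feature_names_out() requires scikit-learn 1.0+):

from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["I am high", "Yes I am high", "I am kidding"]

# TfidfVectorizer is a CountVectorizer followed by a TfidfTransformer
tfidf = TfidfVectorizer(token_pattern=r"(?u)\b\w+\b")
weights = tfidf.fit_transform(docs)

print(tfidf.get_feature_names_out())
print(weights.toarray().round(2))  # terms shared by every document receive lower weights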

Let's Get Started

In [1]:
import spacy
from spacy import displacy
In [16]:
nlp = spacy.load('en_core_web_sm')
In [17]:
text = "Apple, This is first sentence. and Google this is another one. here 3rd one is"
In [18]:
doc = nlp(text)
In [19]:
doc
Out[19]:
Apple, This is first sentence. and Google this is another one. here 3rd one is
In [20]:
for token in doc:
    print(token)
Apple
,
This
is
first
sentence
.
and
Google
this
is
another
one
.
here
3rd
one
is
In [21]:
sent = nlp.create_pipe('sentencizer')
In [22]:
nlp.add_pipe(sent, before='parser')
In [23]:
doc = nlp(text)
In [24]:
for sent in doc.sents:
    print(sent)
Apple, This is first sentence.
and Google this is another one.
here 3rd one is
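Note: create_pipe plus add_pipe with a component object is the spaCy v2 API, which is what this notebook ran on. On spaCy v3 or later, add_pipe takes the component's registered name instead; a minimal equivalent sketch:

# spaCy v3+: pass the component name; guard against adding it twice
if "sentencizer" not in nlp.pipe_names:
    nlp.add_pipe("sentencizer", before="parser")

doc = nlp(text)
for sent in doc.sents:
    print(sent)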
In [25]:
from spacy.lang.en.stop_words import STOP_WORDS
In [26]:
stopwords = list(STOP_WORDS)
In [27]:
print(stopwords)
['beyond', 'hers', 'anyhow', 'seemed', 'for', 'although', 'therefore', 'beforehand', 'something', 'almost', 'cannot', '‘d', 'seem', 'from', 'whereby', '’d', 'everywhere', 'via', 'therein', 'below', 'full', 'you', 'however', 'thus', 'yours', 'whole', 'did', 'really', 'quite', 'sometime', 'either', 'less', 'whenever', 'about', 'because', "n't", '‘re', 'each', 'moreover', 'himself', 'this', 'after', 'hundred', 'by', 'fifty', 'somewhere', 'part', 'down', 'being', 'still', 'your', 'alone', 'hereby', 'whence', 'under', 'noone', 'among', 'keep', 'though', 'own', 'only', 'can', 'otherwise', 'everyone', 'back', 'unless', 'throughout', 'bottom', 'now', 'first', 'least', 'her', 'also', 'nowhere', 'whether', 'anywhere', 'had', 'eleven', 'on', "'ve", 'around', 'i', 'our', 'last', 'perhaps', 'get', 'formerly', 'four', 'former', 'whoever', 'even', 'done', 'anything', 'enough', 'into', 'too', 'some', "'s", 'it', 'all', 'if', 'neither', 'who', 'front', 'few', 'have', 'and', 'which', 'twenty', 'been', 'rather', 'as', 'amongst', 'latterly', 'often', 'make', 'ever', 'call', 'with', 'or', 'serious', 'ours', 'against', 'their', 'afterwards', 'hereupon', 'a', 'side', 'five', '’re', 'thereafter', 'am', 'up', 'once', 'else', 'please', 'before', 'toward', 'might', 'again', 'between', 'become', 'nothing', 'me', 'besides', 'two', 'would', 'has', 'how', 'its', 'should', 'what', 'elsewhere', 'herein', 'upon', 'must', 'yourselves', 'whereupon', 'twelve', '‘s', 'whereafter', 'third', 'other', 'at', 'so', 'since', 'already', 'why', '’m', 'top', 'where', 'name', 'no', 'one', 'anyone', 'sixty', 'several', "'re", 'seems', 'themselves', 'forty', "'d", 'next', 'namely', 'further', 'wherever', 'became', 'except', 'together', 'three', 'n‘t', 'yet', 'do', 'that', 'doing', 'used', 'becomes', 'more', 'we', 'n’t', 'an', 'over', 'move', 'indeed', 'to', 'myself', 'seeming', 'due', 'much', 'ten', 'beside', 'another', 'many', 'becoming', "'m", 'across', 'yourself', 'behind', 'whatever', 'show', 'wherein', 'mine', 're', '‘ll', 'say', 'than', 'using', 'until', 'see', 'whither', 'same', 'thence', 'not', '’s', 'sometimes', 'give', 'nobody', 'onto', 'could', 'whom', 'off', 'amount', 'along', 'us', 'whereas', '’ll', 'those', 'eight', 'anyway', 'thereupon', 'various', 'is', 'six', 'nine', 'but', 'go', 'these', 'put', 'somehow', 'through', 'thru', 'was', 'out', 'every', 'they', 'his', 'without', 'the', 'none', 'made', 'them', 'ca', 'any', 'will', 'most', 'then', 'regarding', 'hence', 'my', 'him', 'meanwhile', 'fifteen', 'may', 'latter', 'everything', 'such', 'towards', '‘m', '’ve', 'does', 'very', 'empty', "'ll", 'during', 'hereafter', 'well', 'others', 'never', 'he', 'there', 'within', '‘ve', 'herself', 'thereby', 'above', 'someone', 'nevertheless', 'be', 'she', 'were', 'ourselves', 'are', 'whose', 'when', 'per', 'both', 'here', 'while', 'mostly', 'just', 'of', 'always', 'nor', 'take', 'in', 'itself']
In [28]:
len(stopwords)
Out[28]:
326
In [30]:
for token in doc:
    if not token.is_stop:
        print(token)
Apple
,
sentence
.
Google
.
3rd

Lemmatization

In [31]:
doc = nlp('run runs running runner')
In [32]:
for lem in doc:
    print(lem.text, lem.lemma_)
run run
runs run
running run
runner runner

POS

In [33]:
doc = nlp('All is well at your end!')
In [34]:
for token in doc:
    print(token.text, token.pos_)
All DET
is AUX
well ADV
at ADP
your PRON
end NOUN
! PUNCT
In [35]:
displacy.render(doc, style = 'dep')
(displaCy dependency-parse rendering of the sentence, with arcs nsubj, acomp, prep, poss and pobj over the tagged tokens)
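Where the rendered SVG is not preserved (as in this export), the same dependency structure can be printed directly from the doc:

# Dependency label and syntactic head for each token
for token in doc:
    print(f"{token.text:<6} {token.dep_:<8} -> {token.head.text}")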

Entity Detection

In [36]:
doc = nlp("New York City on Tuesday declared a public health emergency and ordered mandatory measles vaccinations amid an outbreak, becoming the latest national flash point over refusals to inoculate against dangerous diseases. At least 285 people have contracted measles in the city since September, mostly in Brooklyn’s Williamsburg neighborhood. The order covers four Zip codes there, Mayor Bill de Blasio (D) said Tuesday. The mandate orders all unvaccinated people in the area, including a concentration of Orthodox Jews, to receive inoculations, including for children as young as 6 months old. Anyone who resists could be fined up to $1,000.")
In [37]:
doc
Out[37]:
New York City on Tuesday declared a public health emergency and ordered mandatory measles vaccinations amid an outbreak, becoming the latest national flash point over refusals to inoculate against dangerous diseases. At least 285 people have contracted measles in the city since September, mostly in Brooklyn’s Williamsburg neighborhood. The order covers four Zip codes there, Mayor Bill de Blasio (D) said Tuesday. The mandate orders all unvaccinated people in the area, including a concentration of Orthodox Jews, to receive inoculations, including for children as young as 6 months old. Anyone who resists could be fined up to $1,000.
In [38]:
displacy.render(doc, style = 'ent')
New York City GPE on Tuesday DATE declared a public health emergency and ordered mandatory measles vaccinations amid an outbreak, becoming the latest national flash point over refusals to inoculate against dangerous diseases. At least 285 CARDINAL people have contracted measles in the city since September DATE , mostly in Brooklyn GPE ’s Williamsburg GPE neighborhood. The order covers four CARDINAL Zip codes there, Mayor Bill de Blasio PERSON (D) said Tuesday DATE . The mandate orders all unvaccinated people in the area, including a concentration of Orthodox Jews NORP , to receive inoculations, including for children as young as 6 months old DATE . Anyone who resists could be fined up to $1,000 MONEY .
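The detected entities can also be read programmatically from doc.ents instead of the rendered markup:

# Entity text and label for every detected span
for ent in doc.ents:
    print(ent.text, "->", ent.label_)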

Text Classification

In [3]:
import pandas as pd
In [4]:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
In [41]:
data_yelp = pd.read_csv('datasets/yelp_labelled.txt', sep='\t', header = None)
In [42]:
data_yelp.head()
Out[42]:
0 1
0 Wow... Loved this place. 1
1 Crust is not good. 0
2 Not tasty and the texture was just nasty. 0
3 Stopped by during the late May bank holiday of... 1
4 The selection on the menu was great and so wer... 1
In [43]:
columns_name = ['Review', 'Sentiment']
data_yelp.columns = columns_name
In [44]:
data_yelp.head()
Out[44]:
Review Sentiment
0 Wow... Loved this place. 1
1 Crust is not good. 0
2 Not tasty and the texture was just nasty. 0
3 Stopped by during the late May bank holiday of... 1
4 The selection on the menu was great and so wer... 1
In [45]:
data_yelp.shape
Out[45]:
(1000, 2)
In [49]:
data_amazon = pd.read_csv('datasets/amazon_cells_labelled.txt', sep = '\t', header = None)
data_amazon.columns = columns_name
In [50]:
data_amazon.head()
Out[50]:
Review Sentiment
0 So there is no way for me to plug it in here i... 0
1 Good case, Excellent value. 1
2 Great for the jawbone. 1
3 Tied to charger for conversations lasting more... 0
4 The mic is great. 1
In [51]:
data_amazon.shape
Out[51]:
(1000, 2)
In [52]:
data_imdb = pd.read_csv('datasets/imdb_labelled.txt', sep = '\t', header = None)
In [53]:
data_imdb.columns = columns_name
In [54]:
data_imdb.shape
Out[54]:
(748, 2)
In [55]:
data_imdb.head()
Out[55]:
Review Sentiment
0 A very, very, very slow-moving, aimless movie ... 0
1 Not sure who was more lost - the flat characte... 0
2 Attempting artiness with black & white and cle... 0
3 Very little music or anything to speak of. 0
4 The best scene in the movie was when Gerardo i... 1
In [56]:
data = data_yelp.append([data_amazon, data_imdb], ignore_index=True)
In [57]:
data.shape
Out[57]:
(2748, 2)
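DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on a current install the same concatenation would be written with pd.concat:

# pandas 2.x equivalent of the append call above
data = pd.concat([data_yelp, data_amazon, data_imdb], ignore_index=True)
data.shape  # (2748, 2)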
In [58]:
data.head()
Out[58]:
Review Sentiment
0 Wow... Loved this place. 1
1 Crust is not good. 0
2 Not tasty and the texture was just nasty. 0
3 Stopped by during the late May bank holiday of... 1
4 The selection on the menu was great and so wer... 1
In [59]:
data['Sentiment'].value_counts()
Out[59]:
1    1386
0    1362
Name: Sentiment, dtype: int64
In [60]:
data.isnull().sum()
Out[60]:
Review       0
Sentiment    0
dtype: int64

Tokenization

In [61]:
import string
In [64]:
punct = string.punctuation
In [65]:
punct
Out[65]:
'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
In [68]:
def text_data_cleaning(sentence):
    doc = nlp(sentence)

    # Lemmatize and lowercase each token; spaCy v2 lemmatizes pronouns to the
    # placeholder "-PRON-", so keep the original (lowercased) pronoun in that case
    tokens = []
    for token in doc:
        if token.lemma_ != "-PRON-":
            temp = token.lemma_.lower().strip()
        else:
            temp = token.lower_
        tokens.append(temp)

    # Drop stop words and punctuation (stopwords and punct are defined above)
    cleaned_tokens = []
    for token in tokens:
        if token not in stopwords and token not in punct:
            cleaned_tokens.append(token)
    return cleaned_tokens
In [70]:
text_data_cleaning("    Hello how are you. Like this video")
Out[70]:
['hello', 'like', 'video']
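The "-PRON-" placeholder only exists in spaCy v2; in v3 a pronoun's lemma is the pronoun itself, so on a newer install the special case can be dropped. A hedged sketch of an equivalent cleaner that uses token attributes instead of the explicit stop-word and punctuation lists:

def text_data_cleaning_v3(sentence):
    # Lemmatize and lowercase, dropping stop words, punctuation and whitespace
    doc = nlp(sentence)
    return [
        token.lemma_.lower().strip()
        for token in doc
        if not token.is_stop and not token.is_punct and not token.is_space
    ]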

Vectorization Feature Engineering (TF-IDF)

In [71]:
from sklearn.svm import LinearSVC
In [72]:
# Plug the spaCy-based cleaner in as the tokenizer; LinearSVC is the classifier
tfidf = TfidfVectorizer(tokenizer=text_data_cleaning)
classifier = LinearSVC()
In [73]:
X = data['Review']
y = data['Sentiment']
In [74]:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
In [75]:
X_train.shape, X_test.shape
Out[75]:
((2198,), (550,))
In [77]:
clf = Pipeline([('tfidf', tfidf), ('clf', classifier)])
In [78]:
clf.fit(X_train, y_train)
Out[78]:
Pipeline(memory=None,
         steps=[('tfidf',
                 TfidfVectorizer(analyzer='word', binary=False,
                                 decode_error='strict',
                                 dtype=<class 'numpy.float64'>,
                                 encoding='utf-8', input='content',
                                 lowercase=True, max_df=1.0, max_features=None,
                                 min_df=1, ngram_range=(1, 1), norm='l2',
                                 preprocessor=None, smooth_idf=True,
                                 stop_words=None, strip_accents=None,
                                 sublinear_tf=False,
                                 token_pattern='(?u)\\b\\w\\w+\\b',
                                 tokenizer=<function text_data_cleaning at 0x0000016262FABD90>,
                                 use_idf=True, vocabulary=None)),
                ('clf',
                 LinearSVC(C=1.0, class_weight=None, dual=True,
                           fit_intercept=True, intercept_scaling=1,
                           loss='squared_hinge', max_iter=1000,
                           multi_class='ovr', penalty='l2', random_state=None,
                           tol=0.0001, verbose=0))],
         verbose=False)
In [79]:
y_pred = clf.predict(X_test)
In [80]:
print(classification_report(y_test, y_pred))
              precision    recall  f1-score   support

           0       0.78      0.80      0.79       285
           1       0.78      0.75      0.76       265

    accuracy                           0.78       550
   macro avg       0.78      0.78      0.78       550
weighted avg       0.78      0.78      0.78       550

In [81]:
confusion_matrix(y_test, y_pred)
Out[81]:
array([[228,  57],
       [ 66, 199]], dtype=int64)
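As a quick sanity check, the matrix agrees with the report above: 228 + 199 = 427 of the 550 test reviews are classified correctly, i.e. roughly 0.78 accuracy. Recomputed directly:

import numpy as np

cm = confusion_matrix(y_test, y_pred)
print(np.trace(cm) / cm.sum())  # (228 + 199) / 550 ≈ 0.776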
In [85]:
clf.predict(['Wow, this is an amazing lesson'])
Out[85]:
array([1], dtype=int64)
In [86]:
clf.predict(['Wow, this sucks'])
Out[86]:
array([0], dtype=int64)
In [87]:
clf.predict(['Worth of watching it. Please like it'])
Out[87]:
array([1], dtype=int64)
In [88]:
clf.predict(['Loved it. amazing'])
Out[88]:
array([1], dtype=int64)
In [ ]: