# Load packages
import requests
import pymupdf
from textblob import TextBlob
# Download the data
= "https://funnyengwish.wordpress.com/wp-content/uploads/2017/05/pratchett_terry_wyrd_sisters_-_royallib_ru.pdf"
url = requests.get(url)
response
# Extract data from pdf
data = response.content
doc = pymupdf.Document(stream=data)
# Create text from first pdf page
page1 = doc[0].get_text()
# Turn text into TextBlob
text = TextBlob(page1)
Sentiment analysis
One of the common tasks of natural language processing is sentiment analysis. Let’s see how it works with TextBlob.
The data
It wouldn’t make much sense to do sentiment analysis on the first pdf page of Wyrd Sisters. Instead, let’s use some Goodreads reviews of Wyrd Sisters.
The method is as easy as everything we have seen so far: you simply use the sentiment attribute of a TextBlob object and you get a named tuple with the polarity on a continuous scale from -1 to +1 and a subjectivity ranging from 0 to 1.
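For instance (a minimal sketch with a made-up sentence, not one of the reviews used below), the two values can also be accessed individually:
polarity_demo = TextBlob("The witches are wonderful, but the Duke is dreadful.")
print(polarity_demo.sentiment)              # named tuple: Sentiment(polarity=..., subjectivity=...)
print(polarity_demo.sentiment.polarity)     # float between -1 (negative) and +1 (positive)
print(polarity_demo.sentiment.subjectivity) # float between 0 (objective) and 1 (subjective)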
We could get very fancy, scrape the site and analyse all the reviews, but this is not the purpose of this course. Instead, we will simply copy and paste a few reviews into the TextBlob function to turn them into TextBlob objects and look at their sentiment attributes.
If TextBlob does a good job at analysing those reviews, we should get a polarity close to 1 for four and five-star reviews and a polarity close to -1 for one-star reviews.
Results
Be careful to remove all line breaks when you paste the reviews into your code, so that each review remains a single valid Python string.
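If you would rather not strip the line breaks by hand, one option (a sketch, not part of the original code) is to paste the review into a triple-quoted string and collapse the whitespace afterwards:
raw = """How have I never read Terry Pratchett before?
He's like ... Shakespeare and Wodehouse and Monty Python all wrapped into one!"""
review = " ".join(raw.split())  # collapse newlines and repeated spaces into single spaces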
Let’s create a string with one of the four-star reviews:
= "How have I never read Terry Pratchett before? He's like ... Shakespeare and Wodehouse and Monty Python all wrapped into one! A student gave me this book while we were studying Macbeth in class. Wyrd Sisters is a sort of parallel story, which manages to poke fun at the play, revere the play, make inside jokes about the play, and ... well, generally turn the play on its head. All the while, you, the reader, get to feel very smart and superior for getting all the jokes and allusions. And yet it manages to avoid being gimmicky. It really is a good story with good characters, too. This is no Life of Brian where the story itself matters less than the hilarity of the parody. Wyrd Sisters may draw a good deal of life from Macbeth, but its real liveliness comes from Pratchett's skilled characterizations of a regicidal Duke, his murderess Dutchess, their depressed Fool, and three very colorful witches. Oh, it's just genius. My only problem is figuring out what Pratchett novel to read next ... he's dauntingly prolific!" review1
We need to turn the string into a TextBlob object:
r1 = TextBlob(review1)
And now we can see the result of the sentiment analysis:
print(r1.sentiment)
Sentiment(polarity=0.2532440476190476, subjectivity=0.4513988095238095)
The result is quite far from 1 for a four-star review. Maybe we can do better: TextBlob also offers a fancier approach to sentiment analysis, based on a naive Bayes analyzer from NLTK trained on a corpus of movie reviews. Let's see whether we get better results.
First, we need to load the module:
from textblob.sentiments import NaiveBayesAnalyzer
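Depending on your installation, this analyzer may complain about missing corpora the first time you use it: it is trained on NLTK's movie reviews corpus, which is not downloaded by default. If you hit a missing-corpus error, a one-off download (a sketch, adjust to your setup) should fix it:
import nltk
nltk.download("movie_reviews")  # training data for the naive Bayes analyzer
nltk.download("punkt")          # tokenizer used by TextBlob
# alternatively, run once from the shell: python -m textblob.download_corpora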
Then we run the analysis:
r1b = TextBlob(review1, analyzer=NaiveBayesAnalyzer())
print(r1b.sentiment)
Sentiment(classification='pos', p_pos=0.9992160761268937, p_neg=0.000783923873090155)
It looks a lot better!
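One practical note (a variant of the code above, not something we strictly need here): the naive Bayes analyzer trains itself on the movie reviews corpus the first time it is used, which takes a while. If you plan to analyse several reviews, it is worth creating a single analyzer and reusing it rather than instantiating a new one for every TextBlob:
nb = NaiveBayesAnalyzer()             # one shared analyzer instance
r1b = TextBlob(review1, analyzer=nb)
print(r1b.sentiment)                  # the first call triggers the (slow) training step
# any further TextBlob created with analyzer=nb reuses the already trained model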
Let’s try on a five-star review this time:
= "A veritable smorgasbord of Shakespeare references sees the 6th Discworld novel come to life, dragging the two most prominent witches (Granny and Nanny) and their occult-leaning protege, Magrat, though thickets and thick-cities alike as they attempt to make sure fate happens. Maybe with a few encouraging prods along the way… A mixture of Macbeth and Hamlet with a little dollop of King Lear thrown in, along with many other Shakespeare nonsense that doesn’t stand out right way, Wyrd Sisters is one of PTerry’s very early masterpieces. His hands and head seem to meld together as one, where his imagination is not shackled by the human-only speed of his writing (or typing) or the darned debilitation of the English language.The plot simmers nicely, following and then not following the true path of how this kind of story goes. It’s so nice to see the Discworld countryside getting a deep look-in, and how well it contrasts with city-life, but also manages to co-exist very peacefully (as long as the city stays far away from the countryside, thank you). Whilst previously we visited vast open vistas on the Disc, these close-quarter scenes are even more brimful of life and curiosity and share yet another facet to not only PTerry’s imagination, but the world he has created. One of the best things about PTerry (and of any comedy at all) is that, whilst he does take the mickey out of Shakespeare and fantasy and all those things, he does it with such love and reverence that it shines through and makes the humour that much more poignant and, well, funny. (My previous review of this had me not enjoying it and I’m actually pretty baffled by that. I loved every single minute of this re-read and it’s curious how your tastes and feelings can change even after only a few years. I’ll leave the old review below just for the sake of it: Wyrd Sisters is the second of the Witch mini-series, in the ever popular Discworld series. Equal Rites was the first and we were introduced to one of the greatest characters of all-time: Granny Weatherwax. Wyrd Sisters brings two more witches-and mentions of many others-in to fray: Nanny Ogg, Granny's best friend, and Magrat Garlick, a new-wave witch who thinks jangling jewellery and occult symbols makes you a better witch. Adding two new witches alongside Granny just emphasises how cantankerous, stubborn and bloody brilliant she is. Even they can't deny that she's the best. She is tolerated most of the time, but there's always an underlying current of total respect, in the same way you respect your grandparents because they lived through the war, even if they do still say \"does anyone want to get a Chinky?\" The plot is Shakespearean-Macbeth in particular-and takes many plot points from that, as well as a lot of the quotes. It's a wonderful juxtaposition of Discworld nonsense and Shakespearean tragedy that is twisted with unique Pratchett humour. It is written much the same way all the early Discworld books were. Very well, hardly any technical faults and smatterings of Pratchett humour. Despite the wonderful Granny, the amusing Nanny and the Straightforward but naive Magrat, and my love for all the Discworld witches, I couldn't enjoy this as much as I wanted. It was funny in a tittering kind of way, and the plot was interesting, but it never quite held my attention. I never felt like I wanted to read it all the time, or try and finish reading it. It took me quite a while to get through it (for other reasons I won't go in to) but it never really held me enough to want it. 
Still a better love story than Twilight.)"
review2
r2 = TextBlob(review2)
r2b = TextBlob(review2, analyzer=NaiveBayesAnalyzer())
print(r2.sentiment)
print(r2b.sentiment)
Sentiment(polarity=0.23647686688311684, subjectivity=0.550209595959596)
Sentiment(classification='pos', p_pos=1.0, p_neg=9.942749572246986e-16)
Here too, the naive Bayesian classifier performs a lot better.
Now, let’s see what we get for a one-star review:
= "I struggled finishing the book as I lost interest."
review3
r3 = TextBlob(review3)
r3b = TextBlob(review3, analyzer=NaiveBayesAnalyzer())
print(r3.sentiment)
print(r3b.sentiment)
Sentiment(polarity=0.0, subjectivity=0.0)
Sentiment(classification='pos', p_pos=0.8798380278119963, p_neg=0.12016197218800397)
Well… 🙁 Frankly, both models performed poorly here.
Let’s try another one-star review:
= "I did not enjoy this book at all. Slow and tedious. It had some funny bits in the beginning, but I struggled to finish it."
review4
r4 = TextBlob(review4)
r4b = TextBlob(review4, analyzer=NaiveBayesAnalyzer())
print(r4.sentiment)
print(r4b.sentiment)
Sentiment(polarity=-0.1875, subjectivity=0.725)
Sentiment(classification='pos', p_pos=0.7495164489528581, p_neg=0.250483551047142)
Here again, both models perform poorly, but the Bayesian model does even worse than the default pattern analyzer.
Let’s do a few more one-star reviews:
= "It doesn't get much more boring than that. Zero improvement from Equal Rights, a complete snore-fest, poorly paced and with a clear lack of stakes and humour. I quit Discworld."
review5
r5 = TextBlob(review5)
r5b = TextBlob(review5, analyzer=NaiveBayesAnalyzer())
print(r5.sentiment)
print(r5b.sentiment)
Sentiment(polarity=-0.11666666666666668, subjectivity=0.5222222222222223)
Sentiment(classification='neg', p_pos=0.19603626650674996, p_neg=0.8039637334932532)
= "Hated this book. To be fair, not my style. Only read parts of it because it was assigned for a book club."
review6
r6 = TextBlob(review6)
r6b = TextBlob(review6, analyzer=NaiveBayesAnalyzer())
print(r6.sentiment)
print(r6b.sentiment)
Sentiment(polarity=-0.0666666666666667, subjectivity=0.8666666666666667)
Sentiment(classification='neg', p_pos=0.45003652998125615, p_neg=0.5499634700187429)
= "Dnf - boring"
review7
r7 = TextBlob(review7)
r7b = TextBlob(review7, analyzer=NaiveBayesAnalyzer())
print(r7.sentiment)
print(r7b.sentiment)
Sentiment(polarity=-1.0, subjectivity=1.0)
Sentiment(classification='neg', p_pos=0.22247706422018348, p_neg=0.7775229357798165)
These fared better, although the results are not exactly impressive.
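Rather than repeating the same three lines for every review, the whole comparison can be written as a loop (a sketch reusing the review variables defined above and a single shared analyzer):
nb = NaiveBayesAnalyzer()  # shared analyzer so the training step runs only once
reviews = {
    "review1 (4 stars)": review1,
    "review2 (5 stars)": review2,
    "review3 (1 star)": review3,
    "review4 (1 star)": review4,
    "review5 (1 star)": review5,
    "review6 (1 star)": review6,
    "review7 (1 star)": review7,
}
for label, review_text in reviews.items():
    pattern_result = TextBlob(review_text).sentiment               # default lexicon-based analyzer
    bayes_result = TextBlob(review_text, analyzer=nb).sentiment    # naive Bayes analyzer
    print(label, pattern_result, bayes_result)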
Rule-based vs data-based NLP
TextBlob (as well as NLTK, on which it is built) uses a rule-based or lexicon-based approach to NLP. This is an old technique that requires linguistic knowledge but is computationally cheap. As we saw, it has many limitations.
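The lexicon-based nature of the default analyzer is easy to see for yourself. A sketch, assuming a reasonably recent TextBlob version: the sentiment_assessments property returns the same polarity and subjectivity together with the lexicon entries that produced them.
print(TextBlob(review7).sentiment_assessments)
# returns Sentiment(polarity, subjectivity, assessments), where `assessments` is a list of
# (words, polarity, subjectivity, label) tuples looked up in the lexicon; for "Dnf - boring",
# this shows which word(s) drove the extreme -1.0 score we saw above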
The recent successes of large language models (LLMs) have shown unequivocally that AI algorithms fed vast amounts of data do a much better job. They do not need to be programmed explicitly since they learn by experience. They do however require a lot of computing power and data for training.
Such models are built from multiple complex artificial neural networks trained using machine learning frameworks such as PyTorch or JAX.
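As a small taste of the data-driven approach (a sketch, not part of this course's code, and a pretrained transformer rather than a full LLM), the Hugging Face transformers library ships a ready-made sentiment pipeline backed by a pretrained neural network. Assuming the package is installed, we can run it on the short one-star reviews that gave TextBlob trouble and compare:
from transformers import pipeline

# downloads a small pretrained sentiment model on first use
classifier = pipeline("sentiment-analysis")
print(classifier(review3))  # list of dicts with a "label" (POSITIVE/NEGATIVE) and a "score"
print(classifier(review4))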
As Frederick Jelinek is often famously quoted as saying (although his exact words were probably slightly different):
Every time I fire a linguist, the performance of the speech recognizer goes up.
I will quickly demo how much better any of the LLMs out there does at this task.