My Kaggle Learning Curve: Artificial Stupidity


This month I started competing in my very first Kaggle competition, Denoising Dirty Documents. I was first introduced to Kaggle a few years ago by Xavier Conort, an insurance industry colleague who also lives here in Singapore. But I had been passive with my Kaggle membership, and hadn’t even considered competing.

This year two things changed. Firstly, I joined IntelliM, an image processing, machine learning and software house, and I needed to get out into the real world and make business connections and start adding value in these fields. Secondly, Kaggle opened the Denoising Dirty Documents competition, which is about pre-processing scanned documents so that they are suitable for optical character recognition, and this competition required both image processing skills and machine learning skills. So this competition looked like a great match for me, and hopefully would be an easy transition to build some experience within Kaggle.

COPR

Although I am an actuary by training, I have not always stayed within the traditional bounds of actuarial work. Back in the 1990s I first started playing with machine learning, using neural networks to predict which customers will renew their insurance policies. Then, inspired by Kim and Nelson’s book, I developed a state space regime switching model for predicting periods of massive builder insolvencies. That model has subsequently been adapted for cancer research, to measure the timing of genes switching off and on. In the 2000s I started getting involved in image processing, firstly to create optical character recognition for a web scraper software package, and later developing COPR, license plate recognition software. Over the past decade I have been using machine learning for customer analytics and insurance pricing.

the problem to be solved

So I thought that just doing some pre-processing for optical character recognition would be quick and easy. When I looked at the examples (see one example above), my eyes could quickly see what the answer should look like even before I peeked at the example cleaned image. I was so wrong…

Lesson: Avoid Artificial Stupidity

Machine learning is sometimes called artificial intelligence. After all, aren’t neural networks based upon the architecture of the human brain?

My first competition submission was a pure machine learning solution. I modelled the target image one pixel at a time. For predictors, I got the raw pixel brightnesses for a region around each pixel location. This is a brute force approach that I have used in the past for optical character recognition. I figured that the machine learning algorithm would learn what the character strokes looked like, and thereby know which pixels should be background.
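
As a rough illustration of that brute-force setup, here is a minimal sketch in R. The file names are hypothetical, the png package is assumed, and a plain linear model stands in for whatever learner was actually used in my submission:

# load a dirty/clean training pair; file names here are hypothetical
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png)
dirty = readPNG("train/101.png")
clean = readPNG("train_cleaned/101.png")

# pad the image with white so every pixel has a full 3x3 neighbourhood
pad = function(img) {
  out = matrix(1, nrow(img) + 2, ncol(img) + 2)
  out[2:(nrow(img) + 1), 2:(ncol(img) + 1)] = img
  out
}
p = pad(dirty)

# predictors: the brightness of each pixel and its 8 immediate neighbours
offsets = expand.grid(dr = -1:1, dc = -1:1)
features = sapply(seq_len(nrow(offsets)), function(i) {
  as.vector(p[(2:(nrow(dirty) + 1)) + offsets$dr[i],
              (2:(ncol(dirty) + 1)) + offsets$dc[i]])
})
train_df = data.frame(features, target = as.vector(clean))

# any per-pixel learner can sit here; a plain linear model is the simplest
# stand-in, which is roughly why the fit behaves like a brightness and
# contrast adjustment
fit = lm(target ~ ., data = train_df)
pred = matrix(pmin(pmax(predict(fit, train_df), 0), 1), nrow(dirty), ncol(dirty))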

What really happened was that the machine learning algorithm simply adjusted the brightness and contrast of the image, to better match the required solution. So I scored 8.58%, giving me 24th ranking, much higher than I was expecting, and much closer to some naive benchmarks than I was comfortable with.

submission 1

I wanted a top ten placing, but I was a long way away from it. So I fine-tuned the model hyperparameters. This moderately improved the score, and only moved me up 3 ranks. My next competition submission actually scored far worse than my preceding two submissions! I needed to rethink my approach because I was going backwards, and the better submissions were almost an order of magnitude better than mine.

The reason my submission scored so poorly was because I was asking the machine learning model to learn complex interactions between pixels, without any guidance from me. There are heuristics about text images that I intuitively know, but I hadn’t passed on any of that knowledge to the machine learning algorithm, either via predictors or model structure.

My algorithm wasn’t artificially intelligent; it was artificially stupid!

So I stopped making submissions to the competition, and started looking at the raw images and cleaned images, and applying some common image processing algorithms. I asked myself these questions:

  • what is it about the text that is different to the background?
  • what are the typical characteristics of text?
  • what are the typical characteristics of stains?
  • what are the typical characteristics of folded or crinkled paper?
  • how does a dark stain differ from dark text?
  • what does the output from an image processing algorithm tell me about whether a pixel is text or background?
  • what are the shortcomings of a particular image processing algorithm?
  • what makes an image processing algorithm drop out some of the text?
  • what makes an image processing algorithm think that a stain is text?
  • what makes an image processing algorithm think that a paper fold is text?
  • which algorithms have opposing types of classification errors?

3

For example, in the image above, the algorithm thins out the text too much, does not remove the outer edges of stains, and does not remove small stains. That prompted me to think that maybe an edge finding algorithm would complement this algorithm.
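
As a hedged sketch of that edge-finding idea, a Sobel gradient magnitude can be computed in a few lines of base R. The file name is hypothetical, and this only illustrates the kind of complementary feature I mean, not the feature actually used in my submission:

# Sobel gradient magnitude in base R; the file name is hypothetical
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png)
img = readPNG("train/101.png")    # grayscale matrix with values in [0, 1]

sobel_x = matrix(c(-1, 0, 1, -2, 0, 2, -1, 0, 1), 3, 3, byrow = TRUE)
sobel_y = t(sobel_x)

filter3 = function(img, k) {
  padded = matrix(0, nrow(img) + 2, ncol(img) + 2)
  padded[2:(nrow(img) + 1), 2:(ncol(img) + 1)] = img
  out = matrix(0, nrow(img), ncol(img))
  for (dr in -1:1) for (dc in -1:1) {
    out = out + k[dr + 2, dc + 2] *
      padded[(2:(nrow(img) + 1)) + dr, (2:(ncol(img) + 1)) + dc]
  }
  out
}

edges = sqrt(filter3(img, sobel_x)^2 + filter3(img, sobel_y)^2)
# 'edges' highlights where strokes start and stop, so it can be added as an
# extra predictor column alongside the raw brightness features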

leaderboard 20150725

After a week of experimentation and feature extraction, I finally made a new competition submission, and it jumped me up in the rankings. Then I started fine tuning my model, and split the one all-encompassing machine learning model into multiple specialist models. At the time of writing this blog I am ranked 4th in the competition, and after looking at the scores of the top 3 competitors, I realise that I will have to do more than just fine tune my algorithm. It’s time for me to get back into research mode and find a new feature that identifies the blob stain at the end of the first paragraph in this image:

3-postprocessed

Kaggle is addictive. I can’t wait to solve this problem!

Efficiently Building GLMs: Part 2


In my last blog, I started this series of blogs discussing how to make the GLM model building process more efficient, and I showed an example of how R can help you find a good set of starting predictors that capture the essence of what can be explained.

In this blog I will show you a few ways that R can also help you to fine tune the choice of predictors.

In my experience, most statisticians and actuaries follow a heuristic process for fine tuning their GLMs. They look for predictors with low statistical significance that can be dropped. They try to add predictors that they expect might be valuable. It’s quite a manual process, with experimentation and tinkering, and frequently it isn’t documented.
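
As a minimal sketch of that manual loop (assuming a data frame td with a binary readmitted_flag target, like the diabetes data loaded later in this post):

# a sketch of the manual fine-tuning loop: fit a GLM, check which predictors
# have weak statistical support, drop one, refit, and repeat by hand
fit = glm(readmitted_flag ~ gender + time_in_hospital + number_inpatient,
          data = td, family = binomial)
summary(fit)                  # eyeball the p-values
drop1(fit, test = "Chisq")    # which single predictor could be dropped?
fit2 = update(fit, . ~ . - gender)   # drop one and compare again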

Before the invention of the printing press, books were hand written. Monasteries had rooms called scriptoria where monks would copy manuscripts, painstakingly drawing and writing, copying pages of existing books. Later, as the first universities emerged, a new type of scribe, who wasn’t a monk, would carry out the same process in scriptoria that were located within those universities.

Escribano

"Escribano" by Jean Le Tavernier - [1]. Licensed under Public Domain via Wikimedia Commons - https://commons.wikimedia.org/wiki/File:Escribano.jpg#/media/File:Escribano.jpg

Are we using statisticians and actuaries like scribes, doing manual work that can (and should) be automated? That would free them up to make more value-added contributions, such as sensibility checks, contextualisation, and recommending practical improvements to underwriting, pricing, risk management and marketing.

Improvement 2: Automating the Process for Selection of Predictors

You can automate the search process. A modern computer can exhaustively search through models at a rate that is several orders of magnitude faster than a human. If you include some of the same heuristics that the humans use, you can save a lot of time, and reduce operational risk.

There are a few R packages that can do this for you, without requiring customisation. Today I will consider two of them, bestglm and glmulti.

By default, but only for normally distributed residuals, the bestglm package uses the “leaps and bounds” algorithm, which was developed by Furnival and Wilson back in 1974. Click here to download and read the original paper. The algorithm begins with an overspecified model, and then recursively considers dropping out each remaining predictor. It rules out searching further when the child models (caused by choosing whether to keep or drop out a specific predictor) do not improve the quality of the model. It can be visualised as searching through the set of models, using a tree structure.

For non-normally distributed GLMs, an exhaustive search is carried out i.e. bestglm considers every possible subset of predictors, and evaluates the quality of the model.

To give a practical demonstration of this process, I will again use the diabetes readmission data in the UCI Machine Learning Repository at https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008. I have reformatted the data slightly, so if you wish to exactly replicate my analysis, then I suggest that you download a copy of the reformatted data from here.

# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(bestglm)

# the working folder for this batch job
folderPath = "C:\\Users\\Colin\\Documents\\IntelliM\\"

# read the training data
td = read.csv(paste(folderPath, "training.csv", sep=""))

# pretend that we only have the target plus these predictors
# remember that the target must be the last column in the data
# just use a subset of the rows, because this is just an example
set.seed(1)
td = td[sample(nrow(td), 1000),c("gender", "time_in_hospital", "number_inpatient", "diag_1", "diag_2", "diag_3", "discharge_id_desc", "readmitted_flag")]

# find the model with the best BIC
bestBIC = bestglm(td, family = binomial, IC = "BIC")

# Show top 5 models
bestBIC$BestModels

# show a summary of the best model
summary(bestBIC$BestModel)

# show the relationship between the number of predictors and the model quality
plot(seq_len(nrow(bestBIC$Subsets)) - 1, bestBIC$Subsets[,"BIC"], type="b", xlab = "Number of Predictors", ylab = "BIC")

# show again, but cap it at 3 predictors
plot(seq_len(4) - 1, bestBIC$Subsets[seq_len(4),"BIC"], type="b", xlab = "Number of Predictors", ylab = "BIC")

For the purpose of this example, to keep the run time down to a few minutes, I have only used a subset of the columns and the rows in the data. In practice you would use as many of the columns and rows as you need, limited by the computing power available to you. While I have used the BIC as the measure of the “best” GLM model, this is purely for illustrative purposes. The discussion of which measure to use will be the topic of another blog.

bestglm top 5

The BIC criteria heavily penalises the use of diag_1, diag_2 and diag_3 as predictors because of their very high degrees of freedom (each is a factor containing hundreds of different diagnosis codes). Interestingly, the null model (no predictors) ranks number 4 amongst the top 5 models! However, that is probably because we haven’t transformed the numeric predictors to have linear relationships to the target, and we haven’t grouped together any of the factor levels in diag_1, diag_2 and diag_3. Note that the model rankings will change with the number of rows that you include, and the choice of information criteria by which to measure the model performance.

bestglm top summary

The summary of the best model shows a simple model with highly significant predictors.

bestglm plot 2

The plot shows that the best BIC score happens with the use of 2 predictors.

But the bestglm package has its limitations. Exhaustively searching through every possible model is time consuming, even when automated, and especially when the data has many columns and/or rows. And bestglm doesn’t automatically consider interaction terms.

Enter the glmulti package. First let’s get glmulti to replicate the results of the bestglm script.

# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(glmulti)
# the working folder for this batch job
folderPath = "C:\\Users\\Colin\\Documents\\IntelliM\\"

# read the training data
td = read.csv(paste(folderPath, "training.csv", sep=""))

# pretend that we only have the target plus these predictors
# just use a subset of the rows, because this is just an example
set.seed(1)
td = td[sample(nrow(td), 1000),c("gender", "time_in_hospital", "number_inpatient", "diag_1", "diag_2", "diag_3", "discharge_id_desc", "readmitted_flag")]

# replicate the analysis done by bestglm
bestBIC = glmulti(readmitted_flag ~ ., data = td, family = binomial, level = 1, crit=bic, fitfunc=glm, method="h",
confsetsize = 256, plotty = TRUE, report = TRUE)

print(bestBIC)

The “level” parameter has been set to a value of 1, which means that we are telling glmulti to not consider interaction effects.

glmulti example 1

It can be seen that the results line up with those of bestglm.

To consider 2-way interaction effects, use “level = 2”.

# read the training data
td = read.csv(paste(folderPath, "training.csv", sep=""))

# allow for 2-way interactions
bestBIC2 = glmulti(readmitted_flag ~ gender + time_in_hospital + num_procedures + number_inpatient, data = td[sample(nrow(td), 10000),], family = binomial, level = 2, crit=bic, fitfunc=glm, method="h", confsetsize = 256, plotty = TRUE, report = TRUE)
print(bestBIC2)

glmulti exhaustive 2 way results glmulti exhaustive 2 way plot

Including 2-way interaction effects increases the run time exponentially. So in this example I have used fewer predictors. Normally you would not do this.

Once we start including 2-way interactions and all of the possible predictor columns, an exhaustive search becomes prohibitively time consuming. In such cases, a better approach is to use genetic algorithms to search through possible models. To do this, change the “method” parameter to “g”.

# read the training data
td = read.csv(paste(folderPath, "training.csv", sep=""))

# pretend that we only have the target plus these predictors
# just use a subset of the rows, because this is just an example
set.seed(1)
td = td[sample(nrow(td), 10000),c("race", "gender", "time_in_hospital", "num_medications", "number_emergency", "number_inpatient", "discharge_id_desc", "medical_specialty_desc", "readmitted_flag")]

# solve using genetic algorithms
bestBIC.ga = glmulti(readmitted_flag ~ ., data = td, family = binomial, level = 1, crit=bic, fitfunc=glm, method="g",
confsetsize = 256, plotty = TRUE, report = TRUE)
print(bestBIC.ga)

glmulti ga results 1

glmulti hides the complexity of the genetic algorithm implementation from you, so it won’t take long for you to get up and running with your own model optimisation using genetic algorithms. Despite this ease of use, this year I have switched from using glmulti, to writing my own customised genetic algorithm implementations because:

  • glmulti’s genetic algorithm implementation doesn’t have parallel processing abilities, which would speed up the search (am I being impatient?),
  • I often want to customise the initial population, to allow for knowledge I already have about what constitutes a reasonable starting model (for example, by including the results from the analysis explained in my last blog),
  • the inclusion of interaction terms can often lead to GLM fitting errors, such as collinearity or overspecification, that can crash or freeze up glmulti’s genetic algorithm (and this problem occurs on the diabetes readmission data that I use in this blog),
  • glmulti’s genetic algorithm often has duplicate model choices within its population, so it runs the GLM fitting process for exactly the same choice of columns more than once, which comes with a computational expense; by customising the implementation I can cache the previously fitted models and just pull out the results without refitting that model all over again (see the sketch after this list),
  • glmulti has an undocumented upper limit on how many predictors it will consider, throwing the error message “too many predictors” if that limit is exceeded, and that limit is low (low enough to be triggered for the diabetes dataset example that I am using in this blog), and
  • sometimes I want to retain more information about intermediate models, e.g. AIC, BIC, coefficient values, leverage.
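
As a hedged illustration of the caching point above (and not of glmulti’s internals), fitted-model scores can be memoised by keying on the sorted set of predictor names:

# memoise GLM fits so a genetic algorithm never refits the same subset twice;
# keyed on the sorted predictor names
fit_cache = new.env(parent = emptyenv())

cached_bic = function(predictors, data, target = "readmitted_flag") {
  key = paste(sort(predictors), collapse = "+")
  if (exists(key, envir = fit_cache, inherits = FALSE))
    return(get(key, envir = fit_cache))
  score = BIC(glm(reformulate(predictors, response = target),
                  data = data, family = binomial))
  assign(key, score, envir = fit_cache)
  score
}

# the first call fits the model; the second is a cheap cache lookup
cached_bic(c("gender", "number_inpatient"), td)
cached_bic(c("number_inpatient", "gender"), td)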

If there are enough requests from readers wanting to know how to create custom genetic algorithm searches for GLM model building, then I will make that the topic of a future blog 🙂

Would you prefer to apply these process improvements without the effort of writing R scripts? Consider using a visual user interface, such as IntelliM.

Efficiently Building GLMs: Part 1


It was way back in 1972 that John Nelder and Robert Wedderburn first introduced GLMs to the world (you can find their original paper here), and by the 1990s actuaries in the insurance industry (I am an actuary) had started using GLMs for technical pricing, empowered by the increased accessibility of modern powerful computers. In some parts of the world there are now huge teams, consisting of dozens of actuaries, sometimes more than 100 of them, building generalized linear models (GLMs) for technical pricing of insurance policies. But is this efficient or is it as out of date as a dinosaur riding a penny farthing?

Dinosaur on Penny Farthing

At university, when I was doing my master’s degree in Applied Statistics, I was taught that the “best” GLM model is the one with the lowest AIC or BIC (whether AIC and BIC are good model benchmarks will be the topic of a future blog). But the information criterion is not a well behaved function (it can have many local minima and maxima). To find the model with the lowest information criterion, one cannot follow a steepest gradient approach; instead one must systematically search through the different combinations of predictors.

Consider a scenario in which there are 30 possible predictors, consisting of rating factors collected from the customer (e.g. zip code / post code) and / or data variables collected from other sources (e.g. credit rating of the insured), and all of these values are either categorical, or are numeric and have a linear relationship to the variable being modelled (whether that be claim frequency, severity or claims cost per policy). In such a situation, there are 2^30 = 1,073,741,824 possible combinations of predictors that could be included in the GLM formula. If each of your actuarial or statistical staff took 10 minutes to test a particular combination of predictors, and each of those staff worked 8 hours per day, for 50 weeks per year, it would take 11,185 man-years to find the best model! Once you include the search for 2-way interactions between predictors, the search time blows out to longer than the life-span of the universe!!!

In practice, actuaries and statisticians are producing GLM models faster than that estimate because they are using heuristics to substantially reduce the number of combinations to search. Those heuristics are usually based upon variants of step-wise regression, whereby they add or remove one predictor at a time. This is still extremely time consuming, and the process still does not necessarily produce the model with the best AIC or BIC.
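
To make that heuristic concrete, here is a minimal sketch of a step-wise search using R’s built-in step() function. It assumes a data frame td with a binary readmitted_flag target, like the diabetes data introduced below, and it illustrates the heuristic rather than recommending it:

# step-wise selection by AIC: start from an intercept-only model and let
# step() add or remove one predictor at a time within the given scope
null_model = glm(readmitted_flag ~ 1, data = td, family = binomial)
step_model = step(null_model,
                  scope = readmitted_flag ~ gender + time_in_hospital + number_inpatient,
                  direction = "both")
summary(step_model)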

How can you make this process more efficient?

Let’s make this more real by considering some publicly available data tracking hospital readmission of diabetes patients in USA from 1999 to 2008. You can find the data in the UCI Machine Learning Repository at https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008

Improvement 1: Starting With a Reasonable Choice of Predictors

One can improve upon the step-wise regression approach by starting with a model that already has a few of the most useful predictors.

Instead of beginning with a GLM that uses all the predictors and removing them one-by-one, or starting with no predictors and adding predictors one-by-one, you can start by understanding which predictors are likely to be the best candidates. This involves taking a step beyond GLMs into some more modern techniques.

Any of these approaches will usually achieve the task; for the sake of simplicity, this blog will apply just one of them: the variable importance measure based upon a gradient boosting machine.

A gradient boosting machine (GBM) is a very flexible type of machine learning algorithm. It can be used for regression and classification problems. You can use GBMs via the gbm package in R. GBMs are a forest of trees whereby each successive tree is fitted to the residuals of the previous iteration of the forest, i.e. each new tree predicts the errors from the existing forest. The gbm package has a measure of “relative influence” that is quite similar to a variable importance measure, and can be used for the same purpose.

Variable importance or relative influence is a measure of how much of the variation in outcomes is explained by the inclusion of the predictor in the model. A predictor will explain more of the variation in outcomes if:

  • it is statistically significant i.e. the difference isn’t random,
  • the difference is large for different values of the predictor i.e. the predictor differentiates well, and
  • there is considerable variation in the predictor value between observations i.e. more than just a small number of predictor observations are different to the average.

Here is some sample R code to give an indication of how one would do this with the diabetes readmission data:

# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(gbm)

# the working folder for this batch job
folderPath = "C:\\Users\\Colin\\Documents\\IntelliM\\";

# read the training data
td = read.csv(paste(folderPath, "training.csv", sep=""))

# GBM variable importance
set.seed(1)
gbm_imp = gbm(formula = readmitted_flag ~ ., distribution = "bernoulli", data = td, n.trees = 1000, interaction.depth = 1, verbose = TRUE, shrinkage = 0.01, cv.folds = 0, keep.data = FALSE)
s = summary(gbm_imp)
head(s)

As I wrote in my last blog, I’m a fan of the pacman package in R. It conveniently ensures that packages are installed before I load them, installing and then loading them as required.

The next step is to read the diabetes readmission data into R. I am reading the data from a comma delimited file that I have created previously after downloading the UCI Machine Learning Repository data. You should edit the sample R script to use the folder and file name of your data.

Finally I fitted a GBM model. For the purposes of this blog I set the random seed, to make the results replicable. Note that the model hyperparameters were not optimised – I am just creating a model for the sake of understanding which predictors are important, not trying to fit the best possible GBM model. But sometimes the interaction.depth hyperparameter does matter. In my script above I have used interaction.depth = 1, which excludes the possibility of 2-way interaction effects between two predictors. I chose a value of 1 for simplicity for this example, and because most of the time it doesn’t make a difference to the discovery of the most important predictors. However, if you have strong reason to believe that your data exhibits strong 2-way effects between predictors, and also have reason to believe that the one-way effect of those same predictors is weak, then try higher values for this hyperparameter.

variable importance console screenshot

The summary function gets the variable importance measures from the GBM model. It displays a plot of the most important predictors, and also stores them in a table. As you can see from the screenshot above, R shows that just 5 predictors will give most of the predictive power. Note that the variable importance scores have been automatically scaled to add to 100.

R variable importance plot

The default plot for GBM variable importance is rather difficult to read and interpret. If I am working on a project that is purely within R, then I usually script for a better looking plot based upon the ggplot2 package in R. Lately I have been using IntelliM to build my GLMs because it automates the feature extraction, model building and validation process for GLMs, and it does it visually instead of manually writing R scripts. It gives an easier to read graph, shown below.

Variable Importance
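
For those staying within R, here is a minimal sketch of the kind of ggplot2 bar chart I mean, built from the summary table s produced by the gbm script above. The styling is a matter of taste, and this is not the IntelliM graph shown above:

# a cleaner relative-influence plot built from the summary table s above
if (!require("pacman")) install.packages("pacman")
pacman::p_load(ggplot2)

imp = head(s, 10)    # top 10 predictors by relative influence
ggplot(imp, aes(x = reorder(var, rel.inf), y = rel.inf)) +
  geom_bar(stat = "identity") +
  coord_flip() +
  labs(x = "Predictor", y = "Relative influence (%)", title = "GBM variable importance")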

Later in this series of blogs, I will discuss dimensionality reduction, but since one approach to dimensionality reduction relates to what I have just shown in this blog, I will give you a sneak peek.

The diabetes readmission data contains categorical predictors that contain codes for diagnoses and other data of interest. Some of these predictors have several hundred different possible codes. Unless we had many millions of observations in total, there is no way that all of those codes are going to have enough observations to provide statistically valid predictions. What we expect is that just a small number of these codes are important.

Factor Importance

So we can apply the same variable importance approach to the codes within a categorical predictor, and then group the unimportant codes together, making for a better model.
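
Here is a hedged sketch of that idea for the diag_1 codes, reusing the gbm relative influence measure from above. The 1% threshold and the "OTHER" grouping are illustrative choices only, not necessarily the rule used in IntelliM:

# measure the influence of each diag_1 code by expanding the codes into
# indicator columns, then lump the unimportant codes into an "OTHER" level
if (!require("pacman")) install.packages("pacman")
pacman::p_load(gbm)

set.seed(1)
td_s = td[sample(nrow(td), 10000), ]    # a sample keeps the run time down

X = as.data.frame(model.matrix(~ diag_1 - 1, data = td_s))
X$readmitted_flag = td_s$readmitted_flag
code_gbm = gbm(readmitted_flag ~ ., distribution = "bernoulli", data = X,
               n.trees = 500, shrinkage = 0.01, interaction.depth = 1)
code_imp = summary(code_gbm, plotit = FALSE)

# keep codes carrying a material share of the influence (1% is an arbitrary
# illustrative threshold) and group every other code together
keep = sub("^diag_1", "", code_imp$var[code_imp$rel.inf > 1])
td_s$diag_1_grouped = factor(ifelse(as.character(td_s$diag_1) %in% keep,
                                    as.character(td_s$diag_1), "OTHER"))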

Tutorial: Using R and Twitter to Analyse Consumer Sentiment



This year I have been working with a Singapore Actuarial Society working party to introduce Singaporean actuaries to big data applications, and the new techniques and tools they need in order to keep up with this technology. The working group’s presentation at the 2015 General Insurance Seminar was well received, and people want more. So we are going to run some training tutorials, and want to extend our work.

One of those extensions is text mining. Inspired by a CAS paper by Roosevelt C. Mosley Jnr, I thought that I’d try to create a simple working example of Twitter text mining, using R. I thought that I could just Google for an R script, make some minor changes, and run it. If only it were as simple as that…

I quickly ran into problems that none of the online blogs and documentation fully dealt with:

    • Twitter changed its search API to require authorisation. That authorisation process is a bit time-consuming and even the most useful blogs got some minor but important details wrong.
    • CRAN has withdrawn its sentiment package, meaning that I couldn’t access the key R library that makes the example interesting.

After much experimentation, and with the help of some R experts, I finally created a working example. Here it goes, step by step:

STEP 1: Log on to https://apps.twitter.com/

Just use your normal Twitter account login. The screen should look like this:
step 1

STEP 2: Create a New Twitter Application

Click on the “Create New App” button, then you will be asked to fill in the following form:
step 2
Choose your own application name, and your own application description. The website needs to be a valid URL. If you don’t have your own URL, then JULIANHI recommends that you use http://test.de/. Then scroll down the page.
step 2b
Click “Yes, I Agree” for the Developer Agreement, and then click the “Create your Twitter application” button. You will see something like this:

step 2c

Go to the “Keys and Access Tokens” tab. Then look for the Consumer Key and the Consumer Secret. I have circled them in the image below. We will use these keys later in our R script, to authorise R to access the Twitter API.

step 2d2

Scroll down to the bottom of the page, where you will find the “Your Access Token” section.

step 2e

Click on the button labelled “Create my access token”.

step 2f

Look for the Access Token and Access Token Secret. We will use these in the next step, to authorise R to access the Twitter API.

STEP 3: Authorise R to Access Twitter

First we need to load the Twitter authorisation libraries. I like to use the pacman package to install and load my packages. The other packages we need are:

    • twitteR : which gives an R interface to the Twitter API
    • ROAuth : OAuth authentication to web servers
    • RCurl : http requests and processing the results returned by a web server

The R script is below. But first remember to replace each “xxx” with the respective token or secret you obtained from the Twitter app page.


# authorisation
if (!require('pacman')) install.packages('pacman')
pacman::p_load(twitteR, ROAuth, RCurl)

api_key = 'xxx'
api_secret = 'xxx'
access_token = 'xxx'
access_token_secret = 'xxx'

# Set SSL certs globally
options(RCurlOptions = list(cainfo = system.file('CurlSSL', 'cacert.pem', package = 'RCurl')))

# set up the URLs
reqURL = 'https://api.twitter.com/oauth/request_token'
accessURL = 'https://api.twitter.com/oauth/access_token'
authURL = 'https://api.twitter.com/oauth/authorize'

twitCred = OAuthFactory$new(consumerKey = api_key, consumerSecret = api_secret, requestURL = reqURL, accessURL = accessURL, authURL = authURL)

twitCred$handshake(cainfo = system.file('CurlSSL', 'cacert.pem', package = 'RCurl'))

After substituting your own token and secrets for “xxx”, run the script. It will open a web page in your browser. Note that on some systems R can’t open the browser automatically, so you will have to copy the URL from R, open your browser, then paste the link into your browser. If R gives you any error messages, then check that you have pasted the token and secret strings correctly, and ensure that you have the latest versions of the twitteR, ROAuth and RCurl libraries by reinstalling them using the install.packages command.

The web page will look something like this:

step 3a

Click the “Authorise app” button, and you will be given a PIN (note that your PIN will be different to the one in my example).

step 3b

Copy this PIN to the clipboard and then return to R, which is asking you to enter the PIN.

step 3c

Paste in, or type, the PIN from the Twitter web page, then click enter. R is now authorised to run Twitter searches. You only need to do this once, but you do need to use your token strings and secret strings again in your R search scripts.

Go back to https://apps.twitter.com/ and go to the “Setup” tab for your application.

step 3d

For the Callback URL enter http://127.0.0.1:1410 . This will allow us the option of an alternative authorisation method later.

STEP 4: Install the Sentiment Package

Since the sentiment package is no longer available on CRAN, we have to download the archived source code and install it via this RScript:

if (!require('pacman')) install.packages('pacman')
pacman::p_load(devtools, installr)
install.Rtools()
install_url('http://cran.r-project.org/src/contrib/Archive/Rstem/Rstem_0.4-1.tar.gz')
install_url('http://cran.r-project.org/src/contrib/Archive/sentiment/sentiment_0.2.tar.gz')

Note that we only have to download and install the sentiment package once.

UPDATE: There’s a new package on CRAN for sentiment analysis, and I have written a tutorial about it.

STEP 5: Create A Script to Search Twitter

Finally we can create a script to search twitter. The first step is to set up the authorisation credentials for your script. This requires the following packages:

  • twitteR : which gives an R interface to the Twitter API
  • sentiment : classifies the emotions of text
  • plyr : for splitting text
  • ggplot2 : for plots of the categorised results
  • wordcloud : creates word clouds of the results
  • RColorBrewer :  colour schemes for the plots and wordcloud
  • httpuv : required for the alternative web authorisation process
  • RCurl : http requests and processing the results returned by a web server

if (!require('pacman')) install.packages('pacman')
pacman::p_load(twitteR, sentiment, plyr, ggplot2, wordcloud, RColorBrewer, httpuv, RCurl, base64enc)

options(RCurlOptions = list(cainfo = system.file('CurlSSL', 'cacert.pem', package = 'RCurl')))

api_key = 'xxx'
api_secret = 'xxx'
access_token = 'xxx'
access_token_secret = 'xxx'

setup_twitter_oauth(api_key,api_secret,access_token,access_token_secret)

Remember to replace the “xxx” strings with your token strings and secret strings.

Using the setup_twitter_oauth function with all four parameters avoids the case where R opens a web browser again. But I have found that it can be problematic to get this function to work on some computers. If you are having problems, then I suggest that you try the alternative call with just two parameters:

setup_twitter_oauth(api_key,api_secret)

This alternative way opens your browser and uses your login credentials from your current Twitter session.

Once authorisation is complete, we can run a search. For this example, I am doing a search on tweets mentioning a well-known brand: Starbucks. I am restricting the results to tweets written in English, and I am getting a sample of 10,000 tweets. It is also possible to give date range and geographic restrictions.


# harvest some tweets
some_tweets = searchTwitter('starbucks', n=10000, lang='en')

# get the text
some_txt = sapply(some_tweets, function(x) x$getText())

Please note that the Twitter search API does not return an exhaustive list of tweets that match your search criteria, as Twitter only makes available a sample of recent tweets. For a more comprehensive search, you will need to use the Twitter streaming API, creating a database of results and regularly updating them, or use an online service that can do this for you.

Now that we have tweet texts, we need to clean them up before doing any analysis. This involves removing content, such as punctuation, that has no emotional content, and removing any content that causes errors.


# remove retweet entities
some_txt = gsub('(RT|via)((?:\\b\\W*@\\w+)+)', '', some_txt)
# remove at people
some_txt = gsub('@\\w+', '', some_txt)
# remove punctuation
some_txt = gsub('[[:punct:]]', '', some_txt)
# remove numbers
some_txt = gsub('[[:digit:]]', '', some_txt)
# remove html links
some_txt = gsub('http\\w+', '', some_txt)
# remove unnecessary spaces
some_txt = gsub('[ \t]{2,}', ' ', some_txt)
some_txt = gsub('^\\s+|\\s+$', '', some_txt)

# define 'tolower error handling' function
try.error = function(x)
{
  # create missing value
  y = NA
  # tryCatch error
  try_error = tryCatch(tolower(x), error = function(e) e)
  # if not an error
  if (!inherits(try_error, 'error'))
    y = tolower(x)
  # result
  return(y)
}
# lower case using try.error with sapply
some_txt = sapply(some_txt, try.error)

# remove NAs in some_txt
some_txt = some_txt[!is.na(some_txt)]
names(some_txt) = NULL

Now that we have clean text for analysis, we can do sentiment analysis. The classify_emotion function is from the sentiment package and “classifies the emotion (e.g. anger, disgust, fear, joy, sadness, surprise) of a set of texts using a naive Bayes classifier trained on Carlo Strapparava and Alessandro Valitutti’s emotions lexicon.”

# Perform Sentiment Analysis
# classify emotion
class_emo = classify_emotion(some_txt, algorithm='bayes', prior=1.0)
# get emotion best fit
emotion = class_emo[,7]
# substitute NA's by 'unknown'
emotion[is.na(emotion)] = 'unknown'

# classify polarity
class_pol = classify_polarity(some_txt, algorithm='bayes')
# get polarity best fit
polarity = class_pol[,4]
# Create data frame with the results and obtain some general statistics
# data frame with results
sent_df = data.frame(text=some_txt, emotion=emotion,
polarity=polarity, stringsAsFactors=FALSE)

# sort data frame
sent_df = within(sent_df,
emotion <- factor(emotion, levels=names(sort(table(emotion), decreasing=TRUE))))

With the sentiment analysis done, we can start to look at the results. Let’s look at a histogram of the number of tweets with each emotion:

# Let’s do some plots of the obtained results
# plot distribution of emotions
ggplot(sent_df, aes(x=emotion)) +
geom_bar(aes(y=..count.., fill=emotion)) +
scale_fill_brewer(palette='Dark2') +
labs(x='emotion categories', y='number of tweets') +
ggtitle('Sentiment Analysis of Tweets about Starbucks\n(classification by emotion)') +
theme(plot.title = element_text(size=12, face='bold'))

step 5a.jpg

Most of the tweets have unknown emotional content. But that sort of makes sense when there are tweets such as “With risky, diantri, and Rizky at Starbucks Coffee Big Mal”.

Let’s get a simpler plot that just tells us whether each tweet is positive or negative.


# plot distribution of polarity
ggplot(sent_df, aes(x=polarity)) +
geom_bar(aes(y=..count.., fill=polarity)) +
scale_fill_brewer(palette='RdGy') +
labs(x='polarity categories', y='number of tweets') +
ggtitle('Sentiment Analysis of Tweets about Starbucks\n(classification by polarity)') +
theme(plot.title = element_text(size=12, face='bold'))

step 5b

So it’s clear that most of the tweets are positive. That would explain why there are more than 21,000 Starbucks stores around the world!

Finally, let’s look at the words in the tweets, and create a word cloud that uses the emotions of the words to determine their locations within the cloud.

# Separate the text by emotions and visualize the words with a comparison cloud
# separating text by emotion
emos = levels(factor(sent_df$emotion))
nemo = length(emos)
emo.docs = rep('', nemo)
for (i in 1:nemo)
{
tmp = some_txt[emotion == emos[i]]
emo.docs[i] = paste(tmp, collapse=' ')
}

# remove stopwords
emo.docs = removeWords(emo.docs, stopwords('english'))
# create corpus
corpus = Corpus(VectorSource(emo.docs))
tdm = TermDocumentMatrix(corpus)
tdm = as.matrix(tdm)
colnames(tdm) = emos

# comparison word cloud
comparison.cloud(tdm, colors = brewer.pal(nemo, 'Dark2'),
scale = c(3,.5), random.order = FALSE, title.size = 1.5)

step 5c

Word clouds give a more intuitive feel for what people are tweeting. This can help you validate the categorical results you saw earlier.

And that’s it for this post! I hope that you can get Twitter sentiment analysis working on your computer too.

UPDATE: There’s a new package on CRAN for sentiment analysis, and I have written a tutorial about it.