Keeping Up With The Latest Techniques

~ brief insights

Tag Archives: Image Processing

Denoising Dirty Documents : Part 4

21 Friday Aug 2015

Posted by Colin Priest in Edge Detection, Image Processing, Kaggle, Machine Learning, Morphology, R

≈ 18 Comments

Tags

Edge Detection, Image Processing, Kaggle, Machine Learning, Morphology, R

At the end of the last blog, we had made some progress in removing the coffee cup stain from the image, but we needed to do more.

20150815 output 5

Adaptive thresholding has started to separate the writing from the stain, but it has also created a speckled pattern within the stains. We need to engineer a feature that can tell apart a stroke of writing from a speckled local maximum, i.e. distinguish a ridge from a peak in the 3D surface.


In image processing, we do this via edge detection, which is the process of calculating the slope of the 3D surface of the image and retaining lines where the slope is high. There are several standard algorithms for edge detection, and today we will use the Canny edge detector.
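To make the idea of "slope" concrete before we install anything, here is a minimal sketch of gradient-based edge detection. This is not the Canny algorithm itself (Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding); it just approximates the gradient of the image surface with finite differences, and the 0.25 cutoff is an arbitrary illustrative choice.

# a minimal sketch of the idea behind edge detection (not Canny itself):
# approximate the slope of the image surface with finite differences
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster)

img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")

# horizontal and vertical gradients, padded with zeroes to keep the original size
gx = cbind(img[, -1] - img[, -ncol(img)], 0)
gy = rbind(img[-1, ] - img[-nrow(img), ], 0)
gradMag = sqrt(gx ^ 2 + gy ^ 2)

# retain only the strong edges, drawn black on white
edges = ifelse(gradMag > 0.25, 0, 1)
plot(raster(edges))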

The biOps package, which has an implementation of the Canny edge detector, has been removed from CRAN and migrated to Google Code, so you will need to follow the installation instructions that can be found here. Since I am using 64-bit Windows, I mostly followed these instructions from the web site:

Windows 64 bit

    1. Note: we dropped jpeg and tiff IO functions.
    2. Download (go to the DLLs page then download the raw file) libfftw3-3.dll, libfftw3f-3.dll, libfftw3l-3.dll, and zlib1.dll to C:\Program Files\R\R-3.x.x\bin\x64 (edit x.x to match your R version) or to somewhere on your PATH. Make sure that the downloaded dll files are megabytes in file size.
    3. Download biOps_0.2.2.zip (go to the DLL’s page then download the raw file). Make sure that the file size is around 700KB.
    4. Run 64 bit R.
    5. Choose biOps_0.2.2.zip from Packages>Install package(s)…
    6. Load the library.
> library(biOps)

However, for step 5 I used the following R code:


install.packages("C:/Program Files/R/R-3.1.3/bin/biOps_0.2.2.zip")

That’s because I had downloaded biOps_0.2.2.zip into the C:/Program Files/R/R-3.1.3/bin folder. You should substitute the folder path that you downloaded the zip file into.

Update!!! Google has switched off Google Code, but I have found the dlls inside an archive that you can download from here. Warning: the download is large (613.91MB).

Now we can start to experiment with edge detection. Note that biOps images have pixel brightnesses from 0 to 255 rather than from 0 to 1. So we have to rescale whenever we switch packages.


if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, biOps)

# read in the coffee cup stain image
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")
plot(raster(img))

# convert the image to a biOps image
img.biOps = imagedata(img * 255)
plot.imagedata(img.biOps)

# canny edge detection
img.canny = imgCanny(img.biOps, 0.7)
plot.imagedata(img.canny)

20150822 output 1

One of the things I like about this particular edge detection algorithm is that it has automatically thresholded the edges so that we are only left with the strong edges. So here are my observations about the behaviour of the edges:

  • they surround the writing
  • they surround the stains
  • there are few edges within a stain, so a lack of edges in a region may be a useful feature for removing the speckles within a stain
  • writing has a pair of parallel edges around each stroke, while the boundary of the stain has only a single edge, so the presence of a pair of edges may be a useful feature for separating stains from writing

To take advantage of these observations we shall use image morphology. Dilation is the process of making a line or blob thicker by expanding its boundary by one pixel. Erosion is the opposite, removing a 1 pixel thick layer from the boundary of an object. If we dilate the edges, then the pair of edges around the writing will expand to include the writing inside, and the edge of the stain will also expand.


# do some morphology on the edges to fill the gaps between them
mat <- matrix (0, 3, 3)
mask <- imagedata (mat, "grey", 3, 3)
img.dilation = imgBinaryDilation(img.canny, mask)
plot.imagedata(img.dilation)

20150822 output 3

The writing is all black, whereas most of the stain is white. This will probably be a useful feature for removing the coffee cup stain. But we can do more: now that we have dilated the edges, we can erode them, which removes the lines where we started with only a single edge.
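The code for this erosion step isn't shown separately here, but continuing from the dilation code above it is just one more call with the same mask:

# erode the dilated image once; single edges collapse back towards thin lines,
# while the filled-in writing stays solid
img.erosion = imgBinaryErosion(img.dilation, mask)
plot.imagedata(img.erosion)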

20150822 output 2

This looks pretty good – all of the writing is black, but only a small part of the stain remains. The stain has become a thin line, while the writing has thick lines. So we can erode once, then dilate once, and the thin lines will disappear.


# do some morphology to remove the stain lines
mat <- matrix (0, 3, 3)
mask <- imagedata (mat, "grey", 3, 3)
img.dilation = imgBinaryDilation(img.canny, mask)
img.erosion = imgBinaryErosion(img.dilation, mask)
img.erosion.2 = imgBinaryErosion(img.erosion, mask)
img.dilation.2 = imgBinaryDilation(img.erosion.2, mask)
img.dilation.3 = imgBinaryDilation(img.dilation.2, mask)
plot.imagedata(img.dilation.3)

20150822 output 4

The stain is now almost completely removed! But unfortunately some of the writing has been removed too. So it is an imperfect feature for removing the coffee cup stain.

Let’s put it all together with the existing features that we have developed over the past few blogs, by adding canny edges and the dilated / eroded edges to the gradient boosted model:


# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, data.table, gbm, foreach, doSNOW, biOps)

if (!require("EBImage"))
{
source("http://bioconductor.org/biocLite.R")
biocLite("EBImage")
}

# a function to do k-means thresholding
kmeansThreshold = function(img)
{
# fit 3 clusters
v = img2vec(img)
km.mod = kmeans(v, 3)
# allow for the random ordering of the clusters
oc = order(km.mod$centers)
# the higher threshold is the halfway point between the top of the middle cluster and the bottom of the highest cluster
hiThresh = 0.5 * (max(v[km.mod$cluster == oc[2]]) + min(v[km.mod$cluster == oc[3]]))

# using upper threshold
imgHi = v
imgHi[imgHi <= hiThresh] = 0
imgHi[imgHi > hiThresh] = 1

return (imgHi)
}

# a function that applies adaptive thresholding
adaptiveThresholding = function(img)
{
img.eb <- Image(t(img))
img.thresholded.3 = thresh(img.eb, 3, 3)
img.thresholded.5 = thresh(img.eb, 5, 5)
img.thresholded.7 = thresh(img.eb, 7, 7)
img.thresholded.9 = thresh(img.eb, 9, 9)
img.thresholded.11 = thresh(img.eb, 11, 11)
img.kmThresh = kmeansThreshold(img)

# combine the adaptive thresholding
ttt.1 = cbind(img2vec(Image2Mat(img.thresholded.3)), img2vec(Image2Mat(img.thresholded.5)), img2vec(Image2Mat(img.thresholded.7)), img2vec(Image2Mat(img.thresholded.9)), img2vec(Image2Mat(img.thresholded.11)), img2vec(kmeansThreshold(img)))
ttt.2 = apply(ttt.1, 1, max)
ttt.3 = matrix(ttt.2, nrow(img), ncol(img))
return (ttt.3)
}

# a function to turn a matrix image into a vector
img2vec = function(img)
{
return (matrix(img, nrow(img) * ncol(img), 1))
}

# a function to convert an Image into a matrix
Image2Mat = function(Img)
{
m1 = t(matrix(Img, nrow(Img), ncol(Img)))
return(m1)
}

# a function to apply the Canny edge detector
cannyEdges = function(img)
{
img.biOps = imagedata(img * 255)
img.canny = imgCanny(img.biOps, 0.7)
return (matrix(img.canny / 255, nrow(img), ncol(img)))
}

# a function combining canny edge detector with morphology
cannyDilated1 = function(img)
{
img.biOps = imagedata(img * 255)
img.canny = imgCanny(img.biOps, 0.7)
# do some morphology on the edges to fill the gaps between them
mat <- matrix (0, 3, 3)
mask <- imagedata (mat, "grey", 3, 3)
img.dilation = imgBinaryDilation(img.canny, mask)
img.erosion = imgBinaryErosion(img.dilation, mask)
return(matrix(img.erosion / 255, nrow(img), ncol(img)))
}

# a function combining canny edge detector with morphology
cannyDilated2 = function(img)
{
img.biOps = imagedata(img * 255)
img.canny = imgCanny(img.biOps, 0.7)
# do some morphology on the edges to fill the gaps between them
mat <- matrix (0, 3, 3)
mask <- imagedata (mat, "grey", 3, 3)
img.dilation = imgBinaryDilation(img.canny, mask)
img.erosion = imgBinaryErosion(img.dilation, mask)
img.erosion.2 = imgBinaryErosion(img.erosion, mask)
img.dilation.2 = imgBinaryDilation(img.erosion.2, mask)
return(matrix(img.dilation.2 / 255, nrow(img), ncol(img)))
}

dirtyFolder = "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train"
cleanFolder = "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train_cleaned"
outFolder = "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train_predicted"

outPath = file.path(outFolder, "trainingdata.csv")
filenames = list.files(dirtyFolder)
for (f in filenames)
{
print(f)
imgX = readPNG(file.path(dirtyFolder, f))
imgY = readPNG(file.path(cleanFolder, f))

# turn the images into vectors
x = matrix(imgX, nrow(imgX) * ncol(imgX), 1)
y = matrix(imgY, nrow(imgY) * ncol(imgY), 1)

# threshold the image
x2 = kmeansThreshold(imgX)

# adaptive thresholding
x3 = img2vec(adaptiveThresholding(imgX))

# canny edge detector and related features
x4 = img2vec(cannyEdges(imgX))
x5 = img2vec(cannyDilated1(imgX))
x6 = img2vec(cannyDilated2(imgX))

dat = data.table(cbind(y, x, x2, x3, x4, x5, x6))
setnames(dat,c("y", "raw", "thresholded", "adaptive", "canny", "cannyDilated1", "cannyDilated2"))
write.table(dat, file=outPath, append=(f != filenames[1]), sep=",", row.names=FALSE, col.names=(f == filenames[1]), quote=FALSE)
}

# read in the full data table
dat = read.csv(outPath)

# fit a model to a subset of the data
set.seed(1)
rows = sample(nrow(dat), 1000000)
gbm.mod = gbm(y ~ raw + thresholded + adaptive + canny + cannyDilated1 + cannyDilated2, data = dat[rows,], n.trees = 10000, train.fraction = 0.5, interaction.depth = 5)
best.iter <- gbm.perf(gbm.mod)

s = summary(gbm.mod)

# get the predictions - using parallel processing to save time
numCores = 6 # change the 6 to your number of CPU cores, or fewer if RAM is a constraint
cl = makeCluster(numCores)
registerDoSNOW(cl)
num_splits = numCores
split_testing = sort(rank(1:nrow(dat))%%numCores)
yHat = foreach(i=unique(split_testing),.combine=c,.packages=c("gbm")) %dopar% {
as.numeric(predict(gbm.mod, newdata=dat[split_testing==i,], n.trees = best.iter))
}
stopCluster(cl)
yHat[yHat < 0] = 0
yHat[yHat > 1] = 1
# what score do we get on the training data?
rmse = sqrt(mean( (yHat - dat$y) ^ 2 ))
print(rmse) # 4.1%

# show the predicted result for a sample image
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")
x = data.table(matrix(img, nrow(img) * ncol(img), 1), kmeansThreshold(img), img2vec(adaptiveThresholding(img)), img2vec(cannyEdges(img)), img2vec(cannyDilated1(img)), img2vec(cannyDilated2(img)))
setnames(x, c("raw", "thresholded", "adaptive", "canny", "cannyDilated1", "cannyDilated2"))
yHatImg = predict(gbm.mod, newdata=x, n.trees = best.iter)
yHatImg[yHatImg < 0] = 0
yHatImg[yHatImg > 1] = 1
imgOut = matrix(yHatImg, nrow(img), ncol(img))
writePNG(imgOut, "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\sample.png")
plot(raster(imgOut))

20150822 output 5

The first dilation operation that we tried in this blog is the most powerful predictor that we developed today, even more powerful than the adaptive thresholding that we used in the last blog. We have improved the RMSE score on the training set from 5.4% to 4.1%.

20150822 sample

The model doesn’t completely remove the coffee cup stain, but it has faded the stain enough that we have a good chance at removing it later. In the next blog in this series, we will do more to clean up the coffee cup stain.


Denoising Dirty Documents: Part 3

14 Friday Aug 2015

Posted by Colin Priest in Adaptive Thresholding, GBM, Gradient Boosting Machine, Image Processing, Kaggle, Machine Learning, R

≈ 9 Comments

Tags

Adaptive Thresholding, GBMs, Image Processing, Kaggle, Machine Learning, R

In my last blog, I discussed how to threshold an image, and we ended up doing quite a good job of cleaning up the image with the creased paper. But then I finished the blog by showing that our algorithm had a problem: it didn’t cope well with coffee cup stains.

coffee stain cropped

The problem is that coffee cup stains are dark, so our existing algorithm does not distinguish between dark writing and dark stains. We need to find some features that distinguish between dark stains and dark writing.

20150808 plot 7

Looking at the image, we can see that even though the stains are dark, the writing is darker. The stains are dark compared to the overall image, but lighter than the writing within them.

So we can hypothesise that we might be able to separate the writing from the stains by using thresholding that looks at the pixel brightnesses over a localised set of pixels rather than across all of the image. That way we might separate the darker writing from the dark background stain.
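Before reaching for any packages, here is a minimal hand-rolled sketch of what localised thresholding means: classify a pixel as writing if it is noticeably darker than the mean brightness of a square window around it. The window half-width of 5 and the offset of 0.1 are arbitrary illustrative choices.

if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster)
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")

# localised thresholding: compare each pixel to the mean of its neighbourhood
localThreshold = function(img, w = 5, offset = 0.1)
{
# accumulate the neighbourhood sum from shifted copies of the image,
# replicating the border pixels at the edges
nr = nrow(img)
nc = ncol(img)
total = matrix(0, nr, nc)
for (dy in -w:w)
for (dx in -w:w)
{
rows = pmin(pmax(1:nr + dy, 1), nr)
cols = pmin(pmax(1:nc + dx, 1), nc)
total = total + img[rows, cols]
}
localMean = total / (2 * w + 1) ^ 2
# darker than the local mean by more than the offset => black, else white
ifelse(img < localMean - offset, 0, 1)
}

plot(raster(localThreshold(img)))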

While I am an R programmer and a fan of R, I have to admit that R can be a frustrating platform for image processing. For example, I could not get the adimpro package to work at all, despite much experimentation, tinkering and googling for solutions. Then I discovered that the biOps package had been removed from CRAN, and the archived versions refuse to install on my PC. But eventually I found ways to get image processing done in R.

First, I tried the EbayesThresh package.


# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, EbayesThresh)

# sample image containing coffee cup stain
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")


# using EbayesThresh
et = ebayesthresh(img)
plot(raster(et))
range(et)

20150815 output 1

All of the values are zero! That isn’t very helpful.

Second, I tried the treethresh package.


# experiment with treethresh
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, treethresh)

# sample image containing coffee cup stain
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")

# using tree thresholding
tt = treethresh(img)
tt.pruned <- prune(tt)
tt.thresholded <- thresh(tt.pruned)
plot(raster(tt.thresholded))
range(tt.thresholded)

Once again I only got zeroes.

Finally, I tried wavelet thresholding; the imwd, threshold and imwr functions used below come from the wavethresh package.

# experiment with wavelet thresholding
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, treethresh, wavethresh)

# sample image containing coffee cup stain
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")

# using wavelet thresholding
# need a square image with a width that is a power of 2
# compute wavelet transform
wt.square <- imwd(img[1:256,1:256])
# Perform the thresholding
wt.thresholded = threshold(wt.square)
# Compute inverse wavelet transform
wt.denoised <- imwr(wt.thresholded)
plot(raster(wt.denoised))

20150815 output 2

It wasn’t what I was looking for, but at least it’s not just a matrix of zeroes!

After these two failures with CRAN packages, I decided that I needed to look beyond CRAN and try out the EBImage package on Bioconductor.


if (!require("EBImage"))
{
source("http://bioconductor.org/biocLite.R")
biocLite("EBImage")
}

# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster)

# sample image containing coffee cup stain
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")

# using adaptive thresholding
img.eb = readImage("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")
img.thresholded.3 = thresh(img.eb, 3, 3)
display(img.thresholded.3)
img.thresholded.5 = thresh(img.eb, 5, 5)
display(img.thresholded.5)
img.thresholded.7 = thresh(img.eb, 7, 7)
display(img.thresholded.7)
img.thresholded.9 = thresh(img.eb, 9, 9)
display(img.thresholded.9)

20150815 output 3

These images show that we are on the right track. The adaptive thresholding is good at separating the letters from the background, but it also adds black to the background. While the adaptive thresholding may not be perfect, any feature extraction that adds to our knowledge will probably improve our ensemble model.

Before we add adaptive thresholding to our model, we will do a bit more pre-processing. Based on the observation that adaptive thresholding is keeping the letters but adding some unwanted black to the background, we can combine the five adaptively thresholded images with the k-means thresholded image, taking the maximum pixel brightness across the set at each pixel.


# a function to do k-means thresholding
kmeansThreshold = function(img)
{
# fit 3 clusters
v = img2vec(img)
km.mod = kmeans(v, 3)
# allow for the random ordering of the clusters
oc = order(km.mod$centers)
# the higher threshold is the halfway point between the top of the middle cluster and the bottom of the highest cluster
hiThresh = 0.5 * (max(v[km.mod$cluster == oc[2]]) + min(v[km.mod$cluster == oc[3]]))

# using upper threshold
imgHi = v
imgHi[imgHi <= hiThresh] = 0
imgHi[imgHi > hiThresh] = 1

return (imgHi)
}

# a function to turn a matrix image into a vector
img2vec = function(img)
{
return (matrix(img, nrow(img) * ncol(img), 1))
}
img.thresholded.3 = thresh(img.eb, 3, 3)
img.thresholded.5 = thresh(img.eb, 5, 5)
img.thresholded.7 = thresh(img.eb, 7, 7)
img.thresholded.9 = thresh(img.eb, 9, 9)
img.thresholded.11 = thresh(img.eb, 11, 11)

# a function to convert an Image into a matrix
Image2Mat = function(Img)
{
m1 = t(matrix(Img, nrow(Img), ncol(Img)))
return(m1)
}

# combine the adaptive thresholding
ttt.1 = cbind(img2vec(Image2Mat(img.thresholded.3)), img2vec(Image2Mat(img.thresholded.5)), img2vec(Image2Mat(img.thresholded.7)),
img2vec(Image2Mat(img.thresholded.9)), img2vec(Image2Mat(img.thresholded.11)), img2vec(kmeansThreshold(img)))
ttt.2 = apply(ttt.1, 1, max)
ttt.3 = matrix(ttt.2, nrow(img), ncol(img))
plot(raster(ttt.3))

20150815 output 4

It’s starting to look much better. The coffee cup stain isn’t removed, but we are starting to successfully clean it up. This suggests that adaptive thresholding will be an important predictor in our ensemble.

Let’s bring it all together, and build a new ensemble model, extending the model from my last blog on this subject.


# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, data.table, gbm, foreach, doSNOW)

if (!require("EBImage"))
{
source("http://bioconductor.org/biocLite.R")
biocLite("EBImage")
}

# a function to do k-means thresholding
kmeansThreshold = function(img)
{
# fit 3 clusters
v = img2vec(img)
km.mod = kmeans(v, 3)
# allow for the random ordering of the clusters
oc = order(km.mod$centers)
# the higher threshold is the halfway point between the top of the middle cluster and the bottom of the highest cluster
hiThresh = 0.5 * (max(v[km.mod$cluster == oc[2]]) + min(v[km.mod$cluster == oc[3]]))

# using upper threshold
imgHi = v
imgHi[imgHi <= hiThresh] = 0
imgHi[imgHi > hiThresh] = 1

return (imgHi)
}

# a function that applies adaptive thresholding
adaptiveThresholding = function(img)
{
img.eb <- Image(t(img))
img.thresholded.3 = thresh(img.eb, 3, 3)
img.thresholded.5 = thresh(img.eb, 5, 5)
img.thresholded.7 = thresh(img.eb, 7, 7)
img.thresholded.9 = thresh(img.eb, 9, 9)
img.thresholded.11 = thresh(img.eb, 11, 11)
img.kmThresh = kmeansThreshold(img)

# combine the adaptive thresholding
ttt.1 = cbind(img2vec(Image2Mat(img.thresholded.3)), img2vec(Image2Mat(img.thresholded.5)), img2vec(Image2Mat(img.thresholded.7)), img2vec(Image2Mat(img.thresholded.9)), img2vec(Image2Mat(img.thresholded.11)), img2vec(kmeansThreshold(img)))
ttt.2 = apply(ttt.1, 1, max)
ttt.3 = matrix(ttt.2, nrow(img), ncol(img))
return (ttt.3)
}

# a function to turn a matrix image into a vector
img2vec = function(img)
{
return (matrix(img, nrow(img) * ncol(img), 1))
}

# a function to convert an Image into a matrix
Image2Mat = function(Img)
{
m1 = t(matrix(Img, nrow(Img), ncol(Img)))
return(m1)
}

dirtyFolder = "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train"
cleanFolder = "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train_cleaned"
outFolder = "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train_predicted"

outPath = file.path(outFolder, "trainingdata.csv")
filenames = list.files(dirtyFolder)
for (f in filenames)
{
print(f)
imgX = readPNG(file.path(dirtyFolder, f))
imgY = readPNG(file.path(cleanFolder, f))

# turn the images into vectors
x = matrix(imgX, nrow(imgX) * ncol(imgX), 1)
y = matrix(imgY, nrow(imgY) * ncol(imgY), 1)

# threshold the image
x2 = kmeansThreshold(imgX)

# adaptive thresholding
x3 = img2vec(adaptiveThresholding(imgX))

dat = data.table(cbind(y, x, x2, x3))
setnames(dat,c("y", "raw", "thresholded", "adaptive"))
write.table(dat, file=outPath, append=(f != filenames[1]), sep=",", row.names=FALSE, col.names=(f == filenames[1]), quote=FALSE)
}

# read in the full data table
dat = read.csv(outPath)

# fit a model to a subset of the data
set.seed(1)
rows = sample(nrow(dat), 1000000)
gbm.mod = gbm(y ~ raw + thresholded + adaptive, data = dat[rows,], n.trees = 7500, cv.folds = 3, train.fraction = 0.5, interaction.depth = 5)
best.iter <- gbm.perf(gbm.mod, method="cv", oobag.curve = FALSE)

s = summary(gbm.mod)

# get the predictions - using parallel processing to save time
numCores = 6 # change the 6 to your number of CPU cores, or fewer if RAM is a constraint
cl = makeCluster(numCores)
registerDoSNOW(cl)
num_splits = numCores
split_testing = sort(rank(1:nrow(dat))%%numCores)
yHat = foreach(i=unique(split_testing),.combine=c,.packages=c("gbm")) %dopar% {
as.numeric(predict(gbm.mod, newdata=dat[split_testing==i,], n.trees = best.iter))
}
stopCluster(cl)

# what score do we get on the training data?
rmse = sqrt(mean( (yHat - dat$y) ^ 2 ))
print(rmse)

# show the predicted result for a sample image
img = readPNG("C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")
x = data.table(matrix(img, nrow(img) * ncol(img), 1), kmeansThreshold(img), img2vec(adaptiveThresholding(img)))
setnames(x, c("raw", "thresholded", "adaptive"))
yHat = predict(gbm.mod, newdata=x, n.trees = best.iter)
imgOut = matrix(yHat, nrow(img), ncol(img))
writePNG(imgOut, "C:\\Users\\Colin\\dropbox\\Kaggle\\Denoising Dirty Documents\\data\\sample.png")
plot(raster(imgOut))

20150815 output 6

The adaptive thresholding gets a relative importance score of 2.4%, but full image thresholding loses all of its importance in the model.

20150815 output 5

The result may not be perfect, but you can see how the coffee cup stain is starting to be erased. We will need to find some more features to continue the clean up, but that will be left to future blogs.

This blog’s model has improved the RMSE on the training data from 6.5% reported in my last blog to 5.4% in this latest version. There’s a long way to go to have a top 10 performing model, but this process is steadily moving us in the right direction.


Denoising Dirty Documents: Part 2

07 Friday Aug 2015

Posted by Colin Priest in Cluster Analysis, GBM, Gradient Boosting Machine, Image Processing, k-means, Kaggle, Machine Learning, R

≈ 9 Comments

Tags

Cluster Analysis, GBMs, Image Processing, k-means, Kaggle, Machine Learning, R

In the first blog in this series, I explained the nature of the problem to be solved, and showed how to create a simple linear model that gives an RMSE score on the training data of 7.8%. In this blog I introduce the feedback loop for creating features, and we extend and improve the existing competition model to include a couple of new features.

20150731 leaderboard

I have been building machine learning models for almost 20 years now, and never once has my first model been satisfactory, nor has that model been the model that I finally use. The Denoising Dirty Documents competition is no different. In the plots above, you can see my progress through the public leaderboard, as I iteratively improved my model. The same has occurred more broadly in the competition overall, as the best score has iteratively improved since the competition began.

Coming from an actuarial / statistical background, I tend to think in terms of a “control cycle”. For actuaries, the control cycle looks something like this:

control cycle

We can categorise the content of my last blog as:

1) Define the Problem

  • remove noise from images
  • images are three-dimensional surfaces

2) Hypothesise Solutions

  • we hypothesised that the pixel brightnesses needed to be rescaled

3) Implement

  • we fitted a least squares linear model to rescale the pixel brightnesses, and we capped and floored the values to keep them within the [0, 1] range

4) Monitor Results

  • we calculated the RMSE on the training data before and after the implementation
  • we looked at one of the output images

So we have completed one cycle. Now we need to do another cycle.

What I like about the Denoising Dirty Documents competition is that the data visualisation and model visualisation are obvious. We don’t need to develop clever visualisation tools to understand the problem or the model output. We can just look at the dirty and predicted images. Let’s begin by reviewing the example input and output images from the model used in the last blog, i.e. let’s begin by recalling the Monitor Results step from the previous cycle:

20150801 - before

20150801 - after

Now we can progress to the beginning of the next cycle, the Define the Problem stage. This time the problem to be solved is to remove one or more of the errors from the output of the existing model. Once we already have a model, we can define the problem by asking ourselves the following question.

Question: In the predicted image, where have prediction errors been made?

Answer: The predicted image contains some of the crease lines.

So in this cycle, the problem is to remove the remaining crease lines.

It’s time to begin the Hypothesise Solutions stage of the cycle by asking ourselves the following questions:

  • In the dirty image, what is a single characteristic of what we want to keep or remove that we haven’t captured in our existing model?
  • What characteristics do the errors in the predicted image have?
  • What characteristics does the dirty image have at the locations where these errors occur?

Here are a few observations that I made in answer to these questions:

  • In the dirty image, the writing is darker than the background around it. We haven’t captured this locality information.
  • The predicted image retains some of the crease lines. Those remaining crease lines are narrower and not as dark as the writing.
  • In the dirty image, the shadows next to the white centres of the creases correspond to the crease lines that remain in the predicted image.

This led me to the hypothesis that the model for a single pixel needs to include information about the brightness of other pixels in the image. There are multiple ways that we could consider the brightnesses of other pixels, but in today’s blog I will limit myself to considering the brightness of the pixel under consideration versus the range of pixel brightnesses across the entire image.

To Implement a hypothesised solution and test it, we need to know some theory about image processing. There are some good textbooks that cover this material. The image processing / machine vision textbook that I have been using is


Computer and Machine Vision, Fourth Edition: Theory, Algorithms, Practicalities

The machine vision technique that I will apply today is thresholding, which is the process of turning an image into pixels that can only be black or white, with no grey shades or colours. Writing code to do thresholding is the easy part. The trickier part is to decide the threshold value at which pixels are split into either black or white.


# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster)

img = readPNG("C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train\\6.png")

# turn the image into a vector
img2vec = function(img)
{
return (matrix(img, nrow(img) * ncol(img), 1))
}

# show a histogram
hist(img2vec(img))

20150808 plot 1

One way of finding a threshold value is to look at a histogram of the pixel brightnesses and look for a natural break between local maxima in the histogram. Since the histogram above is tri-modal, this would leave us with two choices of thresholds, one around 0.3 and one around 0.65.


# threshold at 0.3
img0.3 = img
img0.3[img0.3 <= 0.3] = 0
img0.3[img0.3 > 0.3] = 1
plot(raster(img0.3))

20150808 plot 2

Now we can Monitor Results. Thresholding at 0.3 gives us no false positives for this image, but leaves out a lot of the writing.


# threshold at 0.65
img0.65 = img
img0.65[img0.65 <= 0.65] = 0
img0.65[img0.65 > 0.65] = 1
plot(raster(img0.65))

20150808 plot 3

Thresholding at 0.65 gives us no false positives for this image, and correctly flags the writing without flagging the creases. That makes it quite a good feature for use on this image, as it will help us to remove the residual crease mark that we found when reviewing the output of the last blog’s model. We should definitely include a thresholding feature in our model!

But it feels very clumsy to plot a histogram and manually select the threshold, and such a feature definitely isn’t “machine learning”. Can we automate this?

Otsu’s Method is a well-known technique in image processing that can automatically suggest a threshold. But it assumes a bi-modal histogram, which we have seen is not the case here.
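That said, Otsu's method is simple enough to sketch from scratch, and seeing it helps to motivate what follows. This is a hedged, from-scratch sketch (not any package's implementation): build a histogram of pixel brightnesses and pick the threshold that maximises the between-class variance of the two resulting groups.

# a from-scratch sketch of Otsu's method: choose the threshold that maximises
# the between-class variance of the dark and light pixel groups
otsuThreshold = function(img, nBins = 256)
{
h = hist(as.vector(img), breaks = seq(0, 1, length.out = nBins + 1), plot = FALSE)
p = h$counts / sum(h$counts) # proportion of pixels in each bin
omega = cumsum(p) # weight of the dark class at each candidate threshold
mu = cumsum(p * h$mids) # first moment of the dark class
muT = mu[nBins] # global mean brightness
sigmaB2 = (muT * omega - mu) ^ 2 / (omega * (1 - omega))
h$mids[which.max(sigmaB2)]
}

otsuThreshold(img) # a single threshold, which is a compromise for our tri-modal histogram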

Inspired by Otsu’s method, we could use cluster analysis to generate three clusters of pixel brightnesses, and use the splits between those clusters to threshold the image.


# fit 3 clusters
v = img2vec(img)
km.mod = kmeans(v, 3)
# allow for the random ordering of the clusters
oc = order(km.mod$centers)
# the lower threshold is the halfway point between the top of the lowest cluster and the bottom of the middle cluster
loThresh = 0.5 * (max(v[km.mod$cluster == oc[1]]) + min(v[km.mod$cluster == oc[2]]))
# the higher threshold is the halfway point between the top of the middle cluster and the bottom of the highest cluster
hiThresh = 0.5 * (max(v[km.mod$cluster == oc[2]]) + min(v[km.mod$cluster == oc[3]]))

# using lower threshold
imgLo = img
imgLo[imgLo <= loThresh] = 0
imgLo[imgLo > loThresh] = 1
plot(raster(imgLo))

# using upper threshold
imgHi = img
imgHi[imgHi <= hiThresh] = 0
imgHi[imgHi > hiThresh] = 1
plot(raster(imgHi))

20150808 plot 4 20150808 plot 5

Now we can Monitor Results. Once again, the lower threshold choice doesn’t capture enough of the writing, while the upper threshold choice works well for this image. And this time there wasn’t any manual work required!

Let’s put it all together and combine last blog’s feature (a linear transformation of the raw pixel brightnesses) with this blog’s feature (thresholding). This time we will use something more sophisticated than linear regression. Our model will be based upon R’s gbm package, a gradient boosted machine that combines the raw pixel brightnesses with the thresholded pixels. GBMs are good all-purpose models, are simple to set up, perform well, and are almost always my first choice when creating machine learning models.


# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, data.table, gbm)

# a function to do k-means thresholding

kmeansThreshold = function(img)
{
# fit 3 clusters
v = img2vec(img)
km.mod = kmeans(v, 3)
# allow for the random ordering of the clusters
oc = order(km.mod$centers)
# the higher threshold is the halfway point between the top of the middle cluster and the bottom of the highest cluster
hiThresh = 0.5 * (max(v[km.mod$cluster == oc[2]]) + min(v[km.mod$cluster == oc[3]]))

# using upper threshold
imgHi = v
imgHi[imgHi <= hiThresh] = 0
imgHi[imgHi > hiThresh] = 1

return (imgHi)
}

dirtyFolder = "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train"
cleanFolder = "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train_cleaned"
outFolder = "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train_predicted"

outPath = file.path(outFolder, "trainingdata.csv")
filenames = list.files(dirtyFolder)
for (f in filenames)
{
print(f)
imgX = readPNG(file.path(dirtyFolder, f))
imgY = readPNG(file.path(cleanFolder, f))

# turn the images into vectors
x = matrix(imgX, nrow(imgX) * ncol(imgX), 1)
y = matrix(imgY, nrow(imgY) * ncol(imgY), 1)

# threshold the image
x2 = kmeansThreshold(imgX)

dat = data.table(cbind(y, x, x2))
setnames(dat,c("y", "raw", "thresholded"))
write.table(dat, file=outPath, append=(f != filenames[1]), sep=",", row.names=FALSE, col.names=(f == filenames[1]), quote=FALSE)
}

# view the data
dat = read.csv(outPath)
rows = sample(nrow(dat), 10000)
d1 = dat[rows,]
plot(d1$raw[d1$thresholded == 0], d1$y[d1$thresholded == 0], col = "blue")
lines(d1$raw[d1$thresholded == 1], d1$y[d1$thresholded == 1], col = "red", type="p")

# fit a model to a subset of the data
rows = sample(nrow(dat), 100000)
gbm.mod = gbm(y ~ raw + thresholded, data = dat[rows,], n.trees = 5000, cv.folds = 10, train.fraction = 0.5)
best.iter <- gbm.perf(gbm.mod,method="cv")

# what score do we get on the training data?
yHat = predict(gbm.mod, newdata=dat, n.trees = best.iter)
rmse = sqrt(mean( (yHat - dat$y) ^ 2 ))
print(rmse)

# show the predicted result for a sample image
img = readPNG("C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train\\6.png")
x = data.table(matrix(img, nrow(img) * ncol(img), 1), kmeansThreshold(img))
setnames(x, c("raw", "thresholded"))
yHat = predict(gbm.mod, newdata=x, n.trees = best.iter)
imgOut = matrix(yHat, nrow(img), ncol(img))
writePNG(imgOut, "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\sample.png")
plot(raster(imgOut))

20150808 plot 6

Notice how the sample output image is different from the thresholded image? The edges of the writing are lighter than the centre of the letters. That’s good, because that’s one of the characteristics of the clean image.

20150808 sample output

The sample image that we created from the predicted values is looking really good. The residual crease marks have disappeared, yet we have kept the writing. And the RMSE score on the training data has improved to 6.5%.

But before this blog ends, let’s consider an image that wasn’t cleaned so well:

# here's a sample image that doesn't perform as well
img = readPNG("C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train\\3.png")
x = data.table(matrix(img, nrow(img) * ncol(img), 1), kmeansThreshold(img))
setnames(x, c("raw", "thresholded"))
yHat = predict(gbm.mod, newdata=x, n.trees = best.iter)
imgOut = matrix(yHat, nrow(img), ncol(img))
writePNG(imgOut, "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\sample.png")
plot(raster(imgOut))

20150808 plot 7

Damn! I feel like cursing the jerk who put their coffee cup on top of the important document and left a stain!

In my next blog, I will discuss the first steps towards removing that coffee cup stain.


Denoising Dirty Documents: Part 1

01 Saturday Aug 2015

Posted by Colin Priest in Image Processing, Kaggle, Machine Learning, R

≈ 16 Comments

Tags

Image Processing, Kaggle, Machine Learning, R

I recently blogged about my learning curve in my first Kaggle competition. This has become my most popular blog to date, and some readers have asked for more. So this blog is the first in a series of blogs about how to put together a reasonable solution to Kaggle’s Denoising Dirty Documents competition.

Some other competitors have been posting scripts, but those scripts are usually written in Python, whereas my background makes me an R programmer. So I will be writing scripts that make use of R.

The Structure of the Problem

We have been given a series of training images, both dirty (with stains and creased paper) and clean (with a white background and black letters). We are asked to develop an algorithm that converts, as close as possible, the dirty images into clean images.

the problem to be solved

A greyscale image (such as shown above) can be thought of as a three-dimensional surface. The x and y axes are the location within the image, and the z axis is the brightness of the image at that location. The greater the brightness, the whiter the image at that location.

So from a mathematical perspective, we are being asked to transform one three-dimensional surface into another three-dimensional surface.
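One way to see this surface for yourself is to render a small patch of an image in 3D with base R's persp function. This is just an illustrative sketch; the 50 x 50 patch size is an arbitrary choice.

# render a patch of a dirty image as a 3D surface:
# x and y are the pixel locations, z is the pixel brightness
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png)

img = readPNG("C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train\\6.png")
patch = img[1:50, 1:50]
persp(1:nrow(patch), 1:ncol(patch), patch, theta = 30, phi = 30,
expand = 0.5, xlab = "row", ylab = "column", zlab = "brightness")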


Our task is to clean the images, to remove the stains, remove the paper creases, improve the contrast, and just leave the writing.

Loading the Image Data

In R, images are stored as matrices, with the row being the y-axis, the column being the x-axis, and the numerical value being the brightness of the pixel. Since Kaggle has stored the images in png format, we can use the png package to load the images.


# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster)

img = readPNG("C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train\\6.png")
head(img)
plot(raster(img))

20150801 output 1 20150801 output 2

You can see that the brightness values lie within the [0, 1] range, with 0 being black and 1 being white.

Restructuring the Data for Machine Learning

Instead of modelling the entire image at once, we should predict the cleaned-up brightness for each pixel within the image, and construct a cleaned image by combining together a set of predicted pixel brightnesses. We want a vector of y values, and a matrix of x values. The simplest data set is where the x values are just the pixel brightnesses of the dirty images.


# libraries
if (!require("pacman")) install.packages("pacman")
pacman::p_load(png, raster, data.table)

dirtyFolder = "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train"
cleanFolder = "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train_cleaned"
outFolder = "C:\\Users\\Colin\\Kaggle\\Denoising Dirty Documents\\data\\train_predicted"

outPath = file.path(outFolder, "trainingdata.csv")
filenames = list.files(dirtyFolder)
for (f in filenames)
{
print(f)
imgX = readPNG(file.path(dirtyFolder, f))
imgY = readPNG(file.path(cleanFolder, f))

# turn the images into vectors
x = matrix(imgX, nrow(imgX) * ncol(imgX), 1)
y = matrix(imgY, nrow(imgY) * ncol(imgY), 1)

dat = data.table(cbind(y, x))
setnames(dat,c("y", "x"))
write.table(dat, file=outPath, append=(f != filenames[1]), sep=",", row.names=FALSE, col.names=(f == filenames[1]), quote=FALSE)
}
# view the data
dat = read.csv(outPath)
head(dat)
rows = sample(nrow(dat), 10000)
plot(dat$x[rows], dat$y[rows])

20150801 output 4

The data is now in a familiar format, with each row representing a data point, the first column being the target value, and the remaining columns holding the predictors.

Our First Predictive Model

Look at the relationship between x and y.

20150801 output 3

Except at the extremes, there is a linear relationship between the brightness of the dirty images and the cleaned images. There is some noise around this linear relationship, and a clump of pixels that are halfway between white and black. There is a broad spread of x values as y approaches 1, and these pixels probably represent stains that need to be removed.

So the obvious first model would be a linear transformation, with truncation to ensure that the predicted brightnesses remain within the [0, 1] range.


# fit a linear model, ignoring the data points at the extremes
lm.mod.1 = lm(y ~ x, data=dat[dat$y > 0.05 & dat$y < 0.95,])
summary(lm.mod.1)
dat$predicted = sapply(predict(lm.mod.1, newdata=dat), function(x) max(min(x, 1),0))
plot(dat$predicted[rows], dat$y[rows])
rmse1 = sqrt(mean( (dat$y - dat$x) ^ 2))
rmse2 = sqrt(mean( (dat$predicted - dat$y) ^ 2))
c(rmse1, rmse2)

20150801 output 5 20150801 output 6

The linear model has done a brightness and contrast correction. This reduces the RMSE score from 0.157 to 0.078. Let’s see an output image:


# show the predicted result for a sample image
img = readPNG("C:\\Users\\Colin\\Dropbox\\Kaggle\\Denoising Dirty Documents\\data\\train\\6.png")
x = data.table(matrix(img, nrow(img) * ncol(img), 1))
setnames(x, "x")
yHat = sapply(predict(lm.mod.1, newdata=x), function(x) max(min(x, 1),0))
imgOut = matrix(yHat, nrow(img), ncol(img))
writePNG(imgOut, "C:\\Users\\Colin\\Dropbox\\Kaggle\\Denoising Dirty Documents\\data\\sample.png")
plot(raster(imgOut))

20150801 output 7

Although we have used a very simple model, we have been able to clean up this image:

20150801 - before

Our predicted image is:

20150801 - after

That’s quite good performance for a simple least squares linear regression!

To be fair though, I deliberately chose an example image that performs well. In my next blog in this series, I will discuss the use of a feedback loop in model design, and how to design new features to use as predictors.


My Kaggle Learning Curve: Artificial Stupidity

25 Saturday Jul 2015

Posted by Colin Priest in Image Processing, Kaggle, Machine Learning

≈ 10 Comments

Tags

Image Processing, Kaggle, Machine Learning

This month I started competing in my very first Kaggle competition, Denoising Dirty Documents. I was first introduced to Kaggle a few years ago by Xavier Conort, an insurance industry colleague who also lives here in Singapore. But I had been passive with my Kaggle membership, and hadn’t even considered competing.

This year two things changed. Firstly, I joined IntelliM, an image processing, machine learning and software house, and I needed to get out into the real world and make business connections and start adding value in these fields. Secondly, Kaggle opened the Denoising Dirty Documents competition, which is about pre-processing scanned documents so that they are suitable for optical character recognition, and this competition required both image processing skills and machine learning skills. So this competition looked like a great match for me, and hopefully would be an easy transition to build some experience within Kaggle.

COPR

Although I am an actuary by training, I have not always stayed within the traditional bounds of actuarial work. Back in the 1990s I first started playing with machine learning, using neural networks to predict which customers will renew their insurance policies. Then, inspired by Kim and Nelson’s book, I developed a state space regime switching model for predicting periods of massive builder insolvencies. That model has subsequently been adapted for cancer research, to measure the timing of genes switching off and on. In the 2000s I started getting involved in image processing, firstly to create optical character recognition for a web scraper software package, and later developing COPR, license plate recognition software. Over the past decade I have been using machine learning for customer analytics and insurance pricing.

the problem to be solved

So I thought that just doing some pre-processing for optical character recognition would be quick and easy. When I looked at the examples (see one example above), my eyes could quickly see what the answer should look like even before I peeked at the example cleaned image. I was so wrong…

Lesson: Avoid Artificial Stupidity

Machine learning is sometimes called artificial intelligence. After all, aren’t neural networks based upon the architecture of the human brain?

My first competition submission was a pure machine learning solution. I modelled the target image one pixel at a time. For predictors, I got the raw pixel brightnesses for a region around each pixel location. This is a brute force approach that I have used in the past for optical character recognition. I figured that the machine learning algorithm would learn what the character strokes looked like, and thereby know which pixels should be background.
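For the curious, the feature extraction I am describing looks something like the following sketch; the half-width of 2, giving a 5 x 5 patch of predictors per pixel, is an illustrative choice.

# brute-force feature extraction: for every pixel, collect the brightnesses
# of the surrounding patch as that pixel's predictors
neighbourhoodFeatures = function(img, w = 2)
{
nr = nrow(img)
nc = ncol(img)
feats = matrix(0, nr * nc, (2 * w + 1) ^ 2)
k = 1
for (dy in -w:w)
for (dx in -w:w)
{
# shift the image by (dy, dx), replicating pixels at the border
rows = pmin(pmax(1:nr + dy, 1), nr)
cols = pmin(pmax(1:nc + dx, 1), nc)
feats[, k] = as.vector(img[rows, cols])
k = k + 1
}
feats # one row per pixel, one column per patch position
}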

What really happened was that the machine learning algorithm simply adjusted the brightness and contrast of the image, to better match the required solution. So I scored 8.58%, giving me 24th ranking, much higher than I was expecting, and much closer to some naive benchmarks than I was comfortable with.

submission 1

I wanted a top ten placing, but I was a long way away from it. So I fine-tuned the model hyperparameters. This moderately improved the score, and only moved me up 3 ranks. My next competition submission actually scored far worse than my preceding two submissions! I needed to rethink my approach because I was going backwards, and the better submissions were almost an order of magnitude better than mine.

The reason my submission scored so poorly was because I was asking the machine learning model to learn complex interactions between pixels, without any guidance from me. There are heuristics about text images that I intuitively know, but I hadn’t passed on any of that knowledge to the machine learning algorithm, either via predictors or model structure.

My algorithm wasn’t artificially intelligent; it was artificially stupid!

So I stopped making submissions to the competitions, and started looking at the raw images and cleaned images, and I applied some common image processing algorithms. I asked myself these questions:

  • what is it about the text that is different to the background?
  • what are the typical characteristics of text?
  • what are the typical characteristics of stains?
  • what are the typical characteristics of folded or crinkled paper?
  • how does a dark stain differ from dark text?
  • what does the output from an image processing algorithm tell me about whether a pixel is text or background?
  • what are the shortcomings of a particular image processing algorithm?
  • what makes an image processing algorithm drop out some of the text?
  • what makes an image processing algorithm think that a stain is text?
  • what makes an image processing algorithm think that a paper fold is text?
  • which algorithms have opposing types of classification errors?

3

For example, in the image above, the algorithm thins out the text too much, does not remove the outer edges of stains, and does not remove small stains. That prompted me to think that maybe an edge finding algorithm would complement this algorithm.

leaderboard 20150725

After a week of experimentation and feature extraction, I finally made a new competition submission, and it jumped me up in the rankings. Then I started fine tuning my model, and split the one all-encompassing machine learning model into multiple specialist models. At the time of writing this blog I am ranked 4th in the competition, and after looking at the scores of the top 3 competitors, I realise that I will have to do more than just fine tune my algorithm. It’s time for me to get back into research mode and find a new feature that identifies the blob stain at the end of the first paragraph in this image:

3-postprocessed

Kaggle is addictive. I can’t wait to solve this problem!
