This month I started competing in my very first Kaggle competition, Denoising Dirty Documents. I was first introduced to Kaggle a few years ago by Xavier Conort, an insurance industry colleague who also lives here in Singapore. But I had been passive with my Kaggle membership, and hadn’t even considered competing.

This year two things changed. Firstly, I joined IntelliM, an image processing and machine learning software house, so I needed to get out into the real world, make business connections, and start adding value in these fields. Secondly, Kaggle opened the Denoising Dirty Documents competition, which is about pre-processing scanned documents so that they are suitable for optical character recognition; it calls for both image processing skills and machine learning skills. This competition looked like a great match for me, and hopefully an easy way to start building some experience within Kaggle.

[Image: COPR]

Although I am an actuary by training, I have not always stayed within the traditional bounds of actuarial work. Back in the 1990s I first started playing with machine learning, using neural networks to predict which customers would renew their insurance policies. Then, inspired by Kim and Nelson’s book, I developed a state space regime switching model for predicting periods of massive builder insolvencies. That model has subsequently been adapted for cancer research, to measure the timing of genes switching off and on. In the 2000s I started getting involved in image processing, first creating optical character recognition for a web scraper software package, and later developing COPR, license plate recognition software. Over the past decade I have been using machine learning for customer analytics and insurance pricing.

[Image: the problem to be solved]

So I thought that just doing some pre-processing for optical character recognition would be quick and easy. When I looked at the examples (see one example above), my eyes could quickly see what the answer should look like even before I peeked at the example cleaned image. I was so wrong…

Lesson: Avoid Artificial Stupidity

Machine learning is sometimes called artificial intelligence. After all, aren’t neural networks based upon the architecture of the human brain?

My first competition submission was a pure machine learning solution. I modelled the target image one pixel at a time. As predictors, I used the raw pixel brightnesses from a region around each pixel location. This is a brute-force approach that I have used in the past for optical character recognition. I figured that the machine learning algorithm would learn what the character strokes looked like, and thereby know which pixels should be background.
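A minimal sketch of that brute-force setup, assuming scikit-learn and greyscale images held as NumPy arrays scaled to [0, 1]; the window size and the choice of a random forest are illustrative, not a record of what I actually used:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

PAD = 2  # illustrative: a (2*PAD+1) x (2*PAD+1) window around each pixel

def patch_features(img, pad=PAD):
    """One row of raw neighbourhood brightnesses per pixel."""
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    rows = []
    for y in range(h):
        for x in range(w):
            rows.append(padded[y:y + 2 * pad + 1, x:x + 2 * pad + 1].ravel())
    return np.array(rows)

def fit_pixel_model(dirty_imgs, clean_imgs):
    """dirty_imgs / clean_imgs: matching lists of training images."""
    X = np.vstack([patch_features(img) for img in dirty_imgs])
    y = np.concatenate([img.ravel() for img in clean_imgs])
    model = RandomForestRegressor(n_estimators=50, n_jobs=-1)
    model.fit(X, y)
    return model

def clean_image(model, dirty_img):
    """Predict the cleaned brightness of every pixel, one patch at a time."""
    pred = model.predict(patch_features(dirty_img))
    return pred.reshape(dirty_img.shape).clip(0, 1)
```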

What really happened was that the machine learning algorithm simply adjusted the brightness and contrast of the image to better match the required solution. So I scored 8.58%, placing me 24th, a higher ranking than I was expecting, but much closer to some naive benchmarks than I was comfortable with.

[Image: submission 1]

I wanted a top ten placing, but I was a long way from it. So I fine-tuned the model hyperparameters. This moderately improved the score, but only moved me up 3 ranks. My next competition submission actually scored far worse than my preceding two submissions! I needed to rethink my approach because I was going backwards, and the better submissions were almost an order of magnitude better than mine.

The reason my submission scored so poorly was that I was asking the machine learning model to learn complex interactions between pixels, without any guidance from me. There are heuristics about text images that I intuitively know, but I hadn’t passed any of that knowledge on to the machine learning algorithm, either via predictors or model structure.

My algorithm wasn’t artificially intelligent; it was artificially stupid!

So I stopped making submissions to the competition, started looking at the raw images and cleaned images, and applied some common image processing algorithms. I asked myself these questions:

  • what is it about the text that is different to the background?
  • what are the typical characteristics of text?
  • what are the typical characteristics of stains?
  • what are the typical characteristics of folded or crinkled paper?
  • how does a dark stain differ from dark text?
  • what does the output from an image processing algorithm tell me about whether a pixel is text or background?
  • what are the shortcomings of a particular image processing algorithm?
  • what makes an image processing algorithm drop out some of the text?
  • what makes an image processing algorithm think that a stain is text?
  • what makes an image processing algorithm think that a paper fold is text?
  • which algorithms have opposing types of classification errors?

[Image: 3]

For example, in the image above, the algorithm thins out the text too much, does not remove the outer edges of stains, and does not remove small stains. That prompted me to think that maybe an edge finding algorithm would complement this algorithm.
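To make that idea concrete, here is a rough sketch of the kind of complementary per-pixel features I mean, assuming OpenCV; the particular algorithms and parameter values are illustrative, not a description of my actual submission:

```python
import cv2
import numpy as np

def complementary_features(gray):
    """Stack the outputs of algorithms with opposing error types, one row per pixel.

    gray: 8-bit greyscale image (0 = black ink, 255 = white paper).
    """
    # Adaptive thresholding separates text from a varying background, but it can
    # thin the strokes and it keeps the outer edges of large stains.
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 25, 10)  # block size 25, offset 10

    # Canny edge detection picks up stroke outlines (and stain outlines), so its
    # mistakes are different from the thresholding mistakes above.
    edges = cv2.Canny(gray, 50, 150)

    # A large median blur estimates the background: thin dark text vanishes,
    # while slowly varying stains and paper folds survive.
    background = cv2.medianBlur(gray, 23)

    # A downstream model can learn how to reconcile the algorithms' disagreements.
    return np.column_stack([gray.ravel(), thresh.ravel(),
                            edges.ravel(), background.ravel()]).astype(np.float32) / 255.0
```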

[Image: leaderboard 20150725]

After a week of experimentation and feature extraction, I finally made a new competition submission, and it jumped me up in the rankings. Then I started fine-tuning my model, and split the one all-encompassing machine learning model into multiple specialist models. At the time of writing this blog I am ranked 4th in the competition, and after looking at the scores of the top 3 competitors, I realise that I will have to do more than just fine-tune my algorithm. It’s time for me to get back into research mode and find a new feature that identifies the blob stain at the end of the first paragraph in this image:

[Image: 3-postprocessed]
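As an aside, the specialist-model split I mentioned above amounts to something like the sketch below; the routing rule (a crude measure of how stained a page looks) and the choice of gradient boosting are assumptions made purely for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def looks_heavily_stained(gray):
    """Crude, illustrative routing rule: flag an image as heavily stained when
    many pixels are mid-tone rather than near-white paper or near-black ink."""
    return np.mean((gray > 0.25) & (gray < 0.75)) > 0.3

def fit_specialists(features_by_image, targets_by_image, images):
    """Train one regressor per image type instead of one all-purpose model."""
    specialists = {}
    for flag in (True, False):
        X_parts = [f for f, img in zip(features_by_image, images)
                   if looks_heavily_stained(img) == flag]
        y_parts = [t for t, img in zip(targets_by_image, images)
                   if looks_heavily_stained(img) == flag]
        if not X_parts:
            continue  # no training images of this type
        model = GradientBoostingRegressor()
        model.fit(np.vstack(X_parts), np.concatenate(y_parts))
        specialists[flag] = model
    return specialists
```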

Kaggle is addictive. I can’t wait to solve this problem!