## Quick tips for data analysis in Python: MDP and matplotlib

I've been using MDP and matplotlib a lot recently, and although overall I've been very pleased with the documentation for both projects, I have run into a few problems for which the solutions were not immediately obvious. This post gives the solution to each, in the expectation that it will be useful to me in the future and the hope that it may also be useful to others.

#### Principal Component Analysis with MDP

##### Data Layout

The tutorial for the Modular Toolkit for Data Processing (MDP) starts with a quick example of using the toolkit for a PCA analysis, and yet I still ran into a couple of problems. The first issue I had was how the pca function expects to receive data; I suspect this is simply down to unfamiliarity with the field and the language used within it. For future reference, the data is expected to be in the following format, with one row per observation and one column per variable:

|                          | Gene 1 | Gene 2 | Gene 3 | Gene 4 |
|--------------------------|--------|--------|--------|--------|
| Experimental Condition 1 | .      | .      | .      | .      |
| Experimental Condition 2 | .      | .      | .      | .      |
##### Variance Accounted For in PC1, 2, etc

The previously mentioned quick-start tutorial was very helpful in producing results quickly, but I couldn't find a way to get a value for how much of the variance in the data was accounted for by each principal component. To get that, as far as I've been able to determine, you need to interact with the PCANode directly rather than using the convenience function. The code is still relatively straightforward.

```
import mdp
import numpy as np
import matplotlib.pyplot as plt

#Create sample data: two clusters of observations
var1 = np.random.normal(loc=0., scale=0.5, size=(10, 5))
var2 = np.random.normal(loc=4., scale=1., size=(10, 5))
var = np.concatenate((var1, var2), axis=0)

#Create the PCA node and train it
pcan = mdp.nodes.PCANode(output_dim=3)
pcar = pcan.execute(var)

#Graph the results
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(pcar[:10, 0], pcar[:10, 1], 'bo')
ax.plot(pcar[10:, 0], pcar[10:, 1], 'ro')

#Show variance accounted for
#pcan.d holds the eigenvalues of the retained components, so normalise
#them to get each component's share of the retained variance
ax.set_xlabel('PC1 (%.3f%%)' % (pcan.d[0] / pcan.d.sum() * 100))
ax.set_ylabel('PC2 (%.3f%%)' % (pcan.d[1] / pcan.d.sum() * 100))

plt.show()
```

Running this code produces an image similar to the one below.

#### Growing neural gas with MDP

The growing neural gas implementation was another sample application highlighted in the tutorial for MDP. It held my interest for a while as a technique which could potentially be applied to the transcription of plaques for the openplaques project. It wasn't immediately obvious how to get the position of a node from a connected nodes object. As the tutorial left the details of visualisation up to the user I'll present the solution to getting the node location in the form of the necessary code to visualise the node training. The end result will look something like the following.

#### Matplotlib

I've been using Matplotlib to plot data exclusively for a while now. The defaults produce reasonable quality graphs, and any differences of opinion can be quickly fixed either by altering options in matplotlib or, as the graphs can be saved in SVG format, in a vector image manipulation program such as Inkscape. Although most options can be changed in matplotlib, it can sometimes be difficult to find the correct one. Most of the time the naming of variables is, to my mind, logical, but sometimes I just can't find the right way to describe what I want to do.

##### Hiding axes

I wanted to have a grid of 6 graphs but didn't want to display the axes on all the graphs as I felt this looked cluttered.

##### Fixing the axis range

If I was going to display the axes on only some of the graphs then the values for the axes needed to be the same on all of them.

```
import numpy as np
import matplotlib.pyplot as plt

#Generate sample data
var = np.random.random_sample((40, 2))

fig = plt.figure()
for i in range(4):
    ax = fig.add_subplot(2, 2, i + 1)
    start = i * 10
    ax.plot(var[start:start + 10, 0], var[start:start + 10, 1], 'bo')

    #Hide the x axis labels on the top row of charts
    if i in [0, 1]:
        plt.setp(ax.get_xticklabels(), visible=False)

    #Hide the y axis labels on the right column of charts
    if i in [1, 3]:
        plt.setp(ax.get_yticklabels(), visible=False)

    #Set the axis range so all four charts share the same scale
    ax.axis([0, 1, 0, 1])
plt.show()
```

Running this code should produce an image similar to the one below.

##### Removing second point in plot legend

The legend assumes that points are connected, so two points and the connecting line are shown by default. If the points on the graph aren't connected this looks strange. Removing the duplicate symbol is straightforward.

```
import numpy as np
import matplotlib.pyplot as plt

#Generate sample data
var = np.random.random_sample((10, 2))

#Plot data with labels
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(var[0:5, 0], var[0:5, 1], 'bo', label="First half")
ax.plot(var[5:10, 0], var[5:10, 1], 'r^', label="Second half")
ax.legend(numpoints=1)
plt.show()
```

## AI Cookbook Competition - Month Three

A little over two months ago I wrote about the first round of the AI Cookbook competition. Since then there have been two further rounds and considerable progress. For the latest round I was able to get the error score down to 10.867 using an additional image pre-processing step and then a variety of text clean-up improvements.

#### Image Pre-processing

Ian, who writes the AI Cookbook, had a theory that the curved text present at the top of many of the plaques in the test set was causing tesseract, our OCR software of choice, significant problems in transcribing the main text. If we could automatically recognise the curved text and block it out, the transcription should be significantly improved. In the diagram below the text we want to be transcribed is in green and the text we don't want is in red.

I couldn't think of a good method to actually recognise the curved text at the top, so decided to use a 'dumb' approach. The curved text is in the same place on all the plaques, so I built a system to apply the same mask to all the images. To do this I went back to what I could still remember from high school math lessons. To the probable delight of my old math teachers I quickly had some working code. The code I wrote cycles through all the pixels in the image and converts each to a distance and angle relative to the centre of the image. This process is hopefully easier to visualise in the image below. The distance is simple enough to calculate as we're dealing with a right-angled triangle; we simply square the x and y values, add them together and take the square root. The angle is a little trickier. The y-value represents the opposite side of the triangle and the x-value represents the adjacent side, so from the mnemonic SOH CAH TOA we know the angle will be tan⁻¹(O/A). Knowing that, we can then apply our rules for distance and angle.
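That conversion can be sketched in a couple of lines; `to_polar` is a name of my choosing, and `math.atan2` is used because it handles all four quadrants:

```
import math

def to_polar(x, y, centre_x, centre_y):
    """Convert pixel coordinates to a distance and angle relative to the
    image centre."""
    dx = x - centre_x
    dy = y - centre_y
    #Distance: square the x and y values, add them and take the square root
    distance = math.sqrt(dx ** 2 + dy ** 2)
    #Angle: tan^-1(opposite / adjacent)
    angle = math.degrees(math.atan2(dy, dx))
    return distance, angle

#A pixel 100 pixels to the right of the centre
print(to_polar(200, 100, 100, 100))  # (100.0, 0.0)
```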

#### Text Clean-up

The text clean-up was lots of little steps. Briefly, I have:

1. Made various improvements to the regexes for cleaning up the years
2. Converted any instances of 'vv' (two v's) to 'w' (one w)
3. Switched 0 (zero) to o (letter o) in words
4. Removed any one/two character tokens from the end of the string
5. Improved the selection of suggestions from the spell checker
6. Broken up long words to see if a valid word can be found in the two halves
7. Changed "s to 's
8. Improved correction for endings where the ending is lived|worked|died here and the spelling checker returns bad results
9. Removed any words containing three of: lowercase, uppercase, digits and punctuation
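
A few of the simpler substitutions (items 2, 3 and 7) can be sketched as below; `clean_token` is a hypothetical helper rather than the exact code used:

```
import re

def clean_token(token):
    """Apply a few of the simple OCR clean-up substitutions."""
    #Convert any instances of 'vv' (two v's) to 'w' (one w)
    token = token.replace('vv', 'w')
    #Switch 0 (zero) to o (letter o) in words, i.e. alphanumeric tokens
    #that contain no other digits
    if token.isalnum() and not token.isdigit() and '0' in token \
            and not re.search(r'[1-9]', token):
        token = token.replace('0', 'o')
    #Change "s to 's
    token = token.replace('"s', "'s")
    return token

print(clean_token('vvorked'))  # worked
print(clean_token('L0nd0n'))   # London
```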

The regex for that last item is something of a monstrosity, and as I'm far from an expert it wouldn't surprise me if it doesn't entirely do what I think it does. I've used whitespace to make it slightly easier to follow. Each line represents a sub-expression; if any sub-expression matches the string then the expression as a whole is considered to match. Each line matches a different combination of three of digits, lowercase, uppercase and punctuation. The .+ at the end means we match one or more of any character. The expressions in brackets starting with a question mark are lookahead assertions. The .+ still matches any characters, but the lookahead assertions state that, for instance, at least one of the characters matched must be a digit. It doesn't matter in what order the characters are present as long as they are present. If you suspect there is a flaw in the pattern or know some way to simplify it then I would really appreciate a quick note in the comments field below.

```
re.compile(r""" #matching a combination of digits, lowercase, uppercase and punctuation
((?=.*\d)(?=.*[a-z])(?=.*['",.\-]).+| #d,l,p
(?=.*[A-Z])(?=.*[a-z])(?=.*['",.\-]).+| #u,l,p
(?=.*\d)(?=.*[A-Z])(?=.*[a-z]).+| #d,u,l
(?=.*\d)(?=.*[A-Z])(?=.*['",.\-]).+ #d,u,p
)""", re.VERBOSE)
```
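The pattern's behaviour can be checked quickly against a few tokens. Note that in the character classes here the hyphen is placed at the end, so it is treated as a literal - rather than as a range:

```
import re

pattern = re.compile(r"""
((?=.*\d)(?=.*[a-z])(?=.*['",.\-]).+|
(?=.*[A-Z])(?=.*[a-z])(?=.*['",.\-]).+|
(?=.*\d)(?=.*[A-Z])(?=.*[a-z]).+|
(?=.*\d)(?=.*[A-Z])(?=.*['",.\-]).+
)""", re.VERBOSE)

#Tokens mixing three of the four character types match ...
print(bool(pattern.match('He3lo')))   # True: upper, lower, digit
print(bool(pattern.match('b0rn,')))   # True: lower, digit, punctuation
#... while ordinary words and years do not
print(bool(pattern.match('London')))  # False
print(bool(pattern.match('1854')))    # False
```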

That's all for now. I believe Ian is planning to run the competition for a further month and there are still considerable improvements to be made so it would be great to see more people taking part.

## AI cookbook competition - transcription for the openplaques project

Ian Ozsvald over at aicookbook has been doing some work using optical character recognition (OCR) to transcribe plaques for the openplaques group. His write-ups have been interesting so when he posted a challenge to the community to improve on his demo code I decided to give it a try.

The demo code was very much a proof of principle and its score of 709.3 was easy to beat. I managed to quickly get the score down to 44 and with a little more work reached 33.4. The score is a Levenshtein distance metric, so the lower the better. I was hoping to get below 30 but in the end just didn't have time. I suspect it wouldn't take a lot of work to improve on my score. Here's what I've done so far…

### Configure the system

All the work I've done was on an Ubuntu 10.04 installation and the instructions which follow will only deal with this environment. Beyond the base install I use three different packages:

- **Python Imaging Library**: used for pre-processing the images before submitting to tesseract
- **Tesseract**: the OCR software used
- **Enchant spellchecker**: used for cleaning up the transcribed text

Their installation is straightforward using apt-get

```
$ sudo apt-get install python-imaging python-enchant tesseract-ocr tesseract-ocr-eng
```

### Fetch images

The demo code written by Ian (available here) includes a script to fetch the images from flickr. It's as simple as running the following

```
$ python get_plaques.py easy_blue_plaques.csv
```

Once the images are downloaded I suggest you go ahead and run the demo transcribing script. Again it's nice and simple

```
$ python plaque_transcribe_demo.py easy_blue_plaques.csv
```

Then you can calculate the score using

```
$ python summarise_results.py results.csv
```

### Improving transcription

Ian had posted a number of good suggestions on the wiki for how to improve the transcription quality. I used three approaches:

- **Image preprocessing**: cropping the image and converting it to black and white took the score from 782 (the demo code produced a higher score on my system than it did for Ian) to 44.6
- **Restricting the characters tesseract will return**: restricting the character set used by tesseract to alphanumeric characters and a limited selection of punctuation characters further lowered the score from 44.6 to 35.7
- **Spell checking**: running the results from tesseract through a spell checker and filtering out some common errors brought the score down to 33.4

I'll post the entire script at the bottom of this post but want to highlight a few of the key elements first.

The first stage, cropping the image to the plaque, is handled by the function crop_to_plaque, which expects a Python Imaging Library image object. The function reduces the size of the image to speed up processing before looking for blue pixels. A blue pixel is assumed to be any pixel where the value of the blue channel is at least 20% higher than both the red and green channels. The number of blue pixels in each row and column of the image is counted, and then the image is cropped down to the rows and columns where the number of blue pixels is greater than 15% of the height or width of the image. This cutoff is based solely on experimentation and seemed to give good results for this selection of plaques.

The next stage, converting the image to black and white, is handled by the function convert_to_bandl, which again expects a Python Imaging Library image object. The function converts any blue pixels to white and all other pixels to black. Ian has pointed out that this approach might be overly stringent and that I might get better results using some grey as well. The result of running these two functions on three of the plaques is shown below.

The next step was limiting the character set used by tesseract. The easiest way to do this is to create a file in /usr/share/tesseract-ocr/tessdata/configs/ which I called goodchars with the following content.

```
0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ.,()-"
```

That selection of characters seems to include all the characters present in the plaques. To use this limited character set the call to tesseract needs to be altered to

```
cmd = 'tesseract %s %s -l eng nobatch goodchars' % (filename_tif, filename_base)
```

Finally I perform a bunch of small clean-up tasks. Firstly I fix the year ranges, which frequently had extra spaces inserted; occasionally 1s appeared as i or l, and 3 appeared as a parenthesis. These were fixed by a couple of regular expressions, including one callback function (clean_years). Then I separate the transcription out into individual words and fix a number of other issues, including lone characters and duplicated characters, before checking the spelling on any words of more than two characters.

### Where next?

There is still lots of 'low hanging fruit' on this problem. At the moment the curved text at the top of the plaque and the small symbol at the bottom of the plaques is handled badly and I think the bad characters at the beginning and end of the transcriptions could be easily stripped out. The spelling corrections I make do overall reduce the error but they introduce some new errors. I suspect by being more selective in where spelling checks are made some of these introduced errors could be removed.

### The entire script

```
import os
import sys
import csv
import urllib
from PIL import Image # http://www.pythonware.com/products/pil/
from PIL import ImageFilter
import enchant
import re

# This recognition system depends on tesseract
# version 2.04, it must be installed and compiled already

# plaque_transcribe_test5.py
# run it with 'cmdline> python plaque_transcribe_test5.py easy_blue_plaques.csv'
# and it'll:
# 1) send images to tesseract
# 2) read in the transcribed text file
# 3) convert the text to lowercase
# 4) use a Levenshtein error metric to compare the recognised text with the
# human supplied transcription (in the plaques list below)
# 5) write error to file

# For more details see:
# http://aicookbook.com/wiki/Automatic_plaque_transcription

"""build plaques structure from CSV file"""
plaques = []
for row in plqs:
image_url = row[1]
text = row[2]
# ignore id (0) and plaque url (3) for now
last_slash = image_url.rfind('/')
filename = image_url[last_slash+1:]
filename_base = os.path.splitext(filename)[0] # turn 'abc.jpg' into 'abc'
filename = filename_base + '.tif'
root_url = image_url[:last_slash+1]
plaque = [root_url, filename, text]
plaques.append(plaque)
return plaques

def levenshtein(a, b):
    """Calculates the Levenshtein distance between a and b
    Taken from: http://hetland.org/coding/python/levenshtein.py"""
    n, m = len(a), len(b)
    if n > m:
        # Make sure n <= m, to use O(min(n,m)) space
        a, b = b, a
        n, m = m, n

    current = range(n+1)
    for i in range(1, m+1):
        previous, current = current, [i]+[0]*n
        for j in range(1, n+1):
            add, delete = previous[j]+1, current[j-1]+1
            change = previous[j-1]
            if a[j-1] != b[i-1]:
                change = change + 1
            current[j] = min(add, delete, change)

    return current[n]

def transcribe_simple(filename):
    """Convert image to TIF, send to tesseract, read the file back, clean and
    return"""
    # read in original image, save as .tif for tesseract
    im = Image.open(filename)
    filename_base = os.path.splitext(filename)[0] # turn 'abc.jpg' into 'abc'

    #Enhance contrast
    #contraster = ImageEnhance.Contrast(im)
    #im = contraster.enhance(3.0)
    im = crop_to_plaque(im)
    im = convert_to_bandl(im)

    filename_tif = 'processed' + filename_base + '.tif'
    im.save(filename_tif, 'TIFF')

    # call tesseract, read the resulting .txt file back in
    cmd = 'tesseract %s %s -l eng nobatch goodchars' % (filename_tif, filename_base)
    print "Executing:", cmd
    os.system(cmd)
    input_filename = filename_base + '.txt'
    input_file = open(input_filename)
    lines = input_file.readlines()
    line = " ".join([x.strip() for x in lines])
    input_file.close()
    # delete the output from tesseract
    os.remove(input_filename)

    # convert line to lowercase
    transcription = line.lower()

    #Remove gaps in year ranges
    transcription = re.sub(r"(\d+)\s*-\s*(\d+)", r"\1-\2", transcription)
    transcription = re.sub(r"([0-9il\)]{4})", clean_years, transcription)

    #Separate words
    d = enchant.Dict("en_GB")
    newtokens = []
    print 'Prior to post-processing: ', transcription
    tokens = transcription.split(" ")
    for token in tokens:
        if (token == 'i') or (token == 'l') or (token == '-'):
            pass
        elif token == '""':
            newtokens.append('"')
        elif token == '--':
            newtokens.append('-')
        elif len(token) > 2:
            if d.check(token):
                #Token is a valid word
                newtokens.append(token)
            else:
                #Token is not a valid word
                suggestions = d.suggest(token)
                if len(suggestions) > 0:
                    #If the spell check has suggestions take the first one
                    newtokens.append(suggestions[0])
                else:
                    newtokens.append(token)
        else:
            newtokens.append(token)

    transcription = ' '.join(newtokens)

    return transcription

def clean_years(m):
    digits = m.group(1)
    year = []
    for digit in digits:
        if digit == 'l':
            year.append('1')
        elif digit == 'i':
            year.append('1')
        elif digit == ')':
            year.append('3')
        else:
            year.append(digit)
    return ''.join(year)

def crop_to_plaque(srcim):
    scale = 0.25
    wkim = srcim.resize((int(srcim.size[0] * scale), int(srcim.size[1] * scale)))
    wkim = wkim.filter(ImageFilter.BLUR)
    #wkim.show()

    width = wkim.size[0]
    height = wkim.size[1]

    #result = wkim.copy()
    highlight_color = (255, 128, 128)
    R, G, B = 0, 1, 2
    lrrange = {}
    for x in range(width):
        lrrange[x] = 0
    tbrange = {}
    for y in range(height):
        tbrange[y] = 0

    for x in range(width):
        for y in range(height):
            point = (x, y)
            pixel = wkim.getpixel(point)
            if (pixel[B] > pixel[R] * 1.2) and (pixel[B] > pixel[G] * 1.2):
                lrrange[x] += 1
                tbrange[y] += 1
                #result.putpixel(point, highlight_color)

    #result.show()

    left = 0
    right = 0
    cutoff = 0.15
    for x in range(width):
        if (lrrange[x] > cutoff * height) and (left == 0):
            left = x
        if lrrange[x] > cutoff * height:
            right = x

    top = 0
    bottom = 0
    for y in range(height):
        if (tbrange[y] > cutoff * width) and (top == 0):
            top = y
        if tbrange[y] > cutoff * width:
            bottom = y

    left = int(left / scale)
    right = int(right / scale)
    top = int(top / scale)
    bottom = int(bottom / scale)

    box = (left, top, right, bottom)
    region = srcim.crop(box)
    #region.show()

    return region

def convert_to_bandl(im):
    width = im.size[0]
    height = im.size[1]

    white = (255, 255, 255)
    black = (0, 0, 0)
    R, G, B = 0, 1, 2

    for x in range(width):
        for y in range(height):
            point = (x, y)
            pixel = im.getpixel(point)
            if (pixel[B] > pixel[R] * 1.2) and (pixel[B] > pixel[G] * 1.2):
                im.putpixel(point, white)
            else:
                im.putpixel(point, black)
    #im.show()
    return im

if __name__ == '__main__':
    argc = len(sys.argv)
    if argc != 2:
        print "Usage: python plaque_transcribe_demo.py plaques.csv (e.g. \
easy_blue_plaques.csv)"
    else:
        plaques = load_plaques(sys.argv[1])
        results = open('results.csv', 'w')

        for root_url, filename, text in plaques:
            print "----"
            print "Working on:", filename
            transcription = transcribe_simple(filename)
            print "Transcription: ", transcription
            print "Text: ", text
            error = levenshtein(text, transcription)
            assert isinstance(error, int)
            print "Error metric:", error
            results.write('%s,%d\n' % (filename, error))
            results.flush()
        results.close()
```

## Predicting HIV Progression

About a month ago I came across Kaggle which provides a platform for prediction competitions. It's an interesting concept. Accurate predictions are very useful but designing systems to make such predictions is challenging. By engaging the public it's hoped that talent not normally available to the competition organiser will have a try at the problem and come up with a model which is superior to previous efforts.

Prediction is not exactly my area of expertise but I wanted to have a crack at one of the competitions currently running: predicting response to treatment in HIV patients. I haven't yet started developing a model but wanted to release the python framework I've put together to test ideas. It can be downloaded here.

I've included a number of demonstration prediction methods: randomly guessing, assuming all will respond, or assuming none will respond. I suggest you start with one of these methods and then improve on it with your own attempt. The random method was my first submission which, at the time of writing, puts me in 30th position out of 33 teams. Improving on that shouldn't be difficult.

The usage of the framework isn't difficult.

```
>>> import bootstrap
>>> boot = bootstrap.Bootstrap("method_rand")
>>> boot.run(50)
Mean score:  0.501801084135
Standard deviation:  0.0241816159815
Maximum:  0.544554455446
Minimum:  0.442386831276
>>>
```

During development you can use the bootstrap class to get an idea of how well your method works, as demonstrated above. All the training data is split randomly into training and testing sets, and then the method is trained on the training set and assessed on the test set. This process is repeated (the default is 50 times) and the scores returned. The score returned will be different to the score when you submit, but hopefully it should give you an indication of how well you're doing.
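The split-and-score loop described above can be sketched as follows; the names here are hypothetical, and the real framework looks prediction methods up by name rather than taking a function:

```
import random
import statistics

def bootstrap_scores(data, labels, train_and_score, repeats=50, train_fraction=0.7):
    """Repeatedly split the data into random training and testing sets and
    collect the score of a prediction method on each split."""
    scores = []
    indices = list(range(len(data)))
    for _ in range(repeats):
        random.shuffle(indices)
        cut = int(len(indices) * train_fraction)
        train = [(data[i], labels[i]) for i in indices[:cut]]
        test = [(data[i], labels[i]) for i in indices[cut:]]
        scores.append(train_and_score(train, test))
    return statistics.mean(scores), statistics.stdev(scores), max(scores), min(scores)

#A 'method' equivalent to random guessing on a binary outcome
def method_rand(train, test):
    guesses = [random.choice([0, 1]) for _ in test]
    correct = sum(g == label for g, (_, label) in zip(guesses, test))
    return correct / len(test)

data = [[random.random()] for _ in range(200)]
labels = [random.choice([0, 1]) for _ in range(200)]
mean, sd, high, low = bootstrap_scores(data, labels, method_rand)
print(mean, sd, high, low)
```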

```
>>> import submission
>>> sub = submission.Submission("method_rand")
>>> sub.run("submission1.csv")
>>>
```

When you are satisfied with your method you can create the file needed for submission using the above code. In this case we are sticking with the random method. The submission file is submission1.csv. Hopefully this code is useful to you and you'll submit a prediction method yourself.

## Password hashing with phpass in the Zend Framework

I've been using the Zend Framework to good effect on and off for a few months now and have found it very useful in rapidly bringing projects to completion. Many people feel Zend Framework is more a library than a framework, and with good reason. There are few things it prevents you from doing, but it's not ready to go 'out of the box' in the way some other frameworks are. One example is the way passwords are stored in the database. The default is simply to store them in plain text.

The manual does cover hashing the password, but even this isn't really ideal. There seems to be some consensus forming that the correct way to handle passwords is using bcrypt, a Blowfish-based hashing scheme. The most widely known demonstration of this within the PHP community is the phpass hashing framework, which has already been integrated into WordPress and phpBB. As such, I was in good company integrating it into my own projects.

The first step is making the phpass code available in your ZF project. The changes I made were minor, renaming the class and switching the PasswordHash function to __construct. I would encourage you to fetch the latest code from the phpass project page. The snippet of code I changed is below.

```
class Acai_Hash {

    var $itoa64;
    var $iteration_count_log2;
    var $portable_hashes;
    var $random_state;

    function __construct($iteration_count_log2, $portable_hashes)
    {
```

The next step was altering the database table adapter in Zend_Auth. The code for this is below.

```
<?php

class Acai_Auth_Adapter_DbTable extends Zend_Auth_Adapter_DbTable
{

    public function __construct ($zendDb = null, $tableName = null, $identityColumn = null, $credentialColumn = null)
    {
        //It is not stored in the registry, so fall back to the default adapter
        if ($zendDb == null) {
            $zendDb = Zend_Db_Table_Abstract::getDefaultAdapter();
        }

        //Set default values
        $tableName = $tableName ? $tableName : 'accounts';
        $identityColumn = $identityColumn ? $identityColumn : 'email';
        $credentialColumn = $credentialColumn ? $credentialColumn : 'password';

        parent::__construct($zendDb,
                            $tableName,
                            $identityColumn,
                            $credentialColumn);
    }

    protected function _authenticateCreateSelect()
    {
        // get select
        $dbSelect = clone $this->getDbSelect();
        $dbSelect->from($this->_tableName)
                 ->where(
                     $this->_zendDb->quoteIdentifier($this->_identityColumn, true)
                     . ' = ?', $this->_identity);

        return $dbSelect;
    }

    protected function _authenticateValidateResult($resultIdentity)
    {
        //Check that hash value is correct
        $hash = new Acai_Hash(8, false);
        $check = $hash->CheckPassword($this->_credential,
                                      $resultIdentity[$this->_credentialColumn]);

        if (!$check) {
            $this->_authenticateResultInfo['code'] =
                Zend_Auth_Result::FAILURE_CREDENTIAL_INVALID;
            $this->_authenticateResultInfo['messages'][] =
                'Supplied credential is invalid.';
            return $this->_authenticateCreateAuthResult();
        }

        $this->_resultRow = $resultIdentity;

        $this->_authenticateResultInfo['code'] =
            Zend_Auth_Result::SUCCESS;
        $this->_authenticateResultInfo['messages'][] =
            'Authentication successful.';
        return $this->_authenticateCreateAuthResult();
    }

    public function getResultRowObject ($returnColumns = null, $omitColumns = null)
    {
        if ($returnColumns || $omitColumns) {
            return parent::getResultRowObject($returnColumns, $omitColumns);
        } else {
            //By default omit the password hash from the returned row
            return parent::getResultRowObject(null, $this->_credentialColumn);
        }
    }

}
```

Usage is just as for the standard Zend adapter.

```
$auth = Zend_Auth::getInstance();

$adapter = new Acai_Auth_Adapter_DbTable();
$adapter->setIdentity($email)
        ->setCredential($password);

$result = $auth->authenticate($adapter);
if (!$result->isValid()) {

    //Bad credentials

} else {

    //Good credentials

}
```

When you're initially registering a new user a hash can be generated simply with the following code.

```
//Generate hash for password
$hash = new Acai_Hash(8, false);
$hashed = $hash->HashPassword($password);
```