Deep neural networks are typically too slow to train on CPUs, so GPUs are used instead. The example in the notebook uses a relatively small network, so it should be runnable on any hardware.
Following a hiatus of a couple of years I have rejoined the competitors on kaggle. The UPenn and Mayo Clinic Seizure Detection Challenge had 8 days to run when I decided to participate. Given the time I had available, I'm quite pleased with my final score: I finished in 27th place with 0.93558. The metric used was area under the ROC curve, where 1.0 is perfect and 0.5 is no better than random.
Prompted by a post from Zac Stewart I decided to give pipelines in scikit-learn a try. The data from the challenge consisted of electroencephalogram recordings from several patients and dogs. These subjects had different numbers of channels in their recordings, so manually implementing the feature extraction would have been very slow and repetitive. Using pipelines made the process incredibly easy and allowed me to make changes quickly.
The features I used were incredibly simple. All the code is in transformers.py: I used the variance, the median, and the FFT, which I pooled into 6 bins. No hyperparameter optimization was attempted before I ran out of time.
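For illustration, here is a rough sketch of the kind of pipeline this enables. The transformer names, the classifier, and the assumed data layout are hypothetical rather than a copy of transformers.py, but the pattern of combining custom transformers with a FeatureUnion is what made it quick to handle subjects with different channel counts.

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.ensemble import RandomForestClassifier

class VarianceTransformer(BaseEstimator, TransformerMixin):
    """Per-channel variance of each recording segment (illustrative)."""
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # X is a sequence of (n_channels, n_samples) arrays; for a single
        # subject every segment has the same channel count, so this stacks cleanly
        return np.array([np.var(x, axis=1) for x in X])

class BinnedFFTTransformer(BaseEstimator, TransformerMixin):
    """Pool the magnitude spectrum of each channel into a handful of bins."""
    def __init__(self, n_bins=6):
        self.n_bins = n_bins

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        out = []
        for x in X:
            spectrum = np.abs(np.fft.rfft(x, axis=1))
            bins = np.array_split(spectrum, self.n_bins, axis=1)
            out.append(np.concatenate([b.mean(axis=1) for b in bins]))
        return np.array(out)

# One object extracts every feature set and feeds them to a classifier
pipeline = Pipeline([
    ('features', FeatureUnion([
        ('variance', VarianceTransformer()),
        ('fft', BinnedFFTTransformer(n_bins=6)),
    ])),
    ('clf', RandomForestClassifier(n_estimators=100)),
])
# pipeline.fit(train_segments, labels)
# pipeline.predict_proba(test_segments)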
Next time, I'll be looking for a competition with longer to run.
This Saturday the DC Python group held a coding meetup. As part of the event I ran an introduction to scientific computing for about 7 people.
After a quick introduction to numpy, matplotlib, pandas and scikit-learn we decided to pick a dataset and apply some machine learning. The dataset we decided to use was from a Kaggle competition looking at the Titanic disaster. This competition had been posted to help the community get started with machine learning so it seemed perfect.
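To give a flavour of what we worked through, below is a minimal sketch of the kind of starting point the Titanic competition lends itself to. The column names come from the competition's train.csv; the feature choices and classifier are illustrative rather than a record of exactly what we built on the day.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Load the competition's training data
train = pd.read_csv('train.csv')

# A couple of simple features: sex, passenger class and fare
train['Sex'] = train['Sex'].map({'male': 0, 'female': 1})
features = train[['Sex', 'Pclass', 'Fare']].fillna(0)
target = train['Survived']

# Estimate accuracy with cross-validation
clf = RandomForestClassifier(n_estimators=100)
print(cross_val_score(clf, features, target, cv=5).mean())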
I am currently working on a fairly complex data collection task. This is the third such task in the past year, and by now I'm reasonably comfortable handling the mechanics, especially when I can utilise tools like Scrapy, lxml and a reasonable ORM for database access. Deciding exactly what to store seems like an easy question, and yet it is the one causing me the most trouble.
The difficulty exists because deciding what to store means balancing multiple competing interests.
Storing everything is the easiest approach to implement and lets you delay deciding which data points you are interested in. The disadvantages are that it can place significant demands on storage capacity and it risks silent failure.
Storing just the data you are interested in minimises storage requirements and makes it easier to detect failures. However, if the information you want is moved (more common for HTML scraping than for APIs), or you realise you have not been collecting everything you want, there is no way to go back and alter what you extract or how you extract it.
Failure detection is easier with storing just what you need because your expectations are more detailed. If you expect to find an integer at a certain node in the DOM and either fail to find the node or the content is not an integer you can be relatively certain that there is an error. If you are storing the entire document a request to complete a CAPTCHA or a notice that you have exceeded a rate limit may be indistinguishable from the data you are hoping to collect.
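As a sketch of what that more detailed expectation looks like in practice, the snippet below parses a single value out of a page and fails loudly if the node is missing or its content isn't an integer. The URL handling and the XPath are placeholders, not taken from the project described here.

import requests
from lxml import html

def extract_score(url):
    """Fetch a page and pull out one integer value, raising if it isn't there."""
    page = html.fromstring(requests.get(url, timeout=10).content)
    nodes = page.xpath('//span[@id="score"]/text()')  # placeholder XPath
    if not nodes:
        raise ValueError('Expected node not found - the page layout may have changed')
    try:
        return int(nodes[0].strip())
    except ValueError:
        raise ValueError('Node found but content is not an integer: %r' % nodes[0])

# A CAPTCHA page or a rate-limit notice trips one of these checks
# instead of being silently stored as if it were data.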
So far I've taken an approach somewhere between these two extremes although I doubt I am close to the optimal solution. For the current project I need to parse much of the data I am interested in so that I can collect the remainder. It feels natural in this situation to favour storing only what I intend to use even though this decision has slowed down development.
Have you been in a similar situation and faced these same choices? Which approach did you take?
I'm currently working on a project which centres around pulling in data from an external website, "mashing" it up with some additional content, and then displaying it on a website.
The website is going to be interactive and reasonably complex, so I decided to use django. There isn't a webservice for the external data, so I'm stuck parsing HTML (and Excel spreadsheets, but that's a separate story). Scrapy seemed ideal for this and, although I wish I had used some approach other than XPath, it largely has been.
Having set up my database models in django and built my spider in scrapy, the next step was putting the data from the spider into the database. There are plenty of posts detailing how to use the django ORM from outside a django project, even some specific to scrapy, but they didn't seem to work for me.
The issue was the way I handled development and production environment settings.
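For reference, here is a minimal sketch of the shape of the fix. It assumes a recent Django (with django.setup()) and a hypothetical project with split development and production settings modules; the key point is to point DJANGO_SETTINGS_MODULE at the correct environment's settings explicitly before importing any models.

# At the top of the Scrapy pipeline module, before importing any models
import os
import django

# 'myproject.settings.development' is a hypothetical module name; swap in the
# production settings module when running against the live database.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.development')
django.setup()

from myapp.models import Record  # import models only after setup()

class DjangoWriterPipeline(object):
    """Scrapy item pipeline that saves scraped items through the Django ORM."""
    def process_item(self, item, spider):
        Record.objects.get_or_create(name=item['name'],
                                     defaults={'value': item['value']})
        return item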
Although I frequently use Numpy, I'm far from an expert, and the content of my talk reflected this. I started with a general introduction to the array object and then expanded the scope of the talk to highlight some of the projects that use Numpy, giving examples with MDP and matplotlib.
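The array introduction covered the sort of basics sketched below; this snippet is illustrative rather than a copy of the slides.

import numpy as np

# Arrays are homogeneous, n-dimensional, and support vectorised arithmetic
a = np.arange(12).reshape(3, 4)
print(a.shape, a.dtype)        # (3, 4) and a platform-dependent integer dtype

# Slicing returns views, not copies
print(a[1, :])                 # second row
print(a[:, -1])                # last column

# Operations are applied elementwise, with broadcasting across shapes
print(a * 2 + 1)
print(a - a.mean(axis=0))      # subtract the column means from every row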
The talk was followed by some excellent discussion. We went through some of the code on slide 6 in a lot of detail.
The PyNorthwest group meets at Madlab in Manchester city centre on the third Thursday of each month. If you're in the area check it out. The January event is on the 19th, starting at 7pm.
This was the third BarcampNortheast event I have attended. Each has been slightly different but they have all been a weekend well spent. This year felt a little smaller than previous years but that may have partly been because we were in a bigger space.
I have been attending the python Edinburgh meetups for a while. They have always been interesting, and the Northwest meetup this Thursday was the first since I moved back to the Northwest. The format, alternating talks and coding sessions, is different to Edinburgh's: regular pub meetups with occasional talks, coding sessions and miniconferences. It was an interesting crowd and the other talks, on Apache Thrift and teaching programming to GCSE students (15-16 year olds), gave a really good variety of subjects to discuss later.
Last weekend the Python Edinburgh users group hosted a mini-conference. Saturday morning was kicked off with a series of talks followed by sessions introducing and then focusing on contributing to django prior to sprints which really got going on the Sunday.
The slides for my talk, "Images and Vision in Python", are now available in pdf format here.
The slide deck I used is relatively lightweight, with my focus being on demonstrating the different packages available. The code I went through is below.
from PIL import Image

# Open an image and show it
pil1 = Image.open('filename')
pil1.show()

# Get its size
pil1.size

# Resize
pil1s = pil1.resize((100, 100))
# or - thumbnail
pil1.thumbnail((100, 100), Image.ANTIALIAS)

# New image
bg = Image.new('RGB', (500, 500), '#ffffff')

# Two ways of accessing the pixels: getpixel/putpixel and load - load is faster
pix = bg.load()
for a in range(100, 200):
    for b in range(100, 110):
        pix[a, b] = (0, 0, 255)
bg.show()

# Drawing shapes is slightly more involved
from PIL import ImageDraw
draw = ImageDraw.Draw(bg)
draw.ellipse((300, 300, 320, 320), fill='#ff0000')
bg.show()

from PIL import ImageFont
font = ImageFont.truetype("/usr/share/fonts/truetype/freefont/FreeSerif.ttf", 72)
draw.text((10, 10), "Hello", font=font, fill='#00ff00')
bg.show()

# Demos for vision (originally run interactively; imports added so it runs as a script)
import numpy as np
import pylab
from pylab import imshow
from scipy import ndimage
import mahotas

# Create a sample image
v1 = np.zeros((10, 10), bool)
v1[1:4, 1:4] = True
v1[4:7, 2:6] = True
imshow(v1, interpolation="Nearest")
imshow(mahotas.dilate(v1), interpolation="Nearest")
imshow(mahotas.erode(v1), interpolation="Nearest")
imshow(mahotas.thin(v1), interpolation="Nearest")
# Opening, closing and top-hat are combinations of dilate and erode

# Labeling - the latest version of mahotas has a label function too
v1[8:, 8:] = True
imshow(v1)
labeled, nr_obj = ndimage.label(v1)
nr_obj  # number of connected objects found
imshow(labeled, interpolation="Nearest")
pylab.jet()

# Thresholding - convert a grayscale image to a binary image
v2 = mahotas.imread("/home/jonathan/openplaques/blueness_images/1.jpg")
if v2.ndim == 3:
    v2 = v2.mean(axis=2).astype(np.uint8)  # otsu needs a 2D integer image
T = mahotas.otsu(v2)
imshow(v2)
imshow(v2 > T)

# Distance transforms
dist = mahotas.distance(v2 > T)
imshow(dist)

pylab.show()
I've been using MDP and matplotlib a lot recently and, although overall I've been very pleased with the documentation for both projects, I have run into a few problems for which the solutions were not immediately obvious. This post gives the solution to each, in the expectation that it will be useful to me in the future and the hope that it may also be useful to others.
The tutorial for the Modular Toolkit for Data Processing (MDP) starts with a quick example of using the toolkit for a PCA analysis, and yet I still ran into a couple of problems. The first issue I had was how the pca function expects to receive data. I suspect this is simply due to my unfamiliarity with the field and the language used within it. For future reference, the data is expected to have observations (e.g. experimental conditions) as rows and variables (e.g. genes) as columns, in the following format:
                           Gene 1   Gene 2   Gene 3   Gene 4
Experimental Condition 1     .        .        .        .
Experimental Condition 2     .        .        .        .
The previously mentioned quick start tutorial was very useful for getting a result quickly, but I couldn't find a way to get a value for how much of the variance in the data was accounted for by the principal components. To get that, as far as I've been able to determine, you need to interact with the PCANode directly rather than using the convenience function. The code is still relatively straightforward.
import mdp
import numpy as np
import matplotlib.pyplot as plt

# Create sample data
var1 = np.random.normal(loc=0., scale=0.5, size=(10, 5))
var2 = np.random.normal(loc=4., scale=1., size=(10, 5))
var = np.concatenate((var1, var2), axis=0)

# Create the PCA node, train it and project the data
pcan = mdp.nodes.PCANode(output_dim=3)
pcan.train(var)
pcar = pcan.execute(var)

# Graph the results
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(pcar[:10, 0], pcar[:10, 1], 'bo')
ax.plot(pcar[10:, 0], pcar[10:, 1], 'ro')

# pcan.d holds the variance along each retained component; compare it with the
# total variance in the data to label the percentage each component accounts for
total_var = var.var(axis=0, ddof=1).sum()
ax.set_xlabel('PC1 (%.3f%%)' % (100 * pcan.d[0] / total_var))
ax.set_ylabel('PC2 (%.3f%%)' % (100 * pcan.d[1] / total_var))
plt.show()
Running this code produces an image similar to the one below.
The growing neural gas implementation was another sample application highlighted in the tutorial for MDP. It held my interest for a while as a technique which could potentially be applied to the transcription of plaques for the openplaques project. It wasn't immediately obvious how to get the position of a node from a connected nodes object. As the tutorial left the details of visualisation up to the user I'll present the solution to getting the node location in the form of the necessary code to visualise the node training. The end result will look something like the following.
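The original visualisation code is no longer to hand, so below is a minimal sketch of the idea. It assumes, as in the MDP growing neural gas implementation, that each node in the trained node's graph attribute exposes its position as node.data.pos and that edges carry head and tail nodes; treat those attribute names as my best recollection rather than gospel.

import mdp
import numpy as np
import matplotlib.pyplot as plt

# Sample data: points scattered around a ring
angles = np.random.uniform(0, 2 * np.pi, size=2000)
x = np.column_stack((np.cos(angles), np.sin(angles)))
x += np.random.normal(scale=0.1, size=x.shape)

# Train the growing neural gas
gng = mdp.nodes.GrowingNeuralGasNode(max_nodes=75)
gng.train(x)
gng.stop_training()

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x[:, 0], x[:, 1], 'b.', alpha=0.2)

# Each graph node stores its position in node.data.pos
for node in gng.graph.nodes:
    ax.plot([node.data.pos[0]], [node.data.pos[1]], 'ro')
# Draw the edges between connected nodes
for edge in gng.graph.edges:
    head, tail = edge.head.data.pos, edge.tail.data.pos
    ax.plot([head[0], tail[0]], [head[1], tail[1]], 'r-')
plt.show()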
I've been using Matplotlib to plot data exclusively for a while now. The defaults produce reasonable quality graphs, and any differences of opinion can be quickly fixed either by altering options in matplotlib or, as the graphs can be saved in svg format, in a vector image manipulation program such as Inkscape. Although most options can be changed in matplotlib, it can sometimes be difficult to find the correct one. Most of the time the naming of options is, to my mind, logical, but sometimes I just can't find the right way to describe what I want to do.
I wanted to have a grid of 6 graphs but didn't want to display the axes on all the graphs as I felt this looked cluttered.
If I was going to display the axes on only some of the graphs then the values for the axes needed to be the same on all of them.
import numpy as np
import matplotlib.pyplot as plt

# Generate sample data
var = np.random.random_sample((40, 2))

fig = plt.figure()
for i in range(4):
    ax = fig.add_subplot(2, 2, i + 1)
    start = i * 10
    ax.plot(var[start:start + 10, 0], var[start:start + 10, 1], 'bo')
    # Hide the x axis labels on the top row of charts
    if i in [0, 1]:
        plt.setp(ax.get_xticklabels(), visible=False)
    # Hide the y axis labels on the right column of charts
    if i in [1, 3]:
        plt.setp(ax.get_yticklabels(), visible=False)
    # Set the axis range so all charts share the same scale
    ax.axis([0, 1, 0, 1])
plt.show()
Running this code should produce an image similar to the one below.
By default the legend assumes the values are connected, so two points and the connecting line are shown for each entry. If the points on the graph aren't connected this looks strange. Removing the duplicate symbol is straightforward.
import numpy as np
import matplotlib.pyplot as plt

# Generate sample data
var = np.random.random_sample((10, 2))

# Plot data with labels
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(var[0:5, 0], var[0:5, 1], 'bo', label="First half")
ax.plot(var[5:10, 0], var[5:10, 1], 'r^', label="Second half")
ax.legend(numpoints=1)
plt.show()