Strategies for data collection

I am currently working on a fairly complex data collection task. This is the third such task in the past year, and by now I'm reasonably comfortable with the mechanics, especially when I can utilise tools like Scrapy, lxml and a reasonable ORM for database access. Deciding exactly what to store sounds like an easy question, yet it is the one causing me the most trouble.

The difficulty is that deciding what to store means balancing several competing interests.

Store everything

Storing everything is the easiest approach to implement and lets you defer deciding which data points you are interested in. The disadvantages are that it can place significant demands on storage capacity and that it risks silent failure.
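To make that end of the spectrum concrete, here is a minimal sketch of a spider that writes every response it receives straight to disk, so the parsing decisions can be deferred. The spider name, start URL and output directory are placeholders rather than details from my project.

    import hashlib
    import pathlib

    import scrapy


    class RawPageSpider(scrapy.Spider):
        name = "raw_pages"
        start_urls = ["https://example.com/listing"]
        output_dir = pathlib.Path("raw_pages")

        def parse(self, response):
            # Keep the complete document; which fields matter can be decided later.
            self.output_dir.mkdir(exist_ok=True)
            key = hashlib.sha1(response.url.encode("utf-8")).hexdigest()
            (self.output_dir / f"{key}.html").write_bytes(response.body)

            # Follow links so the rest of the site gets captured too.
            for href in response.css("a::attr(href)").getall():
                yield response.follow(href, callback=self.parse)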

Store just what you need

Storing just the data you are interested in minimises storage requirements and makes it easier to detect failures. The downside is that if the information you want is moved (more common with HTML scraping than with APIs), or you realise you have not been collecting everything you want, there is no way to go back and alter what you extract or how you extract it.
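At this end of the spectrum the scraper pulls out only the fields of interest and throws the rest of the document away. A rough sketch, with invented field names and XPath expressions:

    from lxml import html


    def extract_listing(raw_html: bytes) -> dict:
        # Keep only the fields the project actually uses;
        # the rest of the document is discarded.
        doc = html.fromstring(raw_html)
        return {
            "title": doc.xpath("string(//h1[@class='title'])").strip(),
            "price": doc.xpath("string(//span[@class='price'])").strip(),
        }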

Failure detection

Failure detection is easier when you store just what you need because your expectations are more detailed. If you expect to find an integer at a certain node in the DOM, and you either fail to find the node or its content is not an integer, you can be relatively certain that there is an error. If you are storing the entire document, a request to complete a CAPTCHA or a notice that you have exceeded a rate limit may be indistinguishable from the data you are hoping to collect.
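A sketch of what that kind of check might look like in practice; the XPath and the exception name are illustrative, not from my code:

    from lxml import html


    class ScrapeError(Exception):
        """Raised when a page does not contain the data we expect."""


    def extract_review_count(raw_html: bytes) -> int:
        doc = html.fromstring(raw_html)
        nodes = doc.xpath("//span[@id='review-count']/text()")
        if not nodes:
            # The node is missing entirely: perhaps a CAPTCHA or rate-limit page.
            raise ScrapeError("expected //span[@id='review-count'] but found nothing")
        try:
            return int(nodes[0].strip())
        except ValueError:
            raise ScrapeError(f"expected an integer, got {nodes[0]!r}")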

So far I've taken an approach somewhere between these two extremes, although I doubt I am close to the optimal solution. For the current project I need to parse much of the data I am interested in just to collect the remainder. In this situation it feels natural to favour storing only what I intend to use, even though that decision has slowed down development.
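Concretely, the middle ground looks something like the sketch below: the listing pages have to be parsed anyway to discover the detail pages, and only the extracted fields are kept. URLs, selectors and field names are placeholders.

    import scrapy


    class HybridSpider(scrapy.Spider):
        name = "hybrid"
        start_urls = ["https://example.com/listing?page=1"]

        def parse(self, response):
            # The listing page is parsed only to find what else needs fetching.
            for href in response.xpath("//a[@class='detail']/@href").getall():
                yield response.follow(href, callback=self.parse_detail)

            next_page = response.xpath("//a[@rel='next']/@href").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)

        def parse_detail(self, response):
            # Only the fields I intend to use are stored.
            yield {
                "url": response.url,
                "title": response.xpath("string(//h1)").get(default="").strip(),
                "price": response.xpath("string(//span[@class='price'])").get(default="").strip(),
            }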

Have you been in a similar situation and faced these same choices? Which approach did you take?

Django and Scrapy

I'm currently working on a project which centres around pulling in data from an external website, "mashing" it up with some additional content, and then displaying the result on a site of my own.

The website is going to be interactive and reasonably complex, so I decided to use Django. There is no web service for the external data, so I'm stuck parsing HTML (and Excel spreadsheets, but that's a separate story). Scrapy seemed ideal for this, and although I wish I had used some other approach than XPath, it largely has been.

Having set up my database models in Django and built my spider in Scrapy, the next step was getting the data from the spider into the database. There are plenty of posts detailing how to use the Django ORM from outside a Django project, even some specific to Scrapy, but they didn't seem to work for me.
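For reference, the approach those posts generally describe is roughly the following: point Django at a settings module and call django.setup() before importing any models, then save items from an item pipeline. The settings module and model names here are placeholders, and whether it works as written depends on how your settings are organised.

    import os

    import django

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
    django.setup()

    # Models can only be imported once Django has been configured.
    from myapp.models import Listing


    class DjangoWriterPipeline:
        """Scrapy item pipeline that saves each scraped item via the Django ORM."""

        def process_item(self, item, spider):
            Listing.objects.update_or_create(
                url=item["url"],
                defaults={"title": item["title"], "price": item["price"]},
            )
            return item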

The issue was the way I handled development and production environment settings.