
Friday 27 May 2011

Parsing the Enron email dataset using Tika and Hadoop

In order to parse a large collection of emails, such as the Enron Email Dataset, we might choose to use Apache Hadoop, a scalable computing framework, and Apache Tika, a content analysis toolkit. This can be done easily with Behemoth, an open source platform for large scale document analysis developed by DigitalPebble. For more details on Behemoth, see the Behemoth Tutorial.

Using the August 21, 2009 version of the dataset, the first step is to use Behemoth's CorpusGenerator to create a corpus of BehemothDocuments from the Enron Dataset in HDFS. A BehemothDocument is the native object used by Behemoth. At ingest, it contains the original document, its content type and URL. After processing by a Behemoth module, it also contains the extracted text, additional metadata and annotations created about the document.
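
To make that structure concrete, here is a simplified Java sketch of the fields a BehemothDocument carries before and after processing. It is only an illustration of the description above, not the actual Behemoth class, which is a Hadoop Writable with accessors rather than public fields.

import java.util.List;
import java.util.Map;

// Simplified illustration of what a BehemothDocument holds (not the real class).
public class BehemothDocumentSketch {

    // Populated at ingest time by CorpusGenerator
    String url;              // where the original document came from
    String contentType;      // e.g. "message/rfc822" for the Enron emails
    byte[] content;          // the raw bytes of the original document

    // Populated later by Behemoth modules (Tika, GATE, UIMA, ...)
    String text;                      // extracted plain text
    Map<String, String> metadata;     // e.g. mail headers found by Tika
    List<Annotation> annotations;     // spans of text with a type and features

    // Minimal annotation holder for this sketch
    static class Annotation {
        String type;                  // e.g. "Person", "Date"
        long start, end;              // character offsets in the extracted text
        Map<String, String> features;
    }
}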

Once the dataset has been ingested, the next step is to use the Behemoth Tika module to run a Hadoop Map/Reduce job that extracts the content of the emails and metadata about them. Using Apache Tika 0.9, 5% of the documents fail to parse correctly. However, with the latest version of Tika (a 1.0-SNAPSHOT, revision 825923), only 0.2% of the documents fail.

One way to investigate why parsing fails is to look at the user logs generated by Hadoop, which contain the exceptions thrown for the failing documents. An alternative is to write a custom reducer that sorts the exceptions thrown by Tika, using the exception stack trace as the key and the document URLs as the values. With Tika revision 825923, four exceptions are thrown, caused by two underlying problems: lines longer than 10,000 characters, which is the current default maximum line length in the Tika mail parser, and malformed dates. The first problem can be solved by increasing the maximum line length in a MimeEntityConfig object and then modifying TikaProcessor to pass it into the ParseContext.
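
As a rough illustration of that second approach, here is a minimal reducer along those lines. It assumes the map phase has already emitted (exception stack trace, document URL) pairs; the class is a hypothetical sketch, not the actual code used.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Groups failed documents by the exception stack trace emitted by the mapper, so each
// distinct failure mode is reported once with a count and a few sample URLs.
public class TikaExceptionReducer extends Reducer<Text, Text, Text, IntWritable> {

    @Override
    protected void reduce(Text stackTrace, Iterable<Text> urls, Context context)
            throws IOException, InterruptedException {
        int count = 0;
        StringBuilder samples = new StringBuilder();
        for (Text url : urls) {
            if (count < 5) samples.append("\n  ").append(url); // keep a few example URLs
            count++;
        }
        context.write(new Text(stackTrace + samples), new IntWritable(count));
    }
}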

As for the second problem, the mail parser in Tika currently performs strict parsing, i.e. parsing of a whole document fails as soon as parsing of a single field fails. TIKA-667 contains a contribution that makes it possible to turn off strict parsing, so that some data can still be extracted from the emails with malformed dates. This can also be configured via MimeEntityConfig. With these changes incorporated, all documents are processed correctly.
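
Put together, the two fixes amount to something like the following sketch. It assumes, as per TIKA-667, that the mail parser picks a MimeEntityConfig up from the ParseContext; the helper class is hypothetical and the package of MimeEntityConfig differs between mime4j versions.

import org.apache.james.mime4j.parser.MimeEntityConfig;
import org.apache.tika.parser.ParseContext;

// Hypothetical helper building a ParseContext with relaxed mail parsing limits,
// which TikaProcessor would then pass to parser.parse(stream, handler, metadata, context).
public class MailParseContextFactory {

    public static ParseContext relaxedMailContext() {
        MimeEntityConfig mimeConfig = new MimeEntityConfig();
        mimeConfig.setMaxLineLen(100000);   // raise the 10,000-character default
        mimeConfig.setStrictParsing(false); // tolerate malformed dates instead of failing
        ParseContext context = new ParseContext();
        context.set(MimeEntityConfig.class, mimeConfig);
        return context;
    }
}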

Saturday 19 March 2011

DigitalPebble is hiring!

We are looking for a candidate with the following skills and expertise :
  • strong background in NLP and Java
  • GATE, experience of writing plugins and PRs, excellent knowledge of JAPE
  • IE, Linked Data, Ontologies
  • statistical approaches and machine learning
  • large scale computing with Hadoop
  • knowledge of the following technologies / tools : Lucene, SOLR, NoSQL, Tika, UIMA, Mahout
  • good social and presentation skills
  • good spoken and written English, knowledge of other languages would be a plus
  • taste for challenges and problem solving

DigitalPebble is located in Bristol (UK) and specialises in open source solutions for text engineering.

More details on our activities can be found on our website. We would consider candidates working remotely with occasional travel to Bristol and to our clients in the UK and Europe. Being located in or near Bristol would be a plus.

This job is an opportunity to get involved in the growth of a small company, work on interesting projects and take part in various Apache related projects and events. Bristol is also a great place to live.

Please send your CV and cover letter before 15 April 2011 to job@digitalpebble.com

Best regards,

Julien Nioche

Friday 21 January 2011

BerlinBuzzwords 2011

There is a CFP out for BerlinBuzzwords 2011, which will take place on 6-7 June.

I presented Behemoth there last year and really enjoyed the conference: high quality talks, a fantastic atmosphere and great exchanges with fellow open source committers. I really recommend it and will definitely try to go this year, probably to give a short talk about Nutch 2.0 or GORA, or maybe a quick update on Behemoth.

Tuesday 14 December 2010

Module management with IVY

I've recently made some massive changes to the way we manage the code in Behemoth. Prior to that, we had a single src directory containing the various resources for using Tika, GATE, UIMA or Nutch within Behemoth. That worked fine but had a few drawbacks, mostly that we ended up with an enormous job file containing all the dependencies of all the modules. In practice most people use Behemoth with only one type of resource (e.g. UIMA or GATE, but rarely both).

There was also a concept of Sandbox in Behemoth, which I mentioned a couple of times. The idea was to allow external contributions based on Behemoth's core while keeping them separate.

Before the change, Grant Ingersoll (who has been using Behemoth to parse a large collection of documents with Tika) had contributed a way of generating a jar file for the Behemoth core classes only. In his case, he wanted to be able to play with the Behemoth output without having to deal with a huge job file. The modularisation of the code does just that, but extends the principle to all the modules.

Here is how it works now. I split the code into several modules managed by Apache Ivy (simply by following the tutorials), e.g. core, uima, gate, tika, solr, etc. Most non-core modules have at least a dependency on core, as well as on the external jars they require. All modules have the same ant targets, and the main ant build script at the root of the project can resolve the dependencies, compile and test each module. We now get a separate jar file for each module (which is what Grant needed for the core), and these jars are also published locally via Ivy so that the other modules can rely on them.

Building a job file is done on a per-module basis, by going into a module's root directory and calling 'ant job'. The resulting job file should then contain all the dependencies for this module and can be used in Hadoop, as usual.

This new organisation of the code is definitely cleaner, leaner and easier to maintain or extend. If, for instance, a user wants to build a process combining the functionality of two or more modules, it is just a matter of creating a new module with the right dependencies on the modules used (say Tika + GATE + SOLR), writing a custom Job and MapReduce class, and generating a job file as described above.
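
As a very rough sketch of what the driver of such a combined module could look like, here is a map-only Hadoop job. The mapper name and its chaining of Tika and GATE are hypothetical placeholders rather than actual Behemoth classes, and Text stands in for BehemothDocument to keep the sketch self-contained.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

// Hypothetical driver for a module depending on the tika and gate modules: it reads
// a corpus from a SequenceFile, runs each document through a chained mapper and
// writes a new corpus out.
public class TikaGatePipelineJob {

    // Placeholder mapper: in a real module this would delegate to the classes
    // provided by the tika and gate modules.
    public static class TikaThenGateMapper extends Mapper<Text, Text, Text, Text> {
        @Override
        protected void map(Text url, Text doc, Context context)
                throws IOException, InterruptedException {
            // 1. extract text and metadata with Tika
            // 2. run the GATE application over the extracted text
            context.write(url, doc); // pass-through placeholder
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "behemoth-tika-gate");
        job.setJarByClass(TikaGatePipelineJob.class);
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
        SequenceFileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(TikaThenGateMapper.class);
        job.setNumReduceTasks(0); // map-only processing
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}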

The concept of sandboxes is now deprecated: they are simply modules, just like everything else. The beauty of this is that, if the Behemoth modules are published and publicly accessible, one could simply point to them in the Ivy configuration of a local module and build a Behemoth application with a minimal amount of code.

Isn't that just fun!

Wednesday 10 November 2010

Gora in incubation at Apache

Great news! GORA was accepted into the Apache Incubator in September. It now has a brand new site, JIRA, wiki, subversion repository, etc. As I explained in my very first post, GORA has been developed as part of Nutch 2.0 to provide an abstract storage layer. Think of it as an ORM that can be plugged into a number of storage backends (Cassandra, HBase, MySQL, etc.). What we also get from it is the ability to use these backends directly in Hadoop's MapReduce without having to write any custom code. Another way of looking at it is that it provides a simple and unified API over these various backends. This makes it possible, for instance, to develop a prototype using, say, MySQL as a backend and then switch to Cassandra when more scalability is needed. Since your application would be based on GORA, you would not need to modify any of your code, just the mapping schema (which is based on Apache Avro).

I was thinking about using HBase in Behemoth to avoid having multiple SequenceFiles, but GORA would be a better solution as it would give us more options as to which backend to use. On top of that, we would be able to operate at an atomic level and not only in batches, i.e. process a single document from the store and put it back into the DB. Since Behemoth currently relies on the Hadoop data structures, we can only process a whole corpus and generate a new version as output, which is exactly why we wanted to have GORA in Nutch (imagine you have a crawlDB of 1 billion+ entries and add, say, 10M pages per fetch round: every update step in Nutch 1.x requires reading 1,010M entries and writing out between 1,000M and 1,010M; a bit wasteful, isn't it?).
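
Here is a minimal sketch of what that atomic, per-document processing could look like on top of a GORA DataStore. The persistent class D is a placeholder (no Avro-generated Behemoth class exists yet), the package names follow later GORA releases, and exact method signatures vary between versions.

import org.apache.gora.persistency.Persistent;
import org.apache.gora.store.DataStore;

// Fetch a single document by key, process it, and write it straight back,
// instead of rewriting a whole corpus of SequenceFiles.
public class AtomicProcessorSketch {

    public static <K, D extends Persistent> void processOne(DataStore<K, D> store, K key)
            throws Exception {
        D doc = store.get(key);   // read just this document from the backend
        // ... run a Behemoth module (Tika, GATE, UIMA, ...) over the document ...
        store.put(key, doc);      // put it back into the store
        store.flush();
    }
}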

Assuming that we use GORA (and the Avro schema for the Behemoth documents), we could then implement a custom DataStore in GATE to debug a Behemoth corpus or test a GATE application.

Now that GORA is in Apache-land, it will hopefully get more contributors involved and more back ends supported.

Saturday 28 August 2010

Behemoth talk from BerlinBuzzwords 2010

The talk I gave on Behemoth at BerlinBuzzwords was filmed (I do not dare watch it) and is available at http://blip.tv/file/3809855.

The slides can be found at http://berlinbuzzwords.wikidot.com/local--files/links-to-slides/nioche_bbuzz2010.odp

The talk contains a quick demo of GATE and mentions Tika, UIMA and of course Hadoop.