I've recently made some massive changes to the way we manage the code in
Behemoth. Prior to that, we had a single src directory containing the various resources for using Tika, GATE, UIMA or Nutch within Behemoth. That worked fine but had a few drawbacks, the main one being that we ended up with an enormous job file containing the dependencies for all the modules, even though in practice most people use Behemoth with only one type of resource (e.g. UIMA or GATE), not several.
There was also the concept of a Sandbox in Behemoth, which I have mentioned a couple of times. The idea was to allow external contributions based on Behemoth's core while keeping them separate.
Before the change,
Grant Ingersoll (who has been using Behemoth to parse a large number of documents with Tika) had contributed a way of generating a jar file containing the Behemoth core classes only. In his case, he wanted to be able to play with the Behemoth output without having to deal with a huge job file. The modularisation of the code allows just that, and extends the principle to all the modules.
Here is how it now works. I split the code into several modules managed by
Apache Ivy (simply by following the
tutorials), e.g. core, uima, gate, tika, solr, etc. Most non-core modules have at least a dependency on core, as well as on the external jars they require. All modules have the same Ant targets, and the main Ant build script at the root of the project resolves the dependencies, compiles and tests each module. We now get a separate jar file for each module (which is what Grant needed for the core), and these jars are also published locally via Ivy so that the other modules can rely on them.
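To give an idea of what this looks like, here is a minimal sketch of an ivy.xml for one of the non-core modules. The organisation, module names, revisions and the Tika version shown are illustrative, not necessarily the exact values used in Behemoth's build files.

```xml
<ivy-module version="2.0">
  <!-- illustrative coordinates, not necessarily the ones used in Behemoth -->
  <info organisation="com.digitalpebble" module="behemoth-tika"/>
  <dependencies>
    <!-- dependency on the core module, published locally via Ivy -->
    <dependency org="com.digitalpebble" name="behemoth-core" rev="latest.integration"/>
    <!-- external jars required by this particular module -->
    <dependency org="org.apache.tika" name="tika-core" rev="0.9"/>
  </dependencies>
</ivy-module>
```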
Building a job file is done on a per-module basis, by going into a module's root directory and calling 'ant job'. The resulting job file should then contain all the dependencies for this module and can be used in Hadoop, as usual.
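For the curious, a 'job' target of this kind essentially bundles the module's classes together with the jars that Ivy retrieved into a single archive, along the lines of the sketch below (the property and directory names are made up for the example).

```xml
<target name="job" depends="compile">
  <!-- bundle the module's classes plus its resolved dependencies into a Hadoop job file -->
  <jar destfile="${build.dir}/${module.name}-${version}.job">
    <fileset dir="${build.classes.dir}"/>
    <!-- Hadoop picks up dependency jars placed under lib/ inside the job file -->
    <zipfileset prefix="lib" dir="${lib.dir}" includes="*.jar"/>
  </jar>
</target>
```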
This new organisation of the code is definitely cleaner, leaner and easier to maintain or extend. If, for instance, a user wants to build a process which combines the functionality of two or more modules, it is just a matter of creating a new module with the right dependencies on the modules used (say Tika + GATE + SOLR), writing a custom Job and MapReduce class, and generating a job file as described above.
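Concretely, the ivy.xml of such a combined module would do little more than declare the Behemoth modules it builds on, e.g. something along these lines (again, the coordinates and revisions are purely illustrative):

```xml
<ivy-module version="2.0">
  <info organisation="com.digitalpebble" module="behemoth-mypipeline"/>
  <dependencies>
    <!-- pull in the existing Behemoth modules this pipeline combines -->
    <dependency org="com.digitalpebble" name="behemoth-tika" rev="latest.integration"/>
    <dependency org="com.digitalpebble" name="behemoth-gate" rev="latest.integration"/>
    <dependency org="com.digitalpebble" name="behemoth-solr" rev="latest.integration"/>
  </dependencies>
</ivy-module>
```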
The concept of sandboxes is now deprecated, as they are now modules just like everything else. The beauty of this is that, if the Behemoth modules are published and publicly accessible, one can simply point to them in the Ivy configuration of a local module and build a Behemoth application with a minimal amount of code.
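Pointing to publicly published modules would only be a matter of adding a resolver in the ivysettings.xml of the local module; here is a sketch, with a hypothetical repository URL:

```xml
<ivysettings>
  <settings defaultResolver="behemoth-public"/>
  <resolvers>
    <!-- hypothetical Maven-compatible repository hosting the published Behemoth modules -->
    <ibiblio name="behemoth-public" m2compatible="true"
             root="http://repo.example.org/maven2/"/>
  </resolvers>
</ivysettings>
```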
Isn't that just fun!