Tuesday 11 January 2022

What's new in StormCrawler 2.2

StormCrawler 2.2 has just been released. This marks the beginning of releases for the 2.x branch only: 1.18 was the last release of the 1.x branch, which is now discontinued. In case you were wondering why there was no "What's new in StormCrawler 2.1", it is simply that it contained the same modifications as 1.18 and did not get its own announcement.

This version contains many bugfixes; as usual, users are advised to upgrade to it.

Happy crawling and thanks to our sponsors, contributors and users!

PS: I am tempted to run a workshop on web crawling with StormCrawler at the BigData conference in Vilnius in November. Anyone interested? If so, please get in touch and let me know what you'd like to learn about. https://bigdataconference.eu/

Wednesday 5 May 2021

What's new in StormCrawler 1.18

StormCrawler 1.18 has just been released. Since the previous version dates back nearly 10 months, the number of changes is rather large (see below).

This version contains many bugfixes; as usual, users are advised to upgrade to it. One of the noticeable new features is the new module for URLFrontier (if you haven't checked it out yet, do so right now!); I will publish a tutorial on how to use it soon.
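
In the meantime, a Flux fragment wiring in the new module might look like the sketch below. The class names and the urlfrontier.* keys are my assumptions about the module's defaults, so please check the module's README for the exact names.

    config:
      # assumed configuration keys; URLFrontier's default gRPC endpoint
      urlfrontier.host: "localhost"
      urlfrontier.port: 7071

    spouts:
      - id: "spout"
        # assumed class name from the URLFrontier module
        className: "com.digitalpebble.stormcrawler.urlfrontier.Spout"
        parallelism: 1

    bolts:
      - id: "status"
        # assumed class name from the URLFrontier module
        className: "com.digitalpebble.stormcrawler.urlfrontier.StatusUpdaterBolt"
        parallelism: 1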

1.18 is also likely to be the last release based on Apache Storm 1.x; our 2.x branch will become master as soon as I have released 2.1.

Happy crawling and thanks to our sponsors, contributors and users!

Monday 20 July 2020

Please welcome StormCrawler 2.0

Nearly 6 years after its initial release and after another 32 releases, StormCrawler has just reached version 2.0! 

This is similar to what we did 4 years ago when 1.0 was released, in that the change of major version reflects the version of Apache Storm that StormCrawler is based on. This is not a major refactoring of StormCrawler in any way, although some minor changes can be found, mainly in the way the topologies are submitted. These changes are documented in the READMEs generated by our archetypes.

In terms of functionality and behaviour, StormCrawler 2.0 is similar to version 1.17, released a few minutes ago.

I expect to maintain both branches in parallel for a while, at least until StormCrawler 2.0 has been sufficiently tested and is used by the majority of our users.

The change to Apache Storm 2 is not just a way of future-proofing StormCrawler (version 2 being the current branch of Apache Storm). By adopting Storm 2, we also get a platform that is 100% Java, which makes debugging and potential contributions to Apache Storm itself easier, and we benefit from Storm's recent improvements such as better performance and an improved backpressure model.

I am looking forward to getting feedback (and bugfixes) from the StormCrawler community. Please give StormCrawler 2.0 a try if you can.

Happy crawling! 

What's new in StormCrawler 1.17

I have just released StormCrawler 1.17. As you can see in the list below, it contains important bugfixes and improvements. For this reason, we recommend that all users upgrade to this version; however, please check the breaking changes below before applying it to an existing crawl.

Dependency upgrades

  • Various dependency upgrades #808
  • CrawlerCommons 1.1 #807
  • Tika 1.24.1 #797
  • Jackson-databind #803 #793 #798

Core

  • Use regular expressions for custom number of threads per queue in the fetcher #788
  • /!breaking!/ Prefix protocol metadata #789
  • Basic authentication for OKHTTP #792
  • Utility to debug / test parsefilters #794
  • /!breaking!/ Remove deprecated methods and fields #791
  • AdaptiveScheduler to set last-modified time in metadata #777 #812
  • /bugfix/ fetch.exception key should be removed from metadata if subsequent fetches are successful #813
  • /bugfix/ SimpleFetcherBolt maxThrottleSleepMSec not deactivated #814
  • /!breaking!/ Index pages with content="noindex,follow" meta tag #750
  • Enable extension parsing for SitemapParser #749 #815

WARC

Elasticsearch

  • /bugfix/ AggregationSpout error due to SimpleDateFormat not being thread-safe #809
  • /bugfix/ IndexerBolt issue causing ack failures #801
  • Allow ES to connect over a proxy #787

Of the breaking changes above, #789 is particularly important: metadata keys set by the protocol layer are now prefixed. If you want to use SC 1.17 on an existing crawl, make sure you add

protocol.md.prefix: ""

to the configuration. Similarly, the configuration key http.skip.robots has been renamed to http.robots.file.skip.
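
Put together, the migration-related part of the configuration for an existing crawl would look like this (a minimal sketch showing only these two keys; http.robots.file.skip is shown with its assumed default of false):

    # keep the pre-1.17 unprefixed protocol metadata keys
    protocol.md.prefix: ""
    # renamed from http.skip.robots
    http.robots.file.skip: false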

Thanks to all contributors and users! Happy crawling! 

PS: something equally exciting is coming next ;-)

Thursday 16 January 2020

What's new in StormCrawler 1.16?

Happy new year!

StormCrawler 1.16 was released a couple of days ago. You can find the full list of changes at https://github.com/DigitalPebble/storm-crawler/milestone/26?closed=1

As usual, we recommend that all users upgrade to this version as it contains important fixes and performance improvements.

Dependency upgrades

  • Tika 1.23 (#771)
  • ES 7.5.0 (#770)
  • jackson-databind from 2.9.9.2 to 2.9.10.1 (#767)

Core

  • OKHttp configure authentication for proxies (#751)
  • Make URLBuffer configurable + AbstractURLBuffer uses URLPartitioner (#754)
  • /bugfix/ okhttp protocol: reliably mark trimmed content because of content limit (#757)
  • /!breaking!/ urlbuffer code in a separate package + 2 new implementations (#764)
  • Crawl-delay handling: allow `fetcher.max.crawl.delay` to exceed 300 sec. (#768)
  • okhttp protocol: HTTP request header lacks protocol name and version (#775)
  • Locking mechanism for Metadata objects (#781)

LangID

  • /bugfix/ langID parse filter gets stuck (#758)

Elasticsearch

  • /bugfix/ Fix NullPointerException in JSONResourceWrappers (#760)
  • ES specify field used for grouping the URLs explicitly in mapping (#761)
  • Use search after for pagination in HybridSpout (#762)
  • Filter queries in ES can be defined as lists (#765)
  • es.status.bucket.sort.field can take a list of values (#766)
  • Archetype for SC+Elasticsearch (#773)
  • ES merge seed injection into crawl topology (#778)
  • Kibana - change format of templates to ndjson (#780)
  • /bugfix/ HybridSpout get key for results when prefixed by "metadata." (#782)
  • AggregationSpout to store sortValues for the last result of each bucket (#783)
  • Import Kibana dashboards using the API (#785)
  • Include Kibana script and resources in ES archetype (#786)

One of the main improvements in 1.16 is the addition of a Maven archetype to generate a crawl topology using Elasticsearch as a backend (#773). This is done by calling

mvn archetype:generate -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-elasticsearch-archetype -DarchetypeVersion=LATEST

The generated project also contains a script and resources to load templates into Kibana.

The topology for Elasticsearch now includes the injection of seeds from a file, which was previously in a separate topology. These changes should help beginners get started with StormCrawler.

The previous release included URLBuffers, with just one simple implementation. Two new implementations have been added in #764: the brand-new PriorityURLBuffer sorts the buckets by the number of acks they got since the last sort, whereas the SchedulingURLBuffer tries to guess when a queue should release a URL based on how long its previous URLs took to be acked on average. The former has been used extensively with the HybridSpout; the latter is still experimental.
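
To try one of them, select the implementation via the configuration. A minimal sketch, assuming the urlbuffer.class key from #754 and the package resulting from the move in #764:

    urlbuffer.class: "com.digitalpebble.stormcrawler.persistence.urlbuffer.PriorityURLBuffer"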

Finally, we added a soft locking mechanism to Metadata (#781) to help trace the source of ConcurrentModificationExceptions. If you are experiencing such exceptions, calling metadata.lock() when emitting, e.g.

collector.emit(StatusStreamName, tuple, new Values(url, metadata.lock(), Status.FETCHED))

will trigger an exception whenever the metadata object is modified somewhere else. You might need to call unlock() in the subsequent bolts.

This does not change the way Metadata works; it is just there to help you debug.
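
As an illustration, a downstream bolt which legitimately needs to modify a locked Metadata object could look like the sketch below; the field name "metadata" and the key "processed.by" are just examples, not part of any StormCrawler API.

    import java.util.Map;

    import com.digitalpebble.stormcrawler.Metadata;
    import org.apache.storm.task.OutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseRichBolt;
    import org.apache.storm.tuple.Tuple;

    public class UnlockingBolt extends BaseRichBolt {
        private OutputCollector collector;

        @Override
        public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple tuple) {
            Metadata metadata = (Metadata) tuple.getValueByField("metadata");
            // release the soft lock first, otherwise any modification
            // triggers the exception described above
            metadata.unlock();
            metadata.setValue("processed.by", "my-bolt"); // hypothetical key
            collector.ack(tuple);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // no output fields in this sketch
        }
    }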

Hopefully, we should be able to release 2.0 in the next few months. In the meantime, happy crawling and a massive thank you to all contributors!

Thursday 19 September 2019

What's new in StormCrawler 1.15?

StormCrawler 1.15 was released yesterday and, as usual, contains loads of improvements and bugfixes.

We recommend that all users upgrade to this version as it contains very important fixes and performance improvements.

Dependency upgrades

Core

  • /bugfix/ CharsetIdentification crashes on binary content (#747)
  • FetcherBolt skips tuples which have spent too much time in queues (#746)
  • Fetcher bolts generate metrics for HTTP status (#745)
  • Improvements to URLFilterBolt (#740)
  • /bugfix/ FetcherBolt doesn't recover when entering maxNumberURLsInQueues (#738)
  • /bugfix/ RemoteDriverProtocol does not set user agent correctly (#735)
  • Force English Locale for SimpleDateFormat in cookie converter (#732)

LangID

  • LangId normalises and returns value found via extraction (#733)

Elasticsearch

  • Pluggable URLBuffer and Hybrid Elasticsearch spout (#752)
  • ES spouts control how long the search is allowed to take with timeout (#753)
  • Improve types used for numeric values for metrics mappings (#744)
  • Use sniffer for ES connections (#734)
  • ScrollSpout to quit logging when finished (#727)
  • ES spouts use nextFetchDate RangeQuery as a filter (#725)
  • MetricsConsumer takes an optional date format (#724)
  • StatusMetricsBolt returns a max of 10K results per status (#723)

Happy crawling and thanks to all contributors!

Friday 24 May 2019

Reindexing StormCrawler's URL Status Index

StormCrawler holds the frontier and the page fetch status in the "status" index, a sharded Elasticsearch index (in the most common setup) in which every document contains the page URL, the fetch status, the time when the URL should be fetched next, and further information as metadata. The index uses the SHA-256 hash of the URL as document ID. How URLs are distributed among the shards is directly related to crawler politeness: a crawler needs to limit the load on any particular server. If all URLs of the same host or domain are kept in the same shard, the spouts (we run one spout per shard) can already ensure that only a limited number of URLs per host or domain is emitted into the topology during a certain time interval. A controlled flow of URLs supports the fetcher bolt, which ultimately enforces politeness and guarantees a minimum delay between successive requests to the same server.
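
As an aside, deriving such a document ID from a URL is straightforward; here is a sketch of a SHA-256 hex digest in plain Java (not necessarily StormCrawler's exact implementation):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class DocId {
        public static void main(String[] args) throws Exception {
            String url = "https://www.example.com/news/article.html";
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(url.getBytes(StandardCharsets.UTF_8));
            // hex-encode the digest, byte by byte
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(hex); // usable as Elasticsearch document ID
        }
    }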

Motivation

The status index grows over time. Sooner or later, unless you decide to delete it and start the crawler from scratch, you will hit one of the many possible reasons to reindex it:
  1. change the number of shards (to allow the index to grow)
  2. fix domain names acting as sharding keys (see the discussion in #684)
  3. strip metadata to save storage space
  4. apply the current URL filters and normalization rules to all URLs in the status index
  5. remove the document type – previously the documents in the "status" index were of type "status". In Elasticsearch 7.0 document types are deprecated, and consequently SC 1.14 removed them from all indexes (status, metrics, content)
  6. merge/append indexes
  7. pull or push to another ES cluster
  8. ... (there are some more good reasons, for sure)
In my case it was a combination of points 1, 2, 3 and 5, applied to a status index holding 250 million URLs from Common Crawl's news crawler. At the same time, the server was upgraded and I had to move the index to a new machine (point 7). Enough reasons to try the reindexing topology introduced in StormCrawler 1.14 (#688). Of course, some of these points (but not all) could also be achieved with standard Elasticsearch tools.

Configure the Reindex Topology

There are multiple scenarios possible: both Elasticsearch indexes are on the same machine or cluster (using index aliases), or you pull or push from/to a remote index. My decision was to run the reindex topology on the machine hosting the new index and pull the data from the old index. To easily get around Elasticsearch authentication, I simply forwarded the remote ES port to the local machine by running

    ssh -R 9201:localhost:9200 <ip_new> -fN &

on the old machine. The old index is now visible on port 9201 on the new machine.

In any case, you first need to upgrade Elasticsearch on both the source and the target machine/cluster to version 7.0, as required by StormCrawler 1.14. In other words, both Elasticsearch indexes must be compatible with the Elasticsearch client used by the StormCrawler version running the reindexing topology.

The topology is easily configured as a Storm Flux file:

    name: "reindexer"
    
    includes:
      - resource: true
        file: "/crawler-default.yaml"
        override: false
    
      - resource: false
        file: "crawler-conf.yaml"
        override: false   # let the properties defined in the flux take precedence
    
      - resource: false
        file: "es-conf.yaml"
        override: false
    
    config:
      # topology settings
      topology.max.spout.pending: 600
      topology.workers: 1
      # old index (remote, forwarded to port 9201)
      es.status.addresses: "http://localhost:9201"
      es.status.index.name: "status"
      es.status.routing.fieldname: "metadata.hostname"
      es.status.concurrentRequests: 1
      # new index (local)
      es.status2.addresses: "localhost"
      es.status2.index.name: "status"
      es.status2.routing: true
      es.status2.routing.fieldname: "metadata.hostname"
      es.status2.bulkActions: 500
      es.status2.flushInterval: "1s"
      es.status2.concurrentRequests: 1
      es.status2.settings:
        cluster.name: "elasticsearch"
    
    spouts:
    #  - id: "filter"
    #    className: "com.digitalpebble.stormcrawler.bolt.URLFilterBolt"
    #    parallelism: 10
      - id: "spout"
        className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.ScrollSpout"
        parallelism: 10   # must be equal to number of shards in the old index
    
    bolts:
      - id: "status"
        className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt"
        parallelism: 4
        constructorArgs:
          - "status2"
    
    streams:
      - from: "spout"
    #   to: "filter"
    #   grouping:
    #     type: FIELDS
    #     args: ["url"]
    #     streamId: "status"
    # - from: "filter"
        to: "status"
        grouping:
          streamId: "status"
          type: CUSTOM
          customClass:
            className: "com.digitalpebble.stormcrawler.util.URLStreamGrouping"
            constructorArgs:
              - "byDomain"

The parts to plug in a URL filter bolt are commented out. Of course, you should review all settings carefully so that they fit your situation. One important point is the number of spouts, which must be equal to the number of shards in the old index; you can verify this as shown below.
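
The number of primary shards of the old index can be checked with the standard Elasticsearch API, for example:

    curl -s 'http://localhost:9201/status/_settings?filter_path=*.settings.index.number_of_shards'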

After some failed trials, I decided on defensive performance settings:
  • a moderate number of tuples pending in the topology: topology.max.spout.pending: 600 – with 10 spouts, at most 6,000 pending tuples are allowed
  • four status bolts without concurrent requests (es.status2.concurrentRequests: 1).

To speed up the reindexing, you might also disable refreshes on the new Elasticsearch index by setting the refresh interval to -1:
    curl -H Content-Type:application/json -XPUT 'http://localhost:9200/status/_settings' \
        --data '{"index" : {"refresh_interval" : -1 }}'

Don't forget to set it back to the default once the reindexing is done:
    curl ... --data '{"index" : {"refresh_interval" : null }}'

Running the Topology

The reindex topology is started as Flux via:
    java -cp crawler-1.14.jar org.apache.storm.flux.Flux --remote reindex-flux.yaml

Depending on the size of your index, it may take a while. I achieved about 6,000 reindexed documents per second, which means the entire 250 million documents were reindexed after roughly 12 hours (250,000,000 ÷ 6,000 ≈ 42,000 seconds).

Verifying the New Status Index

The topology has finished, so let's check the new index and whether everything has been reindexed. Let's first get the stats of the old status index (on port 9201):
%> curl -H Content-Type:application/json -XGET 'http://localhost:9201/_cat/indices/status?v'
health status index  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   status xFN4EGHYRaGw8hi4qxMWrg  10   1  250305363      6038664      101gb          101gb
and compare them with those of the new status index:
%> curl -H Content-Type:application/json -XGET 'http://localhost:9200/_cat/indices/status?v'
health status index  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   status ZFNT8FovT4eTl3140iOUvw  16   1  250278257         5938     87.2gb         87.2gb
Great! The new index now has 16 shards, and more than 10 GB of storage has been saved by stripping unneeded metadata. All 250 million documents have been reindexed. But stop: it's not all documents – a few thousand are missing! Panic! What's going on? I checked the logs for errors – nothing. Also: why are there deleted documents in the new index?

Ok, let's first calculate the loss: 250305363 – 250278257 = 27106 (about 0.01%). Well, probably not worth redoing the procedure: either the links are outdated or the crawler will find them again. Anyway, I was interested in figuring out the reason. But how to find the missing URLs? It's not trivial to compare two lists of 250 million items.

The solution is to first get the counts of items per domain by aggregating on the field "metadata.hostname", which is used for routing documents to shards. The idea is to find the domains where the counts differ and then compare only the per-domain lists. Let's do it:
curl -s -H Content-Type:application/json -XGET http://localhost:9200/status/_search --data '{
  "aggs" : {
    "agg" : {
      "terms" : {
        "field" : "metadata.hostname",
        "size" : 100000
      }
    }
  }
}' | jq --raw-output '.aggregations.agg.buckets[] | [(.doc_count|tostring),.key] | join("\t")' \
   | rev | sort | rev >domain_counts_new_index.txt
A short explanation of what this command does:
  1. we send an aggregation query to the Elasticsearch index which counts documents per domain name (kept in "metadata.hostname")
  2. the JSON output is processed by jq and written into a text file with two columns – count and domain name
  3. the list is sorted using reversed strings – this keeps the counts together per top-level domain, domain and subdomain, which makes the comparison easier (see the toy example below)
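
To see why the rev | sort | rev trick groups related domains together, here is a toy example with domain names only (in the real file the counts precede the names, but the reversed domain still dominates the sort order):

    %> printf 'www.example.com\nexample.com\nnews.example.org\n' | rev | sort | rev
    news.example.org
    example.com
    www.example.com

Since the strings are compared from their ends, example.com and www.example.com end up next to each other.
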
The procedure to get the per-domain counts from the old index is the same. After we have both lists we can compare them side by side:
%> diff -y domain_counts_old_index.txt domain_counts_new_index.txt
...
4       athletics.africa              | 80      athletics.africa
76      www.athletics.africa          <
108     chri.ca                         108     chri.ca
2       www.alta-frequenza.corsica    | 2       alta-frequenza.corsica
2       corsenetinfos.corsica         | 3       corsenetinfos.corsica
1       www.corsenetinfos.corsica     <
...
Already the first difference revealed the reason for the discrepancies between the old and the new index! Remember, there was this issue with domain names changing over time with different versions of the public suffix list? It was one of the reasons why the reindexing topology was introduced (see #684 and news-crawler#28). If the routing key is not stable, the same URL may be indexed twice with different routing keys in two shards. Exactly this happened for some of the domains affected by this issue. Here is one example:
6       hr.de                |  19      hr.de
2       reportage.hr.de      <
1       daserste.hr.de       <
11      www.hr.de            <
In the new index there are only 19 items, although the sum of 6 + 2 + 1 + 11 is 20. I checked the remaining differences and the assumption proved true: only domain names with either recently introduced TLDs (.africa was registered by ICANN in 2017) or misclassified suffixes (co.uk is a valid suffix but hr.de is not) are affected.

Everything is fine now, and the only remaining task is to bring the new index into production and restart the crawl topology on the new machine!