lucene-solr-dev mailing list archives

From Jan Høydahl (JIRA) <>
Subject [jira] Commented: (SOLR-1763) Integrate Solr Cell/Tika as an UpdateRequestProcessor
Date Mon, 08 Feb 2010 20:43:27 GMT


Jan Høydahl commented on SOLR-1763:

Re-posting my comment from solr-dev in this ticket:
Good match. UpdateProcessors are the way to go for functionality that modifies documents prior
to indexing.
With this, we can mix and match any type of content source with other processing needs.

I think it can be beneficial to have the choice to do extraction on the SolrJ side. But you
don't always have that choice: if your source is a crawler without built-in Tika, some base64
encoded field in an XML, or some other random source, you want to do the extraction at an arbitrary
place in the chain.

 Crawler (httpheaders, binarybody) -> TikaUpdateProcessor (+title, +text, +meta...) ->
 XML (title, pdfurl) -> GetUrlProcessor (+pdfbin) -> TikaUpdateProcessor (+text, +meta)
-> index
 DIH (city, street, lat, lon) -> LatLon2GeoHashProcessor (+geohash) -> index
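The chain idea in the examples above can be sketched in plain Java. This is a hedged, illustrative model only, not actual Solr API: the names `DocProcessor`, `Pipeline`, and the field names are assumptions, and the Tika-like step is a stub where real code would invoke a Tika parser.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for an update processor: each step enriches a
// field map and passes it along, mirroring e.g.
//   Crawler (binarybody) -> TikaUpdateProcessor (+text) -> index
interface DocProcessor {
    void process(Map<String, Object> doc);
}

// Runs the configured processors in order over a document.
class Pipeline {
    private final List<DocProcessor> chain;
    Pipeline(List<DocProcessor> chain) { this.chain = chain; }
    Map<String, Object> run(Map<String, Object> doc) {
        for (DocProcessor p : chain) p.process(doc);
        return doc;
    }
}

public class ChainDemo {
    static Map<String, Object> demo() {
        // Stub for a Tika-style processor: derives +text from binarybody.
        // Real code would run a Tika parser here instead of decoding bytes.
        DocProcessor tikaLike = doc -> {
            byte[] body = (byte[]) doc.get("binarybody");
            doc.put("text", new String(body, StandardCharsets.UTF_8));
        };
        Map<String, Object> doc = new HashMap<>();
        doc.put("binarybody", "hello".getBytes(StandardCharsets.UTF_8));
        return new Pipeline(List.of(tikaLike)).run(doc);
    }

    public static void main(String[] args) {
        System.out.println(demo().get("text")); // prints "hello"
    }
}
```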

I propose to model the document processor chain more after FAST ESP's flexible processing
chain, which I consider an industry best practice. I'm thinking of starting a Wiki page
to outline what direction we should take.

Jan Høydahl  - search architect
Cominvent AS -

> Integrate Solr Cell/Tika as an UpdateRequestProcessor
> -----------------------------------------------------
>                 Key: SOLR-1763
>                 URL:
>             Project: Solr
>          Issue Type: New Feature
>          Components: update
>            Reporter: Jan Høydahl
> From Chris Hostetter's original post in solr-dev:
> As someone with very little knowledge of Solr Cell and/or Tika, I find myself wondering
if ExtractingRequestHandler would make more sense as an extractingUpdateProcessor -- where
it could be configured to take either binary fields (or string fields containing URLs)
out of the Documents, parse them with tika, and add the various XPath matching hunks of text
back into the document as new fields.
> Then ExtractingRequestHandler just becomes a handler that slurps up its ContentStreams
and adds them as binary data fields and adds the other literal params as fields.
> Wouldn't that make things like SOLR-1358, and using Tika with URLs/filepaths in XML and
CSV based updates fairly trivial?
> -Hoss
> I couldn't agree more, so I decided to add it as an issue.
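The quoted idea of an "extracting" processor can be sketched as follows. This is a minimal illustration under assumptions: `ExtractingProcessor`, the `pdfbin`/`text` field names, and the `Function`-based extractor are all hypothetical, and the stub stands in for where the real proposal would call Tika.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hedged sketch of the proposal: pull a binary field out of the document,
// run an extractor over it (Tika in the real proposal), and add the
// resulting text back into the document as a new field.
class ExtractingProcessor {
    private final Function<byte[], String> extractor; // stand-in for Tika

    ExtractingProcessor(Function<byte[], String> extractor) {
        this.extractor = extractor;
    }

    void process(Map<String, Object> doc, String binaryField) {
        byte[] raw = (byte[]) doc.remove(binaryField); // take the binary field out
        if (raw != null) {
            doc.put("text", extractor.apply(raw));     // add extracted text back
        }
    }
}

public class ExtractDemo {
    static Map<String, Object> demo() {
        Map<String, Object> doc = new HashMap<>();
        doc.put("pdfbin", "raw bytes".getBytes(StandardCharsets.UTF_8));
        // Trivial extractor stub; real code would run a Tika parser here.
        ExtractingProcessor p =
            new ExtractingProcessor(b -> new String(b, StandardCharsets.UTF_8));
        p.process(doc, "pdfbin");
        return doc;
    }

    public static void main(String[] args) {
        // After processing, the binary field is gone and "text" is added.
        System.out.println(ExtractDemo.demo());
    }
}
```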

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
