lucene-solr-user mailing list archives

From "Jack Krupansky" <j...@basetechnology.com>
Subject Re: Solr: extracting/indexing HTML via cURL
Date Mon, 30 Apr 2012 15:55:10 GMT
If by "extracting HTML content via cURL" you mean using SolrCell to parse 
HTML files, this behavior makes sense. Regardless of the file type, each 
file extraction "parser" strips off all formatting and produces a raw text 
stream; Office, PDF, and HTML files are all treated the same in that way. 
The unformatted text stream is then sent through the field type analyzers 
to be tokenized into terms that Lucene can index. The input string to the 
field type analyzer is what gets stored for the field, but by that point 
the extraction parser has already removed the formatting.
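
For reference, a typical SolrCell extraction request looks something like 
the sketch below (assuming a default single-core Solr at localhost:8983; 
the document id and file name are just placeholders):

  curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" \
       -F "myfile=@page.html"

Tika parses page.html, discards the markup, and only the remaining plain 
text reaches the analyzer, which is also what gets stored.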

There is no way for the formatting to be preserved in that case, other 
than to go back to the original input document before extraction parsing.

If you really do want to preserve the full HTML formatted text, you would 
need to define a field whose field type uses HTMLStripCharFilter and then 
add documents directly, sending the raw HTML to that field. The char 
filter removes the tags only for indexing; the stored value is the raw 
input, so the markup survives intact.
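
A minimal sketch of such a field type in schema.xml (the names and the 
rest of the analyzer chain are just illustrative choices):

  <fieldType name="text_html" class="solr.TextField">
    <analyzer>
      <charFilter class="solr.HTMLStripCharFilterFactory"/>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <field name="html_content" type="text_html" indexed="true" stored="true"/>

You would then bypass /update/extract and post the document yourself, for 
example with the XML update format (again, the id and content are 
placeholders):

  curl "http://localhost:8983/solr/update?commit=true" \
       -H "Content-Type: text/xml" --data-binary \
       '<add><doc>
          <field name="id">doc1</field>
          <field name="html_content"><![CDATA[<p>Hello <b>world</b></p>]]></field>
        </doc></add>'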

There may be some other way to hook into the update request processor 
chain, but that is probably more effort than the HTML strip filter 
approach.

-- Jack Krupansky

-----Original Message----- 
From: okayndc
Sent: Monday, April 30, 2012 10:07 AM
To: solr-user@lucene.apache.org
Subject: Solr: extracting/indexing HTML via cURL

Hello,

Over the weekend I experimented with extracting HTML content via cURL, 
and I am wondering why the extraction/indexing process does not include 
the HTML tags. It seems as though the HTML tags are either being ignored 
or stripped somewhere in the pipeline. If this is the case, is it 
possible to include the HTML tags? I would like to keep the formatted 
HTML intact.

Any help is greatly appreciated. 

