lucene-solr-user mailing list archives

From Dino Chopins <>
Subject Re: Running Lucene/Solr on Hadoop
Date Sun, 10 Jan 2016 07:10:21 GMT
Hi Steve,

I cannot do the deduplication at index time; I need to find the duplicate
documents and then report the duplicate data back to the user.

Yes, I need to query for each document across all 40 million rows, using at
most about 10 mapper tasks. I will try SolrJ for this purpose. Thanks, Steve.
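For querying from the mapper tasks, a minimal SolrJ sketch might look like the following. The collection URL, the `content` field, and the query text are all illustrative assumptions, not taken from the thread; this requires the solr-solrj jar on the classpath and a running Solr instance.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class DuplicateQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical collection URL -- replace with your own Solr endpoint.
        HttpSolrClient solr =
                new HttpSolrClient("http://localhost:8983/solr/mycollection");

        // Query for candidate duplicates of one document's text.
        // SolrJ builds the request and parses the response for you,
        // so there is no hand-rolled XML parsing as with HttpURLConnection.
        SolrQuery query = new SolrQuery();
        query.setQuery("content:\"some document text to match\"");
        query.setRows(10);

        QueryResponse response = solr.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("id"));
        }
        solr.close();
    }
}
```

In a Hadoop job, one `HttpSolrClient` would typically be created per mapper (in `setup()`) and reused across records, rather than opened per query.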



On Sun, Jan 10, 2016 at 11:31 AM, Steve Davids <> wrote:

> You might consider doing the de-duplication at index time; that way
> the MapReduce job wouldn't even be necessary.
>
> As for the MapReduce job, you would need to be more specific about
> *what* you are doing for people to be able to help. Are you attempting to
> query for every record of all 40 million rows? How many mapper tasks? But
> right off the bat I see you are using Java's HttpURLConnection; you should
> really use SolrJ for querying instead: you won't need to deal with XML
> parsing, and it uses Apache's HttpClient with much more reasonable
> defaults.
>
> -Steve
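For reference, Solr's built-in deduplication is configured with a `SignatureUpdateProcessorFactory` in `solrconfig.xml`. A sketch of such a chain is below; the `content` field name and the `signature` field are illustrative. Note that with `overwriteDupes` set to `false`, duplicates are kept in the index and merely share a signature value, so they can be found later (e.g. by grouping or faceting on the signature field) and reported to the user rather than silently removed.

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <!-- Stored hash of the chosen fields; must exist in the schema. -->
    <str name="signatureField">signature</str>
    <!-- false = keep duplicates, just tag them with the same signature. -->
    <bool name="overwriteDupes">false</bool>
    <str name="fields">content</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```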


