lucene-solr-user mailing list archives

From Mugeesh Husain <muge...@gmail.com>
Subject Re: Can Apache Solr Handle TeraByte Large Data
Date Tue, 04 Aug 2015 09:51:57 GMT
Thanks @Alexandre, Erickson and Hatcher.

I will generate an MD5 ID from the filename using Java. I can do this nicely with SolrJ, since I am a Java developer; a rough sketch of what I have in mind is below.
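A minimal sketch of that idea, assuming a local Solr 5.x instance and a collection called "filesearch" (both of these are placeholders for my setup):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class FileIndexer {

    // MD5 of the file name, rendered as a 32-character hex string.
    static String md5Id(String fileName) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(fileName.getBytes(StandardCharsets.UTF_8));
        return String.format("%032x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws Exception {
        // URL and collection name are placeholders.
        HttpSolrClient client =
                new HttpSolrClient("http://localhost:8983/solr/filesearch");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", md5Id("some-file.txt"));   // same file name -> same ID
        doc.addField("filename_s", "some-file.txt");

        client.add(doc);
        client.commit();
        client.close();
    }
}

Since the ID depends only on the file name, sending the same file again should always produce the same ID.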
Apart from this, the question that arises is that the data is very large, so I think it will have to be split into multiple shards (cores).
With multi-core indexing, how can I check for duplicate IDs while reindexing the whole data set (using SolrJ), and how can I find out how much data one core contains compared to another?
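My current understanding (please correct me if I am wrong) is that with "id" as the uniqueKey, adding a document whose id already exists simply overwrites the old one. For the per-core counts, the rough sketch I have in mind (the core URLs are placeholders for my setup) is to query each core directly with distrib=false and rows=0 and read numFound:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CoreCounts {
    public static void main(String[] args) throws Exception {
        // Core URLs are placeholders.
        String[] coreUrls = {
            "http://localhost:8983/solr/files_shard1_replica1",
            "http://localhost:8983/solr/files_shard2_replica1"
        };
        for (String url : coreUrls) {
            try (HttpSolrClient client = new HttpSolrClient(url)) {
                SolrQuery q = new SolrQuery("*:*");
                q.setRows(0);                // only the count is needed
                q.set("distrib", "false");   // count this core only, not the whole collection
                QueryResponse rsp = client.query(q);
                System.out.println(url + " -> "
                        + rsp.getResults().getNumFound() + " docs");
            }
        }
    }
}

Is that a reasonable way to see how the data is spread across the cores?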

I have decided to do it with SolrJ, because I don't have a good enough understanding of DIH for the kind of operation my requirement needs. I googled but was unable to find a DIH example of this kind that I could apply to my problem.





--
View this message in context: http://lucene.472066.n3.nabble.com/Can-Apache-Solr-Handle-TeraByte-Large-Data-tp3656484p4220673.html
Sent from the Solr - User mailing list archive at Nabble.com.
