hadoop-common-dev mailing list archives

From: 27g <tiangang...@gmail.com>
Subject: mapreduce combiner
Date: Mon, 26 Dec 2011 09:56:42 GMT
I have built a distributed index using the source code of
hadoop/contrib/index, but I found that when the input files become large
(for example, a single 16 GB file), an OOM exception is thrown. The cause is
that the combiner's call to "writer.addIndexNoOptimize()" uses a lot of
memory, which leads to the OOM. It is a Lucene OOM rather than a MapReduce
OOM, but I would like to add a mechanism like MapReduce's "spill" to solve
this problem. How can I do that? Sorry, my English is poor.
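
A minimal sketch of the kind of "spill" described above, assuming a Lucene
3.x API (IndexWriterConfig, addIndexes(Directory...)); the class name
SpillingIndexCombiner, the 64 MB threshold, and the spillRoot scratch
directory are hypothetical and are not part of hadoop/contrib/index. The
idea is to bound the in-memory segment, flush it to a local on-disk
directory whenever it grows past the threshold, and merge the spilled
segments into the shard writer once at the end, instead of holding the
whole intermediate index in RAM. (Simply lowering the writer's RAM buffer
via IndexWriterConfig.setRAMBufferSizeMB() may also help, but does not
bound the size of the index being merged.)

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

// Sketch of a "spill"-style combiner helper: documents are buffered in a
// small in-memory index, spilled to disk when the buffer grows too large,
// and all spills are merged into the final shard writer in one pass.
public class SpillingIndexCombiner {
    private static final double MAX_RAM_MB = 64.0;   // spill threshold (assumption)

    private final List<Directory> spills = new ArrayList<Directory>();
    private final File spillRoot;                      // local scratch dir (assumption)
    private RAMDirectory ramDir;
    private IndexWriter ramWriter;
    private int spillCount = 0;

    public SpillingIndexCombiner(File spillRoot) throws IOException {
        this.spillRoot = spillRoot;
        openRamWriter();
    }

    private void openRamWriter() throws IOException {
        ramDir = new RAMDirectory();
        ramWriter = new IndexWriter(ramDir,
            new IndexWriterConfig(Version.LUCENE_35,
                new StandardAnalyzer(Version.LUCENE_35)));
    }

    public void add(Document doc) throws IOException {
        ramWriter.addDocument(doc);
        // Spill when the in-memory segment gets too large.
        if (ramDir.sizeInBytes() > (long) (MAX_RAM_MB * 1024 * 1024)) {
            spill();
        }
    }

    private void spill() throws IOException {
        ramWriter.close();                             // flush the RAM segment
        Directory onDisk = FSDirectory.open(new File(spillRoot, "spill-" + spillCount++));
        IndexWriter spillWriter = new IndexWriter(onDisk,
            new IndexWriterConfig(Version.LUCENE_35,
                new StandardAnalyzer(Version.LUCENE_35)));
        spillWriter.addIndexes(ramDir);                // copy the RAM index to disk
        spillWriter.close();
        spills.add(onDisk);
        openRamWriter();                               // start a fresh in-memory segment
    }

    // Merge all spilled segments into the final shard writer, so no single
    // step has to hold the whole input in memory.
    public void mergeInto(IndexWriter shardWriter) throws IOException {
        spill();                                       // flush whatever is left in RAM
        ramWriter.close();                             // fresh writer from spill() is unused
        shardWriter.addIndexes(spills.toArray(new Directory[spills.size()]));
    }
}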


Thanks

--
View this message in context: http://lucene.472066.n3.nabble.com/mapreduce-combiner-tp3612513p3612513.html
Sent from the Hadoop lucene-dev mailing list archive at Nabble.com.
