hadoop-common-user mailing list archives

From Leon Mergen <l.p.mer...@solatis.com>
Subject RE: OutOfMemoryError with map jobs
Date Sat, 06 Sep 2008 16:36:12 GMT
Hello,

> I'm currently developing a map/reduce program that emits a fair number of map outputs
> per input record (around 50-100), and I'm getting OutOfMemory errors:

Sorry for the noise; I found out I had to set the mapred.child.java.opts JobConf parameter
to "-Xmx512m" to make 512 MB of heap space available to the map processes.

However, I was wondering: are these hard architectural limits? Say that I wanted to emit
25,000 map outputs for a single input record; would that mean that I would require huge
amounts of (virtual) memory? In other words, what exactly is the reason that increasing
the number of emitted map outputs per input record causes an OutOfMemoryError?

Regards,

Leon Mergen
