hadoop-common-user mailing list archives

From "Vijay Murthi" <murt...@yahoo-inc.com>
Subject RE: Out of memory after Map tasks
Date Thu, 25 May 2006 18:52:28 GMT
I have added my comments below.

> Are you running the current trunk?  My guess is that you are.  If so,
> then this error is "normal", things should keep running.
I am using hadoop-0.2.0, which I believe is the current trunk. I used to
think a child task exiting with "Out of memory" was normal, since the
task can be re-executed on another machine and still finish, whereas the
Tasktracker that manages it should not die. After this message I see
only one Tasktracker running on each node, pegged at 99% CPU all the
time, and no reduce tasks. In the "mapred" local directory I see it
writing to directories named "*_r_*". Since every map task's output is
on local disk, can't it just read those files the map tasks created for
the reducers?

 

> Are you running a 64-bit kernel?  If not, can it really take advantage
> of all 4GB?  In my experience, 32-bit JVM's can't effectively use more
> than around 1.5GB, and a 32-bit kernel can't effectively use all 4GB,
> but I may be wrong on that last count.
I am running a 64-bit kernel with a 32-bit JVM. The Java heap size is
set to a maximum of 1 GB for both the Tasktracker and each child
process. I believe the Tasktracker and each child process run in their
own JVMs of 1 GB each (correct me if I am wrong). Should each child
process have less memory than the Tasktracker, or should the total
memory of the child processes it manages be less than the Tasktracker's
heap, since the Tasktracker creates the children? In my case, I am
setting 500 MB of sort memory for each child reduce process. So could 3
reduce tasks * 500 MB exceed 1 GB and cause the "Out of memory"?
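For what it's worth, the relevant knobs live in hadoop-site.xml; here is
a hedged sketch (the child-heap property name, mapred.child.java.opts,
is the name from later releases and may differ in 0.2.0 -- check your
version's hadoop-default.xml). The key point is that each child task
runs in its own JVM, so io.sort.mb has to fit inside the child heap, not
the Tasktracker's:

```xml
<!-- hadoop-site.xml sketch; verify property names against your
     release's hadoop-default.xml before relying on them -->
<property>
  <name>mapred.child.java.opts</name>  <!-- assumed name; may differ in 0.2.0 -->
  <value>-Xmx1024m</value>             <!-- heap per child JVM -->
</property>
<property>
  <name>io.sort.mb</name>
  <!-- sort buffer per task; at 500 it leaves little headroom
       inside a 1 GB child heap -->
  <value>500</value>
</property>
```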



> > Listed below are the configuration parameters. Am I setting the Java
> > memory heap very low compared to io.sort.mb or the file buffer size?
> > I thought the Tasktracker just pushes the job to the child node; does
> > it do something like moving data? If so, is there a buffer size I can
> > limit? Also, I noticed that under the mapred local directories the
> > reduce files keep growing even after the Tasktracker has the "out of
> > memory" error.
> 
> Sorting does indeed happen in the child process.
> 
> 4MB buffers for file streams seems large to me.
I keep a 4 MB buffer because each map task is reading a gzipped text
file of around 2 GB. I thought this would make reading more efficient,
and 4 MB * 3 map tasks per node is only about 12 MB. I am not sure why
that is a lot.


> You might increase the io.sort.factor.  With 500MB for sorting and a
> sort factor of 100, each sort stream would get a 5MB buffer, plenty to
> ensure that transfer time dominates seek, since the break-even point is
> around 100kB.  So you could even use a sort factor of 500.  That would
> make sorts a lot faster.
Ok, I will try that. I have around 120 reduce files in total, each
around 1 GB, for 6 reduce processes.
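The arithmetic behind that suggestion can be sketched in plain Java,
using the figures from this thread:

```java
// Sketch: per-stream buffer available during a merge, given the sort
// memory (io.sort.mb) and the merge fan-in (io.sort.factor).
public class SortBufferMath {
    static int perStreamBufferKB(int sortMemoryMB, int sortFactor) {
        // Sort memory is divided evenly across the merge streams.
        return sortMemoryMB * 1024 / sortFactor;
    }

    public static void main(String[] args) {
        // 500 MB of sort memory, factor 100 -> 5120 KB (5 MB) per stream,
        // comfortably above the ~100 KB seek/transfer break-even point.
        System.out.println(perStreamBufferKB(500, 100));
        // Factor 500 still leaves 1024 KB (1 MB) per stream.
        System.out.println(perStreamBufferKB(500, 500));
    }
}
```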


> Also why are you setting the task timeout so high?  Do you have mappers
> or reducers that take a long time per entry and are not calling
> Reporter.setStatus() regularly?  That can cause tasks to time out.
Yes, map tasks sometimes take a long time and get killed. I have a
reporter that sets the status when the record reader is created. Still,
things get printed on the web page only after the task exits with
Succeeded or Failed status.
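Setting the status once at record-reader creation is likely not enough
to keep the task timeout at bay; progress has to be reported
periodically while records are processed. A minimal, Hadoop-free sketch
of that pattern (the Reporter interface below is a hypothetical stand-in
for Hadoop's Reporter.setStatus(), not the real class):

```java
// Sketch of periodic progress reporting inside a long-running task.
public class ProgressSketch {
    // Hypothetical stand-in for org.apache.hadoop.mapred.Reporter.
    interface Reporter { void setStatus(String status); }

    // Report every REPORT_INTERVAL records instead of once at startup,
    // so a long-running task is never silent past the timeout.
    static final int REPORT_INTERVAL = 1000;

    static int processRecords(int numRecords, Reporter reporter) {
        int reports = 0;
        for (int i = 0; i < numRecords; i++) {
            // ... process record i ...
            if (i % REPORT_INTERVAL == 0) {
                reporter.setStatus("processed " + i + " records");
                reports++;
            }
        }
        return reports;
    }

    public static void main(String[] args) {
        // 2500 records -> reports at records 0, 1000, 2000.
        System.out.println(processRecords(2500, s -> {}));
    }
}
```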



Thanks,
VJ