hadoop-common-user mailing list archives

From "Devaraj Das" <d...@yahoo-inc.com>
Subject RE: Mapper Out of Memory
Date Tue, 11 Dec 2007 05:06:43 GMT
Rui, please set mapred.child.java.opts to -Xmx512m. That should take care of
the OOM problem.
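[Archive note: mapred.child.java.opts takes JVM flags rather than a bare size, so the heap is raised with -Xmx. A minimal sketch of the setting in hadoop-site.xml; the 512 MB value is illustrative, not cluster-specific advice:]

```xml
<!-- hadoop-site.xml: mapred.child.java.opts is passed to the child JVM as flags -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>  <!-- 512 MB heap for each map/reduce child JVM -->
</property>
```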

> -----Original Message-----
> From: Rui Shi [mailto:shearershot@yahoo.com] 
> Sent: Tuesday, December 11, 2007 3:15 AM
> To: hadoop-user@lucene.apache.org
> Subject: Re: Mapper Out of Memory
> 
> Hi,
> 
> I didn't change those numbers. Basically, using the defaults.
> 
> Thanks,
> 
> Rui
> 
> ----- Original Message ----
> From: Devaraj Das <ddas@yahoo-inc.com>
> To: hadoop-user@lucene.apache.org
> Sent: Monday, December 10, 2007 4:48:59 AM
> Subject: RE: Mapper Out of Memory
> 
> 
> Was the value of mapred.child.java.opts set to something like 512MB? What's
> io.sort.mb set to?
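[Archive note: io.sort.mb is the map-side in-memory sort buffer, and it has to fit comfortably inside the child JVM heap set by mapred.child.java.opts. An illustrative hadoop-site.xml fragment, using the common 100 MB value:]

```xml
<!-- hadoop-site.xml: the sort buffer must fit inside the child JVM heap -->
<property>
  <name>io.sort.mb</name>
  <value>100</value>  <!-- map-side sort buffer, in MB -->
</property>
```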
> 
> > -----Original Message-----
> > From: Rui Shi [mailto:shearershot@yahoo.com]
> > Sent: Sunday, December 09, 2007 6:02 AM
> > To: hadoop-user@lucene.apache.org
> > Subject: Re: Mapper Out of Memory
> > 
> > Hi,
> > 
> > I did some experiments on a single Linux machine. I generated some
> > data using the 'randomwriter' example and used the 'sort' example in
> > hadoop-examples to sort it. I still got out-of-memory
> > exceptions as follows:
> > 
> > java.lang.OutOfMemoryError: Java heap space
> >     at java.util.Arrays.copyOf(Unknown Source)
> >     at java.io.ByteArrayOutputStream.write(Unknown Source)
> >     at java.io.DataOutputStream.write(Unknown Source)
> >     at org.apache.hadoop.io.BytesWritable.write(BytesWritable.java:137)
> >     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:340)
> >     at org.apache.hadoop.mapred.lib.IdentityMapper.map(IdentityMapper.java:39)
> >     at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:46)
> >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:189)
> >     at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1777)
> > Any ideas?
> > 
> > Thanks,
> > 
> > Rui
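[Archive note: for anyone reproducing the experiment described above, it corresponds roughly to the stock example jobs. The examples jar name varies by Hadoop release, so treat the paths and jar name as assumptions:]

```shell
# Generate random SequenceFile data with the bundled randomwriter example,
# then sort it with the bundled sort example (jar name varies by release).
bin/hadoop jar hadoop-*-examples.jar randomwriter rand-input
bin/hadoop jar hadoop-*-examples.jar sort rand-input rand-sorted
```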
> > 
> > ----- Original Message ----
> > From: Rui Shi <shearershot@yahoo.com>
> > To: hadoop-user@lucene.apache.org
> > Sent: Thursday, December 6, 2007 5:56:42 PM
> > Subject: Re: Mapper Out of Memory
> > 
> > 
> > Hi,
> > 
> > Out-of-memory exceptions can also be caused by having too many files
> > open at once.  What does 'ulimit -n' show?
> > 
> > 29491
> > 
> > You presented an excerpt from a jobtracker log, right?  What do the 
> > tasktracker logs show?
> > 
> > I saw some warnings in the tasktracker log:
> > 
> > 2007-12-06 12:23:41,604 WARN org.apache.hadoop.ipc.Server: IPC Server
> > handler 0 on 50050, call progress(task_200712031900_0014_m_000058_0,
> > 9.126612E-12, hdfs:///usr/ruish/400.gz:0+9528361, MAP,
> > org.apache.hadoop.mapred.Counters@11c135c) from: output error
> > java.nio.channels.ClosedChannelException
> >     at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:125)
> >     at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:294)
> >     at org.apache.hadoop.ipc.SocketChannelOutputStream.flushBuffer(SocketChannelOutputStream.java:108)
> >     at org.apache.hadoop.ipc.SocketChannelOutputStream.write(SocketChannelOutputStream.java:89)
> >     at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
> >     at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
> >     at java.io.DataOutputStream.flush(DataOutputStream.java:106)
> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:585)
> > And in the datanode logs:
> > 
> > 2007-12-06 14:42:20,831 ERROR org.apache.hadoop.dfs.DataNode: DataXceiver:
> > java.io.IOException: Block blk_-8176614602638949879 is valid, and cannot be written to.
> >     at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:515)
> >     at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
> >     at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
> >     at java.lang.Thread.run(Thread.java:595)
> > 
> > Also, can you please provide more details about your application?
> > I.e., what is your InputFormat, map function, etc.
> > 
> > Very simple stuff: projecting certain fields as the key and sorting.
> > The input is gzipped files in which each line has some fields
> > separated by a delimiter.
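[Archive note: the job described (project a delimited field as the key, then sort) is analogous to this shell sketch; the sample data, tab delimiter, and field number are made up for illustration:]

```shell
# Make a tiny gzipped, tab-delimited sample file (hypothetical data).
printf 'alice\t3\nbob\t1\ncarol\t2\n' | gzip > sample.gz

# "Map": project field 2 as the key; "sort": order records by that key.
gzip -dc sample.gz | cut -f2 | sort
# prints: 1, 2, 3 (one per line)
```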
> > 
> > Doug
> > 

