hadoop-common-user mailing list archives

From Bill Au <bill.w...@gmail.com>
Subject Re: GC overhead limit reached when tasktrackers start
Date Mon, 30 Nov 2009 18:04:01 GMT
The GC overhead limit exceeded error is thrown when the heap is almost out
of space: the JVM is spending more than 98% of its total time in garbage
collection while recovering less than 2% of the heap.
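If the TaskTracker genuinely needs more headroom, one option is to raise the
daemon heap in conf/hadoop-env.sh; the value below is illustrative, not a
recommendation (HADOOP_HEAPSIZE is in MB). Disabling the overhead check with
-XX:-UseGCOverheadLimit is also possible, but that only masks the symptom if
the heap really is too small:

```shell
# conf/hadoop-env.sh -- illustrative values, tune for your cluster
export HADOOP_HEAPSIZE=2000   # max heap for Hadoop daemons, in MB

# Alternatively, turn off the GC overhead check (the collector will
# still thrash if the heap genuinely is too small):
export HADOOP_OPTS="$HADOOP_OPTS -XX:-UseGCOverheadLimit"
```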

Bill

On Mon, Nov 30, 2009 at 12:48 PM, Todd Lipcon <todd@cloudera.com> wrote:

> That looks like the gc time overhead limit, not an actual out of memory
> error.
>
> It's probably trying to rm -rf the mapred.local.dir contents. If your TT is
> stopped, feel free to remove everything from in there and try to start
> again.
>
> -Todd
>
> On Mon, Nov 30, 2009 at 9:40 AM, Bill Au <bill.w.au@gmail.com> wrote:
>
> > Your JVM is running out of heap space so you will need to run it with a
> > bigger max heap size.
> >
> > Bill
> >
> > On Mon, Nov 30, 2009 at 11:53 AM, Saptarshi Guha
> > <saptarshi.guha@gmail.com> wrote:
> >
> > > Hello,
> > > While trying to start the task tracker I get the following error in
> > > the logs (see below).
> > > I'm guessing it's trying to clean up an aborted job (a badly coded
> > > one) and there are too many files to clean up.
> > >
> > > Does anyone know which directory it's looking into so that I can
> > > manually clean it up?
> > > Regards
> > > S
> > >
> > > ==Error==
> > >
> > > 2009-11-30 11:39:47,989 ERROR org.apache.hadoop.mapred.TaskTracker:
> > > Can not start task tracker because java.lang.OutOfMemoryError: GC
> > > overhead limit exceeded
> > >        at java.util.Arrays.copyOf(Arrays.java:2882)
> > >        at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
> > >        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:572)
> > >        at java.lang.StringBuilder.append(StringBuilder.java:203)
> > >        at java.io.UnixFileSystem.resolve(UnixFileSystem.java:93)
> > >        at java.io.File.<init>(File.java:207)
> > >        at java.io.File.listFiles(File.java:1056)
> > >        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:73)
> > >        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
> > >        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
> > >        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
> > >        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
> > >        at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:91)
> > >        at org.apache.hadoop.fs.RawLocalFileSystem.delete(RawLocalFileSystem.java:269)
> > >        at org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:438)
> > >        at org.apache.hadoop.fs.FilterFileSystem.delete(FilterFileSystem.java:143)
> > >        at org.apache.hadoop.mapred.JobConf.deleteLocalFiles(JobConf.java:270)
> > >        at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:441)
> > >        at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:934)
> > >        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)
> > >
> >
>
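The manual cleanup Todd suggests might look like the sketch below. The path
used here is a guess; check the mapred.local.dir property in your
mapred-site.xml (or hadoop-site.xml) for the real location, and make sure the
TaskTracker is stopped first:

```shell
# Hypothetical mapred.local.dir -- substitute the value from your
# mapred.local.dir property.
MAPRED_LOCAL_DIR="${MAPRED_LOCAL_DIR:-/tmp/hadoop/mapred/local}"

# Simulate a leftover job directory purely for illustration.
mkdir -p "$MAPRED_LOCAL_DIR/taskTracker/jobcache/job_200911301139_0001"

# With the TaskTracker stopped, clear the contents but keep the
# directory itself so the daemon can recreate its layout on startup.
rm -rf "$MAPRED_LOCAL_DIR"/*

ls -A "$MAPRED_LOCAL_DIR"   # prints nothing once the contents are gone
```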
