hadoop-common-dev mailing list archives

From "Tsz Wo (Nicholas), SZE (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HADOOP-1837) Insufficient space exception from InMemoryFileSystem after raising fs.inmemory.size.mb
Date Wed, 22 Apr 2009 00:49:48 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE resolved HADOOP-1837.
--------------------------------------------

    Resolution: Won't Fix

InMemoryFileSystem was removed.  See HADOOP-4648.
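
For context on what the report below was tuning: fs.inmemory.size.mb was an ordinary
integer property, so it could be raised either in hadoop-site.xml or programmatically.
A minimal sketch, assuming a 0.13-era org.apache.hadoop.conf.Configuration (the property
is gone from current releases, along with InMemoryFileSystem itself):

    import org.apache.hadoop.conf.Configuration;

    public class RaiseInMemFsSize {
      public static void main(String[] args) {
        // Sketch only: fs.inmemory.size.mb sized the ramfs used on the
        // reduce side of the shuffle in 0.13-era releases; it was removed
        // together with InMemoryFileSystem (HADOOP-4648).
        Configuration conf = new Configuration();
        conf.setInt("fs.inmemory.size.mb", 500);  // the value used in the report below
        System.out.println(conf.getInt("fs.inmemory.size.mb", -1));
      }
    }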

> Insufficient space exception from InMemoryFileSystem after raising fs.inmemory.size.mb
> --------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1837
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1837
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.13.1
>            Reporter: Joydeep Sen Sarma
>            Priority: Minor
>
> Trying out a larger in-memory file system (curious whether it would help speed up the
> sort phase). In this run I had sized it to 500 MB. There's plenty of RAM in the machine
> (8 GB) and the tasks are launched with the -Xmx2048 option, so there's plenty of heap
> space as well. However, I'm observing this exception:
> 2007-09-04 13:47:51,718 INFO org.apache.hadoop.mapred.ReduceTask: task_0002_r_000002_0 Copying task_0002_m_000124_0 output from hadoop004.sf2p.facebook.com.
> 2007-09-04 13:47:52,188 WARN org.apache.hadoop.mapred.ReduceTask: task_0002_r_000002_0 copy failed: task_0002_m_000124_0 from hadoop004.sf2p.facebook.com
> 2007-09-04 13:47:52,189 WARN org.apache.hadoop.mapred.ReduceTask: java.io.IOException: Insufficient space
>         at org.apache.hadoop.fs.InMemoryFileSystem$RawInMemoryFileSystem$InMemoryOutputStream.write(InMemoryFileSystem.java:181)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>         at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:91)
>         at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.close(ChecksumFileSystem.java:416)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:48)
>         at org.apache.hadoop.fs.FSDataOutputStream$Buffer.close(FSDataOutputStream.java:72)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:92)
>         at org.apache.hadoop.mapred.MapOutputLocation.getFile(MapOutputLocation.java:251)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:680)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:641)
> 2007-09-04 13:47:52,189 WARN org.apache.hadoop.mapred.ReduceTask: task_0002_r_000002_0 adding host hadoop004.sf2p.facebook.com to penalty box, next contact in 64 seconds
> So this ends up slowing things down, since we back off on the source host even though
> it's not its fault. Looking at the code, it seems like ReduceTask is trying to write
> more to InMemoryFileSystem than it should.
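
The failure mode in the stack trace can be reproduced in miniature. The sketch below uses
hypothetical class names rather than the real Hadoop code: a fixed-capacity in-memory sink
rejects any write past its reservation, so if the reservation is computed from the raw
map-output size while a wrapping layer (such as the ChecksumFileSystem$FSOutputSummer in
the trace) adds extra bytes, the copy overflows no matter how large the filesystem is.

    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical illustration of the failing pattern; not the actual
    // InMemoryFileSystem implementation.
    public class InsufficientSpaceDemo {

      /** Fixed-capacity in-memory sink: rejects writes past its reservation. */
      static class BoundedInMemoryOutputStream extends OutputStream {
        private final byte[] buf;  // the space reserved for one map output
        private int count;         // bytes written so far

        BoundedInMemoryOutputStream(int reservedBytes) {
          this.buf = new byte[reservedBytes];
        }

        @Override
        public void write(int b) throws IOException {
          if (count >= buf.length) {
            throw new IOException("Insufficient space");
          }
          buf[count++] = (byte) b;
        }
      }

      public static void main(String[] args) throws IOException {
        int rawMapOutputSize = 1024;  // what the reducer reserved for this copy
        int extraBytes = 8;           // e.g. checksum overhead added by a wrapper
        OutputStream out = new BoundedInMemoryOutputStream(rawMapOutputSize);
        for (int i = 0; i < rawMapOutputSize + extraBytes; i++) {
          out.write(0);  // the first write past the reservation throws
        }
      }
    }

Run as written, this fails with java.io.IOException: Insufficient space once the write
crosses the reserved size, mirroring the log above; whether checksum overhead is the
actual culprit here is an assumption, not something the trace proves.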

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

