hadoop-common-issues mailing list archives

From "Soundararajan Velu (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6092) No space left on device
Date Fri, 13 Aug 2010 19:06:18 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12898371#action_12898371 ]

Soundararajan Velu commented on HADOOP-6092:
--------------------------------------------

Meng, from what I can tell by looking at your cluster configuration and the job you are running,
one of the nodes in your cluster is running out of local disk space. When a map/reduce task
writes its intermediate data to that node's local disks (non-HDFS space), the write fails and
the job fails with this error. Even though your HDFS cluster had more than 12 TB of free space,
the temp space on the failing node has most likely dropped below the required limit. Please check
the free space on the disks holding the temp paths configured for map/reduce.
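As a quick way to verify this, here is a minimal sketch (not from the original thread) that prints
the usable space of each local directory; the paths listed are placeholders, substitute the values
from your mapred.local.dir / hadoop.tmp.dir settings on the node whose tasks are failing:

    import java.io.File;

    // Prints free space for each local directory used for map/reduce spill files.
    // The directory list below is an assumption; replace it with the paths
    // configured in mapred.local.dir and hadoop.tmp.dir on the affected node.
    public class CheckTempSpace {
        public static void main(String[] args) {
            String[] localDirs = { "/tmp/hadoop/mapred/local", "/data/1/mapred/local" };
            for (String dir : localDirs) {
                File f = new File(dir);
                long freeGb = f.getUsableSpace() / (1024L * 1024 * 1024);
                System.out.println(dir + " : " + freeGb + " GB free (exists=" + f.exists() + ")");
            }
        }
    }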

> No space left on device
> -----------------------
>
>                 Key: HADOOP-6092
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6092
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.19.0
>         Environment: ubuntu0.8.4
>            Reporter: mawanqiang
>
> Exception in thread "main" org.apache.hadoop.fs.FSError: java.io.IOException: No space left on device
>         at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:199)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>         at java.io.FilterOutputStream.close(FilterOutputStream.java:140)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
>         at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.close(ChecksumFileSystem.java:339)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:825)
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1142)
>         at org.apache.nutch.indexer.Indexer.index(Indexer.java:72)
>         at org.apache.nutch.indexer.Indexer.run(Indexer.java:92)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.nutch.indexer.Indexer.main(Indexer.java:101)
> Caused by: java.io.IOException: No space left on device
>         at java.io.FileOutputStream.writeBytes(Native Method)
>         at java.io.FileOutputStream.write(FileOutputStream.java:260)
>         at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:197)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

