hadoop-common-dev mailing list archives

From "Koji Noguchi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3154) Job successful but dropping records (when disk full)
Date Wed, 02 Apr 2008 14:57:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12584614#action_12584614 ]

Koji Noguchi commented on HADOOP-3154:
--------------------------------------

Once the disk problem went away, jobs ran as expected.

Comparing the output, we saw that the size of the final output differed across all the reducers.

Looking at the userlogs of one of the successful mappers:


userlogs/task_200803290844_0001_m_000012_0/syslog
{noformat}
2008-03-29 08:46:40,731 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
with processName=MAP, sessionId=
2008-03-29 08:46:40,985 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 15
2008-03-29 08:46:41,208 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop
library
2008-03-29 08:46:41,210 INFO org.apache.hadoop.io.compress.zlib.ZlibFactory: Successfully
loaded & initialized native-zlib library
2008-03-29 08:50:36,115 INFO org.apache.hadoop.mapred.TaskRunner: Task 'task_200803290844_0001_m_000012_0'
done.
{noformat}

userlogs/task_200803290844_0001_m_000012_0/stderr
{noformat}
Exception in thread "SortSpillThread" org.apache.hadoop.fs.FSError: java.io.IOException:
No space left on device
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:171)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:41)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.writeChunk(ChecksumFileSystem.java:339)
        at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:141)
        at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:100)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:86)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:41)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:990)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.spill(MapTask.java:555)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpillToDisk(MapTask.java:497)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$200(MapTask.java:264)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$1.run(MapTask.java:439)
Caused by: java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:169)
        ... 15 more
{noformat}

I didn't see task_200803290844_0001_m_000012 retried elsewhere.

It seems SortSpillThread is not catching the FSError and is dying silently.
(Note that the 'java.io.IOException: No space left on device' is converted to an FSError in RawLocalFileSystem.java.)
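The failure mode above can be sketched outside Hadoop. This is a minimal illustration, not Hadoop's actual MapTask code: the class name, method names, and messages below are made up. An Error (like FSError) thrown in a background spill thread escapes run() uncaught and is simply lost, so the parent thread still reports "done." unless the worker parks its failure somewhere the parent checks before declaring success.

```java
public class SpillThreadSketch {
    // Runs "spill" work in a background thread and returns any Throwable it
    // hit, or null on success. This mirrors the obvious fix: the worker
    // records its failure so the parent can check it, instead of letting an
    // Error (which run() is not forced to declare) vanish with the thread.
    static Throwable runSpill(Runnable spillWork) throws InterruptedException {
        final Throwable[] failure = new Throwable[1];
        Thread spill = new Thread(() -> {
            try {
                spillWork.run();
            } catch (Throwable t) {
                failure[0] = t;  // park it for the parent thread
            }
        }, "SortSpillThread");
        spill.start();
        spill.join();
        return failure[0];
    }

    public static void main(String[] args) throws Exception {
        // Simulate the spill dying with an Error, as FSError does on ENOSPC.
        Throwable t = runSpill(() -> {
            throw new Error("No space left on device");
        });
        if (t != null) {
            System.out.println("spill failed: " + t.getMessage());
        } else {
            System.out.println("done.");  // only safe to claim success now
        }
    }
}
```

Without the check in main(), the program would print "done." even though the spill thread died, which is exactly how a task can report success while records are dropped.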


> Job successful but dropping records (when disk full)
> ----------------------------------------------------
>
>                 Key: HADOOP-3154
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3154
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.16.1
>         Environment: hadoop-0.16.1-H3011-H3033-H3056
>            Reporter: Koji Noguchi
>
> I have a mapreduce code that takes an input and just shuffles.
> # of input should be equal to # of output. 
> However, when disks of the nodes were filled accidentally, I started to see some records
> dropping, although the jobs themselves were successful.
> {noformat}
> 08/03/30 00:17:04 INFO mapred.JobClient: Job complete: job_200803292134_0001
> 08/03/30 00:17:04 INFO mapred.JobClient: Counters: 11
> 08/03/30 00:17:04 INFO mapred.JobClient:   Job Counters
> 08/03/30 00:17:04 INFO mapred.JobClient:     Launched map tasks=23
> 08/03/30 00:17:04 INFO mapred.JobClient:     Launched reduce tasks=4
> 08/03/30 00:17:04 INFO mapred.JobClient:   Map-Reduce Framework
> 08/03/30 00:17:04 INFO mapred.JobClient:     Map input records=6852926
> 08/03/30 00:17:04 INFO mapred.JobClient:     Map output records=6852926
> 08/03/30 00:17:04 INFO mapred.JobClient:     Map input bytes=18802382982
> 08/03/30 00:17:04 INFO mapred.JobClient:     Map output bytes=21278202852
> 08/03/30 00:17:04 INFO mapred.JobClient:     Combine input records=0
> 08/03/30 00:17:04 INFO mapred.JobClient:     Combine output records=0
> 08/03/30 00:17:04 INFO mapred.JobClient:     Reduce input groups=6722633
> 08/03/30 00:17:04 INFO mapred.JobClient:     Reduce input records=6839731
> 08/03/30 00:17:04 INFO mapred.JobClient:     Reduce output records=6839731
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

