hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1062) Checksum error in InMemoryFileSystem
Date Thu, 08 Mar 2007 00:00:26 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-1062:
----------------------------------

          Component/s:     (was: mapred)
                       fs
        Fix Version/s:     (was: 0.12.1)
                       0.13.0
             Assignee: Hairong Kuang
             Priority: Major  (was: Blocker)
    Affects Version/s:     (was: 0.12.1)
                       0.12.0

I looked at this issue, but I am not able to reproduce the error. I would suggest that we
fix it in 0.13.0 once we get more input from the reporter.

> Checksum error in InMemoryFileSystem
> ------------------------------------
>
>                 Key: HADOOP-1062
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1062
>             Project: Hadoop
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.12.0
>            Reporter: Espen Amble Kolstad
>         Assigned To: Hairong Kuang
>             Fix For: 0.13.0
>
>
> Getting the following error in the tasktracker log on 2 attempts:
> 2007-03-05 14:59:50,320 WARN  mapred.TaskRunner - task_0001_r_000005_0 Intermediate Merge of the inmemory files threw an exception: org.apache.hadoop.fs.ChecksumException: Checksum error: /trank/nutch-0.9-dev/filesystem/mapred/local/task_0001_r_000005_0/map_2.out at 16776192
>         at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.verifySum(ChecksumFileSystem.java:250)
>         at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.readBuffer(ChecksumFileSystem.java:207)
>         at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.read(ChecksumFileSystem.java:163)
>         at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:41)
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>         at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:57)
>         at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:91)
>         at org.apache.hadoop.io.SequenceFile$Reader.readBuffer(SequenceFile.java:1300)
>         at org.apache.hadoop.io.SequenceFile$Reader.seekToCurrentValue(SequenceFile.java:1363)
>         at org.apache.hadoop.io.SequenceFile$Reader.nextRawValue(SequenceFile.java:1656)
>         at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawValue(SequenceFile.java:2579)
>         at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.next(SequenceFile.java:2351)
>         at org.apache.hadoop.io.SequenceFile$Sorter.writeFile(SequenceFile.java:2226)
>         at org.apache.hadoop.mapred.ReduceTaskRunner$InMemFSMergeThread.run(ReduceTaskRunner.java:820)
> When I changed fs.inmemory.size.mb to 0 (was 75 - default) the reduce completes successfully.
> Could it be related to HADOOP-1027 or HADOOP-1014?
> - Espen
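
For reference, the workaround Espen describes (setting fs.inmemory.size.mb to 0 to disable the in-memory merge) would be applied in the site configuration, roughly as sketched below. This is an illustrative snippet, assuming the hadoop-site.xml override mechanism used by Hadoop releases of this era; it is not part of the original report:

```xml
<?xml version="1.0"?>
<!-- hadoop-site.xml: site-specific overrides of hadoop-default.xml -->
<configuration>
  <property>
    <name>fs.inmemory.size.mb</name>
    <!-- Default is 75; 0 disables the in-memory filesystem used for
         merging map outputs on the reduce side, forcing on-disk merges. -->
    <value>0</value>
  </property>
</configuration>
```

Disabling the in-memory merge sidesteps the checksum failure at the cost of extra disk I/O during the shuffle, which is consistent with the reduce completing successfully once the setting was changed.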

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

