hadoop-hdfs-issues mailing list archives

From "Koji Noguchi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1526) Dfs client name for a map/reduce task should have some randomness
Date Fri, 03 Dec 2010 16:49:10 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12966579#action_12966579 ]

Koji Noguchi commented on HDFS-1526:

Hairong, using a random number (with time as a seed) to get a unique filename reminds
me of an old data corruption bug we looked at. Before Dhruba's HADOOP-1707, when the dfsclient
wrote to a temporary file, we saw cases of multiple tasks using the same seed/time,
resulting in data corruption. Wouldn't your patch have the same problem?
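The collision Koji describes can be sketched in a few lines (a hypothetical illustration, not Hadoop code): two tasks that happen to seed `java.util.Random` with the same timestamp produce identical sequences, so their "random" temporary filenames are identical too.

```java
import java.util.Random;

public class SeedCollision {
    public static void main(String[] args) {
        // Two tasks launched in the same millisecond both use the current
        // time as their seed -- plausible on a busy cluster where many map
        // tasks start simultaneously.
        long seed = System.currentTimeMillis();
        Random task1 = new Random(seed);
        Random task2 = new Random(seed);

        // Same seed, same sequence: both "random" filenames collide, and
        // the two tasks would write to the same temporary file.
        String name1 = "tmp_" + task1.nextLong();
        String name2 = "tmp_" + task2.nextLong();
        System.out.println(name1.equals(name2));  // prints "true"
    }
}
```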

> Dfs client name for a map/reduce task should have some randomness
> -----------------------------------------------------------------
>                 Key: HDFS-1526
>                 URL: https://issues.apache.org/jira/browse/HDFS-1526
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.23.0
>         Attachments: clientName.patch
> Fsck shows one of the files in our dfs cluster is corrupt.
> # /bin/hadoop fsck aFile -files -blocks -locations
> aFile: 4633 bytes, 2 block(s): 
> aFile: CORRUPT block blk_-4597378336099313975
> OK
> 0. blk_-4597378336099313975_2284630101 len=0 repl=3 [...]
> 1. blk_5024052590403223424_2284630107 len=4633 repl=3 [...]Status: CORRUPT
> On disk, these two blocks are of the same size and the same content. It turns out the
> writer of the file is a multi-threaded map task. Each thread may write to the same
> file. One possible interleaving of two threads could make this happen:
> [T1: create aFile] [T2: delete aFile] [T2: create aFile] [T1: addBlock 0 to aFile] [T2:
> addBlock 1 to aFile]...
> Because T1 and T2 have the same client name, which is the map task id, the above interactions
> proceed without any lease exception, eventually leading to a corrupt file. To solve the
> problem, a mapreduce task's client name could be formed by its task id followed by a random
> number.
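The naming scheme proposed in the last sentence can be sketched as follows (a hypothetical illustration with a made-up helper and task id, not the attached `clientName.patch`): appending a per-call random suffix to the task id gives each writer thread a distinct client name, so the namenode's lease checks can tell the writers apart instead of treating them as one client.

```java
import java.util.Random;

public class ClientNameDemo {
    // Hypothetical helper: task id plus a random suffix yields a
    // distinct client name per writer, even within one JVM.
    static String newClientName(String taskId, Random rng) {
        return taskId + "_" + Long.toHexString(rng.nextLong());
    }

    public static void main(String[] args) {
        Random rng = new Random();  // one shared generator per JVM
        String taskId = "attempt_201012030001_m_000001_0";  // made-up id
        String n1 = newClientName(taskId, rng);
        String n2 = newClientName(taskId, rng);
        // Two writers of the same file now present different client
        // names, so the second create/addBlock sequence would trip a
        // lease check instead of silently interleaving.
        System.out.println(n1 + " vs " + n2);
    }
}
```

Note that drawing the suffixes from one shared `Random` per JVM (rather than reseeding with the clock per task) sidesteps the same-seed collision Koji raises above.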

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
