hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3584) Blocks are getting marked as corrupt with append operation under high load.
Date Wed, 11 Jul 2012 18:13:36 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13411801#comment-13411801 ]

Uma Maheswara Rao G commented on HDFS-3584:

Todd, could you please comment on this for further discussion, if you have time?
> Blocks are getting marked as corrupt with append operation under high load.
> ---------------------------------------------------------------------------
>                 Key: HDFS-3584
>                 URL: https://issues.apache.org/jira/browse/HDFS-3584
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 2.0.1-alpha
>            Reporter: Brahma Reddy Battula
> Scenario:
> ========= 
> 1. There are two clients, cli1 and cli2. cli1 writes a file F1 and does not close it.
> 2. cli2 calls append on the unclosed file, which triggers a lease recovery.
> 3. cli1 closes the file.
> 4. Lease recovery completes with an updated GS on the DN. When the block report arrives, the GS mismatch causes the block to be marked corrupt.
> 5. The subsequent commitBlockSynchronization also fails, since the file was already closed by cli1 and its state in the NN is Finalized.
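
The race in the steps above can be sketched as a small state simulation. This is a hypothetical model, not the actual NameNode code: the class `NameNodeBlockState`, its method names, and the simplified generation-stamp (GS) handling are all illustrative assumptions.

```python
# Hypothetical simulation of the append/lease-recovery race (illustrative
# names only; not the real HDFS NameNode implementation).

class NameNodeBlockState:
    """Tracks one block's state as the NameNode sees it (simplified)."""

    def __init__(self, expected_gs):
        self.expected_gs = expected_gs   # GS the NameNode currently expects
        self.file_finalized = False      # set when the writer closes the file
        self.corrupt = False

    def begin_lease_recovery(self):
        # A new GS is handed to the DataNodes for recovery, but the
        # NameNode's expected GS is only updated when
        # commitBlockSynchronization succeeds.
        return self.expected_gs + 1

    def close_file(self):
        self.file_finalized = True

    def block_report(self, reported_gs):
        # A GS mismatch in a block report marks the replica corrupt.
        if reported_gs != self.expected_gs:
            self.corrupt = True

    def commit_block_sync(self):
        # Rejected once the file is already Finalized.
        if self.file_finalized:
            raise RuntimeError("file already closed; block state is Finalized")


nn = NameNodeBlockState(expected_gs=1)
recovery_gs = nn.begin_lease_recovery()   # step 2: cli2's append triggers recovery
nn.close_file()                           # step 3: cli1 closes the file
nn.block_report(reported_gs=recovery_gs)  # step 4: DN reports new GS -> corrupt
print(nn.corrupt)                         # True

sync_failed = False
try:
    nn.commit_block_sync()                # step 5: commitBlockSynchronization fails
except RuntimeError:
    sync_failed = True
print(sync_failed)                        # True
```

Run in this order, the DN's block report carries the recovery GS while the NameNode still expects the old one, so the block is flagged corrupt; the later sync is then rejected because cli1's close already finalized the file.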

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
