hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-2742) HA: observed dataloss in replication stress test
Date Sat, 28 Jan 2012 04:17:23 GMT

     [ https://issues.apache.org/jira/browse/HDFS-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-2742:
------------------------------

    Attachment: hdfs-2742.txt

Fixed all the nits above except for the indentation - I didn't see any place with improper
indentation.

{quote}
I think BM should distinguish between corrupt and out-of-date replicas. The new case in processFirstBlockReport
in this patch, and the place where we mark reported RBW replicas for completed blocks as corrupt, are
using "corrupt" as a proxy for "please delete". I wasn't able to come up with additional bugs
with a similar cause, but it would be easier to reason about if only truly corrupt replicas
were marked as such. Can punt to a separate JIRA, if you agree.
{quote}
I don't entirely follow what you're getting at here... so let's open a new JIRA :)
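
For reference, the pattern being described reduces to something like the following toy sketch (hypothetical names, not the actual BlockManager code):

{code:java}
// Toy sketch (hypothetical names, NOT the actual BlockManager code)
// of the pattern described in the quote above: a reported RBW replica
// for an already-complete block is not really corrupt, but marking it
// corrupt is the lever that gets it scheduled for deletion.
class ReportedReplicaSketch {
  enum BlockUCState { UNDER_CONSTRUCTION, COMPLETE }
  enum ReplicaState { RBW, FINALIZED }

  boolean shouldMarkCorrupt(BlockUCState blockState, ReplicaState reported) {
    // "Corrupt" here is a proxy for "please delete": the replica is
    // out of date, not necessarily damaged on disk.
    return blockState == BlockUCState.COMPLETE
        && reported == ReplicaState.RBW;
  }
}
{code}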

bq. In FSNamesystem#isSafeModeTrackingBlocks, shouldn't we assert haEnabled is enabled if
we're in SM and shouldIncrementallyTrackBlocks is true, instead of short-circuiting? We currently
wouldn't know if we violate this condition because we'll return false if haEnabled is false.

I did the check for haEnabled in FSNamesystem rather than SafeModeInfo, since when HA is not enabled
we can avoid the volatile read of safeModeInfo entirely. This is to avoid having any impact
on the non-HA case. Is that what you're referring to? I'm not sure specifically what you're asking
for in this change...

I changed {{setBlockTotal}} to only set {{shouldIncrementallyTrackBlocks}} to true when HA
is enabled, and added {{assert haEnabled}} in {{adjustBlockTotals}}. Does that address your
comment?
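
For clarity, the shape of the change is roughly the following toy sketch (the class structure and method bodies beyond the names discussed above are assumptions, not the exact patch):

{code:java}
// Sketch only: a toy class showing the shape of the change, not the
// actual FSNamesystem patch.
class SafeModeTrackingSketch {
  private final boolean haEnabled;
  private volatile SafeModeInfo safeMode; // null once safe mode exits

  SafeModeTrackingSketch(boolean haEnabled) {
    this.haEnabled = haEnabled;
  }

  // The haEnabled check lives here, before the volatile read, so the
  // non-HA path never pays for it.
  boolean isSafeModeTrackingBlocks() {
    if (!haEnabled) {
      return false;
    }
    SafeModeInfo sm = this.safeMode;
    return sm != null && sm.shouldIncrementallyTrackBlocks;
  }

  class SafeModeInfo {
    int blockTotal;
    int blockSafe;
    boolean shouldIncrementallyTrackBlocks;

    void setBlockTotal(int total) {
      this.blockTotal = total;
      // Only ever enable incremental tracking when HA is on.
      this.shouldIncrementallyTrackBlocks = haEnabled;
    }

    void adjustBlockTotals(int deltaSafe, int deltaTotal) {
      if (!shouldIncrementallyTrackBlocks) {
        return;
      }
      assert haEnabled; // tracking implies HA, per the change above
      this.blockSafe += deltaSafe;
      this.blockTotal += deltaTotal;
    }
  }
}
{code}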

> HA: observed dataloss in replication stress test
> ------------------------------------------------
>
>                 Key: HDFS-2742
>                 URL: https://issues.apache.org/jira/browse/HDFS-2742
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: data-node, ha, name-node
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-2742.txt, hdfs-2742.txt, hdfs-2742.txt, hdfs-2742.txt, hdfs-2742.txt, hdfs-2742.txt, log-colorized.txt
>
>
> The replication stress test case failed over the weekend since one of the replicas went missing. Still diagnosing the issue, but it seems like the chain of events was something like:
> - a block report was generated on one of the nodes while the block was being written - thus the block report listed the block as RBW
> - when the standby replayed this queued message, it was replayed after the file was marked complete. Thus it marked this replica as corrupt
> - it asked the DN holding the corrupt replica to delete it. And, I think, removed it from the block map at this time.
> - That DN then did another block report before receiving the deletion. This caused it to be re-added to the block map, since it was "FINALIZED" now.
> - Replication was lowered on the file, and it counted the above replica as non-corrupt, and asked for the other replicas to be deleted.
> - All replicas were lost.
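
A toy model of that race (hypothetical names, not HDFS code), reduced to the two block reports that matter:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy model of the race above (hypothetical names, not HDFS code):
// a stale RBW report gets the replica dropped, then a FINALIZED
// report resurrects it before the deletion command reaches the DN.
class ReplicaResurrectionSketch {
  enum State { RBW, FINALIZED }

  private final Map<String, State> blockMap = new HashMap<>();

  // The queued RBW report is replayed after the file is complete:
  // the replica is marked corrupt, a deletion is scheduled, and the
  // replica is removed from the block map.
  void replayStaleRbwReport(String datanode) {
    blockMap.remove(datanode);
  }

  // The DN reports again, now FINALIZED, before the deletion
  // arrives, so the replica is re-added and counted as live.
  void processFreshReport(String datanode, State reported) {
    if (reported == State.FINALIZED) {
      blockMap.put(datanode, reported);
    }
  }

  // Replication logic now trusts a doomed replica and may
  // invalidate the genuinely good copies.
  int liveReplicas() {
    return blockMap.size();
  }
}
{code}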


