hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
Date Thu, 05 May 2016 04:16:13 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15271863#comment-15271863 ]

Colin Patrick McCabe commented on HDFS-10301:
---------------------------------------------

Thanks for looking at this, [~daryn].  I'm not sure about the approach you proposed, though.
 If interleaved full block reports really are very common for [~shv], it seems like throwing
an exception when these are received would be problematic.  It sounds like there might be
some implementation concerns as well, although I didn't look at the patch.

bq. [~shv] wrote: I don't think my approach requires RPC change, since the block-report RPC
message already has all required structures in place. It should require only the processing
logic change.

Just to be clear: if what is being sent over the wire is changing, I would consider that
an "RPC change."  We can create an RPC change without modifying the {{.proto}} file, for
example by choosing not to fill in some optional field, or by filling in some other field.
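A hedged illustration of that point, with plain Java standing in for generated protobuf classes (the message and field names here are hypothetical, not Hadoop's actual ones): even with an unchanged schema, leaving an existing optional field unset changes what the receiver sees and which code path it takes, i.e. it is still a wire-level RPC change.

```java
import java.util.Optional;

public class WireChangeSketch {
    // Stand-in for a protobuf message with an optional field: the schema
    // (this class) is identical whether or not the field is populated.
    record StorageReportMsg(String storageId, Optional<Long> reportId) {}

    // Receiver logic that branches on the optional field's presence.
    static String describe(StorageReportMsg msg) {
        return msg.reportId()
                  .map(id -> "id-based processing, report " + id)
                  .orElse("legacy processing");
    }

    public static void main(String[] args) {
        // Same schema, different wire contents: the receiver takes a
        // different code path, so this is an RPC change without any
        // edit to the .proto file.
        System.out.println(describe(new StorageReportMsg("s1", Optional.of(7L))));
        System.out.println(describe(new StorageReportMsg("s1", Optional.empty())));
    }
}
```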

bq. Colin, it would have been good to have an interim solution, but it does not seem reasonable
to commit a patch, which fixes one bug, while introducing another.

The patch doesn't introduce any bugs.  It does mean that we won't remove zombie storages when
interleaved block reports are received.  But we are not handling this correctly right now
either, so that is not a regression.

Like I said earlier, I think your approach is a good one, but I think we should get in the
patch I posted here.  It is a very small and non-disruptive change which doesn't alter what
is sent over the wire.  It can easily be backported to stable branches.  Why don't we commit
this patch, and then work on a follow-on with the RPC change and simplification that you proposed?

> BlockReport retransmissions may lead to storages falsely being declared zombie if storage
report processing happens out of order
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10301
>                 URL: https://issues.apache.org/jira/browse/HDFS-10301
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.1
>            Reporter: Konstantin Shvachko
>            Assignee: Colin Patrick McCabe
>            Priority: Critical
>         Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, HDFS-10301.01.patch,
HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out while sending a block report, so it
sends the block report again. The NameNode can then process these two reports at the same
time and interleave the processing of storages from different reports. This corrupts the
blockReportId field, making the NameNode think that some storages are zombies. Replicas from
zombie storages are immediately removed, causing missing blocks.
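A minimal simulation of the failure mode described above (class, method, and storage names are hypothetical, not the actual NameNode code): zombie detection keyed on the last-seen block report id misfires when storages from two in-flight reports are processed interleaved.

```java
import java.util.HashMap;
import java.util.Map;

public class ZombieSketch {
    // Tracks the id of the last block report seen per storage, which is
    // conceptually what the id-based zombie check consults.
    static Map<String, Long> lastReportId = new HashMap<>();

    static void processStorage(String storage, long reportId) {
        lastReportId.put(storage, reportId);
    }

    // After finalizing report `reportId`, any storage whose recorded id
    // differs is presumed a zombie (no longer reported by the DataNode).
    static boolean looksZombie(String storage, long reportId) {
        return lastReportId.get(storage) != reportId;
    }

    public static void main(String[] args) {
        // The DataNode times out and retransmits, so two full block
        // reports (ids 1 and 2) for the same two storages are in flight.
        processStorage("s1", 1);  // from the first (timed-out) report
        processStorage("s1", 2);  // from the retransmitted report
        processStorage("s2", 2);
        processStorage("s2", 1);  // interleaved: first report finishes last

        // Finalizing report 2: storage s2 still carries id 1 and is
        // falsely declared a zombie even though it is alive.
        System.out.println(looksZombie("s2", 2));  // true: false positive
        System.out.println(looksZombie("s1", 2));  // false
    }
}
```

Once a live storage is misclassified this way, its replicas are dropped, which is how the missing blocks arise.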



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

