hadoop-hdfs-issues mailing list archives

From "Yiqun Lin (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-11311) HDFS fsck continues to report all blocks present when DataNode is restarted with empty data directories
Date Tue, 07 Feb 2017 10:47:43 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15855737#comment-15855737
] 

Yiqun Lin edited comment on HDFS-11311 at 2/7/17 10:47 AM:
-----------------------------------------------------------

Thanks for providing the patch, [~afrimberger]!
I tested the patch locally; it looks good and reproduces the scenario you described above. I'm attaching a clean patch for this since the original patch is not rebased on trunk and contains many unused imports. In the latest patch, I made one minor change:

* Actually, we don't need to restart the DataNode while keeping the same port. This can be simplified to the following (a rough sketch of the surrounding test context is given after the code block):
{code}
    // bring DataNode up again
    cluster.restartDataNode(dn3Prop);
{code}
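
For context, here is a rough, untested sketch of how that simplified restart might sit in a MiniDFSCluster-based reproduce test. This is not the attached patch; the configuration, the DataNode index, and the step that wipes the block pool directories are placeholders:
{code}
// Sketch only, not the attached patch. The DataNode index (2) and the step
// that empties the block pool directories are placeholders.
Configuration conf = new HdfsConfiguration();
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(3)
    .build();
cluster.waitActive();

// stop one DataNode and keep its properties so it can be restarted later
MiniDFSCluster.DataNodeProperties dn3Prop = cluster.stopDataNode(2);

// ... empty that DataNode's block pool directories here, keeping VERSION ...

// bring DataNode up again; keeping the old port is not required
cluster.restartDataNode(dn3Prop);
cluster.waitActive();
{code}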

The new patch should be easier for others to review. Assigning this JIRA to you, [~afrimberger].
Any thoughts from others? I think this is a good finding and a good fix.


> HDFS fsck continues to report all blocks present when DataNode is restarted with empty data directories
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11311
>                 URL: https://issues.apache.org/jira/browse/HDFS-11311
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.3, 3.0.0-alpha1
>            Reporter: André Frimberger
>         Attachments: HDFS-11311-branch-3.0.0-alpha2.001.patch, HDFS-11311.reproduce.patch
>
>
> During cluster maintenance, we had to change parameters of the underlying disk filesystem and we stopped the DataNode, reformatted all of its data directories and started the DataNode again in under 10 minutes with no data and only the {{VERSION}} file present. Running fsck afterwards reports that all blocks are fully replicated, which does not reflect the true state of HDFS. If an administrator trusts {{fsck}} and continues to replace further DataNodes, *data will be lost!*
> Steps to reproduce:
> 1. Shutdown DataNode
> 2. Remove all BlockPools from all data directories (only {{VERSION}} file is present)
> 3. Startup DataNode in under 10.5 minutes
> 4. Run {{hdfs fsck /}}
> *Actual result:* Average replication is falsely shown as 3.0
> *Expected result:* Average replication factor is < 3.0
> *Workaround:* Trigger a block report with {{hdfs dfsadmin -triggerBlockReport $dn_host:$ipc_port}}
> *Cause:* The first block report is handled differently by the NameNode, and only added blocks are respected. This behaviour was introduced in HDFS-7980 for performance reasons, but it is applied too broadly, and in our case data can be lost.
> *Fix:* We suggest using stricter conditions on applying {{processFirstBlockReport}} in {{BlockManager:processReport()}}:
> Change
> {code}
> if (storageInfo.getBlockReportCount() == 0) {
>     // The first block report can be processed a lot more efficiently than
>     // ordinary block reports.  This shortens restart times.
>     processFirstBlockReport(storageInfo, newReport);
> } else {
>     invalidatedBlocks = processReport(storageInfo, newReport);
> }
> {code}
> to
> {code}
> if (storageInfo.getBlockReportCount() == 0 && storageInfo.getState() != State.FAILED
>     && newReport.getNumberOfBlocks() > 0) {
>     // The first block report can be processed a lot more efficiently than
>     // ordinary block reports.  This shortens restart times.
>     processFirstBlockReport(storageInfo, newReport);
> } else {
>     invalidatedBlocks = processReport(storageInfo, newReport);
> }
> {code}
> In case the DataNode reports no blocks for a data directory, it might be a new DataNode or the data directory may have been emptied for whatever reason (offline replacement of storage, reformatting of a data disk, etc.). In either case, the changes should be reflected in the output of {{fsck}} in less than 6 hours to prevent data loss due to misleading output.
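
Related to the reproduce steps and the workaround above, a rough sketch (not from the attached patches) of how a test might verify the NameNode's view once the DataNode is back with empty data directories; the DataNode index, the wait, and the metric used in the assertion are illustrative:
{code}
// Sketch only: force a fresh block report (the programmatic counterpart of
// "hdfs dfsadmin -triggerBlockReport") and check that the NameNode no longer
// counts the wiped replicas. Index, wait and assertion are illustrative.
DataNode dn = cluster.getDataNodes().get(2);
DataNodeTestUtils.triggerBlockReport(dn);

// crude wait for the NameNode to process the report; a real test would poll
Thread.sleep(3000);

long underReplicated = cluster.getNamesystem().getUnderReplicatedBlocks();
assertTrue("Blocks from the emptied DataNode should be under-replicated",
    underReplicated > 0);
{code}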



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
