hadoop-hdfs-issues mailing list archives

From "Erik Krogen (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed
Date Mon, 19 Sep 2016 20:04:20 GMT

     [ https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erik Krogen updated HDFS-10843:
    Attachment: HDFS-10843.005.patch

After offline discussion with Konstantin, attaching v005 patch with a new {{BlockManager.convertToCompleteBlock}}
method that wraps the two related method calls.

> Quota Feature Cached Size != Computed Size When Block Committed But Not Completed
> ---------------------------------------------------------------------------------
>                 Key: HDFS-10843
>                 URL: https://issues.apache.org/jira/browse/HDFS-10843
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs, namenode
>    Affects Versions: 2.6.0
>            Reporter: Erik Krogen
>            Assignee: Erik Krogen
>         Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, HDFS-10843.002.patch,
> HDFS-10843.003.patch, HDFS-10843.004.patch, HDFS-10843.005.patch
> Currently, when a block has been committed but not yet completed, the cached
> size (used for the quota feature) of the directory containing that block differs from the
> computed size. This results in log messages of the following form:
> bq. ERROR namenode.NameNode (DirectoryWithQuotaFeature.java:checkStoragespace(141)) -
> BUG: Inconsistent storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed =
> When a block is initially started under construction, the used space is conservatively
> set to a full block. When the block is committed, the cached size is updated to the final
> size of the block. However, the calculation of the computed size uses the full block size
> until the block is completed, so in the period where the block is committed but not completed
> the two disagree. To fix this, we need to decide which is correct and fix the other to match.
> It seems to me that the cached size is correct, since once the block is committed its size
> will not change.
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Temporarily prevent all datanodes to which the file is written from communicating the corresponding
> BlockReceivedAndDeletedRequestProto to the NN (i.e. simulate a transient network partition)
> - During this time, call DistributedFileSystem.getContentSummary() on the directory with
> the quota
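The disagreement described above can be illustrated with a minimal, self-contained model. All class, field, and method names below are hypothetical simplifications for illustration only; they are not the real {{BlockManager}} or {{DirectoryWithQuotaFeature}} code:

```java
// Simplified model of the two quota-accounting paths described in the
// issue. Names are illustrative, not actual HDFS classes or methods.
public class QuotaModel {
    static final long BLOCK_SIZE = 512;

    enum BlockState { UNDER_CONSTRUCTION, COMMITTED, COMPLETE }

    BlockState state = BlockState.UNDER_CONSTRUCTION;
    long finalBlockBytes;  // known once the block is committed
    long cachedSpace;      // incrementally maintained cached size

    void startBlock() {
        // Conservatively charge a full block while under construction.
        cachedSpace += BLOCK_SIZE;
    }

    void commitBlock(long actualBytes) {
        // Cached size is corrected to the final block size at commit time.
        cachedSpace += actualBytes - BLOCK_SIZE;
        finalBlockBytes = actualBytes;
        state = BlockState.COMMITTED;
    }

    void completeBlock() {
        state = BlockState.COMPLETE;
    }

    long computedSpace() {
        // The recomputation path uses the full block size until COMPLETE,
        // which is the source of the mismatch after commit.
        return state == BlockState.COMPLETE ? finalBlockBytes : BLOCK_SIZE;
    }

    public static void main(String[] args) {
        QuotaModel m = new QuotaModel();
        m.startBlock();
        m.commitBlock(100);  // block committed with 100 bytes written
        // Committed but not yet completed: the two sizes disagree.
        System.out.println("cached=" + m.cachedSpace
                + " computed=" + m.computedSpace());  // cached=100 computed=512
        m.completeBlock();
        System.out.println("cached=" + m.cachedSpace
                + " computed=" + m.computedSpace());  // cached=100 computed=100
    }
}
```

In this model the mismatch window is exactly the COMMITTED-but-not-COMPLETE period, which matches the repro steps: delaying the datanodes' block-received reports keeps the block from completing, so a getContentSummary() call during that window sees the stale computed size.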

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
