hadoop-hdfs-dev mailing list archives

From "dragon (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-10099) CLONE - Erasure Coding: Fix the NullPointerException when deleting file
Date Tue, 15 Mar 2016 08:16:36 GMT
dragon created HDFS-10099:

             Summary: CLONE - Erasure Coding: Fix the NullPointerException when deleting file
                 Key: HDFS-10099
                 URL: https://issues.apache.org/jira/browse/HDFS-10099
             Project: Hadoop HDFS
          Issue Type: Sub-task
            Reporter: dragon
            Assignee: Yi Liu
             Fix For: HDFS-7285

In HDFS, when a file is removed, the NameNode also removes all of its blocks from {{BlocksMap}} and
sends {{DNA_INVALIDATE}} (invalidate blocks) commands to the DataNodes. After a DataNode successfully
deletes its block replicas, it reports {{DELETED_BLOCK}} back to the NameNode.
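The lookup failure at the heart of this bug can be reproduced with a toy model of the flow above (class and method names here are illustrative stand-ins, not the real HDFS types): once the NameNode has dropped the block on file delete, the later lookup triggered by the {{DELETED_BLOCK}} report comes back null.

```java
import java.util.HashMap;
import java.util.Map;

public class DeleteFlowSketch {
    // Stand-in for the NameNode's BlocksMap.
    static Map<Long, String> blocksMap = new HashMap<>();

    static String getStoredBlock(long blockId) {
        return blocksMap.get(blockId);   // null once the block was removed
    }

    static String deleteFileThenReport(long blockId) {
        blocksMap.put(blockId, "blk_" + blockId);
        blocksMap.remove(blockId);       // NN removes the block on file delete
        // ... DNA_INVALIDATE sent, replica deleted on the DataNode ...
        return getStoredBlock(blockId);  // DELETED_BLOCK report arrives: lookup is null
    }

    public static void main(String[] args) {
        // The stored block is gone by the time the report is processed.
        System.out.println(deleteFileThenReport(1L) == null); // prints true
    }
}
```

Dereferencing that null result (as the real code does via {{namesystem.isGenStampInFuture(block)}}) is what raises the {{NullPointerException}}.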

The relevant logic in {{BlockManager#processIncrementalBlockReport}} is as follows. For a
{{DELETED_BLOCK}} report it calls:

        removeStoredBlock(storageInfo, getStoredBlock(rdbi.getBlock()), node);

and {{removeStoredBlock}} is:

    private void removeStoredBlock(DatanodeStorageInfo storageInfo, Block block,
        DatanodeDescriptor node) {
      if (shouldPostponeBlocksFromFuture &&
          namesystem.isGenStampInFuture(block)) {
        queueReportedBlock(storageInfo, block, null,
            QUEUE_REASON_FUTURE_GENSTAMP);
        return;
      }
      removeStoredBlock(getStoredBlock(block), node);
    }

In the EC branch we added a call to {{getStoredBlock}}. Since the block has already been removed
from {{BlocksMap}} when the file is deleted, {{getStoredBlock}} returns null, and handling the
{{DELETED_BLOCK}} incremental block report from the DataNode then throws a
{{NullPointerException}}. We need to add a null check.
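The guard described above can be sketched as follows (a minimal sketch with simplified types; the real method takes {{DatanodeStorageInfo}}/{{BlockInfo}} arguments, and "skipped"/"removed" stand in for the actual behavior):

```java
public class NullCheckSketch {
    // If the stored block is already gone (file was deleted before the
    // DELETED_BLOCK report arrived), skip instead of dereferencing null.
    static String removeStoredBlock(String storedBlock) {
        if (storedBlock == null) {
            return "skipped";            // nothing left to remove for this report
        }
        return "removed " + storedBlock; // normal path: block still tracked
    }

    public static void main(String[] args) {
        System.out.println(removeStoredBlock(null));    // prints skipped
        System.out.println(removeStoredBlock("blk_1")); // prints removed blk_1
    }
}
```

With the check in place, a stale {{DELETED_BLOCK}} report is simply a no-op rather than an exception.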

This message was sent by Atlassian JIRA
