hadoop-hdfs-issues mailing list archives

From "Ming Ma (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-9313) Possible NullPointerException in BlockManager if no excess replica can be chosen
Date Tue, 27 Oct 2015 00:44:27 GMT

     [ https://issues.apache.org/jira/browse/HDFS-9313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ming Ma updated HDFS-9313:
--------------------------
    Attachment: HDFS-9313.patch

Attached is a patch that illustrates the scenario. It is better to guard against this case.

In addition, for this specific test scenario, {{BlockPlacementPolicyDefault}} should have
been able to delete excessSSD. We can fix that separately.

> Possible NullPointerException in BlockManager if no excess replica can be chosen
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-9313
>                 URL: https://issues.apache.org/jira/browse/HDFS-9313
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ming Ma
>         Attachments: HDFS-9313.patch
>
>
> HDFS-8647 makes it easier to reason about various block placement scenarios. Here is
one possible case where BlockManager won't be able to find an excess replica to delete: when
the storage policy changes around the same time the balancer moves the block. When this happens,
it causes a NullPointerException.
> {noformat}
> java.lang.NullPointerException
> 	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy.adjustSetsWithChosenReplica(BlockPlacementPolicy.java:156)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseReplicasToDelete(BlockPlacementPolicyDefault.java:978)
> {noformat}
> Note that this has not been observed in any production cluster; it was found by new unit
tests. In addition, the issue predates HDFS-8647.
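To illustrate the failure mode, here is a minimal, hypothetical Java sketch (not the actual HDFS-9313 patch and not the real BlockManager API; all names are stand-ins). It shows why a caller that loops over a chooser which can legitimately return null must guard before adjusting its sets, instead of passing null downstream as adjustSetsWithChosenReplica's caller did:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical simplification of the excess-replica deletion loop.
// chooseReplicaToDelete may return null when no candidate satisfies the
// placement policy (e.g. the storage policy changed while the balancer was
// moving the block); an unguarded caller would then hit a NullPointerException.
public class ExcessReplicaGuard {

    // Stand-in for choosing one replica to delete; returns null when
    // no excess replica can be chosen.
    static String chooseReplicaToDelete(List<String> candidates) {
        return candidates.isEmpty() ? null : candidates.get(0);
    }

    // Guarded loop: stop as soon as no replica can be chosen, rather than
    // dereferencing a null "chosen" replica.
    static List<String> chooseReplicasToDelete(List<String> candidates) {
        List<String> excess = new ArrayList<>();
        String chosen;
        while ((chosen = chooseReplicaToDelete(candidates)) != null) {
            excess.add(chosen);
            candidates.remove(chosen); // stand-in for adjustSetsWithChosenReplica
        }
        return excess;
    }

    public static void main(String[] args) {
        // Normal case: both replicas are eventually chosen for deletion.
        List<String> two = new ArrayList<>(Arrays.asList("ssdReplica", "diskReplica"));
        System.out.println(chooseReplicasToDelete(two));

        // Failure case from the report: no candidate can be chosen.
        // The guard yields an empty list instead of an NPE.
        System.out.println(chooseReplicasToDelete(new ArrayList<>()));
    }
}
```

The key point is only the null check on the chooser's return value; the surrounding loop shape mirrors how chooseReplicasToDelete repeatedly picks and removes a replica.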



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
