hadoop-hdfs-issues mailing list archives

From "eBugs in Cloud Systems (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-14468) StorageLocationChecker methods throw DiskErrorExceptions when the configuration has wrong values
Date Mon, 06 May 2019 15:36:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

eBugs in Cloud Systems updated HDFS-14468:
------------------------------------------
    Description: 
Dear HDFS developers, we are developing a tool to detect exception-related bugs in Java.
Our prototype has spotted the following three {{throw}} statements whose exception class
and error message seem to indicate different error conditions.

 

Version: Hadoop-3.1.2

File: HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java

Line: 96-98, 110-113, and 173-176
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
    + maxAllowedTimeForCheckMs + " (should be > 0)");{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
    + maxVolumeFailuresTolerated + " "
    + DataNode.MAX_VOLUME_FAILURES_TOLERATED_MSG);{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
    + maxVolumeFailuresTolerated + ". Value configured is >= "
    + "to the number of configured volumes (" + dataDirs.size() + ").");{code}
 

A {{DiskErrorException}} means an error occurred while the process was interacting with
the disk; for example, {{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} contains the
following code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " + dir.toString());{code}
However, the error messages of the three exceptions above indicate that {{StorageLocationChecker}} is
configured incorrectly, which means there is nothing wrong with the disk (yet). This mismatch
could be a problem. For example, callers that catch {{DiskErrorException}} in order to handle genuine
disk errors may accidentally (and incorrectly) handle the configuration error as well.
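
To illustrate the concern, here is a minimal caller sketch (hypothetical code, not taken from HDFS; {{checkStorageLocations()}} and {{handleBadDisk()}} are placeholder names). A caller that catches {{DiskErrorException}} in order to react to a failed disk would also catch the configuration errors above and run its disk-failure handling for what is purely a configuration mistake:
{code:java}
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

public class CallerSketch {
  void startChecks() {
    try {
      // May throw DiskErrorException either because a disk probe failed
      // or because a dfs.datanode.* configuration value is invalid
      // (the three cases quoted above).
      checkStorageLocations();
    } catch (DiskErrorException e) {
      // Written for genuine disk faults (retry, mark the volume failed, ...),
      // but it also runs for the misconfiguration case, where no disk
      // was ever touched.
      handleBadDisk(e);
    }
  }

  // Placeholders standing in for the real checking and recovery logic.
  private void checkStorageLocations() throws DiskErrorException { }
  private void handleBadDisk(DiskErrorException e) { }
}
{code}
One possible direction (only a suggestion) would be to throw a configuration-oriented exception such as {{org.apache.hadoop.HadoopIllegalArgumentException}} for the validation failures, so callers can distinguish them from real disk errors.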

  was:
Dear HDFS developers, we are developing a tool to detect exception-related bugs in Java.
Our prototype has spotted the following three {{throw}} statements whose exception class
and error message seem to indicate different error conditions.

 

Version: Hadoop-3.1.2

File: HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/StorageLocationChecker.java

Line: 96-98, 110-113, and 173-176
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_DISK_CHECK_TIMEOUT_KEY + " - "
    + maxAllowedTimeForCheckMs + " (should be > 0)");{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
    + maxVolumeFailuresTolerated + " "
    + DataNode.MAX_VOLUME_FAILURES_TOLERATED_MSG);{code}
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY + " - "
    + maxVolumeFailuresTolerated + ". Value configured is >= "
    + "to the number of configured volumes (" + dataDirs.size() + ").");{code}
 

A {{DiskErrorException}} means an error has occurred when the process is interacting with
the disk, e.g., in {{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have the
following code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " + dir.toString());{code}
However, the error messages of the first three exceptions indicate that the {{StorageLocationChecker}} is
configured incorrectly, which means there is nothing wrong with the disk (yet). Will this
mismatch be a problem? For example, the callers trying to handle other {{DiskErrorException}} may
accidentally (and incorrectly) handle the configuration error.


> StorageLocationChecker methods throw DiskErrorExceptions when the configuration has wrong values
> ------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14468
>                 URL: https://issues.apache.org/jira/browse/HDFS-14468
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: eBugs in Cloud Systems
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

