hadoop-hdfs-issues mailing list archives

From "eBugs in Cloud Systems (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-14469) FsDatasetImpl() throws a DiskErrorException when the configuration has wrong values
Date Mon, 06 May 2019 15:37:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

eBugs in Cloud Systems updated HDFS-14469:
------------------------------------------
    Description: 
Dear HDFS developers, we are developing a tool to detect exception-related bugs in Java.
Our prototype has spotted the following {{throw}} statement whose exception class and error
message seem to indicate different error conditions.

 

Version: Hadoop-3.1.2

File: HADOOP-ROOT/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java

Line: 294-297
{code:java}
throw new DiskErrorException("Invalid value configured for "
    + "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated
    + ". Value configured is either less than maxVolumeFailureLimit or greater than "
    + "to the number of configured volumes (" + volsConfigured + ").");{code}
 

A {{DiskErrorException}} means that an error occurred while the process was interacting with
the disk. For example, in {{org.apache.hadoop.util.DiskChecker.checkDirInternal()}} we have
the following code (lines 97-98):
{code:java}
throw new DiskErrorException("Cannot create directory: " + dir.toString());{code}
However, the error message of the first exception indicates that {{dfs.datanode.failed.volumes.tolerated}} is
configured incorrectly, i.e., nothing is wrong with the disk (yet). This mismatch could be a
problem: for example, callers that intend to handle other kinds of {{DiskErrorException}} may
accidentally (and incorrectly) handle this configuration error as well.
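
To make the risk concrete, here is a minimal, self-contained sketch of the ambiguity. The class name, the simplified range check, and the handler below are illustrative stand-ins, not the actual {{FsDatasetImpl}} control flow: because the configuration check and genuine disk faults both surface as {{DiskErrorException}}, a handler written for disk faults also fires on the misconfiguration.
{code:java}
import org.apache.hadoop.util.DiskChecker.DiskErrorException;

public class DiskErrorHandlingSketch {
  // Illustrative stand-in for the constructor-time check in FsDatasetImpl.
  static void initDataset(int volFailuresTolerated, int volsConfigured)
      throws DiskErrorException {
    if (volFailuresTolerated < 0 || volFailuresTolerated >= volsConfigured) {
      // A configuration error, but reported as a disk error.
      throw new DiskErrorException("Invalid value configured for "
          + "dfs.datanode.failed.volumes.tolerated - " + volFailuresTolerated);
    }
  }

  public static void main(String[] args) {
    try {
      initDataset(5, 3); // bad configuration, not a disk fault
    } catch (DiskErrorException e) {
      // This handler was written for genuine disk faults; it now also runs
      // for the misconfiguration, masking the real cause.
      System.err.println("Treating as a disk failure: " + e.getMessage());
    }
  }
}
{code}
Throwing a configuration-specific type instead, e.g. {{org.apache.hadoop.HadoopIllegalArgumentException}} or a plain {{IllegalArgumentException}}, would let callers tell the two conditions apart; the sketch above only illustrates the ambiguity and is not a proposed patch.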


> FsDatasetImpl() throws a DiskErrorException when the configuration has wrong values
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-14469
>                 URL: https://issues.apache.org/jira/browse/HDFS-14469
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: eBugs in Cloud Systems
>            Priority: Minor



