hadoop-hdfs-issues mailing list archives

From "Xiao Chen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-13511) Provide specialized exception when block length cannot be obtained
Date Thu, 31 May 2018 06:46:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496181#comment-16496181 ]

Xiao Chen commented on HDFS-13511:

Thanks [~yuzhihong@gmail.com] for creating the jira, and [~gabor.bota] for working on this.

Some comments:
 - For downstream projects to use this, the [compat guideline|http://hadoop.apache.org/docs/r3.0.0/hadoop-project-dist/hadoop-common/Compatibility.html]
requires it to be public. I suggest Public + Unstable for the initial version.
 - {{LocatedBlock}} is Private, so let's not expose it via this exception. We can construct
a string message on the fly instead of caching the LocatedBlock object; that way we also
don't have to worry about whether {{LocatedBlock}} is mutable. See {{ReplicaNotFoundException}}
for an example.
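To illustrate, here is a minimal, self-contained sketch of the suggested shape (the class name comes from this jira's discussion; the message text mirrors the existing {{DFSInputStream}} message; in Hadoop the class would carry the {{@InterfaceAudience.Public}} / {{@InterfaceStability.Unstable}} annotations and take a {{LocatedBlock}}, both omitted here to keep the sketch compilable on its own):

```java
import java.io.IOException;

public class ExceptionSketch {
  // Sketch only: in Hadoop this would be annotated Public + Unstable per the
  // compat guideline and would accept a LocatedBlock parameter.
  static class CannotObtainBlockLengthException extends IOException {
    private static final long serialVersionUID = 1L;

    // Render the block into the message eagerly instead of caching the
    // (Private, possibly mutable) LocatedBlock object as a field.
    CannotObtainBlockLengthException(Object locatedBlock) {
      super("Cannot obtain block length for " + locatedBlock);
    }
  }

  public static void main(String[] args) {
    IOException e = new CannotObtainBlockLengthException("blk_1073741825_1001");
    System.out.println(e.getMessage());
    // prints: Cannot obtain block length for blk_1073741825_1001
  }
}
```

Because only the rendered string is stored, the exception never leaks the Private type through its public surface.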

> Provide specialized exception when block length cannot be obtained
> ------------------------------------------------------------------
>                 Key: HDFS-13511
>                 URL: https://issues.apache.org/jira/browse/HDFS-13511
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Gabor Bota
>            Priority: Major
>         Attachments: HDFS-13511.001.patch
> In a downstream project, I saw the following code:
> {code}
>         FSDataInputStream inputStream = hdfs.open(new Path(path));
> ...
>         if (options.getRecoverFailedOpen() && dfs != null && e.getMessage().toLowerCase()
>             .startsWith("cannot obtain block length for")) {
> {code}
> The above tightly depends on the following in DFSInputStream#readBlockLength
> {code}
>     throw new IOException("Cannot obtain block length for " + locatedblock);
> {code}
> The check based on string matching is brittle in production deployments.
> After discussing with [~stevel@apache.org], a better approach is to introduce a specialized
> IOException, e.g. CannotObtainBlockLengthException, so that downstream projects don't have
> to rely on string matching.
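A hypothetical sketch of how the downstream recovery check would change: a typed {{instanceof}} (or a dedicated catch clause) replaces the {{startsWith("cannot obtain block length for")}} message match. The exception class here is a stand-in for the proposed one, included only so the example compiles on its own.

```java
import java.io.IOException;

public class DownstreamSketch {
  // Stand-in for the proposed CannotObtainBlockLengthException.
  static class CannotObtainBlockLengthException extends IOException {
    CannotObtainBlockLengthException(String block) {
      super("Cannot obtain block length for " + block);
    }
  }

  // Typed check replaces the brittle
  // e.getMessage().toLowerCase().startsWith("cannot obtain block length for")
  static boolean shouldRecover(IOException e) {
    return e instanceof CannotObtainBlockLengthException;
  }

  public static void main(String[] args) {
    System.out.println(shouldRecover(new CannotObtainBlockLengthException("blk_1")));
    // prints: true
    System.out.println(shouldRecover(new IOException("some other failure")));
    // prints: false
  }
}
```

The typed check stays correct even if the message wording is later reworded or localized, which is exactly where the string match breaks.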

This message was sent by Atlassian JIRA
