hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
Date Fri, 15 May 2015 17:09:01 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545800#comment-14545800 ]

Chris Nauroth commented on HDFS-8332:
-------------------------------------

This is very strange.  It appears that this only "worked" because the RPC proxy remains operable
even after {{RPC#stopProxy}} is called inside {{DFSClient#closeConnectionToNamenode}}.  That
is not what I would have expected.  I thought that by calling {{checkOpen}} consistently, this
patch merely changed an existing failure into a more descriptive error.
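
For reference, the guard pattern at issue is roughly this (a minimal, self-contained sketch; the
field and method names are assumptions modeled on {{DFSClient}}, not the actual code):

{code}
import java.io.IOException;

// Minimal sketch of the checkOpen guard pattern; names are assumptions.
class ClientSketch {
  private volatile boolean clientRunning = true;

  private void checkOpen() throws IOException {
    if (!clientRunning) {
      throw new IOException("Filesystem closed");
    }
  }

  public void close() {
    // Mark the client closed; the real client would also stop the RPC
    // proxy here.
    clientRunning = false;
  }

  // With the guard in place, calling a closed client fails fast with a
  // descriptive error instead of depending on the stopped RPC proxy.
  public void listCacheDirectives() throws IOException {
    checkOpen();
    // ... issue the RPC to the NameNode ...
  }
}
{code}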

This is going to be a gray area for compatibility.  Code that uses a {{FileSystem}} after
closing it is incorrect code, and many operations already fail fast.  Making this change may
be within the letter of the compatibility policy, but there is an argument that callers could
be depending on the existing bug.
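
For example, a caller like the following (hypothetical code) happens to keep working today for
the cache-listing calls and would start failing once {{checkOpen}} is enforced:

{code}
// Hypothetical caller that depends on the buggy behavior: it keeps
// using the filesystem after closing it.
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
dfs.close();
// Today this call still succeeds; with the patch it throws
// java.io.IOException: Filesystem closed.
RemoteIterator<CacheDirectiveEntry> it = dfs.listCacheDirectives(null);
{code}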

In this kind of situation, I like to consider whether the risks outweigh the benefits.  This
change isn't an absolute requirement to fix a critical bug or ship a new feature.  Considering
that, I think a conservative approach would be to re-target this patch to trunk/3.0.0 and revert
it from branch-2.  We can set the incompatible flag and enter a release note for 3.0.0 stating
that callers who were dependent on the buggy behavior must fix their code when upgrading.
What do others think?

Also, I'd like to suggest that we change pre-commit to trigger the hadoop-hdfs-httpfs tests
automatically for all hadoop-hdfs patches.  We've seen problems like this in the past:
hadoop-hdfs-httpfs gets patched so infrequently that it's easy to miss when a hadoop-hdfs
change introduces a test failure there.  As a practical matter, we might not be able to add
those tests until the current HDFS test runs are optimized.
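
In the meantime, anyone touching hadoop-hdfs can run the httpfs tests by hand with something like:

{code}
mvn test -pl hadoop-hdfs-project/hadoop-hdfs-httpfs
{code}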

> DFS client API calls should check filesystem closed
> ---------------------------------------------------
>
>                 Key: HDFS-8332
>                 URL: https://issues.apache.org/jira/browse/HDFS-8332
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch, HDFS-8332-002-Branch-2.patch, HDFS-8332-002.patch, HDFS-8332.001.branch-2.patch
>
>
> I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can be called
> even after the filesystem is closed.  Instead, these calls should do {{checkOpen}} and throw:
> {code}
> java.io.IOException: Filesystem closed
> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
> {code}


