Date: Fri, 15 May 2015 17:09:01 +0000 (UTC)
From: "Chris Nauroth (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed

    [ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545800#comment-14545800 ]

Chris Nauroth commented on HDFS-8332:
-------------------------------------

This is very strange. It appears that this only "worked" because the RPC proxy is still operable even after calling {{RPC#stopProxy}} inside {{DFSClient#closeConnectionToNamenode}}. This is not what I would have expected. I thought that this patch, by calling {{checkOpen}} consistently, merely changed an existing failure to give a more descriptive error.

This is going to be a gray area for compatibility. Code that uses a {{FileSystem}} after closing it is incorrect code. Many operations already fail fast. We might be within the letter of the law of the compatibility policy by making this change, but there is an argument that callers could be dependent on the existing bug.

In this kind of situation, I like to consider whether the risks outweigh the benefits. This change isn't an absolute requirement to fix a critical bug or ship a new feature. Considering that, I think a conservative approach would be to re-target this patch to trunk/3.0.0 and revert it from branch-2. We can set the incompatible flag and enter a release note for 3.0.0 stating that callers who were dependent on the buggy behavior must fix their code when upgrading. What do others think of this?

Also, I'd like to suggest that we change pre-commit to trigger hadoop-hdfs-httpfs tests automatically for all hadoop-hdfs patches. We've seen problems like this in the past. hadoop-hdfs-httpfs gets patched so infrequently that it's easy to miss when a hadoop-hdfs change introduces a test failure there. As a practical matter, we might not be able to add those tests until the current HDFS test runs get optimized.
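For reference, here is a minimal sketch of the {{checkOpen}} guard pattern in question. This is a hypothetical, simplified class, not the real {{DFSClient}} code; only the flag-and-throw shape and the "Filesystem closed" message are taken from the stack trace quoted in the issue description below.

{code}
// Minimal sketch of the fail-fast guard pattern being discussed.
// Hypothetical, simplified class; the real DFSClient logic differs,
// but the flag-and-throw shape matches the quoted stack trace.
import java.io.IOException;

class DfsClientSketch {
  private volatile boolean clientRunning = true;

  // Under the patch, called at the top of every client API method.
  void checkOpen() throws IOException {
    if (!clientRunning) {
      throw new IOException("Filesystem closed");
    }
  }

  void close() {
    clientRunning = false;
    // The real client also calls RPC.stopProxy here, which, per the
    // observation above, does not actually render the proxy inoperable.
  }

  // Example API call guarded by checkOpen, as the patch proposes.
  void listCachePools() throws IOException {
    checkOpen(); // fail fast instead of reusing a "stopped" proxy
    // ... actual RPC call would go here ...
  }
}
{code}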
> DFS client API calls should check filesystem closed
> ---------------------------------------------------
>
>                 Key: HDFS-8332
>                 URL: https://issues.apache.org/jira/browse/HDFS-8332
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch, HDFS-8332-002-Branch-2.patch, HDFS-8332-002.patch, HDFS-8332.001.branch-2.patch
>
>
> I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can be called even after the filesystem is closed. Instead, these calls should do {{checkOpen}} and throw:
> {code}
> java.io.IOException: Filesystem closed
>     at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
> {code}
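For illustration (not part of the original report), here is a sketch of the caller pattern whose behavior changes under this patch, assuming {{fs.defaultFS}} points at an HDFS cluster so the cast holds:

{code}
// Sketch of the incorrect usage pattern at issue: calling cache APIs on a
// closed filesystem. Before the patch this silently "worked"; afterwards
// it fails fast with java.io.IOException: Filesystem closed via checkOpen.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ClosedFsCaller {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumes fs.defaultFS is an hdfs:// URI so this cast succeeds.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    dfs.close();
    // Incorrect code: the filesystem is already closed. With the patch,
    // this now throws "Filesystem closed" instead of appearing to work.
    dfs.listCachePools().hasNext();
  }
}
{code}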