hadoop-hdfs-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12217) HDFS snapshots doesn't capture all open files when one of the open files is deleted
Date Sat, 29 Jul 2017 14:53:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106143#comment-16106143 ]

Yongjun Zhang commented on HDFS-12217:
--------------------------------------

Hi [~manojg],

{quote}
LeaseManager#getINodeWithLeases already logs the warning message with full exception and stack
trace.
{quote}
The logging you mentioned above happens on the server side and describes the cause of the failure.
What I hope to see is the exception chain on the client side when SnapshotException is thrown,
so the user can immediately see, in one place, what caused the exception. Collecting the server-side
log and matching it to the client symptom is a support effort that I hope we can avoid if possible.
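
To make the chaining concrete, the usual Java pattern is sketched below; {{checkOpenFiles()}} and the message text are made-up stand-ins for whatever the real call and wording end up being:
{code}
  void createSnapshotInternal() throws IOException {
    try {
      checkOpenFiles();   // hypothetical server-side call that fails
    } catch (IOException cause) {
      // Chaining the cause means the single stack trace the client sees
      // ends with "Caused by: ..." showing the original failure.
      throw new IOException("Failed to capture open files for the snapshot", cause);
    }
  }
{code}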


Currently SnapshotException is:
{code}
public class SnapshotException extends IOException {
  private static final long serialVersionUID = 1L;

  public SnapshotException(final String message) {
    super(message);
  }

  public SnapshotException(final Throwable cause) {
    super(cause);
  }
}
{code}
I examined all the places that call {{SnapshotException(final String message)}}; none of them
is inside an exception catch block. So the patch here is the first time we throw
SnapshotException from inside a catch block. Interestingly, I don't see any place
using {{SnapshotException(final Throwable cause)}}. So it seems that introducing a constructor
{{public SnapshotException(String message, Throwable cause)}} and using it in this patch will not
introduce any inconsistency, because it will be the first use of it.
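
Concretely, the addition I have in mind would look roughly like the sketch below (not the final patch; the exact wording at the call site is up to you):
{code}
  /**
   * Proposed: keep the descriptive message and chain the underlying cause,
   * mirroring the standard IOException(String, Throwable) constructor.
   */
  public SnapshotException(final String message, final Throwable cause) {
    super(message, cause);
  }
{code}
The catch block introduced by the patch could then do something like {{throw new SnapshotException("Failed to capture open files for the snapshot", e);}}, so the message and the original cause travel to the client together.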

That's what I'd prefer. However, if you would rather create a new jira to add the new API and
change the single use introduced by the patch here, that's OK with me too.

Thanks.




> HDFS snapshots doesn't capture all open files when one of the open files is deleted
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-12217
>                 URL: https://issues.apache.org/jira/browse/HDFS-12217
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: snapshots
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Manoj Govindassamy
>            Assignee: Manoj Govindassamy
>         Attachments: HDFS-12217.01.patch, HDFS-12217.02.patch, HDFS-12217.03.patch
>
>
> With the fix for HDFS-11402, HDFS Snapshots can additionally capture all the open files. Just
> like all other files, these open files in the snapshots will remain immutable. But sometimes
> it is found that snapshots fail to capture all the open files in the system.
> Under the following conditions, LeaseManager will fail to find the INode corresponding to an
> active lease:
> * a file is opened for writing (LeaseManager allots a lease), and
> * the same file is deleted while it is still open for writing and has an active lease, and
> * the same file is not referenced in any other Snapshots/Trash.
> {{INode[] LeaseManager#getINodesWithLease()}} can thus return null for a few leases, thereby
> causing the caller to trip over and not return all the open files needed by the snapshot
> manager.
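
For readers following along, here is a self-contained toy sketch of the failure mode described above (all names are made up; this is not HDFS code): a caller that assumes every active lease still resolves to an INode hits a NullPointerException as soon as one open-for-write file has been deleted.
{code}
import java.util.Arrays;
import java.util.List;

public class OpenFileCaptureSketch {
  // Toy lookup: returns null when the file behind a lease has been deleted
  // and is not referenced by any other snapshot or trash entry.
  static String resolveInode(String leasePath) {
    return "/open/deleted-file".equals(leasePath) ? null : "inode:" + leasePath;
  }

  public static void main(String[] args) {
    List<String> activeLeases =
        Arrays.asList("/open/file-a", "/open/deleted-file", "/open/file-b");
    for (String path : activeLeases) {
      String inode = resolveInode(path);
      // Dereferencing without a null check is the "trip over" in the report:
      System.out.println("captured " + inode.length() + " -> " + inode);  // NPE on the deleted file
    }
  }
}
{code}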


