hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3627) HDFS allows deletion of file while it is still open
Date Wed, 25 Jun 2008 18:38:45 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12608144#action_12608144 ]

Doug Cutting commented on HADOOP-3627:
--------------------------------------

We seek unix-like behavior when feasible, and the unix behavior here would be that the write
would not fail.  So permitting deletion is not the bug.  The error in the writer is perhaps
a bug, but fully supporting the unix notion of unlinked files that disappear when the last
reader or writer is closed might prove difficult, and is not the subject of this issue anyway.
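
For illustration, a minimal local-filesystem sketch of those unix semantics, assuming a
POSIX platform and a hypothetical path (on Windows the delete itself would fail while the
stream is open):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class UnlinkWhileOpen {
        public static void main(String[] args) throws IOException {
            // Hypothetical local path, for illustration only.
            try (FileOutputStream out = new FileOutputStream("/tmp/scratch.dat")) {
                out.write("before unlink\n".getBytes());
                // Unlink the file while the stream is still open.
                Files.delete(Paths.get("/tmp/scratch.dat"));
                // On a POSIX filesystem this write still succeeds: the data
                // remains reachable through the open descriptor, and is only
                // reclaimed when the last reader or writer closes.
                out.write("after unlink\n".getBytes());
            } // stream closed here; the file's storage is now reclaimed
        }
    }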

> I'd propose that we make this won't-fix.

+1


> HDFS allows deletion of file while it is still open
> ---------------------------------------------------
>
>                 Key: HADOOP-3627
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3627
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.0
>            Reporter: Lohit Vijayarenu
>
> This was a single-node cluster, so my DFSClient ran on the same machine. In one terminal
> I was writing to an HDFS file, while in another terminal I deleted the same file. The
> deletion succeeded, and the write client failed. If the write was still in progress, the
> next block commit would result in an exception saying the block does not belong to any
> file. If the write was about to close, then we get an exception completing the file
> because getBlocks fails.
> Should we allow deletion of an open file? Even if we do, should the write fail?
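
Not part of the original report, but the reported sequence can be sketched against the
FileSystem client API roughly as follows (the path and write sizes are illustrative, and
the exact exception text varies by version):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DeleteWhileWriting {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path("/tmp/open-file.dat"); // illustrative path

            // "First terminal": open the file and start writing.
            FSDataOutputStream out = fs.create(p);
            out.write(new byte[64 * 1024 * 1024]); // fill at least one block

            // "Second terminal": delete the same file while it is open.
            fs.delete(p, false);

            // The writer now fails: a later block allocation reports that the
            // block does not belong to any file, or close() fails completing
            // the file (the getBlocks failure described above).
            out.write(new byte[64 * 1024 * 1024]);
            out.close(); // expected to throw in this scenario
        }
    }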

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

