[ https://issues.apache.org/jira/browse/HDFS-2828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193185#comment-13193185 ]
Daryn Sharp commented on HDFS-2828:
-----------------------------------
@Kihwal
I think that case can actually be handled. {{IOUtils.copyBytes}}, even when passed the close-stream
flag, does not appear to reliably close the streams... The {{try}} block that deletes the
temp file can also close the stream if it's open.
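Roughly something like this ({{copyToTemp}} and the variable names are illustrative, not actual {{FsShell}} code):
{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Copy a stream to a temp file such that, on failure, the stream is closed
// *before* the temp file is deleted, so close() never runs after the delete
// has killed the lease.
static void copyToTemp(FileSystem fs, InputStream in, Path tmpFile,
                       Configuration conf) throws IOException {
  FSDataOutputStream out = fs.create(tmpFile);
  boolean copied = false;
  try {
    IOUtils.copyBytes(in, out, conf, false); // close=false: we close explicitly below
    out.close();
    copied = true;
  } finally {
    if (!copied) {
      IOUtils.closeStream(out);  // release the lease first...
      fs.delete(tmpFile, false); // ...then remove the partial copy
    }
  }
}
{code}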
@Todd
I think this is actually an issue in {{FileSystem}}, since {{FsShell}} doesn't get a chance to clean up
when SIGINT blows it out of the water. The FS shutdown hook is going to delete all temp files
(i.e. the copy-in-progress file), and then call {{DFSClient#close}}, which will close the stream
to the temp file after it has been deleted. Trying to do signal handling in Java seems a
bit messy, so maybe the shutdown hook behavior could be modified.
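For instance, the cleanup order in {{DistributedFileSystem#close}} could be flipped so streams are closed before the delete-on-exit paths are processed. Only a sketch; {{closeAllOutputStreams}} is a hypothetical {{DFSClient}} helper, not an existing method:
{code:java}
// Inside DistributedFileSystem (sketch): close open streams, releasing
// their leases, before delete-on-exit temp files are removed.
@Override
public void close() throws IOException {
  try {
    dfs.closeAllOutputStreams(); // hypothetical helper: close streams first
  } finally {
    super.close(); // FileSystem#close processes deleteOnExit (temp files)
    dfs.close();   // then shut down the client
  }
}
{code}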
> Interrupting hadoop fs -put from the command line causes a LeaseExpiredException
> --------------------------------------------------------------------------------
>
> Key: HDFS-2828
> URL: https://issues.apache.org/jira/browse/HDFS-2828
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 0.23.0, 0.24.0
> Reporter: Todd Lipcon
>
> If you run "hadoop fs -put - foo", write a few lines, then ^C it from the shell, about
half the time you will get a LeaseExpiredException. It seems like the shell is first calling
{{delete()}} on the file, then calling {{close()}} on the stream. The {{close}} call fails
since the {{delete}} call kills the lease. I saw this on trunk but my guess is that it affects
23 also.
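The quoted failure sequence boils down to a minimal repro along these lines (path and class name are illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LeaseRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/lease-repro");
    FSDataOutputStream out = fs.create(p);
    out.writeBytes("a few lines\n");
    fs.delete(p, false); // delete while the stream is still open: kills the lease
    out.close();         // fails with LeaseExpiredException
  }
}
{code}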