hadoop-common-dev mailing list archives

From "Tom White (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-865) Files written to S3 but never closed can't be deleted
Date Mon, 08 Jan 2007 22:05:27 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463140 ]

Tom White commented on HADOOP-865:
----------------------------------

I think I've spotted the problem: the deleteRaw method throws an IOException if the inode
doesn't exist, unlike the DFS and local filesystem implementations. I'll produce a patch -
thanks for the offer to test it.

Tom
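
(A rough sketch of the kind of change described above, for illustration only - the store and
inode helper names here are assumed from the S3 filesystem code of the time and this is not
the actual patch. The idea is for S3FileSystem.deleteRaw to report "not found" the way the
DFS and local implementations do, instead of throwing:)

    public boolean deleteRaw(Path path) throws IOException {
      Path absolutePath = makeAbsolute(path);
      INode inode = store.retrieveINode(absolutePath); // assumed lookup; null if no inode stored
      if (inode == null) {
        return false;                                  // match DFS/Local instead of throwing IOException
      }
      if (inode.isFile()) {
        store.deleteINode(absolutePath);
        for (Block block : inode.getBlocks()) {
          store.deleteBlock(block);                    // also remove any blocks that did get written
        }
        return true;
      }
      // directory handling elided in this sketch
      return true;
    }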

> Files written to S3 but never closed can't be deleted
> -----------------------------------------------------
>
>                 Key: HADOOP-865
>                 URL: https://issues.apache.org/jira/browse/HADOOP-865
>             Project: Hadoop
>          Issue Type: Bug
>          Components: fs
>            Reporter: Bryan Pendleton
>
> I've been playing with the S3 integration. My first attempts to use it are actually as
> a drop-in replacement for a backup job, streaming data offsite by piping the backup job output
> to a "hadoop dfs -put - targetfile".
> If enough errors occur posting to S3 (this happened easily last Thursday, during an S3
> growth issue), the write can eventually fail. At that point, there are both blocks and a partial
> INode written into S3. Doing a "hadoop dfs -ls filename" shows the file, it has a non-zero
> size, etc. However, running "hadoop dfs -rm filename" on such a partially written file results
> in the response "rm: No such file or directory."
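
(For illustration, a minimal sketch of the reported symptom against the FileSystem API of that
era; the path argument and the configuration are placeholders, and it assumes fs.default.name
points at an s3:// filesystem:)

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3PartialFileDelete {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();   // assumed to be configured for an s3:// filesystem
        FileSystem fs = FileSystem.get(conf);
        Path partial = new Path(args[0]);           // file whose write failed before close()

        // Listing works: the partial inode is visible and reports a non-zero size.
        System.out.println("exists: " + fs.exists(partial));

        // Deleting does not: the S3 implementation fails, and the shell
        // reports "rm: No such file or directory."
        fs.delete(partial);
      }
    }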

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
