hadoop-common-dev mailing list archives

From "Sameer Paranjpye (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2815) Allowing processes to cleanup dfs on shutdown
Date Wed, 05 Mar 2008 09:41:41 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12575283#action_12575283 ]

Sameer Paranjpye commented on HADOOP-2815:
------------------------------------------

> The way pig does it now is
>     * Client creates temp dir

This doesn't strike me as a request for a filesystem feature. What's described here is a sequence
of jobs where the output of a job is the input of its successor in the sequence. In such
a scenario, Pig would like a job's input to be deleted when the job succeeds. Currently this cleanup
is done by the Pig client. Maybe it makes sense to implement this as a configuration parameter
that tells a job to delete its input when it completes successfully.
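A minimal sketch of what that could look like, assuming the 0.16-era JobConf/JobClient API; the
property name "pig.delete.input.on.success" is hypothetical, not an existing configuration key:

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class DeleteInputOnSuccess {
        public static void runAndCleanup(JobConf conf) throws IOException {
            // Hypothetical switch: not a real Hadoop key, just the shape of
            // the configuration parameter proposed above.
            boolean deleteInput = conf.getBoolean("pig.delete.input.on.success", false);

            RunningJob job = JobClient.runJob(conf); // blocks until the job finishes

            if (deleteInput && job.isSuccessful()) {
                FileSystem fs = FileSystem.get(conf);
                // Remove each input path now that its consumer has succeeded.
                for (Path in : conf.getInputPaths()) {
                    fs.delete(in);
                }
            }
        }
    }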

I propose that this issue be closed as a 'won't fix' and the appropriate issue filed against
map/reduce.



> Allowing processes to cleanup dfs on shutdown
> ---------------------------------------------
>
>                 Key: HADOOP-2815
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2815
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Olga Natkovich
>            Assignee: dhruba borthakur
>             Fix For: 0.16.1
>
>
> Pig creates temp files that it wants removed at the end of processing. The code that
> removes the temp files lives in a shutdown hook so that they get removed both under
> normal shutdown and when the process is killed.
> The problem we are seeing is that, by the time the hook runs, the DFS might already
> be closed; the delete then fails, leaving temp files behind. Since we have no control over
> the shutdown order, we have no way to make sure that the files get removed.
> One way to solve this issue is to be able to mark the files as temp files so that Hadoop
> can remove them during its own shutdown.
> The stack trace I am seeing is
>         at org.apache.hadoop.dfs.DFSClient.checkOpen(DFSClient.java:158)
>         at org.apache.hadoop.dfs.DFSClient.delete(DFSClient.java:417)
>         at org.apache.hadoop.dfs.DistributedFileSystem.delete(DistributedFileSystem.java:144)
>         at org.apache.pig.backend.hadoop.datastorage.HPath.delete(HPath.java:96)
>         at org.apache.pig.impl.io.FileLocalizer$1.run(FileLocalizer.java:275)
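For reference, the failing pattern described above looks roughly like this; a sketch only, with a
placeholder temp path, showing why a client-side hook races with the FileSystem's own shutdown
handling:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TempFileCleanup {
        public static void installHook(Configuration conf) throws IOException {
            final FileSystem fs = FileSystem.get(conf);
            final Path tmpDir = new Path("/tmp/pig-XXXX"); // placeholder, not a real path

            // The JVM gives no ordering guarantee between this hook and the
            // hook the FileSystem layer registers to close its clients, so
            // delete() can run against an already-closed DFSClient.
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    try {
                        fs.delete(tmpDir); // may fail once the DFS client is closed
                    } catch (IOException e) {
                        // nothing we can do this late; temp files are left behind
                    }
                }
            });
        }
    }

And a sketch of the kind of API the description asks for, loosely modeled on
java.io.File.deleteOnExit(); the FileSystem.deleteOnExit method shown here is an assumption about
how such a feature might surface, not a call that exists at the time of this report:

    // Hypothetical usage: mark the path as temporary so the FileSystem
    // deletes it as part of its own shutdown, before the client closes.
    FileSystem fs = FileSystem.get(conf);
    fs.deleteOnExit(new Path("/tmp/pig-XXXX")); // assumed API, placeholder path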

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

