hadoop-hdfs-issues mailing list archives

From "Harsh J (Resolved) (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-67) /tmp/hadoop-${user}/dfs/tmp/tmp/client-${long}.tmp is not cleanup correctly
Date Thu, 29 Dec 2011 14:35:30 GMT

     [ https://issues.apache.org/jira/browse/HDFS-67?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved HDFS-67.
-------------------------

    Resolution: Not A Problem

Not a problem after Dhruba's HDFS-1707.
                
> /tmp/hadoop-${user}/dfs/tmp/tmp/client-${long}.tmp is not cleanup correctly
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-67
>                 URL: https://issues.apache.org/jira/browse/HDFS-67
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Benjamin Francisoud
>         Attachments: patch-DFSClient-HADOOP-2561.diff
>
>
> Directory "/tmp/hadoop-${user}/dfs/tmp/tmp" is being filled with these kinds of files:
> client-226966559287638337420857.tmp
> I tried to look at the code and found:
> h3. DFSClient.java
> src/java/org/apache/hadoop/dfs/DFSClient.java
> {code:java}
> private void closeBackupStream() throws IOException {...}
>
> /* Similar to closeBackupStream(). Theoretically deleting a file
>  * twice could result in deleting a file that we should not.
>  */
> private void deleteBackupFile() {...}
>
> private File newBackupFile() throws IOException {
>   String name = "tmp" + File.separator +
>                 "client-" + Math.abs(r.nextLong());
>   File result = dirAllocator.createTmpFileForWrite(name,
>                                                    2 * blockSize,
>                                                    conf);
>   return result;
> }
> {code}
> h3. LocalDirAllocator
> src/java/org/apache/hadoop/fs/LocalDirAllocator.java#AllocatorPerContext.java
> {code:java}
> /** Creates a file on the local FS. Pass size as -1 if not known a priori. We
>  *  round-robin over the set of disks (via the configured dirs) and return
>  *  a file on the first path which has enough space. The file is guaranteed
>  *  to go away when the JVM exits.
>  */
> public File createTmpFileForWrite(String pathStr, long size,
>                                   Configuration conf) throws IOException {
>   // find an appropriate directory
>   Path path = getLocalPathForWrite(pathStr, size, conf);
>   File dir = new File(path.getParent().toUri().getPath());
>   String prefix = path.getName();
>   // create a temp file on this directory
>   File result = File.createTempFile(prefix, null, dir);
>   result.deleteOnExit();
>   return result;
> }
> {code}
> First, it seems a bit of a mess here: I don't know whether it's DFSClient.java#deleteBackupFile() or LocalDirAllocator#createTmpFileForWrite() (via deleteOnExit()) that does the delete ... or both. Why not keep it DRY and delete the file only once?
> But the most important point is the deleteOnExit(): it means that if the client JVM never exits, the files are never deleted :(
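
The heart of the report is worth spelling out: File.deleteOnExit() only registers a path for removal at normal JVM shutdown, so a long-lived DFS client that never exits (or that dies abnormally) leaks one client-*.tmp file per backup-file allocation. A minimal editorial sketch of the contrast, in the same spirit as the snippets above (TempScratch and its method names are hypothetical, not Hadoop code):

{code:java}
import java.io.File;
import java.io.IOException;

public class TempScratch {
    // Relying on deleteOnExit(): the file is only removed when the JVM
    // terminates normally. A client that runs for weeks, or that crashes,
    // leaves every such file behind under /tmp.
    static File leakyTempFile(File dir) throws IOException {
        File f = File.createTempFile("client-", null, dir);
        f.deleteOnExit();
        return f;
    }

    // Explicit single-owner cleanup: delete the file as soon as the work
    // that needed it is done, however long the JVM keeps running.
    static void withTempFile(File dir) throws IOException {
        File f = File.createTempFile("client-", null, dir);
        try {
            // ... write and read back the backup data here ...
        } finally {
            if (!f.delete()) {
                f.deleteOnExit(); // fall back only if the immediate delete fails
            }
        }
    }
}
{code}

Deleting the file exactly once, at the point where its owner is finished with it, would address both the double-delete worry in the DFSClient comment and the unbounded growth under /tmp that this issue describes.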

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
