hadoop-common-dev mailing list archives

From "Benjamin Francisoud (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-2561) /tmp/hadoop-${user}/dfs/tmp/tmp/client-${long}.tmp is not cleanup correctly
Date Wed, 09 Jan 2008 16:46:34 GMT
/tmp/hadoop-${user}/dfs/tmp/tmp/client-${long}.tmp is not cleanup correctly
---------------------------------------------------------------------------

                 Key: HADOOP-2561
                 URL: https://issues.apache.org/jira/browse/HADOOP-2561
             Project: Hadoop
          Issue Type: Bug
    Affects Versions: 0.14.0
            Reporter: Benjamin Francisoud


Directory "/tmp/hadoop-${user}/dfs/tmp/tmp" is being filled with these kinds of files: client-226966559287638337420857.tmp

I tried to look at the code and found:
h3. DFSClient.java
src/java/org/apache/hadoop/dfs/DFSClient.java
{code:java}
private void closeBackupStream() throws IOException {...}

/* Similar to closeBackupStream(). Theoretically deleting a file
 * twice could result in deleting a file that we should not.
 */
private void deleteBackupFile() {...}

private File newBackupFile() throws IOException {
  String name = "tmp" + File.separator +
                "client-" + Math.abs(r.nextLong());
  File result = dirAllocator.createTmpFileForWrite(name,
                                                   2 * blockSize,
                                                   conf);
  return result;
}
{code}
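
Just to illustrate why the directory fills up (a standalone sketch, not Hadoop code): every call builds a brand-new name from a fresh random long, so no call ever reuses or overwrites an earlier client-*.tmp file.
{code:java}
import java.io.File;
import java.util.Random;

public class BackupNameDemo {
    public static void main(String[] args) {
        Random r = new Random();
        // Each iteration produces a distinct "tmp/client-<random>" name,
        // the same way newBackupFile() builds its name above.
        for (int i = 0; i < 3; i++) {
            String name = "tmp" + File.separator + "client-" + Math.abs(r.nextLong());
            System.out.println(name);
        }
    }
}
{code}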

h3. LocalDirAllocator
src/java/org/apache/hadoop/fs/LocalDirAllocator.java#AllocatorPerContext.java
{code:java}
/** Creates a file on the local FS. Pass size as -1 if not known apriori. We
 *  round-robin over the set of disks (via the configured dirs) and return
 *  a file on the first path which has enough space. The file is guaranteed
 *  to go away when the JVM exits.
 */
public File createTmpFileForWrite(String pathStr, long size,
        Configuration conf) throws IOException {

  // find an appropriate directory
  Path path = getLocalPathForWrite(pathStr, size, conf);
  File dir = new File(path.getParent().toUri().getPath());
  String prefix = path.getName();

  // create a temp file on this directory
  File result = File.createTempFile(prefix, null, dir);
  result.deleteOnExit();
  return result;
}
{code}
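
The "guaranteed to go away when the JVM exits" part is the whole problem: deleteOnExit() only registers the path with a shutdown hook, and nothing is removed while the process keeps running. A minimal standalone sketch (plain Java, not Hadoop code):
{code:java}
import java.io.File;
import java.io.IOException;

public class DeleteOnExitDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Create a temp file roughly the way createTmpFileForWrite() does.
        File tmp = File.createTempFile("client-", ".tmp");
        tmp.deleteOnExit(); // only registers the path for deletion at JVM shutdown

        Thread.sleep(10000);
        // As long as the JVM is alive, the file is still on disk.
        System.out.println(tmp + " exists: " + tmp.exists()); // prints true

        // The file is removed only when the JVM terminates normally.
    }
}
{code}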


First, it seems a bit of a mess here: I don't know whether it's DFSClient.java#deleteBackupFile()
or LocalDirAllocator#createTmpFileForWrite() (via deleteOnExit()) that gets called ... or both. Why
not keep it DRY and delete the file in one place only?
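
A hypothetical sketch of what I mean by keeping it DRY (made-up helper, not a patch): give the backup file a single owner that both creates and deletes it explicitly, so there is exactly one cleanup path and no need for deleteOnExit() at all.
{code:java}
import java.io.File;
import java.io.IOException;
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;

// Hypothetical helper, not actual Hadoop code: one class owns the
// backup file's lifecycle, so there is exactly one delete path.
class BackupFileHolder {
    private final Random r = new Random();
    private File backupFile;

    File create(LocalDirAllocator dirAllocator, long blockSize, Configuration conf)
            throws IOException {
        String name = "tmp" + File.separator + "client-" + Math.abs(r.nextLong());
        backupFile = dirAllocator.createTmpFileForWrite(name, 2 * blockSize, conf);
        return backupFile;
    }

    void delete() {
        if (backupFile != null) {
            backupFile.delete();  // explicit delete instead of deleteOnExit()
            backupFile = null;    // a second call is a no-op, so "deleting twice" can't happen
        }
    }
}
{code}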

But the most important issue is the deleteOnExit(): it means that if the JVM is never restarted, the
files will never be deleted :(

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

