hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-3382) Memory leak when files are not cleanly closed
Date Tue, 13 May 2008 18:41:55 GMT
Memory leak when files are not cleanly closed

                 Key: HADOOP-3382
                 URL: https://issues.apache.org/jira/browse/HADOOP-3382
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.16.0
            Reporter: Raghu Angadi
            Assignee: Raghu Angadi

{{FSNamesystem.internalReleaseCreate()}} is invoked on files that are open for writing but
not cleanly closed, e.g. when a client invokes {{abandonFileInProgress()}} or when the lease expires.
It deletes the last block if it has a length of zero. The block is removed from the file's INode
but not from {{blocksMap}}, which leaves a reference to the file until the NameNode is restarted.
When this happens, HADOOP-3381 multiplies the amount of memory leaked.
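A minimal sketch of the leak pattern, using simplified hypothetical types rather than the actual FSNamesystem code: the bug is that the release path removes the block from the per-file list but never from the global block registry, so the entry stays reachable until restart.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical simplified model of the leak, not the real HDFS classes.
public class BlocksMapLeakSketch {
    static class Block {
        final long id;
        Block(long id) { this.id = id; }
    }

    // Global registry, standing in for FSNamesystem's blocksMap.
    static final Map<Long, Block> blocksMap = new HashMap<>();
    // Per-file block list, standing in for the file's INode.
    static final List<Block> inodeBlocks = new ArrayList<>();

    static void addBlock(long id) {
        Block b = new Block(id);
        blocksMap.put(id, b);
        inodeBlocks.add(b);
    }

    // Mirrors the buggy release path: the zero-length last block is
    // dropped from the INode only.
    static void buggyReleaseLastBlock() {
        Block last = inodeBlocks.remove(inodeBlocks.size() - 1);
        // BUG: missing blocksMap.remove(last.id) -- the block (and the
        // file it points back to in the real code) stays referenced.
    }

    public static void main(String[] args) {
        addBlock(1L);
        buggyReleaseLastBlock();
        System.out.println("inode blocks: " + inodeBlocks.size());
        System.out.println("blocksMap entries: " + blocksMap.size());
    }
}
```

Running this prints zero INode blocks but one surviving blocksMap entry, which is the stale reference the report describes; the fix is to remove the block from both structures.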

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
