hadoop-common-issues mailing list archives

From "ramkrishna.s.vasudevan (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-7256) Resource leak during failure scenario of closing of resources.
Date Tue, 17 May 2011 12:13:47 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HADOOP-7256:
-------------------------------------------

    Description: 
Problem Statement:
===============
There is a chance of a resource leak when a stream is not closed properly.
Take the case where, after copying data, we close the input stream, then the output stream, and finally the socket.
If an exception occurs while closing the input stream (for example, a runtime exception), the subsequent close operations on the output stream and the socket never run, and those resources leak.
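
A minimal sketch of this leak-prone close sequence, using only java.io and java.net; the class and method names are illustrative, not taken from the Hadoop source:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class LeakyClose {
  static void copyAndClose(InputStream in, OutputStream out, Socket socket) {
    try {
      byte[] buf = new byte[4096];
      for (int n = in.read(buf); n != -1; n = in.read(buf)) {
        out.write(buf, 0, n);
      }
      in.close();      // a RuntimeException thrown here...
      out.close();     // ...skips this close,
      socket.close();  // ...and this one, so both descriptors leak
    } catch (IOException e) {
      // only IOException is handled; a RuntimeException propagates past the closes above
      System.err.println("copy failed: " + e);
    }
  }
}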

Scenario 
=======
During long runs of MapReduce jobs, the copyFromLocalFile() API is called repeatedly.
We observed exceptions in these calls, and the lsof count (open file descriptors) kept rising, indicating a resource leak.
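
A minimal sketch of the call from this scenario; the paths and configuration are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyFromLocalExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // copyFromLocalFile opens local and remote streams internally and closes them when done;
    // a failing close along that path is where the rising descriptor count was observed.
    fs.copyFromLocalFile(new Path("/tmp/local-part-00000"), new Path("/user/hadoop/part-00000"));
  }
}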

Solution:
=======
When performing a close operation on any resource, catch RuntimeException in addition to IOException.

Additionally, there are places where we close a resource inside a catch block.
If that close fails, the exception propagates and we exit the current flow, leaving the remaining resources open.
To avoid this, the close operations should be carried out in a finally block, as sketched below.
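
A sketch of the proposed approach, assuming a Java 7 or later runtime (where Socket implements Closeable); the helper name closeQuietly is illustrative and this is not the actual patch:

import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class SafeClose {
  // Best-effort close: log and swallow both IOException and RuntimeException.
  static void closeQuietly(Closeable c) {
    if (c == null) {
      return;
    }
    try {
      c.close();
    } catch (IOException e) {
      System.err.println("IOException while closing: " + e);
    } catch (RuntimeException e) {
      System.err.println("RuntimeException while closing: " + e);
    }
  }

  static void copyAndClose(InputStream in, OutputStream out, Socket socket) throws IOException {
    try {
      byte[] buf = new byte[4096];
      for (int n = in.read(buf); n != -1; n = in.read(buf)) {
        out.write(buf, 0, n);
      }
    } finally {
      // each resource gets its own close attempt, regardless of earlier failures
      closeQuietly(in);
      closeQuietly(out);
      closeQuietly(socket);
    }
  }
}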

Probable reasons for getting RuntimeExceptions:
=====================================
We may get a RuntimeException from customised Hadoop streams such as FSDataOutputStream.close(), so it is better to handle RuntimeExceptions as well.
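
How a wrapping stream can surface such an exception can be illustrated with plain java.io classes; a minimal, hypothetical sketch (not FSDataOutputStream itself):

import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class WrappedStreamClose {
  // Stand-in for a custom/wrapped stream whose close() fails with an unchecked exception.
  static class FlakyStream extends OutputStream {
    @Override public void write(int b) {
      // no-op: only close() matters for this illustration
    }
    @Override public void close() {
      throw new IllegalStateException("unexpected state during close");
    }
  }

  public static void main(String[] args) throws IOException {
    OutputStream out = new FilterOutputStream(new FlakyStream());
    out.close();  // the outer close() delegates to the inner stream, so the IllegalStateException escapes here
  }
}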

  was:
Problem Statement:
===============
There is a chance of a resource leak when a stream is not closed properly.
Take the case where, after copying data, we close the input stream, then the output stream, and finally the socket.
If an exception occurs while closing the input stream (for example, a runtime exception), the subsequent close operations on the output stream and the socket never run, and those resources leak.

Scenario 
=======
During long runs of MapReduce jobs, the copyFromLocalFile() API is called repeatedly.
We observed exceptions in these calls, and the lsof count (open file descriptors) kept rising, indicating a resource leak.

Solution:
=======
When performing a close operation on any resource, catch RuntimeException in addition to IOException.

Additionally, there are places where we close a resource inside a catch block.
If that close fails, the exception propagates and we exit the current flow, leaving the remaining resources open.
To avoid this, the close operations should be carried out in a finally block.

Probable reasons for getting RuntimeExceptions:
=====================================
We have many wrapped streams for writing and reading, and these wrappers are prone to errors.

> Resource leak during failure scenario of closing of resources. 
> ---------------------------------------------------------------
>
>                 Key: HADOOP-7256
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7256
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.2, 0.21.0
>            Reporter: ramkrishna.s.vasudevan
>            Priority: Minor
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> Problem Statement:
> ===============
> There is a chance of a resource leak when a stream is not closed properly.
> Take the case where, after copying data, we close the input stream, then the output stream, and finally the socket.
> If an exception occurs while closing the input stream (for example, a runtime exception), the subsequent close operations on the output stream and the socket never run, and those resources leak.
> Scenario 
> =======
> During long runs of MapReduce jobs, the copyFromLocalFile() API is called repeatedly.
> We observed exceptions in these calls, and the lsof count (open file descriptors) kept rising, indicating a resource leak.
> Solution:
> =======
> When performing a close operation on any resource, catch RuntimeException in addition to IOException.
> Additionally, there are places where we close a resource inside a catch block.
> If that close fails, the exception propagates and we exit the current flow, leaving the remaining resources open.
> To avoid this, the close operations should be carried out in a finally block.
> Probable reasons for getting RunTimeExceptions:
> =====================================
> We may get a RuntimeException from customised Hadoop streams such as FSDataOutputStream.close(), so it is better to handle RuntimeExceptions as well.
>  

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
