hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5933) Make it harder to accidentally close a shared DFSClient
Date Thu, 28 May 2009 19:01:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714121#action_12714121 ]

Raghu Angadi commented on HADOOP-5933:
--------------------------------------

> If the other thread then asks for a new client it will get one and the cache repopulated,
> but if it has one already, then I get to see a stack trace.

Steve, what is the issue here? I didn't think there was a cache for DFSClients. Can you post
the stack trace you see in your test?

> Make it harder to accidentally close a shared DFSClient
> -------------------------------------------------------
>
>                 Key: HADOOP-5933
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5933
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.21.0
>            Reporter: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-5933.patch
>
>
> Every so often I get stack traces telling me that the DFSClient is closed, usually in
> {{org.apache.hadoop.hdfs.DFSClient.checkOpen()}}. The root cause is usually that one thread
> has closed a shared filesystem client while another thread still has a reference to it. If
> the other thread then asks for a new client it will get one -and the cache repopulated- but
> if it has one already, then I get to see a stack trace.
> It's effectively a race condition between clients in different threads.
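
For illustration only (not from the issue or the attached patch): a minimal sketch of the
race described above, assuming the shared client comes from the FileSystem cache via
{{FileSystem.get(conf)}} and that {{fs.default.name}} points at a running HDFS namenode.
The class and thread names below are made up.

{code:java}
// Illustrative only: two threads share the cached FileSystem instance returned by
// FileSystem.get(conf). If the "closer" thread wins the race, the "reader" thread's
// call fails inside DFSClient.checkOpen() with "IOException: Filesystem closed".
// Assumes fs.default.name points at a running HDFS namenode (otherwise the local
// filesystem is returned and no DFSClient is involved).
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SharedClientRace {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Both threads see the same cached FileSystem, and hence the same DFSClient.
    final FileSystem shared = FileSystem.get(conf);

    Thread closer = new Thread(new Runnable() {
      public void run() {
        try {
          shared.close();                   // thread A is "done" and closes the shared client
        } catch (IOException e) {
          e.printStackTrace();
        }
      }
    });

    Thread reader = new Thread(new Runnable() {
      public void run() {
        try {
          shared.exists(new Path("/tmp"));  // thread B still uses its old reference
        } catch (IOException e) {
          e.printStackTrace();              // "Filesystem closed" if A closed first
        }
      }
    });

    closer.start();
    reader.start();
    closer.join();
    reader.join();
  }
}
{code}

Whichever thread runs first wins the race: if the reader gets in before the close, the call
succeeds; if the closer wins, the reader's old reference fails in {{checkOpen()}}.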

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

