hadoop-hdfs-issues mailing list archives

From "Xiaobing Zhou (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8855) Webhdfs client leaks active NameNode connections
Date Mon, 05 Oct 2015 18:40:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943802#comment-14943802 ]

Xiaobing Zhou commented on HDFS-8855:
-------------------------------------

This issue stems from DataNodeUGIProvider#tokenUGI:
{code}
Token<DelegationTokenIdentifier> token = params.delegationToken();
// A fresh DelegationTokenIdentifier is deserialized on every request.
ByteArrayInputStream buf =
    new ByteArrayInputStream(token.getIdentifier());
DataInputStream in = new DataInputStream(buf);
DelegationTokenIdentifier id = new DelegationTokenIdentifier();
id.readFields(in);
// The UGI is derived from the per-request identifier instance.
UserGroupInformation ugi = id.getUser();
ugi.addToken(token);
return ugi;
{code}

For every request a brand-new UGI is created, even with the HDFS-7597 patch: a new DelegationTokenIdentifier is deserialized per request, and HDFS-7597 only returns the same UGI for the same DelegationTokenIdentifier instance.
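One way to avoid the per-request UGI churn described above is to key the cache on the token identifier's serialized bytes rather than on the DelegationTokenIdentifier object identity. The sketch below is hypothetical (not the actual Hadoop patch); {{Ugi}}, {{TokenKey}}, and {{UgiCacheSketch}} are stand-in names, and the real fix would use Hadoop's own classes:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: cache UGIs by the token identifier *bytes*,
// so two requests carrying the same token share one UGI (and thus
// one set of NameNode connections).
public class UgiCacheSketch {

    // Stand-in for org.apache.hadoop.security.UserGroupInformation.
    static class Ugi {
        final String user;
        Ugi(String user) { this.user = user; }
    }

    // byte[] uses identity-based equals/hashCode, so wrap it in a
    // value-comparable key before using it as a map key.
    static final class TokenKey {
        private final byte[] id;
        TokenKey(byte[] id) { this.id = id.clone(); }
        @Override public boolean equals(Object o) {
            return o instanceof TokenKey && Arrays.equals(id, ((TokenKey) o).id);
        }
        @Override public int hashCode() { return Arrays.hashCode(id); }
    }

    private final Map<TokenKey, Ugi> cache = new ConcurrentHashMap<>();

    // Returns the same Ugi for the same identifier bytes, even though
    // a caller may deserialize a fresh identifier on every request.
    Ugi tokenUgi(byte[] identifierBytes, String user) {
        return cache.computeIfAbsent(new TokenKey(identifierBytes),
                                     k -> new Ugi(user));
    }

    public static void main(String[] args) {
        UgiCacheSketch provider = new UgiCacheSketch();
        byte[] idA = {1, 2, 3};
        byte[] idB = {1, 2, 3};  // same bytes, different array instance
        Ugi first  = provider.tokenUgi(idA, "alice");
        Ugi second = provider.tokenUgi(idB, "alice");
        System.out.println(first == second);  // same cached instance
    }
}
```

With object-identity keys (as in the snippet above), the two lookups would miss the cache and allocate two UGIs; with byte-value keys they hit the same entry.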

> Webhdfs client leaks active NameNode connections
> ------------------------------------------------
>
>                 Key: HDFS-8855
>                 URL: https://issues.apache.org/jira/browse/HDFS-8855
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>            Reporter: Bob Hansen
>            Assignee: Xiaobing Zhou
>         Attachments: HDFS-8855.005.patch, HDFS-8855.006.patch, HDFS-8855.007.patch, HDFS-8855.1.patch, HDFS-8855.2.patch, HDFS-8855.3.patch, HDFS-8855.4.patch, HDFS_8855.prototype.patch
>
>
> The attached script simulates a process opening ~50 files via webhdfs and performing random reads.  Note that there are at most 50 concurrent reads, and all webhdfs sessions are kept open.  Each read is ~64k at a random position.
> The script periodically (once per second) shells into the NameNode and produces a summary of the socket states.  For my test cluster with 5 nodes, it took ~30 seconds for the NameNode to reach ~25000 active connections and fail.
> It appears that each request to the webhdfs client opens a new connection to the NameNode and keeps it open after the request completes.  If the process continues to run, eventually (~30-60 seconds) all of the open connections are closed and the NameNode recovers.
> This smells like SoftReference reaping.  Are we using SoftReferences in the webhdfs client to cache NameNode connections but never re-using them?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
