hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-13234) Remove renew configuration instance in ConfiguredFailoverProxyProvider and reduce memory footprint for client
Date Thu, 08 Mar 2018 16:59:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391519#comment-16391519 ]

Kihwal Lee commented on HDFS-13234:
-----------------------------------

[~jlowe] and I discussed the conf issue a bit this morning. Configuration has both performance
and memory footprint issues, but coming up with a single generic solution that solves them for
all use cases is difficult, if not impossible. That is one of the roadblocks many previous
improvement attempts have run into. For use cases that do not require refreshing, we could have
a single mutable instance load/reload all resources, instead of duplicating them for each config
instance. Each new conf would keep its own "overlay" map internally to track locally set
keys/values; keys not found in that map would be looked up in the base instance. Look-ups would
get a bit more expensive, but this avoids the problem of multiple resource reloads and object
duplication. Since this might not work well with refreshable configs, it would be better to make
it a new feature (i.e. a new version of the ctor) and offer it on an opt-in basis. I think most
client-side code would be able to take advantage of this.
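
As a rough, purely illustrative sketch of the overlay idea (the class and method names below are
hypothetical, not an existing Hadoop API), it could look something like this:

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "overlay" approach, not an existing Hadoop class.
// A single shared base holds everything loaded from resources; each client-side
// conf keeps only its locally set keys and falls back to the base on misses.
public class OverlayConfSketch {

  /** Shared base instance: resources are loaded/reloaded here exactly once. */
  static class BaseConf {
    private final Map<String, String> props = new HashMap<>();
    void loadProperty(String key, String value) { props.put(key, value); }
    String get(String key) { return props.get(key); }
  }

  /** Per-client view: an "overlay" map of locally set keys on top of the base. */
  static class OverlayConf {
    private final BaseConf base;
    private final Map<String, String> overlay = new HashMap<>();

    OverlayConf(BaseConf base) { this.base = base; }

    void set(String key, String value) { overlay.put(key, value); }

    String get(String key) {
      // Look-ups cost one extra map probe, but resources are never duplicated.
      String v = overlay.get(key);
      return v != null ? v : base.get(key);
    }
  }

  public static void main(String[] args) {
    BaseConf base = new BaseConf();
    base.loadProperty("dfs.replication", "3");                 // stands in for resource loading

    OverlayConf clientConf = new OverlayConf(base);
    clientConf.set("dfs.client.failover.max.attempts", "10");  // locally set key

    System.out.println(clientConf.get("dfs.replication"));                  // 3, from the base
    System.out.println(clientConf.get("dfs.client.failover.max.attempts")); // 10, from the overlay
  }
}
{code}

Refreshable configs would complicate this, which is why it would need to be opt-in as described above.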

Related: HADOOP-11223 and HADOOP-9570

We can start a design/feasibility discussion if there is enough interest.

> Remove renew configuration instance in ConfiguredFailoverProxyProvider and reduce memory footprint for client
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-13234
>                 URL: https://issues.apache.org/jira/browse/HDFS-13234
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: fs, ha, hdfs-client
>            Reporter: He Xiaoqiao
>            Priority: Major
>         Attachments: HDFS-13234.001.patch
>
>
> The memory footprint of #DFSClient can be considerable in some scenarios, because many
> #Configuration instances are created and they occupy a lot of memory (in an extreme case we
> observed under HDFS Federation and HA with QJM, with dozens of NameNodes,
> org.apache.hadoop.conf.Configuration occupied over 600MB). I think some of these new
> Configuration instances are unnecessary, for example the one created during
> #ConfiguredFailoverProxyProvider initialization:
> {code:java}
>   public ConfiguredFailoverProxyProvider(Configuration conf, URI uri,
>       Class<T> xface, HAProxyFactory<T> factory) {
>     this.xface = xface;
>     this.conf = new Configuration(conf);
>     ......
>   }
> {code}
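
As a purely illustrative sketch of the direction the issue title points at (not necessarily what
the attached HDFS-13234.001.patch does), the copy could simply be dropped so the provider reuses
the caller's Configuration; whether that is safe depends on callers not mutating the conf after
constructing the provider:

{code:java}
  public ConfiguredFailoverProxyProvider(Configuration conf, URI uri,
      Class<T> xface, HAProxyFactory<T> factory) {
    this.xface = xface;
    // Sketch only: keep a reference to the caller's conf instead of
    // duplicating it with new Configuration(conf).
    this.conf = conf;
    // ... remaining initialization unchanged
  }
{code}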



