hadoop-hdfs-issues mailing list archives

From "Bob Hansen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9117) Config file reader / options classes for libhdfs++
Date Tue, 10 Nov 2015 13:14:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14998536#comment-14998536
] 

Bob Hansen commented on HDFS-9117:
----------------------------------

[~wheat9]:
I have trimmed down the interface considerably, but kept the XML format and final semantics
consistent with existing deployed configuration files.  Is that your preference?

{code}
  /**
   * Get the value configuration, return empty if it's unspecified.
   **/
  template<class T>
  Optional<T> get(const std::string &key);
{code}
This interface can't be supported while still avoiding exceptions (as you indicated a preference
for in an earlier review of this code).  If an invalid or too-large value is in the file,
the underlying stdlib code will throw an exception.  Optional is not part of the C++11 standard,
or I would absolutely have used it here; it's a good match.
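As one possible exception-free shape for the getter above (a sketch only; the names and the hand-rolled Optional are illustrative, not from the patch), {{strtol}} can replace the throwing stdlib conversions, since it reports range errors via {{errno}} and malformed input via its end pointer:
{code}
#include <cassert>
#include <cerrno>
#include <climits>
#include <cstdlib>
#include <string>

// Minimal stand-in for the std::optional that C++11 lacks (hypothetical;
// a Status plus out-parameter would be an equally valid shape).
template <class T>
class Optional {
 public:
  Optional() : has_value_(false), value_() {}
  explicit Optional(const T &v) : has_value_(true), value_(v) {}
  explicit operator bool() const { return has_value_; }
  const T &value() const { return value_; }
 private:
  bool has_value_;
  T value_;
};

// Parse an int without exceptions: strtol flags overflow with ERANGE and
// leaves the end pointer at the first bad character, so no try/catch.
Optional<int> ParseInt(const std::string &raw) {
  errno = 0;
  char *end = NULL;
  long v = std::strtol(raw.c_str(), &end, 10);
  if (end == raw.c_str() || *end != '\0' || errno == ERANGE ||
      v < INT_MIN || v > INT_MAX)
    return Optional<int>();  // invalid or too-large value -> empty
  return Optional<int>(static_cast<int>(v));
}

int main() {
  assert(ParseInt("8020") && ParseInt("8020").value() == 8020);
  assert(!ParseInt("not-a-number"));
  assert(!ParseInt("99999999999999999999"));  // overflow, no throw
  return 0;
}
{code}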

bq. Many users use Hadoop in a controlled environment. They know where the configuration is
and has preferences on not depending on environment variables as they can be changed easily.
Cloud deployment is one example.

I agree, but would propose that there are other consumers of libhdfs++ (I would guess the
majority) that want to co-exist with currently deployed environments and not have to re-specify
the configuration in another, incompatible way.  In the previous patch, the default Configuration
constructor set no search path and loaded no data.  It also allowed consumers to explicitly
load config files from strings, files, HTTP connections, or anything that could be shoehorned
into an istream, giving them the flexibility to construct the solution they desired.  I _thought_
my initial solution covered the use cases you describe while providing a solid implementation
for interoperability with deployed Hadoop systems.
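To make the istream-based entry points from the previous patch concrete (a sketch under assumptions: the class and method names are illustrative, and a trivial key=value format stands in for the real Hadoop XML parsing so the example stays self-contained):
{code}
#include <cassert>
#include <istream>
#include <map>
#include <sstream>
#include <string>

class Configuration {
 public:
  // The default constructor sets no search path and loads no data.
  Configuration() {}

  // Anything that can be shoehorned into an istream works here: a
  // string stream, an open file, an HTTP response body, etc.
  bool Load(std::istream &in) {
    std::string line;
    while (std::getline(in, line)) {
      std::string::size_type eq = line.find('=');
      if (eq == std::string::npos) return false;
      data_[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return true;
  }

  bool LoadFromString(const std::string &text) {
    std::istringstream in(text);
    return Load(in);
  }

  std::string Get(const std::string &key, const std::string &dflt) const {
    std::map<std::string, std::string>::const_iterator it = data_.find(key);
    return it == data_.end() ? dflt : it->second;
  }

 private:
  std::map<std::string, std::string> data_;
};

int main() {
  Configuration conf;  // empty by default, as in the previous patch
  assert(conf.Get("fs.defaultFS", "none") == "none");
  conf.LoadFromString("fs.defaultFS=hdfs://nn1:8020");
  assert(conf.Get("fs.defaultFS", "none") == "hdfs://nn1:8020");
  return 0;
}
{code}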

bq. It's relatively starightforward to add these functionality to another layer but it's hard
to take it out when it's coupled with the core layer.

Where would you propose that layer should eventually live?  I would propose a stand-alone class
in lib/common, myself, but would welcome alternatives for inclusion in a later Jira.
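As a hypothetical sketch of that layering (all names illustrative, not from the patch): the core class stays a plain key/value store with no I/O policy, and a separate loader class, which could live in lib/common, owns the search-path and environment-variable decisions so that controlled deployments can bypass them entirely:
{code}
#include <cassert>
#include <string>
#include <vector>

// Core layer: storage only; no files, no environment variables.
struct Configuration {
  // key/value map and accessors would live here
};

// Stand-alone layer: decides where configuration files are searched for.
class ConfigurationLoader {
 public:
  // Callers in controlled environments pass explicit directories and
  // never touch the environment; others can seed this list from
  // HADOOP_CONF_DIR themselves before loading.
  void AddSearchDir(const std::string &dir) { dirs_.push_back(dir); }

  // Expand the search path into the concrete files the Java client
  // would read, in priority order.
  std::vector<std::string> CandidateFiles() const {
    std::vector<std::string> files;
    for (size_t i = 0; i < dirs_.size(); ++i) {
      files.push_back(dirs_[i] + "/core-site.xml");
      files.push_back(dirs_[i] + "/hdfs-site.xml");
    }
    return files;
  }

 private:
  std::vector<std::string> dirs_;
};

int main() {
  ConfigurationLoader loader;
  loader.AddSearchDir("/etc/hadoop/conf");
  std::vector<std::string> files = loader.CandidateFiles();
  assert(files.size() == 2);
  assert(files[0] == "/etc/hadoop/conf/core-site.xml");
  assert(files[1] == "/etc/hadoop/conf/hdfs-site.xml");
  return 0;
}
{code}
The design choice here is that taking the loader out (per the concern quoted above) never touches the core class, since the coupling runs one way.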




> Config file reader / options classes for libhdfs++
> --------------------------------------------------
>
>                 Key: HDFS-9117
>                 URL: https://issues.apache.org/jira/browse/HDFS-9117
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>    Affects Versions: HDFS-8707
>            Reporter: Bob Hansen
>            Assignee: Bob Hansen
>         Attachments: HDFS-9117.HDFS-8707.001.patch, HDFS-9117.HDFS-8707.002.patch, HDFS-9117.HDFS-8707.003.patch,
HDFS-9117.HDFS-8707.004.patch, HDFS-9117.HDFS-8707.005.patch, HDFS-9117.HDFS-8707.006.patch,
HDFS-9117.HDFS-8707.008.patch, HDFS-9117.HDFS-8707.009.patch, HDFS-9117.HDFS-8707.010.patch,
HDFS-9117.HDFS-8707.011.patch, HDFS-9117.HDFS-8707.012.patch, HDFS-9117.HDFS-8707.013.patch,
HDFS-9117.HDFS-9288.007.patch
>
>
> For environmental compatibility with HDFS installations, libhdfs++ should be able to
read the configurations from Hadoop XML files and behave in line with the Java implementation.
> Most notably, machine names and ports should be readable from Hadoop XML configuration
files.
> Similarly, an internal Options architecture for libhdfs++ should be developed to efficiently
transport the configuration information within the system.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
