hadoop-common-issues mailing list archives

From "Robert Joseph Evans (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-8115) configuration entry in core-site.xml gets silently ignored
Date Mon, 27 Feb 2012 16:46:49 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13217270#comment-13217270 ]

Robert Joseph Evans commented on HADOOP-8115:

Unfortunately the order of loading is not always consistent.  Just look at the bug I just
filed, HDFS-3012.  Because we use static blocks to load the resources, those static blocks are
executed when the class file that contains them is loaded.  The order in which class files are
loaded depends on when Java decides to load them, and because Java currently does
lazy loading, it tends to load a class only when code inside it is actually about
to be executed for the first time.  Because core-default.xml and core-site.xml are registered by
the Configuration class itself, they are always going to be the first to load.  But there is no guarantee
of the order in which the others will come in, or even that they will be loaded at all, which is the problem
that showed up in HDFS-3012.
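The lazy-loading behavior described above can be sketched in a few lines. This is a minimal, self-contained model (the class names are illustrative, not real Hadoop classes): each static block stands in for a resource registration, and it runs only when its enclosing class is first used.

```java
// Minimal sketch (hypothetical class names) of why static-block resource
// registration depends on lazy class loading: a static initializer runs
// only when the class is first used, not at program start.
public class LazyLoadDemo {

    static class CoreConf {
        static { System.out.println("CoreConf static block ran"); }
        static void touch() {}
    }

    static class HdfsConf {
        static { System.out.println("HdfsConf static block ran"); }
        static void touch() {}
    }

    public static void main(String[] args) {
        System.out.println("main started; no static block has run yet");
        HdfsConf.touch();   // HdfsConf's static block runs here, on first use
        CoreConf.touch();   // CoreConf's static block runs only now
        // If no code ever touches one of these classes, its static block
        // never runs at all, which is the failure mode HDFS-3012 describes.
    }
}
```

Swapping the two `touch()` calls reverses the initialization order, which is why the order cannot be relied on across code paths.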

Because the ordering cannot really be controlled when static blocks load the configuration,
the only way to really do (1) is to give each file some sort of global priority.
 Who wins when a configuration value is in both hdfs-site.xml and mapred-site.xml?  We
have that problem now, but because we tend to support (2) by convention instead of
by enforcement, it is not as much of a concern as it would be if we officially supported (1).
 I personally would prefer to see option (2), as it is closer to what we do now, but I am not
sure what the proper way to implement it is.
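One conceivable shape for option (2) is an ownership check at load time: each file declares which key prefixes belong to it, and loading a key from a file that does not own it emits a warning. Nothing below exists in Hadoop; the map, method names, and prefixes are all illustrative assumptions.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of option (2): warn when a key lands in a file
// that does not "own" its prefix. The ownership map is illustrative.
public class WrongFileWarning {

    static final Map<String, List<String>> OWNED_PREFIXES = Map.of(
        "core-site.xml",   List.of("fs.", "io.", "hadoop."),
        "hdfs-site.xml",   List.of("dfs."),
        "mapred-site.xml", List.of("mapreduce.", "mapred."));

    static void checkKey(String file, String key) {
        List<String> prefixes = OWNED_PREFIXES.getOrDefault(file, List.of());
        boolean owned = prefixes.stream().anyMatch(key::startsWith);
        if (!owned) {
            // Instead of silently dropping the value, surface the mismatch.
            System.out.println("WARN: key '" + key + "' in " + file
                + " is not owned by that file and may be overridden");
        }
    }

    public static void main(String[] args) {
        checkKey("core-site.xml", "fs.defaultFS");      // owned: no warning
        checkKey("core-site.xml", "dfs.http.address");  // the case from this issue
    }
}
```

A prefix-ownership table like this would have to be maintained by hand, which is one reason enforcement is harder than it first looks.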
> configuration entry in core-site.xml gets silently ignored
> ----------------------------------------------------------
>                 Key: HADOOP-8115
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8115
>             Project: Hadoop Common
>          Issue Type: Bug
>         Environment: v
> Standard tar release (i.e. not Cloudera or anything)
> Ubuntu
>            Reporter: Marc Harris
> The order of loading configuration files (and thus the order of priority from least to
> most) seems to be as follows:
> core-default.xml, core-site.xml, hdfs-default.xml, hdfs-site.xml.
> This means that a configuration parameter that is set in hdfs-default.xml will override
> the value set in core-site.xml.
> Either
> (1) Parameters should be able to go in any site.xml file, and override any default.xml,
> even if they don't "match", or
> (2) Putting a parameter in the "wrong" site.xml file should be considered an error, and
> result in at the very least a warning.
> What in fact happens is that the parameter is silently ignored, which is the worst combination.
> In my opinion, it is counter-intuitive that a value in a site.xml file should be overridden
> by a value in a default.xml file, so I would choose option (1).
> The particular example here was dfs.http.address, by the way.
> I marked this as major rather than minor since it was not at all obvious what the problem
> (and therefore the workaround) was, and it eventually required attaching to a running production
> service with a debugger to find out why the parameter I was setting was being ignored.
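The load order the reporter describes behaves like "last file loaded wins". A tiny model of that precedence (not the real Configuration class, which also distinguishes default from final resources) reproduces the reported surprise:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Tiny "last load wins" model of the reported precedence: a default file
// loaded after core-site.xml silently replaces the user's value.
public class LoadOrderDemo {

    static Map<String, String> props = new LinkedHashMap<>();

    static void load(String file, Map<String, String> entries) {
        props.putAll(entries); // later loads overwrite earlier ones
    }

    public static void main(String[] args) {
        load("core-site.xml",    Map.of("dfs.http.address", "myhost:9999"));
        load("hdfs-default.xml", Map.of("dfs.http.address", "0.0.0.0:50070"));
        // The user's core-site.xml value has been replaced by the default:
        System.out.println(props.get("dfs.http.address"));
    }
}
```

Under this model the setting from core-site.xml is lost with no warning, which is exactly the "silently ignored" behavior reported, using the issue's own example key dfs.http.address.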

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

