hadoop-common-dev mailing list archives

From "Sharad Agarwal (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4631) Split the default configurations into 3 parts
Date Tue, 25 Nov 2008 09:49:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12650504#action_12650504 ]
Sharad Agarwal commented on HADOOP-4631:

bq. When you first call FileSystem.get("hdfs:///", conf), your configuration won't yet have
hdfs-specific default values, but, in the course of this call, they will be loaded, before
any hdfs code references them. I think this is fine, since application code should not be
directly referencing hdfs-specific values.

For Configuration objects created before the HDFS defaults are loaded, the properties won't
be reloaded. Currently, a Configuration object is created once and then passed around; if such
an object is used to look up hdfs-specific values, it won't find them, no?
For example, DistributedFileSystem uses a Configuration object that was created before
DistributedFileSystem itself was loaded.
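The concern can be sketched with a toy model. This is not Hadoop's actual Configuration class, just an illustration, under the assumption that each object snapshots the registered default resources at construction time and is never reloaded:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy stand-in for Hadoop's Configuration (hypothetical, for illustration).
class Conf {
    // Default "resources" registered process-wide, in load order.
    static final List<Map<String, String>> DEFAULT_RESOURCES = new ArrayList<>();

    private final Map<String, String> props = new HashMap<>();

    Conf() {
        // Snapshot whatever defaults are registered right now.
        for (Map<String, String> resource : DEFAULT_RESOURCES) {
            props.putAll(resource);
        }
    }

    String get(String key) {
        return props.get(key);
    }

    // Register a new default resource. Pre-existing Conf objects are
    // NOT reloaded -- exactly the problem raised above.
    static void addDefaultResource(Map<String, String> resource) {
        DEFAULT_RESOURCES.add(resource);
    }
}

public class DefaultLoadingDemo {
    public static void main(String[] args) {
        Map<String, String> coreDefaults = new HashMap<>();
        coreDefaults.put("fs.default.name", "file:///");
        Conf.addDefaultResource(coreDefaults);

        // Created before the hdfs defaults are registered.
        Conf early = new Conf();

        Map<String, String> hdfsDefaults = new HashMap<>();
        hdfsDefaults.put("dfs.replication", "3");
        Conf.addDefaultResource(hdfsDefaults);

        // Created after the hdfs defaults are registered.
        Conf late = new Conf();

        System.out.println(early.get("dfs.replication")); // null
        System.out.println(late.get("dfs.replication"));  // 3
    }
}
```

Under these assumptions, any component handed the `early` object never sees the hdfs-specific defaults, which is the DistributedFileSystem scenario described above.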

> Split the default configurations into 3 parts
> ---------------------------------------------
>                 Key: HADOOP-4631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4631
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: conf
>            Reporter: Owen O'Malley
>            Assignee: Sharad Agarwal
>             Fix For: 0.20.0
> We need to split hadoop-default.xml into core-default.xml, hdfs-default.xml and mapreduce-default.xml.
> That will enable us to split the project into 3 parts that have the defaults distributed with
> each component.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
