hadoop-common-dev mailing list archives

From "Sharad Agarwal (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4631) Split the default configurations into 3 parts
Date Tue, 25 Nov 2008 11:35:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12650537#action_12650537 ]

Sharad Agarwal commented on HADOOP-4631:

I am thinking that instead of keeping the defaults in a static member, it would make sense to
keep them in specific instances. This would separate things clearly, though it may require more
classes. JobConf will do addResource(mapred-default.xml) in its constructor, and similarly we can
create HDFSConf (extends Configuration), which will do addResource(hdfs-default.xml).
The HDFS code would instantiate HDFSConf instead of Configuration. If a Configuration object
is passed in, for example to DistributedFileSystem, it would be wrapped in HDFSConf before
looking up HDFS-specific values.
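A minimal sketch of the wrapping idea described above. The Configuration class here is a self-contained stand-in (per-instance defaults, a copy constructor, and an addResource method that merges a map instead of parsing an XML file), and the "dfs.replication" default is illustrative only; the real Hadoop classes differ.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for Hadoop's Configuration: defaults live in the instance,
// not in a static member.
class Configuration {
    private final Map<String, String> props = new HashMap<>();

    public Configuration() {}

    // Copy constructor, used below to wrap a plain Configuration.
    public Configuration(Configuration other) {
        props.putAll(other.props);
    }

    // The real addResource("hdfs-default.xml") parses an XML file;
    // here we merge a map, letting already-set values win.
    public void addResource(Map<String, String> defaults) {
        defaults.forEach(props::putIfAbsent);
    }

    public String get(String key) { return props.get(key); }
    public void set(String key, String value) { props.put(key, value); }
}

// Hypothetical HDFSConf per the comment: loads the HDFS defaults in its
// constructors, so HDFS code can wrap any Configuration it is handed.
class HDFSConf extends Configuration {
    private static Map<String, String> hdfsDefaults() {
        Map<String, String> m = new HashMap<>();
        m.put("dfs.replication", "3"); // illustrative default only
        return m;
    }

    public HDFSConf() {
        addResource(hdfsDefaults());
    }

    public HDFSConf(Configuration other) {
        super(other); // copy the caller's settings first
        addResource(hdfsDefaults()); // then fill in HDFS defaults
    }
}
```

DistributedFileSystem would then do `conf instanceof HDFSConf ? conf : new HDFSConf(conf)` before reading HDFS keys, so user-set values survive while HDFS defaults are always present.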

> Split the default configurations into 3 parts
> ---------------------------------------------
>                 Key: HADOOP-4631
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4631
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: conf
>            Reporter: Owen O'Malley
>            Assignee: Sharad Agarwal
>             Fix For: 0.20.0
> We need to split hadoop-default.xml into core-default.xml, hdfs-default.xml and mapreduce-default.xml.
> That will enable us to split the project into 3 parts that have the defaults distributed with
> each component.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
