hadoop-common-dev mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-785) Divide the server and client configurations
Date Thu, 02 Aug 2007 19:49:53 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12517354 ]

Arun C Murthy commented on HADOOP-785:

bq. Currently, hadoop-default.xml is not supposed to be changed by users. Would you relax
this convention in your proposal? There might be a few variables that I'd like to set for
client and server at the same time (eg. namenode address).

Hmm... how about allowing both *server* and *client* values for {{fs.default.name}}'s {{context}}
tag, to let people know it can be specified in both hadoop-server.xml and hadoop-client.xml
and will be used appropriately? Would that help? I'd rather keep hadoop-default.xml sacrosanct,
even though we don't prevent you from editing it today - that way it serves as a gold standard
for everyone.
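
The {{context}} tag discussed above might look like this in hadoop-default.xml (a hypothetical sketch: the tag name, its comma-separated values, and the example namenode address are part of the proposal being discussed, not an existing Hadoop feature):

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:9000</value>
  <!-- Hypothetical: marks this variable as settable in both
       hadoop-server.xml and hadoop-client.xml -->
  <context>server,client</context>
</property>
```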

bq. Why don't you want to split up namenode vs. jobtracker and datanode vs. tasktracker? 

I did think about this, and I really don't see what value a {HDFS|MR}ServerConfiguration and
a {HDFS|MR}ClientConfiguration would provide, which is why I didn't take this route... but I'm
open to arguments. Separating the physical files alone doesn't seem enough to warrant 4 classes
rather than 2.

bq. This division could be done with xml comments - I don't think it needs to be so formal
as to need a new field.

I agree, yet my take is that it is better to institutionalise this by adding another tag,
as with the {{context}} tag. Again, this depends on whether or not we can reach a consensus.

> Divide the server and client configurations
> -------------------------------------------
>                 Key: HADOOP-785
>                 URL: https://issues.apache.org/jira/browse/HADOOP-785
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: conf
>    Affects Versions: 0.9.0
>            Reporter: Owen O'Malley
>            Assignee: Arun C Murthy
>             Fix For: 0.15.0
> The configuration system is easy to misconfigure and I think we need to strongly divide
the server from client configs. 
> An example of the problem was a configuration where the task tracker had a hadoop-site.xml
that set mapred.reduce.tasks to 1. The job tracker therefore had the right number of reduces,
but the map task thought there was a single reduce. This led to a failure that was hard to diagnose.
> Therefore, I propose separating out the configuration types as:
> class Configuration;
> // reads site-default.xml, hadoop-default.xml
> class ServerConf extends Configuration;
> // reads hadoop-server.xml, $super
> class DfsServerConf extends ServerConf;
> // reads dfs-server.xml, $super
> class MapRedServerConf extends ServerConf;
> // reads mapred-server.xml, $super
> class ClientConf extends Configuration;
> // reads hadoop-client.xml, $super
> class JobConf extends ClientConf;
> // reads job.xml, $super
> Note in particular, that nothing corresponds to hadoop-site.xml, which overrides both
client and server configs. Furthermore, the properties from the *-default.xml files should
never be saved into the job.xml.
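
The hierarchy proposed in the issue description could be sketched in Java as follows (a non-authoritative sketch: resource loading is simplified to collecting file names in order, and the constructors and `addResource` method here are illustrative, not the actual Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

// Base class: reads site-default.xml, hadoop-default.xml
class Configuration {
    protected final List<String> resources = new ArrayList<>();
    Configuration() {
        addResource("site-default.xml");
        addResource("hadoop-default.xml");
    }
    void addResource(String name) { resources.add(name); }
    List<String> resources() { return resources; }
}

// Server side: layers hadoop-server.xml on top of the defaults
class ServerConf extends Configuration {
    ServerConf() { addResource("hadoop-server.xml"); }
}

class DfsServerConf extends ServerConf {
    DfsServerConf() { addResource("dfs-server.xml"); }
}

class MapRedServerConf extends ServerConf {
    MapRedServerConf() { addResource("mapred-server.xml"); }
}

// Client side: layers hadoop-client.xml on top of the defaults
class ClientConf extends Configuration {
    ClientConf() { addResource("hadoop-client.xml"); }
}

class JobConf extends ClientConf {
    JobConf() { addResource("job.xml"); }
}

public class ConfHierarchySketch {
    public static void main(String[] args) {
        // Each subclass's constructor implicitly runs its parent's first,
        // so each config file overrides the ones loaded before it.
        System.out.println(new JobConf().resources());
        // [site-default.xml, hadoop-default.xml, hadoop-client.xml, job.xml]
    }
}
```

Note how hadoop-site.xml appears nowhere in the chain, which is the point of the proposal: no single file overrides both client and server configs.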

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
