hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-785) Divide the server and client configurations
Date Fri, 17 Aug 2007 21:28:30 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12520679 ]

Doug Cutting commented on HADOOP-785:
-------------------------------------

Owen is concerned that having different files with different semantics (initial versus final)
is confusing, and that we should instead just have a list of files (e.g., hadoop-default.xml,
hadoop-site.xml, job.xml) that are all treated identically.  That has merit.  It is simpler.

But how do we specify that some parameters may not be overridden by files later in the list?
Instead of having separate files for that, perhaps we can annotate the parameters themselves,
adding a <final/> tag or some such to their definitions.  The first 'final' value found
for a parameter when processing the files would determine the value: no values in subsequent
files would modify the value of that parameter.  Thus, in a tasktracker's hadoop-site.xml,
dfs.client.buffer.dir would be set final, and a job would not be able to override it,
while the job could override the non-final dfs.block.size set there.
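The "first 'final' value wins" rule described above could be sketched as follows. This is an illustrative mock-up, not Hadoop's actual Configuration API: the class and method names (Resource, resolve) are invented for the example, and only the merge semantics come from the proposal.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the proposed 'final' parameter semantics: resources are processed
// in order, and the first value marked final for a given key freezes that key;
// later resources cannot override it. Names here are illustrative only.
public class FinalParamDemo {

    /** One parsed configuration resource: key -> value, plus a final flag per key. */
    static class Resource {
        final Map<String, String> values = new LinkedHashMap<>();
        final Map<String, Boolean> finals = new LinkedHashMap<>();
        Resource set(String key, String value, boolean isFinal) {
            values.put(key, value);
            finals.put(key, isFinal);
            return this;
        }
    }

    /** Merge resources in order, honoring the first 'final' value seen per key. */
    static Map<String, String> resolve(List<Resource> resources) {
        Map<String, String> merged = new LinkedHashMap<>();
        Map<String, Boolean> frozen = new LinkedHashMap<>();
        for (Resource r : resources) {
            for (Map.Entry<String, String> e : r.values.entrySet()) {
                String key = e.getKey();
                if (Boolean.TRUE.equals(frozen.get(key))) {
                    continue; // an earlier resource declared this key final
                }
                merged.put(key, e.getValue());
                if (Boolean.TRUE.equals(r.finals.get(key))) {
                    frozen.put(key, true);
                }
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        // Tasktracker's hadoop-site.xml: buffer dir is final, block size is not.
        Resource site = new Resource()
            .set("dfs.client.buffer.dir", "/local/buffers", true)
            .set("dfs.block.size", "67108864", false);
        // job.xml attempts to override both.
        Resource job = new Resource()
            .set("dfs.client.buffer.dir", "/tmp/other", false)
            .set("dfs.block.size", "134217728", false);

        Map<String, String> conf = resolve(List.of(site, job));
        System.out.println(conf.get("dfs.client.buffer.dir")); // /local/buffers
        System.out.println(conf.get("dfs.block.size"));        // 134217728
    }
}
```

Here the job's attempt to change dfs.client.buffer.dir is silently ignored, while its non-final dfs.block.size override takes effect.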

Owen, does this address your concern?

> Divide the server and client configurations
> -------------------------------------------
>
>                 Key: HADOOP-785
>                 URL: https://issues.apache.org/jira/browse/HADOOP-785
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: conf
>    Affects Versions: 0.9.0
>            Reporter: Owen O'Malley
>            Assignee: Arun C Murthy
>             Fix For: 0.15.0
>
>
> The configuration system is easy to misconfigure and I think we need to strongly divide
> the server configs from the client configs.
> An example of the problem was a configuration where the task tracker had a hadoop-site.xml
> that set mapred.reduce.tasks to 1. Therefore, the job tracker had the right number of reduces,
> but the map task thought there was a single reduce. This led to a hard-to-diagnose failure.
> Therefore, I propose separating out the configuration types as:
> class Configuration;
> // reads site-default.xml, hadoop-default.xml
> class ServerConf extends Configuration;
> // reads hadoop-server.xml, $super
> class DfsServerConf extends ServerConf;
> // reads dfs-server.xml, $super
> class MapRedServerConf extends ServerConf;
> // reads mapred-server.xml, $super
> class ClientConf extends Configuration;
> // reads hadoop-client.xml, $super
> class JobConf extends ClientConf;
> // reads job.xml, $super
> Note, in particular, that nothing corresponds to hadoop-site.xml, which overrides both
> client and server configs. Furthermore, the properties from the *-default.xml files should
> never be saved into the job.xml.
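The class hierarchy proposed above could be sketched as follows. The class names come from the proposal; everything else (the resources() method, the append order of each class's files relative to $super) is an illustrative assumption about how the accumulation might work.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed configuration hierarchy: each class contributes its
// own resource files on top of its superclass's ($super). The accumulation
// order shown (defaults first, most-specific file last) is an assumption.
public class ConfHierarchyDemo {

    static class Configuration {
        /** Resource files in load order; subclasses append their own. */
        List<String> resources() {
            return new ArrayList<>(List.of("site-default.xml", "hadoop-default.xml"));
        }
    }

    static class ServerConf extends Configuration {
        @Override List<String> resources() {
            List<String> r = super.resources(); // $super's files
            r.add("hadoop-server.xml");
            return r;
        }
    }

    static class DfsServerConf extends ServerConf {
        @Override List<String> resources() {
            List<String> r = super.resources();
            r.add("dfs-server.xml");
            return r;
        }
    }

    static class ClientConf extends Configuration {
        @Override List<String> resources() {
            List<String> r = super.resources();
            r.add("hadoop-client.xml");
            return r;
        }
    }

    static class JobConf extends ClientConf {
        @Override List<String> resources() {
            List<String> r = super.resources();
            r.add("job.xml");
            return r;
        }
    }

    public static void main(String[] args) {
        System.out.println(new JobConf().resources());
        // [site-default.xml, hadoop-default.xml, hadoop-client.xml, job.xml]
        // As the proposal notes, hadoop-site.xml appears nowhere in the hierarchy.
    }
}
```

The point of the structure is visible in the output: a JobConf never sees any server-side file, so a server's settings cannot leak into (or be clobbered by) a job's configuration.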

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

