hadoop-common-dev mailing list archives

From "Alejandro Abdelnur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3171) speculative execution should not have default value on hadoop-default.xml bundled in the Hadoop JAR
Date Thu, 10 Apr 2008 10:20:08 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587571#action_12587571 ]

Alejandro Abdelnur commented on HADOOP-3171:
--------------------------------------------

On #1 

Asking people to re-jar hadoop.jar is not reasonable, as it may lead to nasty debugging situations.

On #2

Speculative execution is only an example. In our case most of our jobs cannot run with speculative execution, but a few can, so we want it off by default and turn it on only in the few jobs where we want it.

Take the case of the number of task retries, the replication factor, or the compression and compression-type properties. You may want to set the default behavior for your cluster without making them final.
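
A minimal sketch of the per-job override described above, assuming the 0.16-era property name mapred.speculative.execution and the JobConf API (the class and job name are made up): the cluster would set the property to false, without final, in its hadoop-site.xml, and only the few jobs that tolerate speculative execution would flip it back on.

{code}
import org.apache.hadoop.mapred.JobConf;

public class SpecExecFriendlyJob {
  public static void main(String[] args) throws Exception {
    // Inherits the (non-final) cluster default from hadoop-site.xml,
    // e.g. mapred.speculative.execution=false.
    JobConf job = new JobConf(SpecExecFriendlyJob.class);
    job.setJobName("spec-exec-friendly-job");

    // Explicit per-job override for one of the few jobs that can run
    // speculatively; all other jobs keep the cluster default.
    job.setBoolean("mapred.speculative.execution", true);

    // ... configure mapper/reducer, input/output paths, then submit with
    // JobClient.runJob(job) ...
  }
}
{code}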

On #3

It is not always possible to have a HADOOP_CONF_DIR, for example from within a webapp. It gets complicated.

---

And if I have a Hadoop client that dispatches jobs to different clusters, things get even more complicated.
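
As a rough illustration only (the paths and class below are hypothetical), such a client has to load the right site file programmatically for each target cluster instead of relying on HADOOP_CONF_DIR or the classpath:

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class MultiClusterClient {
  // Builds a job configuration for a given cluster by loading that
  // cluster's site file explicitly, since a webapp or multi-cluster
  // client has no single HADOOP_CONF_DIR to point at.
  public static JobConf confFor(String cluster) {
    JobConf conf = new JobConf();
    conf.addResource(new Path("/etc/myapp/" + cluster + "-site.xml"));
    return conf;
  }
}
{code}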


My concern is that a cluster should be able to control what a default value is without forcing it with a final. This is not possible today.


> speculative execution should not have default value on hadoop-default.xml bundled in the Hadoop JAR
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3171
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3171
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 0.16.2
>         Environment: all
>            Reporter: Alejandro Abdelnur
>            Assignee: Arun C Murthy
>         Attachments: HADOOP-3171_20080410.patch
>
>
> Having a default value for speculative execution in the hadoop-default.xml bundled in the Hadoop JAR file does not allow a cluster to control the default behavior.
> *ON in hadoop-default.xml (current behavior)*
> * ON in JT hadoop-site.xml
>  * present in job.xml, job's value is used
>  * not-present in job.xml, ON is taken as default from the hadoop-default.xml present in the client's JAR/conf (*)
> * ON FINAL in the JT hadoop-site.xml
>  * present or not present in the job.xml, ON is used
> * OFF in JT hadoop-site.xml
>  * present in job.xml, job's value is used
>  * not-present in job.xml, ON is taken as default from the hadoop-default.xml present in the client's JAR/conf
> * OFF FINAL in the JT hadoop-site.xml
>  * present or not present in the job.xml, OFF is used
> *OFF in hadoop-default.xml (not current behavior)*
> * ON in JT hadoop-site.xml
>  * present in job.xml, job's value is used
>  * not-present in job.xml, OFF is taken as default from the hadoop-default.xml present in the client's JAR/conf (*)
> * ON FINAL in the JT hadoop-site.xml
>  * present or not present in the job.xml, ON is used
> * OFF in JT hadoop-site.xml
>  * present in job.xml, job's value is used
>  * not-present in job.xml, OFF is taken as default from the hadoop-default.xml present in the client's JAR/conf
> * OFF FINAL in the JT hadoop-site.xml
>  * present or not present in the job.xml, OFF is used
> ---
> It is still desirable for the JT to have a default value. To avoid having to support two hadoop-default.xml files, one for the JT and another for the clients, the easiest way is to remove it from hadoop-default.xml and have the default value in the code when getting the config property (which may already be happening).
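
A minimal sketch of what that last paragraph proposes, assuming the 0.16-era property name mapred.speculative.execution (the helper class and its default of true, matching the currently shipped value, are illustrative): with the entry removed from hadoop-default.xml, the in-code default only applies when neither the JT's hadoop-site.xml nor the job.xml sets the property, so the cluster's non-final value becomes the effective default.

{code}
import org.apache.hadoop.mapred.JobConf;

public class SpecExecDefault {
  // Reads the property with an in-code fallback instead of relying on an
  // entry in hadoop-default.xml; the cluster's hadoop-site.xml value (if
  // any) or the job.xml value still wins when present.
  public static boolean isSpeculativeExecution(JobConf conf) {
    return conf.getBoolean("mapred.speculative.execution", true);
  }
}
{code}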

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

