hadoop-common-dev mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HADOOP-9059) hadoop-daemons.sh script constrains all the nodes to use the same installation path.
Date Tue, 16 Dec 2014 20:33:14 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HADOOP-9059.
--------------------------------------
    Resolution: Won't Fix

Closing this as Won't Fix for a variety of reasons:

* hadoop 1.x is dead.
* this code is very different in trunk (3.x)
* this functionality is easier to achieve in trunk (3.x) with .hadooprc and other tricks (see the sketch below).
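
For illustration only (a minimal sketch; the hostnames and paths are made up, and this is just one way to use the mechanism): the 3.x shell scripts source ~/.hadooprc on each node before the environment is finalized, so per-node settings no longer have to match the master.

{code}
# ~/.hadooprc -- read by the Hadoop 3.x shell scripts on the node where they run.
# Hypothetical per-node override: pick a config dir based on the local hostname.
case "$(hostname -s)" in
  worker-a*) export HADOOP_CONF_DIR=/opt/hadoop/conf-a ;;
  worker-b*) export HADOOP_CONF_DIR=/data/hadoop/conf-b ;;
esac
{code}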


> hadoop-daemons.sh script constrains all the nodes to use the same installation path.
> ---------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9059
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9059
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: bin
>    Affects Versions: 1.0.4
>         Environment: Linux
>            Reporter: Chunliang Lu
>            Assignee: Vivek Ganesan
>            Priority: Critical
>   Original Estimate: 25h
>  Remaining Estimate: 25h
>
> To run a command on all slave hosts, bin/hadoop-daemons.sh calls bin/slaves.sh on its last line:
> {code}
> exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR cd "$HADOOP_HOME" \; "$bin/hadoop-daemon.sh"
--config $HADOOP_CONF_DIR "$@"
> {code}
> where slaves.sh calls ssh and passes the `cd "$HADOOP_HOME" \; "$bin/hadoop-daemon.sh"
> --config $HADOOP_CONF_DIR "$@"` part to the slaves. Because these are double-quoted, bash
> expands $HADOOP_HOME, $bin, and $HADOOP_CONF_DIR to their current values on the master
> before ssh runs, so all the slave nodes are forced to share the same path settings as the
> master node. This is not reasonable. In my setup, the cluster has a shared NFS, and I would
> like to use different configuration files for different machines. I know this is not a
> recommended way to manage clusters, but I have no choice, and I think other people may face
> the same problem. How about replacing it with the following, so that master and slaves can
> use different configurations?
> {code}
> exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR cd '$HADOOP_PREFIX' \; "bin/hadoop-daemon.sh"
"$@"
> {code}
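> For illustration (a standalone sketch, not Hadoop code; the host name and variable are
> hypothetical), the quoting difference this proposal relies on:
> {code}
> # Double quotes: the local shell expands $DEMO_HOME before ssh runs, so the
> # remote command carries the master's value.
> DEMO_HOME=/opt/app-on-master
> ssh worker1 echo "master value: $DEMO_HOME"
>
> # Single quotes: the literal text $DEMO_HOME goes over the wire and the remote
> # shell expands it against its own environment (if set there).
> ssh worker1 echo 'remote value: $DEMO_HOME'
> {code}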



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
