hadoop-common-dev mailing list archives

From Niels Basjes <Ni...@basjes.nl>
Subject Hortonworks scripting ...
Date Thu, 14 Aug 2014 22:23:21 GMT
Hi,

In core Hadoop you can make multiple clusters available on your (desktop)
client simply by keeping multiple directories of configuration files (i.e.
core-site.xml etc.) and selecting the one you want by switching the
environment settings (i.e. HADOOP_CONF_DIR and such).
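A minimal sketch of what I mean (the directory names here are made-up
examples, not actual paths from any distribution):

```shell
#!/bin/sh
# Hypothetical layout: one directory of *-site.xml files per cluster,
# e.g. ~/hadoop-conf/cluster-a and ~/hadoop-conf/cluster-b.

export HADOOP_CONF_DIR="$HOME/hadoop-conf/cluster-a"   # select cluster A
# hadoop fs -ls /                                      # would talk to cluster A

export HADOOP_CONF_DIR="$HOME/hadoop-conf/cluster-b"   # select cluster B
# hadoop fs -ls /                                      # would now talk to cluster B
```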

This doesn't work when I run under the Hortonworks 2.1.2 distribution.

There I find that all of the scripts placed in /usr/bin/ "muck about" with
the environment settings. Things from /etc/default are sourced and they
override my settings.
Now I can control part of it by pointing BIGTOP_DEFAULTS_DIR at a blank
directory.
But /usr/bin/pig has the sourcing of /etc/default/hadoop hardcoded into the
script.
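For reference, the partial workaround I mean looks roughly like this (a
sketch, assuming the wrappers honour BIGTOP_DEFAULTS_DIR when deciding what
to source from /etc/default):

```shell
#!/bin/sh
# Point BIGTOP_DEFAULTS_DIR at an empty directory so the Bigtop-style
# wrapper scripts find no defaults files to source, and my own
# HADOOP_CONF_DIR etc. survive untouched.
export BIGTOP_DEFAULTS_DIR="$(mktemp -d)"   # freshly created, guaranteed empty
```

This works for the wrappers that look the directory up via the variable, but
not for a script that sources /etc/default/hadoop by its absolute path.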

Why is this done this way?

P.S. Where is the git(?) repo located where this (apparently HW-specific)
scripting is maintained?

-- 
Best regards / Met vriendelijke groeten,

Niels Basjes
