ambari-dev mailing list archives

From "Scott Creeley (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-8490) Unconfigured env file /etc/hadoop/conf/hadoop-env.sh
Date Tue, 02 Dec 2014 17:28:12 GMT

    [ https://issues.apache.org/jira/browse/AMBARI-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14231778#comment-14231778 ]

Scott Creeley commented on AMBARI-8490:
---------------------------------------

[~ncole] [~afernandez]
Not sure exactly when it happened, but sometime in the last two weeks something changed that
broke our working 2.1.GlusterFS stack for the 1.7.0 release. The symptoms Daniel reported
above are what we are seeing now: commented-out, untouched *-env.sh files (yarn-env.sh,
hadoop-env.sh, mapred-env.sh), which we suspect are causing our problems (though we are not
sure). I've also tested the public repo today (1.7.0-169) and it shows the same issues that
Daniel reports.
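
For context, the step we believe is being skipped is the one that renders the env files from
the cluster's desired configs. A minimal sketch of that pattern, assuming the usual
resource_management API (the property name hadoop_env_sh_template is illustrative, standing
in for whatever params.py maps to the hadoop-env 'content' template):
{noformat}
import os

# Standard resource_management imports used by Ambari stack scripts.
from resource_management.core.resources.system import File
from resource_management.core.source import InlineTemplate

def write_hadoop_env(params):
    # Render hadoop-env.sh from the hadoop-env 'content' template. If this
    # step never runs (or the template is empty), the stock, fully commented
    # hadoop-env.sh shipped by the RPM is left in place and JAVA_HOME is
    # never exported.
    File(os.path.join(params.hadoop_conf_dir, "hadoop-env.sh"),
         owner=params.hdfs_user,
         group=params.user_group,
         content=InlineTemplate(params.hadoop_env_sh_template))
{noformat}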

On a side note, I'm also creating a 2.2.GlusterFS stack for the next Ambari release, and I'm
seeing the same behavior there. So we are missing some core change or configuration that is
breaking our stacks. Any thoughts on this? I reviewed the resolved JIRAs from the past three
weeks and nothing jumped out at me. Any insight would be appreciated so we can fix
2.1.GlusterFS and move forward with the 2.2.GlusterFS stack.
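
One check that would help narrow this down: whether the cluster's desired configs even carry
a hadoop-env 'content' template for the agents to render. A rough probe against the Ambari
REST API (the host, cluster name, and credentials below are hypothetical):
{noformat}
import base64
import json
import urllib2

BASE = "http://ambari-server:8080/api/v1/clusters/mycluster"  # hypothetical
AUTH = "Basic " + base64.b64encode("admin:admin")             # hypothetical

def get(url):
    req = urllib2.Request(url, headers={"Authorization": AUTH})
    return json.load(urllib2.urlopen(req))

# Look up the current tag for the hadoop-env config type, then fetch it.
cluster = get(BASE + "?fields=Clusters/desired_configs")
tag = cluster["Clusters"]["desired_configs"]["hadoop-env"]["tag"]
cfg = get(BASE + "/configurations?type=hadoop-env&tag=" + tag)
props = cfg["items"][0]["properties"]

# If 'content' is missing or empty, there is nothing to template into
# /etc/hadoop/conf/hadoop-env.sh on the agents.
print("hadoop-env content present: %s" % bool(props.get("content")))
{noformat}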

> Unconfigured env file /etc/hadoop/conf/hadoop-env.sh
> ----------------------------------------------------
>
>                 Key: AMBARI-8490
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8490
>             Project: Ambari
>          Issue Type: Bug
>    Affects Versions: 1.7.0
>         Environment: RHEL 6, HDP_2.1.GlusterFS stack
>            Reporter: Daniel Horak
>              Labels: glusterfs, hcfs
>
> I've tried to install HDP 2.1.GlusterFS on RHEL 6 via Ambari 1.7.0, and I'm not able to start any service because of {{Error: JAVA_HOME is not set and could not be found.}}
> {noformat}
> 2014-11-28 14:08:56,663 - Error while executing command 'start':
> Traceback (most recent call last):
>   File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
line 123, in execute
>     method(env)
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.1.GlusterFS/services/YARN/package/scripts/resourcemanager.py",
line 46, in start
>     action='start'
>   File "/var/lib/ambari-agent/cache/stacks/HDP/2.1.GlusterFS/services/YARN/package/scripts/service.py",
line 45, in service
>     not_if=no_op
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148,
in __init__
>     self.env.run()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line
149, in run
>     self.run_action(resource, action)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line
115, in run_action
>     provider_action()
>   File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
line 241, in action_run
>     raise ex
> Fail: Execution of 'export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh
--config /etc/hadoop/conf start resourcemanager' returned 1. Error: JAVA_HOME is not set and
could not be found.{noformat}
> It is probably caused by the "unconfigured" hadoop-env.sh file in /etc/hadoop/conf/ (the whole file is commented out).
> {noformat}
> cat /etc/hadoop/conf/hadoop-env.sh
> # Copyright 2011 The Apache Software Foundation
> # 
> # Licensed to the Apache Software Foundation (ASF) under one
> # or more contributor license agreements.  See the NOTICE file
> # distributed with this work for additional information
> # regarding copyright ownership.  The ASF licenses this file
> # to you under the Apache License, Version 2.0 (the
> # "License"); you may not use this file except in compliance
> # with the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> # Set Hadoop-specific environment variables here.
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
> # The java implementation to use.
> #export JAVA_HOME=${JAVA_HOME}
> # The jsvc implementation to use. Jsvc is required to run secure datanodes.
> #export JSVC_HOME=${JSVC_HOME}
> #export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
> # Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
> #for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
> #  if [ "$HADOOP_CLASSPATH" ]; then
> #    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
> #  else
> #    export HADOOP_CLASSPATH=$f
> #  fi
> #done
> # The maximum amount of heap to use, in MB. Default is 1000.
> #export HADOOP_HEAPSIZE=
> #export HADOOP_NAMENODE_INIT_HEAPSIZE=""
> # Extra Java runtime options.  Empty by default.
> #export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
> # Command specific options appended to HADOOP_OPTS when specified
> #export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
> #export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
> #export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
> #export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
> #export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
> # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
> #export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
> #HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
> # On secure datanodes, user to run the datanode as after dropping privileges
> #export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
> # Where log files are stored.  $HADOOP_HOME/logs by default.
> #export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
> # Where log files are stored in the secure data environment.
> #export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
> # The directory where pid files are stored. /tmp by default.
> # NOTE: this should be set to a directory that can only be written to by 
> #       the user that will run the hadoop daemons.  Otherwise there is the
> #       potential for a symlink attack.
> #export HADOOP_PID_DIR=${HADOOP_PID_DIR}
> #export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
> # A string representing this instance of hadoop. $USER by default.
> #export HADOOP_IDENT_STRING=$USER
> {noformat}
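> To see this across all of the env files at once, here is a quick throwaway check (a
> hypothetical diagnostic, not part of Ambari) that can be run on an affected node:
> {noformat}
> import glob
> import re
>
> # Flag env files whose JAVA_HOME export is still commented out, i.e. files
> # that Ambari never rendered from a template.
> for path in glob.glob("/etc/hadoop/conf/*-env.sh"):
>     with open(path) as f:
>         text = f.read()
>     # An active (uncommented) export line means the file was rendered.
>     rendered = re.search(r"^\s*export\s+JAVA_HOME=", text, re.MULTILINE)
>     print("%s: %s" % (path, "rendered" if rendered else "stock (all comments)"))
> {noformat}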
> {noformat}
> # rpm -qa ambari-*
> ambari-agent-1.7.0-168.x86_64
> ambari-server-1.7.0-168.noarch
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
