ambari-user mailing list archives

From Ravi Itha <ithar...@gmail.com>
Subject Re: NameNode can't be started via Ambari Web UI --> Problematic Property is: fs.defaultFS
Date Wed, 17 Sep 2014 08:57:53 GMT
Yusaku,

I have an update on this. In the meantime, I did the following:


   - Created a brand new VM and assigned it the hostname
   server3.mycompany.com
   - Updated both /etc/hosts & /etc/sysconfig/network files with that
   hostname (sketched just after this list).
   - Installed both Ambari Server and Agent on the same host
   - Created a single node cluster via Ambari Web UI
   - Installed NameNode + SNameNode + DataNode + YARN + other services
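
For reference, the host entries I mean look roughly like this (the IP
address is illustrative, not my actual one):

/etc/hosts:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.21.139 server3.mycompany.com server3

/etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=server3.mycompany.com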


This time, the NameNode started without any issue. Unsurprisingly, the
default value it took for fs.defaultFS was
"hdfs://server3.mycompany.com:8020".

The other difference is:

This time, when I gave the hostname as server3.mycompany.com, it did not
say this was not a valid FQDN. However, it did give me that warning in my
earlier case, i.e. with server_1.

So a hostname like server_1 is not a valid FQDN and not a good practice?
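
Incidentally, the "Incomplete HDFS URI, no host" error quoted below seems
to come from URI parsing itself rather than from Ambari: underscores are
not legal characters in DNS hostnames, so no host can be parsed out of
hdfs://server_1:8020. If that is right, the same failure should be
reproducible with any HDFS client command against such a URI, e.g.
(command illustrative):

su - hdfs -c 'hadoop fs -ls hdfs://server_1:8020/'
# expected: ls: Incomplete HDFS URI, no host: hdfs://server_1:8020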

Thanks for your help.

~Ravi Itha



On Wed, Sep 17, 2014 at 11:40 AM, Ravi Itha <itharavi@gmail.com> wrote:


> Thanks Yusaku,
>
> I am using Ambari v1.6.1. Yes, the default value it took for fs.defaultFS
> is "hdfs://server_1:8020".
>
> The output of hostname -f is: server_1
>
> And the contents of /etc/hosts are:
>
> 127.0.0.1 localhost.localdomain localhost
> ::1 localhost6.localdomain6 localhost6
> 192.168.21.138 server_1
> 192.168.21.137 ambari_server
>
> The FQDN I gave during host selection was: server_1
>
> As of now, the error is:
>
> safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020
> 2014-09-16 23:02:41,225 - Retrying after 10 seconds. Reason: Execution of
> 'su - hdfs -c 'hadoop dfsadmin -safemode get' | grep 'Safe mode is OFF''
> returned 1. DEPRECATED: Use of this script to execute hdfs command is
> deprecated.
> Instead use the hdfs command for it.
>
> safemode: Incomplete HDFS URI, no host: hdfs://server_1:8020
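>
> Side note: per the deprecation warning above, I believe the
> non-deprecated form of that check is:
>
> su - hdfs -c 'hdfs dfsadmin -safemode get'
>
> though it should fail with the same URI error until the hostname is fixed.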
>
>
> Please advise where I am making a mistake.
>
> -Ravi
>
> On Wed, Sep 17, 2014 at 1:59 AM, Yusaku Sako <yusaku@hortonworks.com>
> wrote:
>
>
>> Hi Ravi,
>>
>> What version of Ambari did you use, and how did you install the cluster?
>> Not sure if this would help, but on small test clusters, you should
>> define /etc/hosts on each machine, like so:
>>
>> 127.0.0.1 <localhost and other default entries>
>> ::1 <localhost and other default entries>
>> 192.168.64.101 host1.mycompany.com host1
>> 192.168.64.102 host2.mycompany.com host2
>> 192.168.64.103 host3.mycompany.com host3
>>
>> Make sure that on each machine, "hostname -f" returns the FQDN (such
>> as host1.mycompany.com) and "hostname" returns the short name (such as
>> host1).  Also, make sure that you can resolve all other hosts by FQDN.
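>>
>> For example, on host1 you would want something like:
>>
>> $ hostname -f
>> host1.mycompany.com
>> $ hostname
>> host1
>> $ ping -c 1 host2.mycompany.com    # FQDNs of other hosts should resolve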
>>
>> fs.defaultFS is set up automatically by Ambari and you should not have
>> to adjust it, provided that the networking is configured properly.
>> Ambari sets it to "hdfs://<FQDN of NN host>:8020" (e.g.,
>> "hdfs://host1.mycompany.com:8020").
>>
>> Yusaku
>>
>> On Tue, Sep 16, 2014 at 12:00 PM, Ravi Itha <itharavi@gmail.com> wrote:
>> > All,
>> >
>> > My Ambari cluster setup is as follows:
>> >
>> > Server 1: Ambari Server was installed
>> > Server 2: Ambari Agent was installed
>> > Server 3: Ambari Agent was installed
>> >
>> > I created a cluster with Server 2 and Server 3 and installed the
>> > following:
>> >
>> > Server 2 has NameNode
>> > Server 3 has SNameNode & DataNode
>> >
>> > When I try to start the NameNode from the UI, it does not start.
>> >
>> > Following are the errors:
>> >
>> > 1. safemode: Call From server_1/192.168.21.138 to server_1:8020 failed on
>> > connection exception: java.net.ConnectException: Connection refused; For
>> > more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>> >
>> > In this case, the value of fs.defaultFS = hdfs://192.168.21.138 (this IP
>> > is server_1's; I gave server_1 as the FQDN)
>> >
>> > 2. safemode: Call From server_1/192.168.21.138 to localhost:9000 failed on
>> > connection exception: java.net.ConnectException: Connection refused; For
>> > more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>> >
>> > In this case, the value of fs.defaultFS = hdfs://localhost
>> >
>> > Also, I cannot leave this field blank.
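>> >
>> > For reference, the ConnectionRefused wiki page above suggests checks
>> > along these lines (commands indicative):
>> >
>> > netstat -tlnp | grep 8020     # on the NameNode host: is anything listening?
>> > telnet 192.168.21.138 8020    # from another node: is the port reachable?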
>> >
>> > So can someone please tell me what the right value to set here is, and
>> > how I can fix the issue?
>> >
>> > ~Ravi Itha
>> >
>> >
>> >
>>
>>
>>
>
>
