hadoop-common-user mailing list archives

From Lei Cao <charlie.c...@hotmail.com>
Subject Re: Hadoop 2.7.3 cluster namenode not starting
Date Fri, 28 Apr 2017 02:22:07 GMT
Hi Mr. Bhushan,

Have you tried formatting the namenode?
Here's the command:
hdfs namenode -format

I've run into the same problem of the namenode failing to start, and this command fixed
it for me.
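A note of caution on the suggestion above (a sketch of the sequence, assuming HADOOP_HOME points at the 2.7.3 install directory): formatting re-initializes the NameNode's metadata directory, so it is only safe on a fresh cluster or one whose HDFS data you can discard.

```shell
# WARNING: -format erases existing HDFS metadata (dfs.namenode.name.dir);
# only run this on a new cluster or one whose data is disposable.
$HADOOP_HOME/bin/hdfs namenode -format

# Restart HDFS and confirm the NameNode actually came up:
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/bin/hdfs dfsadmin -report
```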

Hope this can help you.

Sincerely,
Lei Cao


On Apr 27, 2017, at 12:09, Brahma Reddy Battula <brahmareddy.battula@huawei.com>
wrote:

Please check “hostname -i”.


1)      What’s configured in the “master” file? (You shared only the slaves file.)


2)      Are you able to “ping master”?


3)      Can you configure it like this and check once?
                1.1.1.1 master
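The checks above can be sketched as a quick diagnostic to run on the master node (hypothetical output names; 1.1.1.1 stands in for the master's real, masked address):

```shell
hostname -i            # IP the OS reports for this host
getent hosts master    # IP that "master" resolves to (/etc/hosts or DNS)
ping -c 1 master       # basic reachability check

# The NameNode can only bind master:<port> if the resolved IP is actually
# assigned to a local interface; compare the above against:
ip -4 addr show
```

If `getent hosts master` returns an address that does not appear in `ip -4 addr show` (for example a stale entry or a 127.0.1.1 line in /etc/hosts), that mismatch would produce exactly the bind failure in the log.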


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
Sent: 27 April 2017 18:16
To: Brahma Reddy Battula
Cc: user@hadoop.apache.org
Subject: Re: Hadoop 2.7.3 cluster namenode not starting

Some additional info -
OS: CentOS 7
RAM: 8GB

Thanks
Bhushan Pathak


On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <bhushan.pathak02@gmail.com>
wrote:
Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. Per company policy I have masked the IP
addresses, so the originals are not shared.

The same config files & hosts file exist on all 3 nodes.

Thanks
Bhushan Pathak


On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <brahmareddy.battula@huawei.com>
wrote:
Are you sure that you are starting it on the same machine (the master)?

Please share “/etc/hosts” and the configuration files.


Regards
Brahma Reddy Battula

From: Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
Sent: 27 April 2017 17:18
To: user@hadoop.apache.org
Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting

Hello

I have a 3-node cluster on which I have installed Hadoop 2.7.3. I have updated the
core-site.xml, mapred-site.xml, slaves, hdfs-site.xml, yarn-site.xml, and hadoop-env.sh
files with basic settings on all 3 nodes.

When I execute start-dfs.sh on the master node, the namenode does not start. The logs contain
the following error -
2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start
namenode.
java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign
requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
        at org.apache.hadoop.ipc.Server.bind(Server.java:425)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:408)
        ... 13 more
2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
************************************************************/
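For the error itself: "Cannot assign requested address" means the JVM asked the kernel to bind master:51150 to an IP address that no local interface owns, which is why changing the port doesn't help. A minimal reproduction of the same failure outside Hadoop (my sketch, not from the thread; 192.0.2.1 is a reserved TEST-NET address that is never assigned locally):

```shell
# Ask the kernel to bind an address no local interface owns; this fails
# with EADDRNOTAVAIL, the errno the JVM surfaces as java.net.BindException
# "Cannot assign requested address".
python3 - <<'EOF'
import socket, errno
s = socket.socket()
try:
    s.bind(("192.0.2.1", 0))   # reserved TEST-NET-1 address, never local
    print("bound")
except OSError as e:
    print("EADDRNOTAVAIL" if e.errno == errno.EADDRNOTAVAIL else e.errno)
finally:
    s.close()
EOF
```

The fix is therefore to make "master" resolve to an address that is actually assigned to the master's interface, rather than to change ports.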



I have changed the port number multiple times, but I get the same error every time. How
do I get past this?



Thanks
Bhushan Pathak


