hadoop-common-user mailing list archives

From Bhushan Pathak <bhushan.patha...@gmail.com>
Subject Re: Hadoop 2.7.3 cluster namenode not starting
Date Thu, 27 Apr 2017 10:04:36 GMT
Yes, I'm running the command on the master node.

Attached are the config files & the hosts file. Per company policy I have
changed only the IP addresses, so that the original addresses are not
shared.

The same config files & hosts file exist on all 3 nodes.
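For anyone hitting this in the archive: "Cannot assign requested address" (as opposed to "Address already in use") usually means the address the NameNode tries to bind, i.e. whatever `master` resolves to in /etc/hosts, is not assigned to any interface on that machine, which would also explain why changing the port makes no difference. A minimal sketch of that check in Python (the hostname is just the one from the log below):

```python
import socket

def can_bind(host, port=0):
    """Return True if `host` resolves to an address this machine can bind.

    The NameNode fails at exactly this step: binding a socket to the IP
    that the configured hostname resolves to. If that IP is not assigned
    to a local interface, bind() raises EADDRNOTAVAIL ("Cannot assign
    requested address"), regardless of the port chosen.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        addr = socket.gethostbyname(host)  # resolve, e.g. via /etc/hosts
        s.bind((addr, port))               # port 0 = any free port
        return True
    except OSError:  # covers resolution failure and EADDRNOTAVAIL alike
        return False
    finally:
        s.close()

# Run this on the master node with the hostname from the error message:
print(can_bind("master"))  # False would explain the BindException
```

If it prints False while the /etc/hosts entry looks correct, the usual culprits are a stale IP left over from a network change, or a NAT'd/public IP in /etc/hosts that the host itself does not own.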

Thanks
Bhushan Pathak

On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
brahmareddy.battula@huawei.com> wrote:

> Are you sure that you are starting on the same machine (master)?
>
> Please share “/etc/hosts” and configuration files.
>
> Regards
>
> Brahma Reddy Battula
>
> *From:* Bhushan Pathak [mailto:bhushan.pathak02@gmail.com]
> *Sent:* 27 April 2017 17:18
> *To:* user@hadoop.apache.org
> *Subject:* Fwd: Hadoop 2.7.3 cluster namenode not starting
>
> Hello
>
> I have a 3-node cluster where I have installed Hadoop 2.7.3. I have
> updated the core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
> yarn-site.xml, and hadoop-env.sh files with basic settings on all 3 nodes.
>
> When I execute start-dfs.sh on the master node, the namenode does not
> start. The logs contain the following error -
>
> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
> Caused by: java.net.BindException: Cannot assign requested address
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>         ... 13 more
> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
> ************************************************************/
>
> I have changed the port number multiple times; every time I get the same
> error. How do I get past this?
>
> Thanks
>
> Bhushan Pathak
>
