hadoop-common-dev mailing list archives

From kripal kashyav <kripalkash...@gmail.com>
Subject Re: Getting address bind Exception when starting single node hadoop cluster
Date Tue, 29 May 2012 11:59:16 GMT
Thanks Shashwat for helping me out.
It worked: port 6500 was configured in a file under /etc/hadoop, while I was
always changing the hadoop/conf/*.xml files.
It was an environment-variable configuration issue.
@steeve Thanks for your suggestion.
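
In case it helps anyone else: a quick way to find where a stray port like
6500 is configured is to grep both config locations (the paths here are
examples; adjust them to your install):

grep -rn "6500" /etc/hadoop hadoop/conf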

On Tue, May 29, 2012 at 1:18 PM, shashwat shriparv <dwivedishashwat@gmail.com> wrote:

> Hi,
>
> Make your hosts file look like this:
>
>
> 127.0.0.1       localhost
> 10.86.29.24     chn-29-24.mmt.mmt   chn-29-24
>
>
>
> If you have not specified port 6500 anywhere, then why is it trying to
> connect to localhost/127.0.0.1:6500?
>
> *In core-site.xml, add entries like these:*
>
>
> <configuration>
>     <property>
>         <name>fs.default.name</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/shashwat/hdfs/tmp</value>
>     </property>
>     <property>
>         <name>hadoop.proxyuser.oozie.hosts</name>
>         <value>*</value>
>     </property>
>     <property>
>         <name>hadoop.proxyuser.oozie.groups</name>
>         <value>*</value>
>     </property>
> </configuration>
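>
> Note that hadoop.tmp.dir must point somewhere the user running Hadoop can
> write; the path above is only an example, so create your own first:
>
> mkdir -p /home/shashwat/hdfs/tmp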
>
>
>
> *and in mapred-site.xml, something like this:*
>
> <configuration>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>localhost:9001</value>
>     </property>
> </configuration>
>
> *and in hdfs-site.xml, something like this:*
>
> <configuration>
>     <property>
>         <name>dfs.name.dir</name>
>         <value>/home/<your user name here>/hdfs/name</value>
>     </property>
>     <property>
>         <name>dfs.data.dir</name>
>         <value>/home/<your user name here>/hdfs/data</value>
>     </property>
>     <property>
>         <name>dfs.datanode.max.xcievers</name>
>         <value>4096</value>
>     </property>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
> </configuration>
>
>
>
> You have "10.86.29.24     chn-29-24.mmt.mmt   chn-29-24" in your hosts
> file, so I am assuming 10.86.29.24 is the IP of your current system,
> right? Then instead of localhost you can give chn-29-24.mmt.mmt or
> chn-29-24. After making these changes, delete your hdfs folder if you
> know where it is, or just format the namenode, and check whether it is
> working.
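>
> For example (just a sketch; substitute your own hostname), core-site.xml
> would then have:
>
> <property>
>     <name>fs.default.name</name>
>     <value>hdfs://chn-29-24.mmt.mmt:9000</value>
> </property>
>
> and you can format the namenode from inside the Hadoop directory with:
>
> bin/hadoop namenode -format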
>
>
> Also do:
>
> chown -R <your user name> <your hadoop folder name>
> chmod -R 755 <your hadoop folder name>
>
> to fix the ownership and permissions of the folder.
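>
> For instance, if your user is hduser and Hadoop is unpacked at
> /usr/local/hadoop (both are only examples):
>
> sudo chown -R hduser /usr/local/hadoop
> sudo chmod -R 755 /usr/local/hadoop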
>
> And before trying anything else:
>
> ping localhost
> ping your IP
> ping your hostname
>
> If all of the above work,
>
> ssh to all of the above; if that works too, then you can proceed.
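>
> Concretely, using the IP and hostname from your hosts file above, that
> check would look something like:
>
> ping -c 1 localhost
> ping -c 1 10.86.29.24
> ping -c 1 chn-29-24
> ssh localhost
> ssh chn-29-24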
>
> On Tue, May 29, 2012 at 11:40 AM, kripal kashyav <kripalkashyap@gmail.com> wrote:
>
> > Hi Shashwat!
> > I am getting the host name as chn-29-24.
> > My hosts config is:
> > 127.0.0.1       localhost.localdomain localhost
> > ::1             localhost6.localdomain6 localhost6
> > 10.86.29.24     chn-29-24 chn-29-24.mmt.mmt
> > I have already formatted the namenode successfully.
> > I am not sure where to change the port number, because I have not
> > configured 6500 anywhere.
> > Thanks in advance for helping me out.
> >
> >
> > On Thu, May 24, 2012 at 1:35 PM, shashwat shriparv <dwivedishashwat@gmail.com> wrote:
> >
> > > Is there a line in your hosts file like
> > > 127.0.1.1 localhost ? If so, comment it out with #.
> > >
> > > Did you format your namenode? If not, format it.
> > >
> > > 192.168.2.118        yourhostname
> > >
> > > (the first column is the IP of your machine, the second is its name)
> > >
> > > What is your hostname? (You can get it with the command "hostname" in
> > > a terminal.)
> > >
> > > Follow these steps, format your namenode, and try to start again.
> > >
> > > Also try changing the port number you are using: give it something
> > > like 9000, or some other port that is not already in use. You can
> > > verify this with the command "netstat -nl | grep '<port you want to
> > > use>'"; if it produces output, that port is taken, so try another one.
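> > >
> > > For example, to check whether port 9000 is free before pointing your
> > > configs at it (no output means the port is unused):
> > >
> > > netstat -nl | grep 9000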
> > >
> > >
> > > Let me know if it solved your problem...
> > >
> > > Regards
> > >
> > > ∞
> > > Shashwat Shriparv
> > >
> > >
> > > On Thu, May 24, 2012 at 12:42 PM, kripal kashyav <kripalkashyap@gmail.com> wrote:
> > >
> > > > Hi!
> > > > I am trying to set up Hadoop 1.0.2 as a single-node cluster.
> > > > After starting it, when I execute the jps command I get the following:
> > > > NameNode
> > > > 13478 Jps
> > > > 13187 SecondaryNameNode
> > > >
> > > > In the log files I get the following errors for the tasktracker:
> > > >
> > > > 2012-05-24 12:39:21,268 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hduser cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.InterTrackerProtocol
> > > >        at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:149)
> > > >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > >        at java.lang.reflect.Method.invoke(Method.java:597)
> > > >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> > > >        at java.security.AccessController.doPrivileged(Native Method)
> > > >        at javax.security.auth.Subject.doAs(Subject.java:396)
> > > >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
> > > >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> > > >
> > > > 2012-05-24 12:39:21,268 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.InterTrackerProtocol
> > > >        at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:149)
> > > >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > >        at java.lang.reflect.Method.invoke(Method.java:597)
> > > >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
> > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
> > > >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
> > > >        at java.security.AccessController.doPrivileged(Native Method)
> > > >        at javax.security.auth.Subject.doAs(Subject.java:396)
> > > >        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
> > > >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
> > > >
> > > >        at org.apache.hadoop.ipc.Client.call(Client.java:1066)
> > > >        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
> > > >        at org.apache.hadoop.mapred.$Proxy5.getProtocolVersion(Unknown Source)
> > > >        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
> > > >        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:370)
> > > >        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:429)
> > > >        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:331)
> > > >        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:296)
> > > >        at org.apache.hadoop.mapr
> > > >
> > > > And the following error for the jobtracker:
> > > >
> > > > FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to localhost/127.0.0.1:6500 : Address already in use
> > > >        at org.apache.hadoop.ipc.Server.bind(Server.java:227)
> > > >        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
> > > >        at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
> > > >        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
> > > >        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
> > > >        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
> > > >        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
> > > >        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
> > > >        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
> > > >        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
> > > >        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
> > > > Caused by: java.net.BindException: Address already in use
> > > >        at sun.nio.ch.Net.bind(Native Method)
> > > >        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
> > > >        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> > > >        at org.apache.hadoop.ipc.Server.bind(Server.java:225)
> > > >        ... 10 more
> > > >
> > > > Please help; I am very new to Hadoop.
> > > >
> > > >
> > > >
> > > > Thanks,
> > > > kripal
> > > >
> > >
> > >
> > >
> > > --
> > >
> > >
> > > ∞
> > > Shashwat Shriparv
> > >
> >
>
>
>
> --
>
>
> ∞
> Shashwat Shriparv
>
