Subject: Re: datanode can not start
From: varun kumar <varun.uid@gmail.com>
To: user@hadoop.apache.org, justlooks@gmail.com
Date: Wed, 26 Jun 2013 15:27:09 +0530

Hi Huang,

Either some other service is already running on that port, or the previous
datanode process was not stopped cleanly.

Regards,
Varun Kumar.P

On Wed, Jun 26, 2013 at 3:13 PM, ch huang <justlooks@gmail.com> wrote:

> I have a datanode from an old cluster still running, so there is a port
> conflict. I changed the default port; here is my hdfs-site.xml:
>
> <configuration>
>         <property>
>                 <name>dfs.name.dir</name>
>                 <value>/data/hadoopnamespace</value>
>         </property>
>         <property>
>                 <name>dfs.data.dir</name>
>                 <value>/data/hadoopdata</value>
>         </property>
>         <property>
>                 <name>dfs.datanode.address</name>
>                 <value>0.0.0.0:50011</value>
>         </property>
>         <property>
>                 <name>dfs.permissions</name>
>                 <value>false</value>
>         </property>
>         <property>
>                 <name>dfs.datanode.max.xcievers</name>
>                 <value>4096</value>
>         </property>
>         <property>
>                 <name>dfs.webhdfs.enabled</name>
>                 <value>true</value>
>         </property>
>         <property>
>                 <name>dfs.http.address</name>
>                 <value>192.168.10.22:50070</value>
>         </property>
> </configuration>
>
> 2013-06-26 17:37:24,923 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = CH34/192.168.10.34
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:03:02 PDT 2012
> ************************************************************/
> 2013-06-26 17:37:25,335 INFO org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
> 2013-06-26 17:37:25,421 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
> 2013-06-26 17:37:25,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at 50011
> 2013-06-26 17:37:25,430 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
> 2013-06-26 17:37:25,470 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2013-06-26 17:37:25,513 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2013-06-26 17:37:25,518 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
> 2013-06-26 17:37:25,519 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
> 2013-06-26 17:37:25,619 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
> 2013-06-26 17:37:25,619 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
> 2013-06-26 17:37:25,620 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException: Address already in use
>         at sun.nio.ch.Net.bind(Native Method)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>         at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>         at org.apache.hadoop.http.HttpServer.start(HttpServer.java:564)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:505)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:303)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1643)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1601)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1727)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1744)
> 2013-06-26 17:37:25,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at CH34/192.168.10.34
> ************************************************************/

--
Regards,
Varun Kumar.P
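
Note on the trace above: the DataNode opened its streaming server on the changed port 50011 successfully, and only failed while opening the embedded web server listener on 50075, the default for dfs.datanode.http.address. Only dfs.datanode.address was moved in the posted hdfs-site.xml, so the old datanode still holding 50075 would explain the BindException. A minimal sketch of the config-side workaround, assuming port 50076 is actually free on CH34 (not verified here), is to move the datanode HTTP port as well:

        <property>
                <!-- datanode web UI; default is 0.0.0.0:50075 -->
                <name>dfs.datanode.http.address</name>
                <!-- 50076 is an assumed free port; pick any unused one -->
                <value>0.0.0.0:50076</value>
        </property>

Alternatively, stop whatever still owns 50075 (for example, the leftover datanode from the old cluster) before restarting; once no process holds the port, the default configuration should start cleanly.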