Subject: Hadoop DataNodes trying to reconnect to themselves
From: Binita Bharati
Date: Mon, 5 Oct 2015 11:42:55 +0530
To: user@hadoop.apache.org

Hi,
I am using Hadoop 2.7 on an Ubuntu 14.04 cluster. I have 1 NameNode (IP - 192.168.56.101, hostname - ubuntu) and 2 DataNodes (IP - 192.168.56.102, hostname - ubuntu2, and IP - 192.168.56.103, hostname - ubuntu3).

When I run:
$HADOOP_HOME/bin/hadoop fs -put /home/file.txt /user/user1

It fails with the following error:

==============================================

15/10/04 15:18:11 WARN hdfs.DFSClient: DataStreamer Exception

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hdfs1/1.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)

==============================================
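
In case it is useful, this is how I would check how many DataNodes the NameNode currently sees (just a sketch of the check, assuming the standard HDFS CLI under $HADOOP_HOME; I have not included its output here):

==============================================

# run on the NameNode (ubuntu); prints the live/dead DataNodes registered with it
$HADOOP_HOME/bin/hdfs dfsadmin -report

==============================================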

I see in my DataNode logs that each DataNode is trying to connect to itself, instead of to the NameNode.

==============================================

2015-10-04 13:42:14,498 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ubuntu2/192.168.56.102:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

==============================================
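
Since the retry target above is the DataNode's own address, this is how I would check what the NameNode's hostname resolves to on a DataNode VM (a sketch only; getent is assumed to be available on Ubuntu 14.04):

==============================================

# run on ubuntu2 (or ubuntu3); shows the address the name "ubuntu" resolves to there
getent hosts ubuntu

==============================================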

But the core-site.xml entry is identical on both VMs, and it points only to the NameNode (hdfs at port 9000):

==============================================

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://ubuntu:9000</value>
    </property>
</configuration>

==============================================
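
For reference, this is how I expect the hostnames to map to IPs on every node (my assumption of the relevant /etc/hosts entries, based on the addresses above; the actual files on the VMs may differ):

==============================================

# expected /etc/hosts entries on each VM (assumption, not copied from the machines)
192.168.56.101    ubuntu
192.168.56.102    ubuntu2
192.168.56.103    ubuntu3

==============================================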

Thanks

Binita
