Subject: Re: hadoop 2.2.0 cluster setup error : could only be replicated to 0 nodes instead of minReplication (=1)
From: Manoj Khangaonkar <khangaonkar@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 24 Feb 2014 18:35:41 -0800

Hi

Can one of the implementors comment on what conditions trigger this error?

All the data nodes show up as commissioned, and there are no errors during startup.

If I google for this error, there are several posts reporting the issue, but most of the answers offer weak solutions like reformatting and restarting, none of which help.

My guess is that this is a networking / port access issue. If anyone can shed light on what conditions cause this error, it would be much appreciated.
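For reference, these are the checks I plan to run next. The hostnames come from the configs quoted below, and the ports are assumed to be the Hadoop 2.x defaults (9000 for the namenode RPC port set in fs.default.name, 50010 for the datanode data-transfer port), so treat this as a sketch rather than a recipe:

# on machine 1 (the namenode): do both datanodes show up as live, with non-zero remaining DFS capacity?
bin/hdfs dfsadmin -report

# from machine 2: is the namenode RPC port reachable?
telnet n-prd-bst-beacon01 9000

# from machine 1, where the copyFromLocal client runs: is the datanode port on machine 2 reachable?
telnet n-prd-bst-beacon02.advertising.aol.com 50010

If the report shows zero remaining capacity on the datanodes, or either port is unreachable, that would at least narrow it down.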
regards

On Mon, Feb 24, 2014 at 1:07 PM, Manoj Khangaonkar <khangaonkar@gmail.com> wrote:

> Hi,
>
> I set up a cluster with
>
> machine 1 : namenode and datanode
> machine 2 : datanode
>
> A simple HDFS copy is not working. Can someone help with this issue?
> Several folks have posted this error on the web, but I have not seen a
> good reason or solution.
>
> command:
> bin/hadoop fs -copyFromLocal ~/hello /manoj/
>
> Error:
> copyFromLocal: File /manoj/hello._COPYING_ could only be replicated to 0
> nodes instead of minReplication (=1). There are 2 datanode(s) running and
> no node(s) are excluded in this operation.
> 14/02/24 12:56:38 ERROR hdfs.DFSClient: Failed to close file /manoj/hello._COPYING_
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /manoj/hello._COPYING_ could only be replicated to 0 nodes instead of
> minReplication (=1). There are 2 datanode(s) running and no node(s) are
> excluded in this operation.
>     at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>     at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
>     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
>
> My setup is very basic:
>
> core-site.xml
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://n-prd-bst-beacon01:9000</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/home/manoj/hadoop-2.2.0/tmp</value>
>   </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.permissions</name>
>     <value>false</value>
>   </property>
> </configuration>
>
> slaves:
> localhost
> n-prd-bst-beacon02.advertising.aol.com
>
> Namenode and Datanode (on both machines) are up & running without errors.
>
> regards
>
> --
> http://khangaonkar.blogspot.com/

--
http://khangaonkar.blogspot.com/