Subject: Re: error in copy from local file into HDFS
From: ch huang <justlooks@gmail.com>
To: user@hadoop.apache.org
Date: Fri, 6 Dec 2013 11:33:07 +0800

hi:
      You are right, my DN disk was full. I deleted some files and now it works. Thanks.
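For anyone hitting the same error, the datanode capacity and decommission state can be checked from the command line. A minimal sketch, assuming the Hadoop 2.x client scripts are on the PATH (the data directory below is only an example path, not taken from this cluster):

[root@CHBM224 test]# hdfs dfsadmin -report
# prints a per-datanode report: Configured Capacity, DFS Used,
# DFS Remaining, and each node's Decommission Status
[root@CHBM224 test]# df -h /data/dfs/dn
# run on a datanode: local free space on the volume holding
# dfs.datanode.data.dir (example path)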
On Fri, Dec 6, 2013 at 11:28 AM, Vinayakumar B <vinayakumar.b@huawei.com> wrote:

> Hi Ch huang,
>
> Please check whether all datanodes in your cluster have enough disk
> space, and that the number of non-decommissioned datanodes is non-zero.
>
> Thanks and regards,
> Vinayakumar B
>
> From: ch huang [mailto:justlooks@gmail.com]
> Sent: 06 December 2013 07:14
> To: user@hadoop.apache.org
> Subject: error in copy from local file into HDFS
>
> hi, maillist:
>     I got an error when I put a local file into HDFS:
>
> [root@CHBM224 test]# hadoop fs -copyFromLocal /tmp/aa /alex/
> 13/12/06 09:40:29 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /alex/aa._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1237)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>         at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>         at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1177)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1030)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:488)
> copyFromLocal: File /alex/aa._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
> 13/12/06 09:40:29 ERROR hdfs.DFSClient: Failed to close file /alex/aa._COPYING_
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /alex/aa._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1237)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>         at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>         at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1177)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1030)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:488)
>
> [root@CHBM224 test]# hadoop fs -copyFromLocal /tmp/aa /user/root/
> 13/12/06 09:40:52 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/aa._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1237)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>         at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>         at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1177)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1030)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:488)
> copyFromLocal: File /user/root/aa._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
> 13/12/06 09:40:52 ERROR hdfs.DFSClient: Failed to close file /user/root/aa._COPYING_
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/aa._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 4 datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1237)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>         at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>         at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1177)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1030)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:488)
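For reference, the minReplication (=1) shown in the error is the namenode's dfs.namenode.replication.min setting, and datanodes can be told to keep headroom for non-HDFS use with dfs.datanode.du.reserved. A minimal hdfs-site.xml sketch with example values only, not a recommendation for any particular cluster:

  <!-- hdfs-site.xml: example values only -->
  <property>
    <name>dfs.namenode.replication.min</name>
    <value>1</value>   <!-- the minReplication referenced in the error message -->
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>10737418240</value>   <!-- reserve 10 GB per volume for non-DFS use -->
  </property>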