Subject: Re: why a error about replicated
From: Josh Elser
To: user@accumulo.apache.org
Date: Thu, 22 Jan 2015 09:30:14 -0500

How much free space do you still have in HDFS? If HDFS doesn't have enough free space to create the file, I believe you'll see the error that you have outlined. The way we create the file will also end up requiring at least one GB with the default configuration.

Also make sure to take into account any reserved percentage of HDFS when considering the HDFS usage.

On Jan 22, 2015 1:46 AM, "Lu.Qin" <luq.java@gmail.com> wrote:

> Hi, I have an Accumulo cluster and it has run for 10 days, but it is
> showing me many errors now.
>
> 2015-01-22 13:04:21,161 [hdfs.DFSClient] WARN : Error while syncing
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /accumulo/wal/+9997/226dce4f-4e14-4704-b811-532afe0b0fb3 could only be
> replicated to 0 nodes instead of minReplication (=1). There are 3
> datanode(s) running and no node(s) are excluded in this operation.
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1471)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2791)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:606)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:455)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1411)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
>         at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:368)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1449)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1270)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
>
> I use hadoop fs to put a file into Hadoop and it works fine, and the file
> has 2 replicas. Why can Accumulo not work?
>
> And I see there are so many files of only 0 B in /accumulo/wal/***/. Why?
>
> Thanks.
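The free-space arithmetic in the reply above can be sketched as follows. This is an illustrative model, not actual HDFS or Accumulo code: the 1 GB figure stands in for Accumulo's default WAL pre-allocation mentioned in the reply, `reserved` stands in for HDFS's `dfs.datanode.du.reserved` setting, and all the byte counts are hypothetical.

```python
# Sketch of the capacity check the NameNode effectively performs when
# choosing a datanode for a new block. Illustrative only; numbers are made up.

WAL_PREALLOC = 1 * 1024**3  # ~1 GB per WAL with Accumulo's default configuration


def can_host_block(capacity, dfs_used, non_dfs_used, reserved):
    """Return True if a datanode has room for the pre-allocated WAL block.

    `reserved` models dfs.datanode.du.reserved: space the datanode will
    never use for HDFS blocks, so it is subtracted from raw free space.
    """
    raw_free = capacity - dfs_used - non_dfs_used
    usable = raw_free - reserved
    return usable >= WAL_PREALLOC


# A node with 2 GB of raw free space but 1.5 GB reserved cannot take the
# block, even though `df` would still report free space:
print(can_host_block(capacity=100 * 1024**3,
                     dfs_used=90 * 1024**3,
                     non_dfs_used=8 * 1024**3,
                     reserved=int(1.5 * 1024**3)))  # False
```

This is why a small `hadoop fs -put` can succeed while Accumulo's WAL creation fails: the small file needs far less contiguous room than the pre-allocated log, so every datanode can be rejected for the WAL block while still accepting ordinary writes.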
