Subject: Re: datanode error "Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_"
From: yypvsxf19870706
Date: Wed, 31 Jul 2013 18:37:56 +0800
To: user@hadoop.apache.org

Hi

I think it is important to make clear how the replica went missing. Here is one scenario: a disk on your datanode failed, or the replica was simply deleted, so the append failed.
Can you find similar logs on your cluster?

Sent from my iPhone

On 2013-7-31, at 15:01, Jitendra Yadav wrote:

> Hi,
>
> I think there is some block synchronization issue in your HDFS cluster. Frankly, I haven't faced this issue yet.
>
> I believe you need to refresh your namenode fsimage to make it up to date with your datanodes.
>
> Thanks.
> On Wed, Jul 31, 2013 at 6:16 AM, ch huang wrote:
>> Thanks for the reply. The block did not exist, but why did it go missing?
>>
>> On Wed, Jul 31, 2013 at 2:02 AM, Jitendra Yadav wrote:
>>> Hi,
>>>
>>> Can you please check the existence/status of any of the mentioned blocks
>>> in your HDFS cluster?
>>>
>>> Command:
>>> hdfs fsck / -blocks | grep '<blk number>'
>>>
>>> Thanks
>>>
>>> On 7/30/13, ch huang wrote:
>>> > I do not know how to solve this; can anyone help?
>>> >
>>> > 2013-07-30 17:28:40,953 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861
>>> > 2013-07-30 17:28:40,953 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation src: /192.168.2.209:4421 dest: /192.168.10.34:50011
>>> > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861
>>> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:353)
>>> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:489)
>>> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
>>> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:168)
>>> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:451)
>>> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
>>> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
>>> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>>> >         at java.lang.Thread.run(Thread.java:662)
>>> > 2013-07-30 17:28:40,978 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863 src: /192.168.2.209:4423 dest: /192.168.10.34:50011
>>> > 2013-07-30 17:28:40,978 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863
>>> > 2013-07-30 17:28:40,978 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation src: /192.168.2.209:4423 dest: /192.168.10.34:50011
>>> > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863
>>> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:353)
>>> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:489)
>>> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
>>> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:168)
>>> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:451)
>>> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
>>> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
>>> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>>> >         at java.lang.Thread.run(Thread.java:662)
>>> > 2013-07-30 17:28:41,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1099828917-192.168.10.22-1373361366827:blk_7728515140810267551_458865 src: /192.168.2.209:4426 dest: /192.168.10.34:50011
>>> > 2013-07-30 17:28:41,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_7728515140810267551_458865 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7728515140810267551_458865
>>> > 2013-07-30 17:28:41,002 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation src: /192.168.2.209:4426 dest: /192.168.10.34:50011
>>> >
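For anyone hitting the same errors: before running the fsck check suggested above, it can help to first collect the distinct block IDs the datanode is complaining about. A minimal sketch in Python (a hypothetical helper, not part of Hadoop) that extracts them from datanode log lines like the ones quoted in this thread:

```python
import re

# Matches block IDs such as "blk_7796221171187533460_458861";
# the block ID itself may be negative, as in the second error above.
BLOCK_RE = re.compile(r"blk_(-?\d+)_(\d+)")

def missing_replica_blocks(log_lines):
    """Return the distinct block IDs mentioned in
    'Cannot append to a non-existent replica' log lines."""
    blocks = set()
    for line in log_lines:
        if "Cannot append to a non-existent replica" in line:
            m = BLOCK_RE.search(line)
            if m:
                blocks.add("blk_" + m.group(1))
    return sorted(blocks)

if __name__ == "__main__":
    sample = [
        "2013-07-30 17:28:40,953 ERROR ...DataNode: ... Cannot append to a "
        "non-existent replica BP-1099828917-192.168.10.22-1373361366827:"
        "blk_7796221171187533460_458861",
        "2013-07-30 17:28:40,978 ERROR ...DataNode: ... Cannot append to a "
        "non-existent replica BP-1099828917-192.168.10.22-1373361366827:"
        "blk_-2057894024775992993_458863",
    ]
    for blk in missing_replica_blocks(sample):
        # Each ID can then be checked with: hdfs fsck / -blocks | grep <blk>
        print(blk)
```

Each printed block ID can then be fed to the `hdfs fsck` command from earlier in the thread to confirm whether the namenode still knows about that block.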