Subject: Re: datanode error "Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_"
From: ch huang <justlooks@gmail.com>
To: user@hadoop.apache.org
Date: Wed, 31 Jul 2013 08:46:18 +0800

Thanks for the reply. You are right that the block does not exist, but why would it go missing?

On Wed, Jul 31, 2013 at 2:02 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:
> Hi,
>
> Can you please check the existence/status of any of the mentioned blocks
> in your HDFS cluster?
>
> Command:
> hdfs fsck / -blocks | grep '<blk number>'
>
> Thanks
>
> On 7/30/13, ch huang <justlooks@gmail.com> wrote:
> > I do not know how to solve this; can anyone help?
> >
> > 2013-07-30 17:28:40,953 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861
> > 2013-07-30 17:28:40,953 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.2.209:4421 dest: /192.168.10.34:50011
> > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7796221171187533460_458861
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:353)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:489)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:168)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:451)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2013-07-30 17:28:40,978 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863 src: /192.168.2.209:4423 dest: /192.168.10.34:50011
> > 2013-07-30 17:28:40,978 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863
> > 2013-07-30 17:28:40,978 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.2.209:4423 dest: /192.168.10.34:50011
> > org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_-2057894024775992993_458863
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:353)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:489)
> >         at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:92)
> >         at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:168)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:451)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
> >         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
> >         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2013-07-30 17:28:41,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1099828917-192.168.10.22-1373361366827:blk_7728515140810267551_458865 src: /192.168.2.209:4426 dest: /192.168.10.34:50011
> > 2013-07-30 17:28:41,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1099828917-192.168.10.22-1373361366827:blk_7728515140810267551_458865 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1099828917-192.168.10.22-1373361366827:blk_7728515140810267551_458865
> > 2013-07-30 17:28:41,002 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: CH34:50011:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.2.209:4426 dest: /192.168.10.34:50011
> >
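[Editor's note: a sketch of the fsck check suggested above, assuming a running HDFS cluster with a configured `hdfs` client. In Hadoop 2.x the flag is `-blocks` (combined with `-files` to print each file's block IDs); the block ID below is the one from the log in this thread.]

```shell
# Print every file with its block list and search for the block
# reported in the DataNode log (no match = no live file owns it):
hdfs fsck / -files -blocks | grep 'blk_7796221171187533460'

# List all corrupt or missing blocks cluster-wide, with the files
# they belong to:
hdfs fsck / -list-corruptfileblocks
```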