From: Anurag Tangri
Subject: Re: issue about write append into hdfs "ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation"
Date: Thu, 20 Feb 2014 20:25:55 -0800
To: user@hadoop.apache.org

Did you check your Unix open-file limit and the DataNode xceiver value?

Is it too low for the number of blocks and amount of data in your cluster?
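The two checks above can be sketched as shell commands. This is a minimal sketch: the config path /etc/hadoop/conf/hdfs-site.xml is the usual default location and may differ on your cluster, and the limit must be checked as the user that actually runs the DataNode process.

```shell
# Per-process open-file limit for the current shell; the DataNode
# inherits its own limit, so run this as the DataNode's user.
ulimit -n

# Look for an explicit xceiver cap in hdfs-site.xml (property name as
# spelled in Hadoop of this era; path is an assumed default).
grep -A1 'dfs.datanode.max.xcievers' /etc/hadoop/conf/hdfs-site.xml || true
```

If `ulimit -n` reports a small value (e.g. the common default of 1024) on a node serving many blocks, reads can fail exactly as shown in the log below.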

Thanks,
Anurag Tangri
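If those values do turn out to be low, one common remediation in this era of Hadoop is to raise the xceiver cap in hdfs-site.xml and restart the DataNodes. A sketch, assuming the legacy (intentionally misspelled) property name; 4096 is a frequently used value, not a recommendation for your specific cluster:

```xml
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```

The OS open-file limit (nofile) for the DataNode user usually needs to be raised alongside it, since each transceiver thread holds file descriptors open.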

On Feb 20, 2014, at 6:57 PM, ch huang <justlooks@gmail.com> wrote:

hi, maillist:
          I see the following info in my HDFS log. The block belongs to a file written by Scribe, and I do not know why this happens.
Is there any limit in the HDFS system?
 
2014-02-21 10:33:30,235 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opReadBlock BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240 received exception java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
  getNumBytes()     = 35840
  getBytesOnDisk()  = 35840
  getVisibleLength()= -1
  getVolume()       = /data/4/dn/current
  getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
  unlinked=false
2014-02-21 10:33:30,235 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.11.12, storageID=DS-754202132-192.168.11.12-50010-1382443087835, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-0e777b8c-19f3-44a1-8af1-916877f2506c;nsid=2086828354;c=0):Got exception while serving BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240 to /192.168.11.15:56564
java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
  getNumBytes()     = 35840
  getBytesOnDisk()  = 35840
  getVisibleLength()= -1
  getVolume()       = /data/4/dn/current
  getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
  unlinked=false
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:744)
2014-02-21 10:33:30,236 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ch12:50010:DataXceiver error processing READ_BLOCK operation  src: /192.168.11.15:56564 dest: /192.168.11.12:50010
java.io.IOException: Replica gen stamp < block genstamp, block=BP-1043055049-192.168.11.11-1382442676609:blk_-8536558734938003208_3823240, replica=ReplicaWaitingToBeRecovered, blk_-8536558734938003208_3820986, RWR
  getNumBytes()     = 35840
  getBytesOnDisk()  = 35840
  getVisibleLength()= -1
  getVolume()       = /data/4/dn/current
  getBlockFile()    = /data/4/dn/current/BP-1043055049-192.168.11.11-1382442676609/current/rbw/blk_-8536558734938003208
  unlinked=false
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:205)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:326)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:92)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:64)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:744)