From: Jean-Marc Spaggiari
Date: Fri, 8 Nov 2013 12:10:09 -0500
Subject: Could not obtain block?
To: user@hadoop.apache.org

Hi,

I have a situation here that I'm wondering where it's coming from. I know that I'm using a pretty old version... 1.0.3.

When I fsck a file, I can see that there is one block, but when I try to get the file, I'm not able to retrieve this block. I thought it was because the file was open, so I killed all the processes related to this file, but I still cannot access it (logs below). I tried to stop and restart Hadoop, but it's still the same issue. Worst case, I can "simply" delete this file, but my goal is more to understand the situation.
Thanks,

JM

hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fsck /hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
FSCK started by hadoop from /192.168.23.7 for path /hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597 at Fri Nov 08 12:03:49 EST 2013
Status: HEALTHY
 Total size:    0 B (Total open files size: 1140681 B)
 Total dirs:    0
 Total files:   0 (Files currently being written: 1)
 Total blocks (validated):      0 (Total open file blocks (not validated): 1)
 Minimally replicated blocks:   0
 Over-replicated blocks:        0
 Under-replicated blocks:       0
 Mis-replicated blocks:         0
 Default replication factor:    3
 Average block replication:     0.0
 Corrupt blocks:                0
 Missing replicas:              0
 Number of data-nodes:          8
 Number of racks:               1
FSCK ended at Fri Nov 08 12:03:49 EST 2013 in 0 milliseconds

The filesystem under path '/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597' is HEALTHY

hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fs -get /hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597 .
13/11/08 12:03:54 INFO hdfs.DFSClient: No node available for block: blk_7436507983567155151_3155853 file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
13/11/08 12:03:54 INFO hdfs.DFSClient: Could not obtain block blk_7436507983567155151_3155853 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
13/11/08 12:03:57 INFO hdfs.DFSClient: No node available for block: blk_7436507983567155151_3155853 file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
13/11/08 12:03:57 INFO hdfs.DFSClient: Could not obtain block blk_7436507983567155151_3155853 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
13/11/08 12:04:00 INFO hdfs.DFSClient: No node available for block: blk_7436507983567155151_3155853 file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
13/11/08 12:04:00 INFO hdfs.DFSClient: Could not obtain block blk_7436507983567155151_3155853 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
13/11/08 12:04:03 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_7436507983567155151_3155853 file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2269)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2063)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2224)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:87)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:248)
    at org.apache.hadoop.fs.FsShell.copyToLocal(FsShell.java:199)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:1769)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:1895)
get: Could not obtain block: blk_7436507983567155151_3155853 file=/hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597
hadoop@node3:~/hadoop-1.0.3$
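
PS: in case it helps anyone suggest a next step, one thing I have not tried yet is asking fsck to list the open file's block and its reported locations. A sketch of the command, using the same path as above; -openforwrite should make fsck include the block of the file still marked as being written, and -files -blocks -locations should show which datanodes the namenode believes hold it:

hadoop@node3:~/hadoop-1.0.3$ bin/hadoop fsck /hbase/.logs/node5,60020,1383862856731-splitting/node5%2C60020%2C1383862856731.1383874115597 -openforwrite -files -blocks -locations

I could also look for the block file directly on the datanodes' disks (the dfs.data.dir path below is a placeholder for whatever each node is configured with; since the file was never closed, the block may sit under blocksBeingWritten rather than current):

hadoop@node5:~$ find /path/to/dfs/data -name 'blk_7436507983567155151*'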