Delivered-To: apmail-hadoop-core-dev-archive@www.apache.org
Mailing-List: contact core-dev-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
Reply-To: core-dev@hadoop.apache.org
Delivered-To: mailing list core-dev@hadoop.apache.org
Message-ID: <2007758388.1206054325848.JavaMail.jira@brutus>
Date: Thu, 20 Mar 2008 16:05:25 -0700 (PDT)
From: "Robert Chansler (JIRA)"
To: core-dev@hadoop.apache.org
Subject: [jira] Updated: (HADOOP-1911) infinite loop in dfs -cat command.
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

     [ https://issues.apache.org/jira/browse/HADOOP-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Chansler updated HADOOP-1911:
------------------------------------

    Fix Version/s: 0.17.0

> infinite loop in dfs -cat command.
> ----------------------------------
>
>                 Key: HADOOP-1911
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1911
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.13.1, 0.14.3
>            Reporter: Koji Noguchi
>            Priority: Blocker
>             Fix For: 0.17.0
>
>
> [knoguchi]$ hadoop dfs -cat fileA
> 07/09/13 17:36:02 INFO fs.DFSClient: Could not obtain block 0 from any node:  java.io.IOException: No live nodes contain current block
> 07/09/13 17:36:20 INFO fs.DFSClient: Could not obtain block 0 from any node:  java.io.IOException: No live nodes contain current block
> [repeats forever]
> After setting one of the DEBUG statements to WARN, it kept showing
> {noformat}
> WARN org.apache.hadoop.fs.DFSClient: Failed to connect to /99.99.999.9:11111: java.io.IOException: Recorded block size is 7496, but datanode reports size of 0
>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:690)
>         at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:771)
>         at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:41)
>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>         at java.io.DataInputStream.readFully(DataInputStream.java:178)
>         at java.io.DataInputStream.readFully(DataInputStream.java:152)
>         at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.<init>(ChecksumFileSystem.java:123)
>         at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:340)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:259)
>         at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.map(CopyFiles.java:466)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:186)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1707)
> {noformat}
> Turns out fileA was corrupted.
> Fsck showed a crc file of 7496 bytes, but when I searched for the block on each node, all 3 replicas were size 0.
> Not sure how it got corrupted, but it would be nice if the dfs command failed instead of getting into an infinite loop.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
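The failure mode above is an unbounded retry: DFSClient keeps re-asking for a block that no live datanode can serve. A minimal sketch of the fix direction is to cap the number of acquisition attempts and surface an IOException. This is an illustration only, not the actual HADOOP-1911 patch; `BlockLocator`, `obtainBlock`, and the retry limit are hypothetical stand-ins for the real DFSClient internals.

```java
import java.io.IOException;

// Hypothetical sketch of bounding the retry loop described in HADOOP-1911.
// With a cap, a read of a file whose replicas are all size 0 fails fast
// instead of logging "Could not obtain block ..." forever.
public class BoundedBlockFetch {

    /** Stand-in for DFSClient asking whether any datanode can serve a block. */
    interface BlockLocator {
        boolean blockAvailable(long blockId);
    }

    // Assumed limit for illustration; not a real Hadoop constant.
    static final int MAX_BLOCK_ACQUIRE_FAILURES = 3;

    /**
     * Try to obtain a block, retrying a bounded number of times.
     * Returns the attempt number that succeeded, or throws IOException
     * once the retries are exhausted, so a caller like -cat terminates
     * with an error rather than spinning.
     */
    static int obtainBlock(BlockLocator locator, long blockId) throws IOException {
        for (int attempt = 1; attempt <= MAX_BLOCK_ACQUIRE_FAILURES; attempt++) {
            if (locator.blockAvailable(blockId)) {
                return attempt;
            }
            System.err.println("INFO: Could not obtain block " + blockId
                    + " from any node (attempt " + attempt + ")");
        }
        throw new IOException("Could not obtain block " + blockId
                + " after " + MAX_BLOCK_ACQUIRE_FAILURES + " attempts");
    }

    public static void main(String[] args) throws IOException {
        // Simulate the corrupted fileA: all 3 replicas are size 0,
        // so the block never becomes available.
        BlockLocator allDead = id -> false;
        try {
            obtainBlock(allDead, 0L);
        } catch (IOException e) {
            System.out.println("failed fast: " + e.getMessage());
        }
    }
}
```

The key design point is simply that the loop has an exit: whatever the retry budget, a permanently unsatisfiable block request must eventually propagate an error to the shell command.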