hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1911) infinite loop in dfs -cat command.
Date Sat, 05 Apr 2008 00:40:27 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12585857#action_12585857 ]

Hadoop QA commented on HADOOP-1911:

-1 overall.  Here are the results of testing the latest attachment 
against trunk revision 643282.

    @author +1.  The patch does not contain any @author tags.

    tests included -1.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new javac compiler warnings.

    release audit +1.  The applied patch does not generate any new release audit warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests -1.  The patch failed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2170/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2170/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2170/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2170/console

This message is automatically generated.

> infinite loop in dfs -cat command.
> ----------------------------------
>                 Key: HADOOP-1911
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1911
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.13.1, 0.14.3
>            Reporter: Koji Noguchi
>            Assignee: Chris Douglas
>            Priority: Blocker
>             Fix For: 0.17.0
>         Attachments: 1911-0.patch
> [knoguchi]$ hadoop dfs -cat fileA
> 07/09/13 17:36:02 INFO fs.DFSClient: Could not obtain block 0 from any node: 
> java.io.IOException: No live nodes contain current block
> 07/09/13 17:36:20 INFO fs.DFSClient: Could not obtain block 0 from any node: 
> java.io.IOException: No live nodes contain current block
> [repeats forever]
> Setting one of the debug statements to WARN, it kept on showing
> {noformat} 
>  WARN org.apache.hadoop.fs.DFSClient: Failed to connect
> to /99.99.999.9 :11111:java.io.IOException: Recorded block size is 7496, but
> datanode reports size of 0
> 	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:690)
> 	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.read(DFSClient.java:771)
> 	at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:41)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:178)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:152)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.<init>(ChecksumFileSystem.java:123)
> 	at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:340)
> 	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:259)
> 	at org.apache.hadoop.util.CopyFiles$FSCopyFilesMapper.map(CopyFiles.java:466)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:186)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1707)
> {noformat} 
> Turns out fileA was corrupted. Fsck showed a crc file of 7496 bytes, but when I searched for the blocks on each node, all 3 replicas were size 0.
> Not sure how it got corrupted, but it would be nice if the dfs command failed instead of getting into an infinite loop.
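The behavior the reporter asks for, failing fast when every replica's size disagrees with the recorded block size, can be sketched as a bounded retry. This is a hypothetical illustration only; the class, method, and parameter names below are invented for the sketch and are not Hadoop's actual DFSClient API:

```java
import java.util.List;

// Illustrative sketch: cap block-fetch attempts instead of retrying forever.
public class BoundedBlockFetch {

    // Scans the sizes reported by each replica, up to maxRetries passes.
    // Returns the size of a replica that matches the recorded block size,
    // or fails once the retry budget is exhausted. (Real DFSClient code
    // would throw a checked java.io.IOException; an unchecked exception
    // keeps this sketch self-contained.)
    static int readBlock(List<Integer> replicaSizes, int recordedSize, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            for (int size : replicaSizes) {
                if (size == recordedSize) {
                    return size; // a replica agrees with the metadata
                }
            }
        }
        // Every replica disagrees with the recorded size: surface the
        // corruption to the caller rather than looping indefinitely.
        throw new RuntimeException("Could not obtain block: "
                + replicaSizes.size() + " replicas, none matching recorded size "
                + recordedSize);
    }
}
```

In the reported scenario (recorded size 7496, three replicas all reporting size 0), this bounded loop would give up after `maxRetries` passes and report the corruption, which is the fix-fail behavior the issue requests.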

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
