hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-690) TestAppend2#testComplexAppend failed on "Too many open files"
Date Wed, 21 Oct 2009 19:58:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12768394#action_12768394 ]

Hadoop QA commented on HDFS-690:

+1 overall.  Here are the results of testing the latest attachment 
  against trunk revision 828116.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/47/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/47/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/47/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/47/console

This message is automatically generated.

> TestAppend2#testComplexAppend failed on "Too many open files"
> -------------------------------------------------------------
>                 Key: HDFS-690
>                 URL: https://issues.apache.org/jira/browse/HDFS-690
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.21.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.21.0
>         Attachments: leakingThreads.patch
> The append write failed on "Too many open files":
> Some bytes failed to append to a file with the following error:
> java.io.IOException: Cannot run program "stat": java.io.IOException: error=24, Too many open files
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
> 	at java.lang.Runtime.exec(Runtime.java:593)
> 	at java.lang.Runtime.exec(Runtime.java:466)
> 	at org.apache.hadoop.fs.FileUtil$HardLink.getLinkCount(FileUtil.java:644)
> 	at org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.unlinkBlock(ReplicaInfo.java:205)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1075)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1058)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:110)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:258)
> 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:382)
> 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:323)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
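
The stack trace shows the DataNode forking an external `stat` process per append (via `FileUtil$HardLink.getLinkCount`) to read a block file's hard-link count; once the process has exhausted its file descriptors, the fork itself fails with error=24. A minimal sketch of an alternative, not the attached patch (leakingThreads.patch addresses thread leaks in the test): on Unix JDKs the same count can be read in pure Java through the `unix:nlink` file attribute, so no child process is needed. The class and method names here are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkCount {
    // Hypothetical pure-Java replacement for exec("stat"):
    // reading the "unix:nlink" attribute requires no fork, so it
    // cannot fail with EMFILE (error=24) the way ProcessBuilder.start can.
    static int getLinkCount(Path p) throws IOException {
        return (Integer) Files.getAttribute(p, "unix:nlink");
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("nlink", ".tmp");
        Path link = f.resolveSibling(f.getFileName() + ".link");
        Files.createLink(link, f);            // create a second hard link
        System.out.println(getLinkCount(f));  // 2 on Unix filesystems
        Files.delete(link);
        Files.delete(f);
    }
}
```

The `unix:nlink` attribute is only available where the JDK exposes the Unix attribute view, so a production version would still need a fallback on other platforms.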

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
