hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-690) TestAppend2#testComplexAppend failed on "Too many open files"
Date Thu, 22 Oct 2009 20:26:59 GMT

     [ https://issues.apache.org/jira/browse/HDFS-690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HDFS-690:
-------------------------------

    Attachment: leakingThreads1.patch

This patch creates a private method as Cos suggested.
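
The patch itself is not inlined in this message, so the following is only a rough sketch of one plausible shape for such a private helper, assuming (as the name leakingThreads1.patch suggests) that the leak comes from workload threads left running after the test; the method name and structure are hypothetical and are not the actual contents of the patch:

    // Hypothetical helper in TestAppend2, for illustration only -- not the real patch.
    // Joining every worker before teardown keeps leaked threads from holding
    // sockets and streams (and thus file descriptors) open across test cases.
    private void joinWorkers(Thread[] workers) throws InterruptedException {
      for (Thread t : workers) {
        if (t != null) {
          t.join();   // wait for the append workload to finish
        }
      }
    }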

> TestAppend2#testComplexAppend failed on "Too many open files"
> -------------------------------------------------------------
>
>                 Key: HDFS-690
>                 URL: https://issues.apache.org/jira/browse/HDFS-690
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.21.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.21.0
>
>         Attachments: leakingThreads.patch, leakingThreads1.patch
>
>
> The append write failed on "Too many open files":
> Some bytes failed to be appended to a file because of the following error:
> java.io.IOException: Cannot run program "stat": java.io.IOException: error=24, Too many open files
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
> 	at java.lang.Runtime.exec(Runtime.java:593)
> 	at java.lang.Runtime.exec(Runtime.java:466)
> 	at org.apache.hadoop.fs.FileUtil$HardLink.getLinkCount(FileUtil.java:644)
> 	at org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.unlinkBlock(ReplicaInfo.java:205)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1075)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1058)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:110)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:258)
> 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:382)
> 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:323)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)
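
The stack trace shows the datanode's append path (ReplicaInfo.unlinkBlock -> FileUtil$HardLink.getLinkCount) shelling out to the external "stat" command; forking that process needs free file descriptors, so once leaked threads have pinned them all, ProcessBuilder.start fails with error=24. Below is a minimal, self-contained illustration of that failure mode; it is not taken from the test or the patch, and the file path is arbitrary:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class TooManyOpenFilesDemo {
      public static void main(String[] args) throws Exception {
        List<FileInputStream> leaked = new ArrayList<FileInputStream>();
        try {
          // Keep opening the same file without closing it; each open consumes one fd.
          while (true) {
            leaked.add(new FileInputStream("/etc/hosts"));
          }
        } catch (IOException e) {
          System.err.println("fd limit reached: " + e.getMessage());
        }
        // With the process fd table full, forking "stat" fails just like in the trace:
        // java.io.IOException: Cannot run program "stat": error=24, Too many open files
        new ProcessBuilder("stat", "/etc/hosts").start();
      }
    }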

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

