hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-690) TestAppend2#testComplexAppend failed on "Too many open files"
Date Fri, 23 Oct 2009 00:51:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12769007#action_12769007 ]

Hairong Kuang commented on HDFS-690:

Puts are already synchronized. There is no need to check for an empty ackQueue because the packet responder
is the only thread that removes packets from the queue. It already checks that the queue is not
empty, gets the packet, sends its ack, and then removes it from the queue. The unsynchronized
notification problem was introduced by HDFS-673; I did not think it through when working on that jira.
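The single-consumer invariant described above can be sketched as follows. This is a minimal illustration, not DataNode code: the class and method names are hypothetical. Writers synchronize their puts and notify the responder; the responder, as the only thread that ever removes from the queue, can safely peek at the head packet, send the ack, and only then remove it.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the ack-queue pattern: one responder thread is
// the sole consumer, so peek-ack-remove needs no empty-queue re-check.
class AckQueueSketch {
    private final Deque<String> ackQueue = new ArrayDeque<>();
    final StringBuilder acked = new StringBuilder();

    // Writers call this; puts are synchronized and wake the responder.
    synchronized void enqueue(String packet) {
        ackQueue.addLast(packet);
        notifyAll();
    }

    // Run only by the single responder thread: wait for a packet,
    // ack it, then remove it. No other thread removes from the queue,
    // so the head packet cannot disappear between peek and remove.
    void respondOnce() throws InterruptedException {
        String packet;
        synchronized (this) {
            while (ackQueue.isEmpty()) {
                wait();
            }
            packet = ackQueue.peekFirst(); // look, but do not remove yet
        }
        acked.append(packet);              // "send the ack" (outside the lock)
        synchronized (this) {
            ackQueue.removeFirst();        // remove only after acking
        }
    }

    public static void main(String[] args) throws Exception {
        AckQueueSketch q = new AckQueueSketch();
        Thread responder = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) q.respondOnce();
            } catch (InterruptedException ignored) { }
        });
        responder.start();
        q.enqueue("p1");
        q.enqueue("p2");
        q.enqueue("p3");
        responder.join();
        System.out.println(q.acked); // prints p1p2p3
    }
}
```

Because enqueue only appends at the tail and the responder is the sole remover at the head, the packet acked is always the packet removed, which is why no extra empty-queue check is needed between the ack and the removal.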

> TestAppend2#testComplexAppend failed on "Too many open files"
> -------------------------------------------------------------
>                 Key: HDFS-690
>                 URL: https://issues.apache.org/jira/browse/HDFS-690
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.21.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.21.0
>         Attachments: leakingThreads.patch, leakingThreads1.patch
> the append write failed on "Too many open files":
> Some bytes failed to append to a file due to the following error:
> java.io.IOException: Cannot run program "stat": java.io.IOException: error=24, Too many open files
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
> 	at java.lang.Runtime.exec(Runtime.java:593)
> 	at java.lang.Runtime.exec(Runtime.java:466)
> 	at org.apache.hadoop.fs.FileUtil$HardLink.getLinkCount(FileUtil.java:644)
> 	at org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.unlinkBlock(ReplicaInfo.java:205)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1075)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1058)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:110)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:258)
> 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:382)
> 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:323)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:111)

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
