hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-673) BlockReceiver#PacketResponder should not remove a packet from the ack queue before its ack is sent
Date Fri, 09 Oct 2009 21:15:31 GMT

    [ https://issues.apache.org/jira/browse/HDFS-673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12764210#action_12764210 ]

Hairong Kuang commented on HDFS-673:
------------------------------------

There were two failed tests: org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction.testBlockCreation
and org.apache.hadoop.hdfs.TestFileAppend2.testComplexAppend. The first is a known bug.
The second failed during an append write with "Too many open files".
This does not seem related to the change in this jira. I filed HDFS-690 to track that bug.

> BlockReceiver#PacketResponder should not remove a packet from the ack queue before its ack is sent
> --------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-673
>                 URL: https://issues.apache.org/jira/browse/HDFS-673
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.21.0
>
>         Attachments: pkgRmOrder.patch
>
>
> After a BlockReceiver finishes receiving the last packet of a block, it waits until the
> ack queue becomes empty. It then assumes that all acks have been sent and shuts down the
> network connections. The current code removes a packet from the ack queue before its ack
> is sent, so there is a chance that the connection gets closed before an ack is sent.
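
For illustration only (this is not the actual HDFS code): a minimal Java sketch of the
ordering problem and of the ordering the issue summary calls for, assuming a single
responder thread, a queue of pending sequence numbers, and a hypothetical sendAck helper
standing in for the real ack write.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Simplified stand-in for BlockReceiver#PacketResponder; all names are illustrative.
    class PacketResponderSketch {
        private final Queue<Long> ackQueue = new ArrayDeque<>(); // pending packet seqnos

        // Receiver side: after the last packet, wait for the ack queue to drain,
        // then close the network connections.
        synchronized void waitForAcksAndClose() throws InterruptedException {
            while (!ackQueue.isEmpty()) {
                wait();
            }
            // closeConnection() -- safe only if every ack was actually written
        }

        // Buggy ordering: the packet leaves the queue before its ack is written,
        // so waitForAcksAndClose() can close the connection mid-send.
        void respondBuggy() {
            long seqno;
            synchronized (this) {
                seqno = ackQueue.remove(); // removed first
                notifyAll();               // receiver may now see an empty queue and close
            }
            sendAck(seqno);                // this write races with the close
        }

        // Fixed ordering: write the ack first, then dequeue the packet,
        // so the queue cannot drain before the ack is out.
        void respondFixed() {
            long seqno;
            synchronized (this) {
                seqno = ackQueue.peek();   // assumes a packet is pending
            }
            sendAck(seqno);                // ack is on the wire before ...
            synchronized (this) {
                ackQueue.remove();         // ... the queue is allowed to drain
                notifyAll();
            }
        }

        private void sendAck(long seqno) {
            // write the ack to the downstream/upstream socket (elided in this sketch)
        }
    }

With the dequeue moved after the write, "ack queue empty" becomes a reliable signal that
every ack has been sent, which is exactly what the connection-teardown logic relies on.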

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

