hadoop-hdfs-issues mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-73) DFSOutputStream does not close all the sockets
Date Fri, 12 Aug 2011 00:17:28 GMT

    [ https://issues.apache.org/jira/browse/HDFS-73?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13083836#comment-13083836 ]

Eli Collins commented on HDFS-73:
---------------------------------

Hi Uma,

Latest patch looks good. Regarding testing, how about adding the following assert for every non-final socket in the file? The append tests currently fail it, and it checks that we don't reintroduce this bug.

{noformat}
       try {
+        assert null == s : "Previous socket unclosed";
         s = createSocketForPipeline(nodes[0], nodes.length, dfsClient);
{noformat}
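
For context, here is a minimal, self-contained sketch of the invariant that assert is guarding (the class, method, and field names below are illustrative stand-ins, not the actual DFSOutputStream/DataStreamer code): each block's socket must be closed and the field nulled out before the next block's pipeline is created.

{noformat}
import java.io.IOException;
import java.net.Socket;

// Illustrative sketch only -- not the actual DFSOutputStream/DataStreamer code.
class PipelineSocketSketch {
  private Socket s; // socket to the first datanode of the current block's pipeline

  // Called once per block when the write pipeline is set up.
  void setupPipeline(String firstDatanodeHost, int port) throws IOException {
    // The suggested assert: the previous block's socket must already be closed.
    assert null == s : "Previous socket unclosed";
    s = new Socket(firstDatanodeHost, port); // stands in for createSocketForPipeline(...)
  }

  // Called whenever the client finishes a block (and on error/close paths).
  void closeStream() {
    if (s == null) {
      return;
    }
    try {
      s.close(); // release the per-block socket instead of leaking it
    } catch (IOException ignored) {
      // best effort on close
    } finally {
      s = null; // so the assert above holds for the next block
    }
  }
}
{noformat}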

Nit: please make the formatting (spacing) in closeStream consistent and generate a new patch
that will apply with test-patch.

> DFSOutputStream does not close all the sockets
> ----------------------------------------------
>
>                 Key: HDFS-73
>                 URL: https://issues.apache.org/jira/browse/HDFS-73
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.23.0
>
>         Attachments: HADOOP-3071.patch, HDFS-73_0.23.patch
>
>
> When DFSOutputStream writes to multiple blocks, it closes only the socket opened for the last block. When it is done writing to a block, it should close that block's socket.
> I noticed this while fixing HADOOP-3067. After fixing HADOOP-3067, there were still a lot of sockets left open (though not enough to fail the tests). These sockets were the ones used to write to blocks.
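
To make the leak pattern above concrete, here is a small stand-alone illustration (plain sockets, not HDFS code; host and port are placeholders): a new socket is opened per block, but only the last one is ever closed, so each earlier block leaks a connection.

{noformat}
import java.io.IOException;
import java.net.Socket;

// Minimal illustration of the leak described above -- not HDFS code.
class SocketLeakDemo {
  public static void main(String[] args) throws IOException {
    Socket s = null;
    int numBlocks = 3;
    for (int block = 0; block < numBlocks; block++) {
      // Bug pattern: the previous block's socket is overwritten without being closed.
      s = new Socket("localhost", 9000); // stands in for the per-block datanode connection
      // ... write the block's data over s ...
    }
    if (s != null) {
      s.close(); // only the final block's socket is released; the earlier ones leak
    }
  }
}
{noformat}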

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
