hadoop-hdfs-issues mailing list archives

From "Aaron T. Myers (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-4906) HDFS Output streams should not accept writes after being closed
Date Sat, 15 Jun 2013 01:07:20 GMT

     [ https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Aaron T. Myers updated HDFS-4906:
---------------------------------

    Attachment: HDFS-4906.patch

Here's a patch which addresses the issue by changing FSOutputSummer to check whether the
implementing stream is closed before accepting any writes.
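The approach described above can be sketched roughly as follows. This is an illustrative outline only, not the attached patch: the class and method names here are hypothetical stand-ins for FSOutputSummer's actual internals.

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of the closed-check approach. Names are illustrative;
// the real fix lives in FSOutputSummer in the attached HDFS-4906.patch.
class ClosedCheckingOutputStream extends OutputStream {
    private boolean closed = false;

    // Fail fast instead of silently discarding data on a closed stream.
    private void checkClosed() throws IOException {
        if (closed) {
            throw new IOException("Stream is closed");
        }
    }

    @Override
    public void write(int b) throws IOException {
        checkClosed();
        // ... buffer the byte and update the running checksum ...
    }

    @Override
    public void close() throws IOException {
        if (closed) {
            return; // close() on an already-closed stream stays a no-op
        }
        // ... flush any buffered data to the underlying stream ...
        closed = true;
    }
}
```

With this check in place, a write after close throws an IOException immediately rather than appearing to succeed while writing nothing.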
                
> HDFS Output streams should not accept writes after being closed
> ---------------------------------------------------------------
>
>                 Key: HDFS-4906
>                 URL: https://issues.apache.org/jira/browse/HDFS-4906
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.0.5-alpha
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>         Attachments: HDFS-4906.patch
>
>
> Currently, if one closes an OutputStream obtained from FileSystem#create and then calls
write(...) on that closed stream, the write will appear to succeed without error, though no
data will be written to HDFS. A subsequent call to close will also silently appear to succeed.
We should make it so that attempts to write to closed streams fail fast.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
