hadoop-hdfs-issues mailing list archives

From "Aaron T. Myers (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-4906) HDFS Output streams should not accept writes after being closed
Date Tue, 18 Jun 2013 21:14:20 GMT

     [ https://issues.apache.org/jira/browse/HDFS-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers updated HDFS-4906:
---------------------------------

       Resolution: Fixed
    Fix Version/s: 2.1.0-beta
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Thanks a lot for the review, Steve. I've just committed this to trunk, branch-2, and branch-2.1-beta.
                
> HDFS Output streams should not accept writes after being closed
> ---------------------------------------------------------------
>
>                 Key: HDFS-4906
>                 URL: https://issues.apache.org/jira/browse/HDFS-4906
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.0.5-alpha
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>             Fix For: 2.1.0-beta
>
>         Attachments: HDFS-4906.patch, HDFS-4906.patch, HDFS-4906.patch, HDFS-4906.patch, HDFS-4906.patch
>
>
> Currently if one closes an OutputStream obtained from FileSystem#create and then calls
> write(...) on that closed stream, the write will appear to succeed without error, though no
> data will be written to HDFS. A subsequent call to close will also silently appear to succeed.
> We should make it so that attempts to write to a closed stream fail fast.
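
The following is a minimal illustrative sketch (not taken from the attached patches) of the write-after-close sequence described above; the class name and file path are hypothetical. Before this fix the second write() appears to succeed, while after the fix writes to a closed stream are expected to fail fast with an IOException:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteAfterCloseDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/write-after-close-demo");  // hypothetical path

    // Create a file and close the stream normally.
    FSDataOutputStream out = fs.create(path);
    out.write("first write\n".getBytes(StandardCharsets.UTF_8));
    out.close();

    try {
      // Before HDFS-4906: this write appears to succeed, but no data reaches HDFS,
      // and a subsequent close() also appears to succeed silently.
      // After the fix: writing to the closed stream should throw an IOException.
      out.write("write after close\n".getBytes(StandardCharsets.UTF_8));
      out.close();
      System.out.println("write after close did not fail");
    } catch (IOException e) {
      System.out.println("write after close failed fast: " + e);
    }
  }
}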

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
