hadoop-common-issues mailing list archives

From "Aaron Fabbri (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-15239) S3ABlockOutputStream.flush() be no-op when stream closed
Date Mon, 30 Apr 2018 23:06:00 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron Fabbri updated HADOOP-15239:
----------------------------------
       Resolution: Fixed
    Fix Version/s: 3.2.0
           Status: Resolved  (was: Patch Available)

Committed to trunk after the usual testing. Thank you for the patch [~gabor.bota].

> S3ABlockOutputStream.flush() be no-op when stream closed
> --------------------------------------------------------
>
>                 Key: HADOOP-15239
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15239
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.9.0, 2.8.3, 2.7.5, 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Gabor Bota
>            Priority: Trivial
>             Fix For: 3.2.0
>
>         Attachments: HADOOP-15239.001.patch, HADOOP-15239.002.patch
>
>
> When you call flush() on a closed S3A output stream, you get a stack trace.
> This can cause problems in code with race conditions across threads, e.g. FLINK-8543.
>
> We could make it log@warn "stream closed" rather than raise an IOE. It's just a hint, after all.
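
For readers who want to see what the suggested behaviour looks like, here is a minimal
sketch in plain JDK Java. It is not the committed HADOOP-15239 patch and does not use the
real S3ABlockOutputStream internals; the class name, the "closed" flag and the
java.util.logging logger are illustrative assumptions. The idea is simply that flush() on
a closed stream logs a warning and returns instead of raising an IOException, while
write() after close() still fails hard.

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.logging.Logger;

    // Hypothetical stand-in for an S3A-style block output stream.
    public class LenientFlushOutputStream extends OutputStream {

      private static final Logger LOG =
          Logger.getLogger(LenientFlushOutputStream.class.getName());

      // Tracks whether close() has already been called.
      private final AtomicBoolean closed = new AtomicBoolean(false);

      @Override
      public void write(int b) throws IOException {
        if (closed.get()) {
          // Writing after close() remains a hard error.
          throw new IOException("Stream is closed");
        }
        // ... buffer the byte for a later block upload ...
      }

      @Override
      public void flush() throws IOException {
        if (closed.get()) {
          // Downgrade to a warning instead of raising an IOException:
          // flush() is only a hint, and callers may race flush() against close().
          LOG.warning("flush() called on a closed stream; ignoring");
          return;
        }
        // ... flush any buffered data ...
      }

      @Override
      public void close() throws IOException {
        if (closed.compareAndSet(false, true)) {
          // ... complete the upload of any remaining data ...
        }
      }
    }

The key design choice, as the comment above suggests, is to keep close() and write()
strict while treating a post-close flush() as a benign, logged no-op.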



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
