
From "Todd Lipcon (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-989) Flush/sync do not work on Hadoop LocalFileSystem
Date Mon, 07 Nov 2011 15:42:51 GMT

    [ https://issues.apache.org/jira/browse/HDFS-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13145563#comment-13145563 ]

Todd Lipcon commented on HDFS-989:
----------------------------------

With hsync/hflush at a checksum boundary, we re-write the last checksum into the checksum
file at the same time as we append the new data. There is a race during which the new data
is there but not the checksum info -- but during recovery operations I believe we deal with
this situation by ignoring checksum errors on the last checksum-chunk if there are no replicas
with a valid last-chunk.
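
As a rough sketch of the recovery-time policy described above (the class and
method names below are hypothetical, not the actual HDFS recovery code):

    // Hypothetical illustration of the policy described in the comment:
    // tolerate a checksum mismatch on the final, partially written chunk
    // when no replica has a valid last chunk, since that is the expected
    // state during the hflush/hsync race.
    class LastChunkChecksumPolicy {
      static boolean ignoreChecksumError(boolean isLastChunk,
                                         boolean anyReplicaHasValidLastChunk) {
        if (!isLastChunk) {
          return false;  // a mismatch mid-file is real corruption
        }
        // No replica has a valid last chunk: most likely the data was
        // appended but the checksum was not yet rewritten, so recovery
        // ignores the error rather than discarding the flushed data.
        return !anyReplicaHasValidLastChunk;
      }
    }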
                
> Flush/sync do not work on Hadoop LocalFileSystem
> ------------------------------------------------
>
>                 Key: HDFS-989
>                 URL: https://issues.apache.org/jira/browse/HDFS-989
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: Nathan Marz
>
> They seem to be no-ops. This is really easy to reproduce: just open a file using FileSystem.getLocal(new
> Configuration()), write data to the output stream, and then try to flush/sync. I also tried
> creating the output stream with a buffer size of 1, but that had no effect.
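
A minimal sketch of the reproduction described above (the output path and
class name are illustrative; on 0.20.x, sync() is the call that later became
hflush()/hsync()):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalFlushRepro {
      public static void main(String[] args) throws Exception {
        // Obtain the local filesystem, as described in the report.
        FileSystem fs = FileSystem.getLocal(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/flush-test.dat"));
        out.writeBytes("some data");
        out.flush();  // reported to behave as a no-op on LocalFileSystem
        out.sync();   // likewise; data only appears on disk after close()
        // Check the on-disk file length here, before close(), to observe the bug.
        out.close();
      }
    }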


        
