hadoop-hdfs-issues mailing list archives

From "Jing Zhao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7943) Append cannot handle the last block with length greater than the preferred block size
Date Tue, 17 Mar 2015 23:31:38 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366303#comment-14366303
] 

Jing Zhao commented on HDFS-7943:
---------------------------------

The test failures appear to be unrelated; they all passed in my local run.

> Append cannot handle the last block with length greater than the preferred block size
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-7943
>                 URL: https://issues.apache.org/jira/browse/HDFS-7943
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.0
>            Reporter: Jing Zhao
>            Assignee: Jing Zhao
>            Priority: Blocker
>         Attachments: HDFS-7943.000.patch
>
>
> In HDFS-3689, we removed the restriction in concat that all the source files must have
> the same preferred block size as the target file. As a result, a file can contain blocks
> larger than its preferred block size.
> If such a block happens to be the last block of a file, and we later append data to the
> file without the {{CreateFlag.NEW_BLOCK}} flag (i.e., appending data to the last block),
> the current client code appears to keep writing to that block and never allocates a new
> block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
