hadoop-hdfs-issues mailing list archives

From "Tsz Wo Nicholas Sze (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7943) Append cannot handle the last block with length greater than the preferred block size
Date Wed, 18 Mar 2015 02:29:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366513#comment-14366513 ]

Tsz Wo Nicholas Sze commented on HDFS-7943:
-------------------------------------------

Is it possible to change append so that it always appends to a new block when the last block
is greater than or equal to the preferred block size?
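The suggestion above could be sketched as a simple predicate. This is a hypothetical illustration, not actual HDFS client code: the method name `shouldStartNewBlock` and its parameters are made up for this sketch, which combines the existing {{CreateFlag.NEW_BLOCK}} behavior with the proposed length check.

```java
// Hypothetical sketch (not real HDFS code): append starts a fresh block
// either when the caller passes CreateFlag.NEW_BLOCK, or - per the
// proposal above - when the last block has already reached the file's
// preferred block size (e.g. an oversized block produced by concat).
public class AppendBlockPolicy {
    static boolean shouldStartNewBlock(long lastBlockLen,
                                       long preferredBlockSize,
                                       boolean newBlockFlag) {
        return newBlockFlag || lastBlockLen >= preferredBlockSize;
    }

    public static void main(String[] args) {
        long preferred = 128L << 20; // 128 MB preferred block size
        // Oversized last block (e.g. left behind by concat): allocate a new block.
        System.out.println(shouldStartNewBlock(preferred + 1, preferred, false)); // true
        // Last block still has room and no NEW_BLOCK flag: keep appending to it.
        System.out.println(shouldStartNewBlock(preferred - 1, preferred, false)); // false
    }
}
```

With this check, the client would stop growing a last block that is already at or past the preferred size, instead of writing to it indefinitely.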

> Append cannot handle the last block with length greater than the preferred block size
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-7943
>                 URL: https://issues.apache.org/jira/browse/HDFS-7943
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.0
>            Reporter: Jing Zhao
>            Assignee: Jing Zhao
>            Priority: Blocker
>         Attachments: HDFS-7943.000.patch
>
>
> In HDFS-3689, we removed the restriction from concat that all the source files must
> have the same preferred block size as the target file. This can cause a file to contain
> blocks larger than its preferred block size.
> If such a block happens to be the last block of a file, and we later append data to the
> file without the {{CreateFlag.NEW_BLOCK}} flag (i.e., appending data to the last block), it
> looks like the current client code will keep writing to that last block and never allocate
> a new block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
