hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6087) Unify HDFS write/append/truncate
Date Mon, 17 Mar 2014 17:25:44 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13938070#comment-13938070 ]

Todd Lipcon commented on HDFS-6087:
-----------------------------------

Even creating a new hard link on every hflush is a no-go performance-wise, I'd think. Involving
the NN in a round trip on every hflush would also kill the scalability of HBase and other
applications that hflush hundreds of times per second per node.
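The scale concern can be made concrete with a rough calculation (the per-node rate and cluster size below are illustrative assumptions, not measured figures):

```python
# Back-of-envelope estimate: if every hflush required a NameNode round trip,
# the NN would absorb one extra RPC per hflush from every writer in the cluster.

def namenode_rpc_load(hflushes_per_sec_per_node, nodes):
    """Aggregate extra NameNode RPCs/sec if each hflush hit the NN."""
    return hflushes_per_sec_per_node * nodes

# Assumed figures: a write-heavy HBase region server hflushing ~200 times/sec,
# across a 1000-node cluster.
extra_rpcs = namenode_rpc_load(200, 1000)
print(extra_rpcs)  # 200000 extra RPCs/sec against a single NameNode
```

Even with generous assumptions, that load lands entirely on one NameNode, which is why keeping hflush a pipeline-only operation matters.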

> Unify HDFS write/append/truncate
> --------------------------------
>
>                 Key: HDFS-6087
>                 URL: https://issues.apache.org/jira/browse/HDFS-6087
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>            Reporter: Guo Ruijing
>         Attachments: HDFS Design Proposal.pdf, HDFS Design Proposal_3_14.pdf
>
>
> In the existing implementation, an HDFS file can be appended and an HDFS block can be reopened
for append. This design introduces complexity, including lease recovery. If we design the HDFS
block as immutable, append & truncate become very simple. The idea is that an HDFS
block is immutable once it is committed to the namenode. If the block is not committed to the
namenode, it is the HDFS client's responsibility to re-add it with a new block ID.
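The append side of the quoted proposal can be sketched as a minimal model (class and method names here are hypothetical illustrations, not actual HDFS code):

```python
# Illustrative sketch of the proposed immutable-block model: a block is
# mutable only until it is committed; appending past a committed block
# allocates a fresh block with a new block ID instead of reopening it.

import itertools

_next_id = itertools.count(1)  # stand-in for namenode block-ID allocation

class Block:
    def __init__(self):
        self.block_id = next(_next_id)
        self.data = bytearray()
        self.committed = False

class ImmutableBlockFile:
    BLOCK_SIZE = 8  # tiny block size, for illustration only

    def __init__(self):
        self.blocks = [Block()]

    def append(self, payload: bytes):
        for byte in payload:
            last = self.blocks[-1]
            if last.committed or len(last.data) >= self.BLOCK_SIZE:
                # Committed (or full) blocks are never reopened;
                # continue writing into a new block with a new ID.
                last.committed = True
                self.blocks.append(Block())
                last = self.blocks[-1]
            last.data.append(byte)

    def commit(self):
        # Committing to the namenode freezes every block in the file.
        for blk in self.blocks:
            blk.committed = True

    def read(self) -> bytes:
        return b"".join(bytes(blk.data) for blk in self.blocks)
```

For example, appending after a commit leaves the committed block untouched and starts a new one, which is the property that would let lease recovery be dropped for committed blocks.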



--
This message was sent by Atlassian JIRA
(v6.2#6252)
