hadoop-hdfs-issues mailing list archives

From "Plamen Jeliazkov (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-3107) HDFS truncate
Date Tue, 23 Sep 2014 18:12:37 GMT

     [ https://issues.apache.org/jira/browse/HDFS-3107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Plamen Jeliazkov updated HDFS-3107:
-----------------------------------
    Attachment: HDFS-3107.patch

Attaching a patch with updated JavaDoc.

# Changed the ClientProtocol JavaDoc to match what Konstantin pointed out.
# Also added an @return JavaDoc tag for ClientProtocol (see the sketch after this list).
# Modified the FSDirectory unprotectedTruncate() JavaDoc and comments to remove any notion
of 'schedule block for truncate'. The scheduling logic lives in FSNamesystem.
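
For reference, a minimal sketch of the shape that declaration and its @return tag could take; the
exact signature and wording below are my assumptions, not necessarily what the attached patch contains:

{code:java}
/**
 * Truncate file src to the given new length.
 *
 * @param src existing file to truncate
 * @param newLength the target length; must be less than or equal to the
 *                  current file length
 * @param clientName name of the client issuing the truncate
 * @return true if the file was truncated to newLength and is immediately
 *         available for further writes; false if a background adjustment
 *         of the last block was scheduled and the client should wait for
 *         it to complete before reopening the file
 * @throws IOException if src does not exist or newLength is invalid
 */
boolean truncate(String src, long newLength, String clientName)
    throws IOException;
{code}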

More tests are necessary to show the behavior when dealing with competing appends / creates.
I'll include some in the next patch.
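
As an example of the kind of contention test meant here, a sketch only (it assumes a JUnit test
class in the HDFS test tree with a running MiniDFSCluster in a field named 'cluster'; the exact
failure mode, an IOException from the NameNode, is my assumption):

{code:java}
@Test
public void testTruncateWithConcurrentAppend() throws Exception {
  DistributedFileSystem fs = cluster.getFileSystem();
  Path p = new Path("/test/truncate-vs-append");
  DFSTestUtil.createFile(fs, p, 1024, (short) 3, 0L);

  // Open the file for append so that it is under construction.
  FSDataOutputStream out = fs.append(p);
  try {
    fs.truncate(p, 512); // competes with the open append
    fail("truncate should be rejected while the file is open for append");
  } catch (IOException expected) {
    // Expected: the NameNode refuses to truncate a file under construction.
  } finally {
    out.close();
  }
}
{code}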

> HDFS truncate
> -------------
>
>                 Key: HDFS-3107
>                 URL: https://issues.apache.org/jira/browse/HDFS-3107
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: datanode, namenode
>            Reporter: Lei Chang
>            Assignee: Plamen Jeliazkov
>         Attachments: HDFS-3107.patch, HDFS-3107.patch, HDFS_truncate.pdf,
>                      HDFS_truncate_semantics_Mar15.pdf, HDFS_truncate_semantics_Mar21.pdf
>
>   Original Estimate: 1,344h
>  Remaining Estimate: 1,344h
>
> Systems with transaction support often need to undo changes made to the underlying storage
> when a transaction is aborted. Currently HDFS does not support truncate, a standard POSIX
> operation that is the reverse of append. This limitation forces upper-layer applications to
> use ugly workarounds, such as keeping track of the discarded byte range per file in a
> separate metadata store and periodically running a vacuum process to rewrite compacted
> files.
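
For illustration, a hypothetical client-side sketch of the abort path this feature would enable
(imports omitted; the FileSystem-level truncate(Path, long) API and its return contract are
assumptions based on the attached design documents, not a final interface):

{code:java}
// Undo an aborted append with truncate, instead of tracking discarded
// byte ranges in a separate metadata store.
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path log = new Path("/txn/write-ahead.log");

long committedLength = fs.getFileStatus(log).getLen(); // length before the txn
FSDataOutputStream out = fs.append(log);
out.write("uncommitted-record".getBytes(StandardCharsets.UTF_8));
out.close();

// The transaction aborts: reverse the append by truncating back.
boolean completed = fs.truncate(log, committedLength);
if (!completed) {
  // The last block is being adjusted in the background; the client should
  // wait until the file reaches committedLength before writing again.
}
{code}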



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
