hadoop-common-issues mailing list archives

From "Daniel Templeton (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing
Date Fri, 04 Sep 2015 18:21:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731189#comment-14731189 ]

Daniel Templeton commented on HADOOP-12374:
-------------------------------------------

My primary concern is with the word "checkpoint."  I'm worried that a newbie, i.e. someone
who would be reading the documentation, wouldn't understand what that means without having
to do more research.  The advantage of the current phrasing is that it's immediately understandable,
even if it's not exactly accurate.

> Description of hdfs expunge command is confusing
> ------------------------------------------------
>
>                 Key: HADOOP-12374
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12374
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: documentation, trash
>    Affects Versions: 2.7.0, 2.7.1
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>            Priority: Trivial
>              Labels: documentation, newbie, suggestions, trash
>         Attachments: HADOOP-12374.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on the Trash feature.
> This description is confusing. It gives the user the impression that this command will empty the trash, but actually it only removes old checkpoints. If the user sets a long value for fs.trash.interval, this command will not remove anything until checkpoints are older than that value.
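The semantics under discussion can be sketched as a small model: expunge removes only checkpoints older than fs.trash.interval, leaving newer checkpoints (and the current trash contents) in place. This is an illustration of the behavior described in the report, not Hadoop's actual implementation; the function name and data layout are assumptions made for the example.

```python
from datetime import datetime, timedelta

def expunge(checkpoint_times, trash_interval_minutes, now):
    # Model of the documented behavior: only checkpoints older than
    # fs.trash.interval (in minutes) are deleted; newer ones survive.
    # Not the real Hadoop code path -- an illustrative sketch only.
    cutoff = now - timedelta(minutes=trash_interval_minutes)
    deleted = [t for t in checkpoint_times if t < cutoff]
    kept = [t for t in checkpoint_times if t >= cutoff]
    return deleted, kept

now = datetime(2015, 9, 4, 12, 0)
checkpoints = [now - timedelta(days=10), now - timedelta(hours=1)]

# With fs.trash.interval = 1440 (one day), only the ten-day-old
# checkpoint is removed; the one-hour-old checkpoint is untouched,
# which is why "Empty the Trash" overstates what the command does.
deleted, kept = expunge(checkpoints, 1440, now)
```

So with a long fs.trash.interval, a run of the command may delete nothing at all, matching the confusion the reporter describes.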



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
