hadoop-hdfs-issues mailing list archives

From "Tsz Wo Nicholas Sze (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6099) HDFS file system limits not enforced on renames.
Date Tue, 18 Mar 2014 19:56:44 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13939702#comment-13939702 ]

Tsz Wo Nicholas Sze commented on HDFS-6099:

- For a rename within the same directory, is it possible to exceed the limit?  In particular, I think the
following condition is never true; otherwise, the directory already exceeded the limit before the rename.
(isRenameInSameDir && count > maxDirItems)
- It is better to call verifyFsLimitsForRename before verifyQuotaForRename, since the former is cheaper.
- The patch no longer applies cleanly.  It needs to be updated.
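
The invariant behind the first point can be sketched as follows. This is a hypothetical simplification, not the actual FSDirectory code: the constant and method here only illustrate why a same-directory rename can never newly exceed dfs.namenode.fs-limits.max-directory-items, so the cheap fs-limit check can skip that case.

```java
// Hypothetical sketch of the check discussed above (not the HDFS source).
public class RenameLimitCheck {
    // Stands in for dfs.namenode.fs-limits.max-directory-items.
    static final int MAX_DIR_ITEMS = 3;

    // Returns true if the rename may proceed under the directory-items limit.
    static boolean verifyFsLimitsForRename(int destDirItemCount,
                                           boolean isRenameInSameDir) {
        if (isRenameInSameDir) {
            // A rename within the same directory removes one entry and adds
            // one, so the item count is unchanged; if the limit held before
            // the rename, it still holds after it.
            return true;
        }
        // A cross-directory rename adds one entry to the destination.
        return destDirItemCount + 1 <= MAX_DIR_ITEMS;
    }

    public static void main(String[] args) {
        // Destination already at the limit: cross-directory rename rejected,
        // but a same-directory rename is still allowed.
        System.out.println(verifyFsLimitsForRename(3, false));
        System.out.println(verifyFsLimitsForRename(3, true));
    }
}
```

Running this cheap check before the quota computation matches the ordering suggested above: it can reject an invalid rename without walking the quota tree.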

> HDFS file system limits not enforced on renames.
> ------------------------------------------------
>                 Key: HDFS-6099
>                 URL: https://issues.apache.org/jira/browse/HDFS-6099
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.3.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>             Fix For: 2.4.0
>         Attachments: HDFS-6099.1.patch, HDFS-6099.2.patch
> {{dfs.namenode.fs-limits.max-component-length}} and {{dfs.namenode.fs-limits.max-directory-items}}
> are not enforced on the destination path during rename operations.  This means that it's still
> possible to create files that violate these limits.

This message was sent by Atlassian JIRA
