hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-6099) HDFS file system limits not enforced on renames.
Date Thu, 13 Mar 2014 22:53:43 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HDFS-6099:

    Attachment: HDFS-6099.2.patch

I'm attaching patch v2 with one more small change.  I added {{PathComponentTooLongException}}
and {{MaxDirectoryItemsExceededException}} to the terse exceptions list.  These are ultimately
caused by bad client requests, so there isn't any value in writing the full stack trace to
the NameNode logs.
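The idea behind a "terse" exceptions list can be sketched as follows. This is a hypothetical, simplified illustration, not Hadoop's actual RPC server code: the server keeps a set of exception classes that signal client error, and for those it logs only the message rather than a full stack trace.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (hypothetical, not the NameNode's implementation):
// exceptions caused by bad client requests are registered as "terse",
// and only their message is logged instead of a full stack trace.
public class TerseExceptions {
    private static final Set<Class<?>> TERSE = new HashSet<>();

    static void addTerseException(Class<?> cls) {
        TERSE.add(cls);
    }

    // Returns what would be written to the log: message only for terse
    // exceptions; class name plus message (standing in for a stack trace)
    // for everything else.
    static String formatForLog(Throwable t) {
        if (TERSE.contains(t.getClass())) {
            return t.getMessage();
        }
        return t.getClass().getName() + ": " + t.getMessage();
    }

    public static void main(String[] args) {
        addTerseException(IllegalArgumentException.class);
        System.out.println(formatForLog(
            new IllegalArgumentException("path component too long")));
        System.out.println(formatForLog(new RuntimeException("unexpected")));
    }
}
```

With this scheme, registering an exception class is a one-line change, which matches the spirit of adding {{PathComponentTooLongException}} and {{MaxDirectoryItemsExceededException}} to such a list.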

> HDFS file system limits not enforced on renames.
> ------------------------------------------------
>                 Key: HDFS-6099
>                 URL: https://issues.apache.org/jira/browse/HDFS-6099
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.3.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>             Fix For: 2.4.0
>         Attachments: HDFS-6099.1.patch, HDFS-6099.2.patch
>
> {{dfs.namenode.fs-limits.max-component-length}} and {{dfs.namenode.fs-limits.max-directory-items}}
> are not enforced on the destination path during rename operations. This means that it's still
> possible to create files that violate these limits.
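The check that the issue describes as missing on renames can be sketched as below. This is a hypothetical illustration, not the actual NameNode code: the constants stand in for the configured values of {{dfs.namenode.fs-limits.max-component-length}} and {{dfs.namenode.fs-limits.max-directory-items}}, and the method applies the same validation to a rename destination that would apply when creating a file there.

```java
// Illustrative sketch (hypothetical, not the NameNode's implementation):
// validate a rename destination against the same limits enforced on create.
public class RenameLimits {
    // Stand-ins for the configured fs-limits values.
    static final int MAX_COMPONENT_LENGTH = 255;
    static final int MAX_DIR_ITEMS = 1024 * 1024;

    // Checks every component of the destination path, and whether adding
    // one more entry to the destination's parent would exceed the limit.
    static void verifyDestination(String dstPath, int parentItemCount) {
        for (String component : dstPath.split("/")) {
            if (component.length() > MAX_COMPONENT_LENGTH) {
                throw new IllegalArgumentException(
                    "path component too long: " + component.length());
            }
        }
        if (parentItemCount + 1 > MAX_DIR_ITEMS) {
            throw new IllegalStateException("directory item limit exceeded");
        }
    }
}
```

Running this check on the destination path during rename, not just on create, closes the loophole the issue describes.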

This message was sent by Atlassian JIRA
