hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-6055) Change default configuration to limit file name length in HDFS
Date Sat, 08 Mar 2014 05:44:43 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Chris Nauroth updated HDFS-6055:
--------------------------------

    Attachment: HDFS-6055.1.patch

I'm attaching a patch that changes the default limit to 255.  This functionality is already
well tested by {{TestFsLimits}}, so I didn't add new tests.  I did discover that {{TestSymlinkHdfs}}
depends on creating long paths, so I set its configuration back to 0 (unlimited).
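For users who do need long component names after this change, the override would look something like the following in hdfs-site.xml (a sketch based on the property name and semantics described in this issue; 0 disables the limit):

```xml
<!-- hdfs-site.xml: restore unlimited path-component length.
     Per this issue, a value of 0 means no length limit is enforced. -->
<property>
  <name>dfs.namenode.fs-limits.max-component-length</name>
  <value>0</value>
</property>
```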

> Change default configuration to limit file name length in HDFS
> --------------------------------------------------------------
>
>                 Key: HDFS-6055
>                 URL: https://issues.apache.org/jira/browse/HDFS-6055
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.0.0, 2.4.0
>            Reporter: Suresh Srinivas
>            Assignee: Chris Nauroth
>         Attachments: HDFS-6055.1.patch
>
>
> Currently the configuration "dfs.namenode.fs-limits.max-component-length" defaults to 0, so
HDFS file names have no length limit. However, we increasingly see users run into issues where
they copy files from HDFS to another file system and the copy fails because the file name is
too long.
> I propose changing the default configuration "dfs.namenode.fs-limits.max-component-length"
to a reasonable value. This will be an incompatible change. However, users who need long file
names can override this configuration to turn off the length limit.
> What do folks think?



--
This message was sent by Atlassian JIRA
(v6.2#6252)
