hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (HDFS-6055) Change default configuration to limit file name length in HDFS
Date Wed, 05 Mar 2014 21:54:47 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth reassigned HDFS-6055:
-----------------------------------

    Assignee: Chris Nauroth  (was: Suresh Srinivas)

> Change default configuration to limit file name length in HDFS
> --------------------------------------------------------------
>
>                 Key: HDFS-6055
>                 URL: https://issues.apache.org/jira/browse/HDFS-6055
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Suresh Srinivas
>            Assignee: Chris Nauroth
>
> Currently the configuration "dfs.namenode.fs-limits.max-component-length" is set to 0, so HDFS file names have no length limit. However, we increasingly see users run into issues where they copy files from HDFS to another file system and the copy fails because the file names are too long.
> I propose changing the default value of "dfs.namenode.fs-limits.max-component-length" to a reasonable limit. This will be an incompatible change; however, users who need long file names can override this configuration to turn off the length limit.
> What do folks think?
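
For readers who want to try an override, a minimal sketch of the relevant hdfs-site.xml entry might look like the following. The value 255 is an illustrative assumption, not the default proposed in this issue; per the description above, a value of 0 disables the check entirely.

    <!-- hdfs-site.xml: illustrative override of the path-component length limit.
         255 is an assumed value for this sketch; 0 turns the limit off. -->
    <property>
      <name>dfs.namenode.fs-limits.max-component-length</name>
      <value>255</value>
    </property>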



--
This message was sent by Atlassian JIRA
(v6.2#6252)
