hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2447) HDFS should be capable of limiting the total number of inodes in the system
Date Wed, 09 Jan 2008 11:09:37 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12557232#action_12557232 ]

Hadoop QA commented on HADOOP-2447:

-1 overall.  Here are the results of testing the latest attachment 
against trunk revision .

    @author +1.  The patch does not contain any @author tags.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new compiler warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests -1.  The patch failed contrib unit tests.

Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1521/testReport/
Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1521/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1521/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1521/console

This message is automatically generated.

> HDFS should be capable of limiting the total number of inodes in the system
> ---------------------------------------------------------------------------
>                 Key: HADOOP-2447
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2447
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Sameer Paranjpye
>            Assignee: dhruba borthakur
>             Fix For: 0.16.0
>         Attachments: fileLimit.patch, fileLimit2.patch, fileLimit3.patch
> The HDFS Namenode should be capable of limiting the total number of Inodes (files + directories).
> This can be done through a config variable, settable in hadoop-site.xml. The default should
> be no limit.
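As a rough illustration of the quoted proposal, a site-level entry in hadoop-site.xml might look like the sketch below. The property name, default, and wording here are hypothetical and are not taken from the attached patches:

```xml
<!-- Hypothetical hadoop-site.xml entry; the property name and its
     semantics are illustrative only, not taken from fileLimit.patch. -->
<property>
  <name>dfs.max.objects</name>
  <value>0</value>
  <description>Maximum number of files and directories (inodes) the
  Namenode will allow to exist. A value of 0 means no limit, matching
  the default behavior proposed in the issue description.</description>
</property>
```

Under this sketch, the Namenode would read the value at startup and reject create operations once the inode count reached the configured limit, while a zero (the default) would leave creation unrestricted.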

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
