hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2447) HDFS should be capable of limiting the total number of inodes in the system
Date Wed, 19 Dec 2007 18:24:43 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12553424 ]

Doug Cutting commented on HADOOP-2447:
--------------------------------------

> Heap: 34 M/b

Oops.  This might better be:

Heap: 34 / 90 MB (37%)

Where these would be the results of Runtime.totalMemory() and Runtime.maxMemory().
Then one could compare the two percentages (of objects and of maximum memory) to decide whether
it was safe to increase the maximum number of objects, with lots of provisos.
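For illustration, a minimal sketch of how such a report might be assembled (the class and
method names here are hypothetical, not taken from the Namenode code):

    // Sketch only: format a heap report like "Heap: 34 / 90 MB (37%)".
    // HeapReport is a hypothetical name, not a class in the Namenode.
    public class HeapReport {
      public static String heapReport() {
        Runtime rt = Runtime.getRuntime();
        long totalMB = rt.totalMemory() / (1024 * 1024); // heap currently allocated
        long maxMB   = rt.maxMemory()   / (1024 * 1024); // the -Xmx ceiling
        long percent = 100 * rt.totalMemory() / rt.maxMemory();
        return "Heap: " + totalMB + " / " + maxMB + " MB (" + percent + "%)";
      }

      public static void main(String[] args) {
        System.out.println(heapReport());
      }
    }

Note that totalMemory() reports the heap the JVM has allocated so far rather than live
object bytes, which is one of the provisos above.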

> HDFS should be capable of limiting the total number of inodes in the system
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-2447
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2447
>             Project: Hadoop
>          Issue Type: New Feature
>            Reporter: Sameer Paranjpye
>            Assignee: dhruba borthakur
>             Fix For: 0.16.0
>
>         Attachments: fileLimit.patch
>
>
> The HDFS Namenode should be capable of limiting the total number of Inodes (files + directories).
> This can be done through a config variable, settable in hadoop-site.xml. The default should
> be no limit.
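
For illustration, a minimal sketch of how the Namenode might read and enforce such a limit.
The property name "dfs.max.objects" and the class below are assumptions for the example,
not necessarily what the attached fileLimit.patch uses:

    // Sketch only, assuming a property named "dfs.max.objects" with a
    // default of 0 (no limit); the actual key is whatever the patch defines.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;

    public class InodeLimitSketch {
      private final long maxObjects;  // 0 means unlimited
      private long inodeCount;        // current files + directories

      public InodeLimitSketch(Configuration conf) {
        this.maxObjects = conf.getLong("dfs.max.objects", 0);
      }

      // Called before allocating a new inode; rejects the create/mkdir
      // when the configured cap would be exceeded.
      void checkObjectLimit() throws IOException {
        if (maxObjects != 0 && inodeCount + 1 > maxObjects) {
          throw new IOException("Cannot create more inodes: limit of "
              + maxObjects + " objects reached");
        }
      }
    }

The corresponding hadoop-site.xml entry would simply set that property to a positive value;
leaving it unset keeps the no-limit default described above.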

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

