hadoop-common-user mailing list archives

From "Joydeep Sen Sarma" <jssa...@facebook.com>
Subject RE: Limit the space used by hadoop on a slave node
Date Tue, 08 Jan 2008 19:32:34 GMT
At least up until 14.4, these options are broken; see https://issues.apache.org/jira/browse/HADOOP-2549

(There's a trivial patch, but I am still testing it.)


-----Original Message-----
From: Khalil Honsali [mailto:k.honsali@gmail.com]
Sent: Tue 1/8/2008 11:21 AM
To: hadoop-user@lucene.apache.org
Subject: Re: Limit the space used by hadoop on a slave node
 
I haven't tried yet, but I've seen this:
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>0</value>
  <description>Reserved space in bytes per volume. Always leave this much
  space free for non-dfs use.</description>
</property>

or

<property>
  <name>dfs.datanode.du.pct</name>
  <value>0.98f</value>
  <description>When calculating remaining space, only use this percentage
  of the real available space.</description>
</property>


In:
conf/hadoop-site.xml
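
To get a feel for how these two settings interact, here is a rough sketch (not Hadoop's actual code; the exact formula the datanode uses may differ, and per HADOOP-2549 above it was buggy in this era anyway). The idea: `du.pct` scales down the disk space the datanode considers usable, and `du.reserved` subtracts a fixed number of bytes on top of that.

```python
def usable_space(free_bytes, du_reserved=0, du_pct=0.98):
    """Estimate the space a datanode would report as available,
    given the real free bytes on the volume.

    du_reserved -- bytes to always keep free for non-dfs use
                   (dfs.datanode.du.reserved)
    du_pct      -- fraction of real free space to consider usable
                   (dfs.datanode.du.pct)

    This is an illustrative approximation, not Hadoop's exact logic.
    """
    return max(0, int(free_bytes * du_pct) - du_reserved)

# Example: 100 GB free, reserve 10 GB, consider 98% of the rest usable.
gb = 1024 ** 3
print(usable_space(100 * gb, du_reserved=10 * gb))
```

So to cap Hadoop's footprint on a shared machine, you would raise `dfs.datanode.du.reserved` (and/or lower `dfs.datanode.du.pct`) until the reported available space matches what you are willing to give up. Note this only constrains HDFS block storage, not MapReduce temp space under `hadoop.tmp.dir`.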


On 09/01/2008, S. Nunes <snunes@gmail.com> wrote:
>
> Hi,
>
> I'm trying to install hadoop on a set of computers that are not
> exclusively dedicated to running hadoop.
> Our goal is to use these computers in the hadoop cluster when they are
> inactive (during the night).
>
> I would like to know if it is possible to limit the space used by
> hadoop at a slave node.
> Something like "hadoop.tmp.dir.max". I do not want hadoop to use all
> the available disk space.
>
> Thanks in advance for any help on this issue,
>
> --
> Sérgio Nunes
>




