hadoop-common-user mailing list archives

From Michael Segel <michael_se...@hotmail.com>
Subject RE: is there any way we can limit Hadoop Datanode's disk usage?
Date Wed, 31 Mar 2010 19:36:34 GMT



> From: awittenauer@linkedin.com
> To: common-user@hadoop.apache.org
> Subject: Re: is there any way we can limit Hadoop Datanode's disk usage?
> Date: Wed, 31 Mar 2010 18:09:04 +0000
> 
> On 3/30/10 8:12 PM, "steven zhuang" <steven.zhuang.1984@gmail.com> wrote:
> 
> > hi, guys,
> >                we have some machines with 1TB disks and some with 100GB disks.
> >                Is there any way we can limit the disk usage of the datanodes
> > on the machines with the smaller disks?
> >                thanks!
> 
> 
> You can use dfs.datanode.du.reserved, but be aware that there are *no* limits on
> mapreduce's usage, other than what you can create with file system quotas.
> 
> I've started recommending creating file system partitions in order to work
> around Hadoop's crazy space reservation ideas.
> 
Hmmm.

Our sysadmins decided to put each of the JBOD disks into its own volume group.
That kind of makes sense if you want to limit any impact Hadoop could cause (assuming someone
forgot to set dfs.datanode.du.reserved).
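
For anyone looking for that knob, here's a minimal hdfs-site.xml sketch. The 10 GB value is only an illustration; tune it per node:

    <!-- hdfs-site.xml: reserve space for non-HDFS use on each datanode volume.
         Value is in bytes; 10737418240 (10 GB) is only an example figure. -->
    <property>
      <name>dfs.datanode.du.reserved</name>
      <value>10737418240</value>
    </property>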

But I do agree that, at a minimum, the space used by Hadoop should be on its own partition
and not on the '/' (root) filesystem.
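
Something along these lines is what I mean, assuming the dedicated partitions are mounted at /data/1 and /data/2 (the paths are hypothetical; use whatever your mounts actually are):

    <!-- hdfs-site.xml: point block storage at dedicated partitions instead of '/'.
         The mount points here are just example paths. -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
    </property>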
 		 	   		  