hadoop-common-user mailing list archives

From "Hairong Kuang" <hair...@yahoo-inc.com>
Subject RE: Limit the space used by hadoop on a slave node
Date Tue, 08 Jan 2008 21:32:35 GMT
Most of the time DFS and map/reduce share the same disks. Keep in mind that
the du options cannot control how much space map/reduce tasks take. We
sometimes run out of disk space simply because data-intensive map/reduce
tasks write a lot of intermediate data.
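For concreteness, here is a rough hadoop-site.xml fragment showing what I
mean. I am assuming the "du options" in question are
dfs.datanode.du.reserved and dfs.datanode.du.pct; the values below are
only illustrative, not recommendations.

  <!-- Sketch of a hadoop-site.xml fragment; values are examples only. -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- Bytes per volume the DataNode leaves free for non-DFS use
         (roughly 10 GB here). This bounds DFS block storage only. -->
    <value>10737418240</value>
  </property>
  <property>
    <name>dfs.datanode.du.pct</name>
    <!-- Fraction of the usable disk the DataNode is allowed to fill. -->
    <value>0.90</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <!-- Map/reduce intermediate output lands here and is NOT limited by
         the du settings above, which is why big jobs can still fill
         the partition. -->
    <value>/disk1/mapred/local,/disk2/mapred/local</value>
  </property>

So even with both du settings in place, a single data-intensive job can
still exhaust the disk through mapred.local.dir.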

Hairong

-----Original Message-----
From: Ted Dunning [mailto:tdunning@veoh.com] 
Sent: Tuesday, January 08, 2008 1:13 PM
To: hadoop-user@lucene.apache.org
Subject: Re: Limit the space used by hadoop on a slave node


I think I have seen related bad behavior on 15.1.

On 1/8/08 11:49 AM, "Hairong Kuang" <hairong@yahoo-inc.com> wrote:

> Has anybody tried 15.0? Please check
> https://issues.apache.org/jira/browse/HADOOP-1463.
> 
> Hairong
> -----Original Message-----
> From: Joydeep Sen Sarma [mailto:jssarma@facebook.com]
> Sent: Tuesday, January 08, 2008 11:33 AM
> To: hadoop-user@lucene.apache.org; hadoop-user@lucene.apache.org
> Subject: RE: Limit the space used by hadoop on a slave node
> 
> At least up until 14.4, these options are broken; see
> https://issues.apache.org/jira/browse/HADOOP-2549
> 
> (There's a trivial patch, but I am still testing it.)
> 
> 

