hadoop-mapreduce-user mailing list archives

From Travis Crawford <traviscrawf...@gmail.com>
Subject Re: distributed cache exceeding local.cache.size
Date Fri, 01 Apr 2011 19:05:12 GMT
On Thu, Mar 31, 2011 at 3:25 PM, Allen Wittenauer <aw@apache.org> wrote:
>
> On Mar 31, 2011, at 11:45 AM, Travis Crawford wrote:
>
>> Is anyone familiar with how the distributed cache behaves when datasets
>> larger than the total cache size are referenced? I've disabled the job
>> that caused this situation but am wondering if I can configure things
>> more defensively.
>
>        I've started building dedicated file systems on the drives to store
> the MapReduce spill space.  It seems to be the only reliable way to prevent
> MR from going nuts.  Sure, some jobs may fail, but that seems to be a better
> strategy than the alternative.
>
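
For reference, the ceiling being exceeded here is the local.cache.size
property (in bytes), set in mapred-site.xml on each TaskTracker. A minimal
sketch, assuming 0.20-era property names; the value shown is the commonly
cited 10 GB default, not a figure from this thread:

    <property>
      <name>local.cache.size</name>
      <!-- Upper bound, in bytes, on the TaskTracker's distributed cache;
           once exceeded, unreferenced cache entries become eligible for
           deletion. 10 GB shown here. -->
      <value>10737418240</value>
    </property>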

Interesting. So, for example, say you have two disks in a
DataNode+TaskTracker machine. You'd make two partitions on each disk,
exposing four partitions to the system, then give two partitions (one
from each disk) to each application?
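
If that reading is right, each daemon would then be pinned to its own
mounts in the config. A sketch, assuming hypothetical mount points (one
DFS partition and one MapReduce partition per physical disk):

    <!-- hdfs-site.xml: DataNode block storage on its own partitions -->
    <property>
      <name>dfs.data.dir</name>
      <value>/mnt/d0-dfs/data,/mnt/d1-dfs/data</value>
    </property>

    <!-- mapred-site.xml: spill and distributed-cache space confined to
         separate partitions, so MR usage is capped by their size -->
    <property>
      <name>mapred.local.dir</name>
      <value>/mnt/d0-mapred/local,/mnt/d1-mapred/local</value>
    </property>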

Is the idea here to prevent runaway jobs from filling up the DataNode
disks, which causes write failures?

--travis
