hadoop-hdfs-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: question on yarn and fairscheduler
Date Tue, 20 May 2014 07:32:46 GMT

YARN's FairScheduler does not have the maxMaps/maxReducers fields that its
MR1 counterpart did, given YARN's generic Container-based
architecture. Please see the sub-section "Allocation file format" in the
FairScheduler documentation for the actual configurable elements.
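To illustrate, here is a minimal allocations.xml sketch using elements the FairScheduler allocation file format does support; the queue name and the specific values are illustrative, not from the original thread. Note that limits are expressed in memory and vcores rather than map/reduce slots:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Hypothetical queue; adjust name and limits to your cluster -->
  <queue name="analytics">
    <!-- Cap total resources for this queue (memory, vcores),
         replacing the per-task-type caps MR1 offered -->
    <maxResources>40960 mb, 16 vcores</maxResources>
    <!-- Cap concurrent applications in the queue -->
    <maxRunningApps>10</maxRunningApps>
    <!-- Relative fair-share weight against sibling queues -->
    <weight>2.0</weight>
  </queue>
</allocations>
```

Elements such as maxMaps/maxReduces placed in this file are silently ignored by the YARN FairScheduler, which would explain them appearing "not to work".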

Preemption marks tasks as KILLED, not FAILED, so they do not
count toward your job's failure thresholds. If your task attempts are
truly failing, their task attempt logs (syslog, stderr, stdout) will
have the reason they failed.
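For context, the "failed 4 times" in the diagnostic reflects the per-task attempt limit, which defaults to 4 and is controlled by the properties below; only FAILED attempts count toward it, never preemption kills. A hedged mapred-site.xml sketch (the value 8 is purely illustrative, not a recommendation):

```xml
<!-- Raise the per-task attempt limit from its default of 4 -->
<property>
  <name>mapreduce.map.maxattempts</name>
  <value>8</value>
</property>
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>8</value>
</property>
```

Raising the limit only masks the symptom, though; the attempt logs remain the right place to find the underlying failure cause.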

On Tue, May 20, 2014 at 12:23 PM, Du Lam <delim123456@gmail.com> wrote:
> Hi
> some questions on yarn+fairscheduler:
> 1. do the maxMaps and maxReduces in allocations.xml actually work? I notice
> they are not working in my setup.
> 2. my job always fails with diagnostics such as: Task
> task_1400033851458_4824_m_000006 failed 4 times.
>  is it possible that this is due to being preempted too many times? or some
> other issue? In the same job, there are also tasks killed with the note: Attempt
> state missing from History : marked as KILLED
> any help would be appreciated. Thanks.

Harsh J
