hadoop-common-user mailing list archives

From 胡子千 <hzq0...@gmail.com>
Subject Re: Some questions about preemption policy on yarn
Date Wed, 22 Mar 2017 03:27:53 GMT
Hi Jesmine,
Thank you for your reply. I've solved the problem.
1. I had already changed the user limit to 2, so a single job can use all the
resources of the partition (see the config sketch after this list).
2. The test label is exclusive.
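(Concretely, something like the following in capacity-scheduler.xml; this is
a sketch assuming the queue's user-limit-factor property and queues that sit
directly under root:

  yarn.scheduler.capacity.root.test1.user-limit-factor=2
  yarn.scheduler.capacity.root.test2.user-limit-factor=2

With a factor of 2 a single user may take up to twice the queue's 50%
capacity, i.e. the whole test partition, instead of being capped at the
queue's capacity as with the default factor of 1.)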

The problem was caused by the parameter
yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round.

There are only 8 vcores in each queue and total_preemption_per_round is 0.1.
I guess that under this setting only 8 * 0.1 = 0.8 vcore can be preempted per
round, which is less than one container, so the preemption didn't happen.
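For reference, these are the properties involved (a sketch using the stock
property names; 0.1 is the default value):

  yarn.resourcemanager.scheduler.monitor.enable=true
  yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
  yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round=0.1

total_preemption_per_round is the maximum fraction of resources that the
policy may preempt in a single monitoring round.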

In another experiment, I increased the vcores of each queue to 20 and
increased total_preemption_per_round, and the preemption happened as expected.
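(For example, with 20 vcores even the default total_preemption_per_round=0.1
already allows 20 * 0.1 = 2 vcores to be preempted per round, i.e. at least
one container, so either change on its own would likely have been enough.)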

2017-03-21 11:23 GMT+08:00 Jesmine Zhang <jesmine.zhang@gmail.com>:

> We have also tested preemption with Node Labels enabled based on 2.8, and
> it works fine for us.
> Some questions for you:
> 1. For step 4: it doesn't look like your job A was running on the test
> partition. There is no way a single job can use up to the queue's maximum
> capacity of 100% because of the user limit. By default the user limit is the
> same as the queue's capacity, which is 50% in your case, so job A could
> consume at most half of the resources on the test partition. You might want
> to check whether your job indeed ran on the test partition; you can see it
> on the RM UI application page. You need to specify a node label expression
> explicitly when submitting a job if the target queue doesn't have a default
> node label expression configured, otherwise it goes to the DEFAULT partition
> (see the sketch after these questions).
> 2. Is the "test" partition exclusive or non-exclusive? It looks like it is
> non-exclusive.
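> For example, roughly (a sketch; the Spark property names are the standard
> Spark-on-YARN ones, and the queue/label names here just mirror your setup):
>
>   spark-submit --master yarn --queue test1 \
>     --conf spark.yarn.am.nodeLabelExpression=test \
>     --conf spark.yarn.executor.nodeLabelExpression=test \
>     ...
>
>   # or give the queue a default label in capacity-scheduler.xml:
>   yarn.scheduler.capacity.root.test1.default-node-label-expression=test
>
>   # and to check how the label was created (it defaults to exclusive=true):
>   yarn cluster --list-node-labels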
> On Mon, Mar 20, 2017 at 5:28 PM, 胡子千 <hzq0630@gmail.com> wrote:
>> Hi,
>> I'm a user of Hadoop YARN, and these days I'm testing the Node Label
>> feature on YARN 2.8.0 with the Capacity Scheduler. I found that preemption
>> didn't work on queues with a label. Here are the details:
>> 1. I assigned the *test* label to 2 nodes in our cluster.
>> 2. I created the *test1* and *test2* queues under root, which can only
>> access the *test* label, and for each queue set
>> accessible-node-labels.test.capacity=50 and
>> accessible-node-labels.test.maximum-capacity=100 (a fuller config sketch
>> follows this list).
>> 3. Enabled the preemption policy:
>> yarn.resourcemanager.scheduler.monitor.enable=true
>> 4. Submitted a Spark job (named A) to queue *test1*, which asks for 16
>> executors and uses all the resources of the *test* partition.
>> 5. Submitted a Spark job (named B) to queue *test2*. I assumed that because
>> test2 was under-satisfied and test1 was over-satisfied, preemption would
>> happen and each queue would finally use 50% of the resources of partition
>> test. In fact, the preemption didn't happen and job B stayed in the
>> ACCEPTED state; when job A finished, job B started to run.
>> 6. Submitted the same jobs to different queues in the DEFAULT partition,
>> and the preemption happened as we expected.
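>> For completeness, the relevant part of my capacity-scheduler.xml looks
>> roughly like this (a sketch; the queue paths assume test1 and test2 sit
>> directly under root and other queues are omitted):
>>
>>   yarn.scheduler.capacity.root.queues=...,test1,test2
>>   yarn.scheduler.capacity.root.test1.accessible-node-labels=test
>>   yarn.scheduler.capacity.root.test1.accessible-node-labels.test.capacity=50
>>   yarn.scheduler.capacity.root.test1.accessible-node-labels.test.maximum-capacity=100
>>   yarn.scheduler.capacity.root.test2.accessible-node-labels=test
>>   yarn.scheduler.capacity.root.test2.accessible-node-labels.test.capacity=50
>>   yarn.scheduler.capacity.root.test2.accessible-node-labels.test.maximum-capacity=100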
>> I found that patch YARN-2498 about preemption has been merged into 2.8.0,
>> and I think that with this patch YARN supports preemption on labeled
>> partitions. So is there any configuration I need to set, or did I make some
>> mistake when using this feature? Or did I misunderstand patch YARN-2498,
>> and 2.8.0 in fact doesn't support preemption on labeled partitions?
>> Looking forward to your reply, thank you.
>> --
>> Best regards!
>> Ziqian HU 胡子千
>> Department of Computer Science, School of EECS, Peking University
> --
> Ying
> Best Regards


Best regards!

Ziqian HU 胡子千
Department of Computer Science, School of EECS, Peking University
