From: francexo83 <francexo83@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 20 Nov 2014 15:14:12 +0100
Subject: Re: MR job fails with too many mappers

Hi,

as I said before, I wrote a TableInputFormat and RecordReader extension that
reads its input data from an HBase table; in my case every single row is
associated with a single InputSplit.

For example, if I have 300000 rows to process, my custom TableInputFormat
will generate 300000 input splits and, as a result, 300000 mapper tasks in
my MapReduce job.
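
To give an idea, here is a stripped-down sketch of the getSplits() override
(this is not my exact code: the class name is made up, and the filter and
locality handling are simplified):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableSplit;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;

public class RowPerSplitTableInputFormat extends TableInputFormat {

  @Override
  public List<InputSplit> getSplits(JobContext context) throws IOException {
    // the HTable is created from the job configuration by TableInputFormat.setConf()
    HTable table = getHTable();
    Scan keysOnly = new Scan();
    keysOnly.setFilter(new FirstKeyOnlyFilter()); // only the row keys are needed here
    List<InputSplit> splits = new ArrayList<InputSplit>();
    ResultScanner scanner = table.getScanner(keysOnly);
    try {
      for (Result r : scanner) {
        byte[] row = r.getRow();
        // one split per row: [row, row + 0x00) covers exactly this row;
        // the region location hint is left empty in this sketch
        splits.add(new TableSplit(table.getName(), row,
            Bytes.add(row, new byte[] { 0 }), ""));
      }
    } finally {
      scanner.close();
    }
    return splits;
  }
}

So with 300000 rows this produces 300000 splits, and the framework schedules
one map task per split.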

That's all.

Regards

2014-11-20 6:02 GMT+01:00 Susheel Kumar Gadalay <skgadalay@gmail.com>:
> In which case does the split metadata go beyond 10MB?
> Can you give some details of your input file and splits?
>
> On 11/19/14, francexo83 <francexo83@gmail.com> wrote:
> > Thank you very much for your suggestion, it was very helpful.
> >
> > This is what I have after turning off log aggregation:
> >
> > 2014-11-18 18:39:01,507 INFO [main] org.apache.hadoop.service.AbstractService:
> > Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state STARTED;
> > cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> > java.io.IOException: Split metadata size exceeded 10000000. Aborting job
> > job_1416332245344_0004
> > org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
> > java.io.IOException: Split metadata size exceeded 10000000. Aborting job
> > job_1416332245344_0004
> >         at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1551)
> >         at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1406)
> >         at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.transition(JobImpl.java:1373)
> >         at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> >         at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> >         at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> >         at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> >         at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:986)
> >         at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
> >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1249)
> >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1049)
> >         at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1460)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
> >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1456)
> >         at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1389)
> > Caused by: java.io.IOException: Split metadata size exceeded 10000000.
> > Aborting job job_1416332245344_0004
> >         at org.apache.hadoop.mapreduce.split.SplitMetaInfoReader.readSplitMetaInfo(SplitMetaInfoReader.java:53)
> >         at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$InitTransition.createSplits(JobImpl.java:1546)
> >
> > I exceeded the split metadata size, so I added the following property to
> > mapred-site.xml and it worked:
> >
> > <property>
> >     <name>mapreduce.job.split.metainfo.maxsize</name>
> >     <value>500000000</value>
> > </property>
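> >
> > By the way, I believe the same limit can also be raised per job from the
> > driver instead of cluster-wide in mapred-site.xml. A minimal sketch, not
> > verified on 2.3.0, assuming the application master picks the value up
> > from the job configuration:
> >
> > // in the job driver, before the job is submitted
> > // imports: org.apache.hadoop.conf.Configuration,
> > //          org.apache.hadoop.hbase.HBaseConfiguration,
> > //          org.apache.hadoop.mapreduce.Job
> > Configuration conf = HBaseConfiguration.create();
> > // raise the per-job limit; -1L should disable the check entirely
> > conf.setLong("mapreduce.job.split.metainfo.maxsize", 500000000L);
> > Job job = Job.getInstance(conf, "row-per-split job");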
> >
> > thanks again.
> >
> > 2014-11-18 17:59 GMT+01:00 Rohith Sharma K S <rohithsharmaks@huawei.com>:
> >
> >> If log aggregation is enabled, the log folder will be deleted. So I suggest
> >> disabling "yarn.log-aggregation-enable" and running the job again. All the
> >> logs remain in the log folder. Then you can find the container logs.
> >>
> >> Thanks & Regards
> >>
> >> Rohith Sharma K S
> >>
> >> This e-mail and its attachments contain confidential information from
> >> HUAWEI, which is intended only for the person or entity whose address is
> >> listed above. Any use of the information contained herein in any way
> >> (including, but not limited to, total or partial disclosure, reproduction,
> >> or dissemination) by persons other than the intended recipient(s) is
> >> prohibited. If you receive this e-mail in error, please notify the sender
> >> by phone or email immediately and delete it!
> >>
> >> *From:* francexo83 [mailto:francexo83@gmail.com]
> >> *Sent:* 18 November 2014 22:15
> >> *To:* user@hadoop.apache.org
> >> *Subject:* Re: MR job fails with too many mappers
> >>
> >> Hi,
> >>
> >> thank you for your quick response, but I was not able to see the logs for
> >> the container.
> >>
> >> I get a "no such file or directory" when I try to access the logs of the
> >> container from the shell:
> >>
> >> cd /var/log/hadoop-yarn/containers/application_1416304409718_0032
> >>
> >> It seems that the container has never been created.
> >>
> >> thanks
> >>
> >> 2014-11-18 16:43 GMT+01:00 Rohith Sharma K S <rohithsharmaks@huawei.com>:
> >>
> >> Hi
> >>
> >> Could you get the stderr and stdout logs for the container? These logs will
> >> be available in the same location as the container's syslog:
> >>
> >> ${yarn.nodemanager.log-dirs}/<app-id>/<container-id>
> >>
> >> This helps to find the problem!
> >>
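> >> For example, with yarn.nodemanager.log-dirs pointing at
> >> /var/log/hadoop-yarn/containers, that would be something like
> >> /var/log/hadoop-yarn/containers/<application-id>/<container-id>/,
> >> which normally holds the container's stdout, stderr and syslog files.
> >>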
> >> Thanks & Regards
> >>
> >> Rohith Sharma K S
> >>
> >> *From:* francexo83 [mailto:francexo83@gmail.com]
> >> *Sent:* 18 November 2014 20:53
> >> *To:* user@hadoop.apache.org
> >> *Subject:* MR job fails with too many mappers
> >>
> >> Hi All,
> >>
> >> I have a small hadoop cluster with three nodes and HBase 0.98.1 installed
> >> on it.
> >>
> >> The hadoop version is 2.3.0; below is my use case scenario.
> >>
> >> I wrote a map reduce program that reads data from an HBase table and does
> >> some transformations on that data.
> >> The jobs are very simple, so they don't need the reduce phase. I also wrote
> >> a TableInputFormat extension in order to maximize the number of concurrent
> >> maps on the cluster.
> >> In other words, each row should be processed by a single map task.
> >>
> >> Everything goes well until the number of rows, and consequently of mappers,
> >> exceeds 300000.
> >>
> >> This is the only exception I see when the job fails:
> >>
> >> Application application_1416304409718_0032 failed 2 times due to AM
> >> Container for appattempt_1416304409718_0032_000002 exited with exitCode: 1
> >> due to:
> >>
> >> Exception from container-launch:
> >> org.apache.hadoop.util.Shell$ExitCodeException:
> >> org.apache.hadoop.util.Shell$ExitCodeException:
> >>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:511)
> >>         at org.apache.hadoop.util.Shell.run(Shell.java:424)
> >>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:656)
> >>         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >>         at java.lang.Thread.run(Thread.java:745)
> >>
> >> Container exited with a non-zero exit code 1
> >>
> >> Cluster configuration details:
> >>
> >> Node1: 12 GB, 4 cores
> >> Node2: 6 GB, 4 cores
> >> Node3: 6 GB, 4 cores
> >>
> >> yarn.scheduler.minimum-allocation-mb=2048
> >> yarn.scheduler.maximum-allocation-mb=4096
> >> yarn.nodemanager.resource.memory-mb=6144
> >>
> >> Regards