hadoop-common-user mailing list archives

From Munna <munnava...@gmail.com>
Subject Re: Working with Capacity Scheduler
Date Wed, 27 Nov 2013 00:02:31 GMT
Hi Olivier,

Sorry to bother you again; I am still getting the same error. The job output is given below:

[mapred@host~]$ hive -e "use database; select count(*) from table"
Logging initialized using configuration in
jar:file:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive/lib/hive-common-0.10.0-cdh4.4.0.jar!/hive-log4j.properties
Hive history
file=/tmp/mapred/hive_job_log_868a2d7a-1df8-4a8a-b09d-4fad10d3f1b4_1410849451.txt
OK
Time taken: 1.616 seconds
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_1385510241043_0001, Tracking URL = http://host:8088/proxy/application_1385510241043_0001/
Kill Command = /opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/bin/hadoop job -kill job_1385510241043_0001
Ended Job = job_1385510241043_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://hpdl-R306-16:8088/proxy/application_1385510241043_0001/
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
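
A note on the failure mode: the ResourceManager log quoted further down this thread shows the application being submitted by user mapred to an unknown queue named "default", and no queue named "default" exists in the configuration below. A hedged workaround while the ACLs are being sorted out is to point the job at a queue that does exist and that the submitting user is allowed on, for example the "production" queue; on a YARN/MRv2 cluster such as CDH 4.4 this can usually be done per query with the standard MapReduce property (mapred.job.queue.name is the older MRv1 name):

hive -e "set mapreduce.job.queuename=production; use database; select count(*) from table"

The queue name here is only an example taken from the configuration later in the thread.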



On Wed, Nov 27, 2013 at 5:01 AM, Olivier Renault <orenault@hortonworks.com> wrote:

> Yes, you need to forbid all users on your root queue:
>
>   <property>
>       <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
>       <value> </value>
>   </property>
>
>   <property>
>       <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
>       <value> </value>
>   </property>
>
> Olivier
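
Queue ACLs in the Capacity Scheduler are evaluated up the hierarchy: a user who may submit to a parent queue may submit to all of its children, and acl_submit_applications defaults to "*". That is why the root queue has to be locked down (the single-space value is the conventional way of saying "no users") before the per-queue user lists take effect. A minimal sketch of how the two pieces fit together in capacity-scheduler.xml, reusing queue and user names from this thread:

  <property>
    <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
    <value> </value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.production.acl_submit_applications</name>
    <value>yarn,mapred</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.exploration.b.acl_submit_applications</name>
    <value>userb</value>
  </property>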
>
>
> On 26 November 2013 23:29, Munna <munnavarsk@gmail.com> wrote:
>
>> I have configured the ACLs like this:
>>
>> <property>
>>   <name>yarn.scheduler.capacity.root.production.acl_submit_applications</name>
>>   <value>yarn,mapred</value>
>> </property>
>> <property>
>>   <name>yarn.scheduler.capacity.root.exploration.a.acl_submit_applications</name>
>>   <value>userb</value>
>> </property>
>> <property>
>>   <name>yarn.scheduler.capacity.root.exploration.b.acl_submit_applications</name>
>>   <value>userb</value>
>> </property>
>> <property>
>>   <name>yarn.scheduler.capacity.root.exploration.c.acl_submit_applications</name>
>>   <value>userc</value>
>> </property>
>> Is anything missing in this?
>>
>> Thanks
>>
>>
>> On Wed, Nov 27, 2013 at 4:31 AM, Olivier Renault <
>> orenault@hortonworks.com> wrote:
>>
>>> I don't believe the distro should matter.
>>>
>>> Could you confirm that you've got the following in
>>> capacity-scheduler.xml? (The permissions are hierarchical, so you need to
>>> restrict the root queue.)
>>>   <property>
>>>       <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
>>>       <value> </value>
>>>   </property>
>>>
>>>   <property>
>>>       <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
>>>       <value> </value>
>>>   </property>
>>>
>>> Thanks,
>>> Olivier
>>>
>>>
>>> On 26 November 2013 22:51, Munna <munnavarsk@gmail.com> wrote:
>>>
>>>> Hi Olivier,
>>>>
>>>> "yarn.acl.enable"  is enabled earlier, for your information i am using
>>>> Cloudera Manager to manage the cluster. again same problem :(
>>>>
>>>>
>>>>
>>>>
>>>> On Wed, Nov 27, 2013 at 3:37 AM, Olivier Renault <
>>>> orenault@hortonworks.com> wrote:
>>>>
>>>>> Here is a working configuration.
>>>>>
>>>>> In yarn-site.xml, you'll need to enable ACLs:
>>>>>
>>>>>   <property>
>>>>>     <name>yarn.acl.enable</name>
>>>>>     <value>true</value>
>>>>>   </property>
>>>>>
>>>>> Thanks,
>>>>> Olivier
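
Two hedged asides on this. First, yarn.admin.acl (default "*") controls who may perform administrative actions and may also be worth restricting alongside yarn.acl.enable, for example:

  <property>
    <name>yarn.acl.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value>yarn</value>  <!-- example value only; adjust to the cluster's admin user -->
  </property>

Second, on a Cloudera Manager managed cluster (as mentioned in the reply above), the yarn-site.xml and capacity-scheduler.xml deployed to the ResourceManager are generated by Cloudera Manager, so hand edits may be overwritten; the equivalent settings generally have to be made through the CM configuration pages or an advanced configuration snippet ("safety valve").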
>>>>>
>>>>>
>>>>> On 26 November 2013 21:48, Munna <munnavarsk@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Could I get some help with this Capacity Scheduler issue?
>>>>>>
>>>>>>
>>>>>> On Wed, Nov 27, 2013 at 2:24 AM, Munna <munnavarsk@gmail.com>
wrote:
>>>>>>
>>>>>>> Yes... the ACLs are not being enforced.
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Nov 27, 2013 at 2:16 AM, Olivier Renault <
>>>>>>> orenault@hortonworks.com> wrote:
>>>>>>>
>>>>>>>> Do you mean that the ACLs are not being enforced?
>>>>>>>>
>>>>>>>> Olivier
>>>>>>>> On 26 Nov 2013 20:18, "Munna" <munnavarsk@gmail.com>
wrote:
>>>>>>>>
>>>>>>>>> Hi Olivier,
>>>>>>>>>
>>>>>>>>> Do you have any solution yet for why users are not being mapped to
>>>>>>>>> their queues?
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>> Munna
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Nov 26, 2013 at 11:17 PM, Munna <munnavarsk@gmail.com>wrote:
>>>>>>>>>
>>>>>>>>>> *mapred queue -list* result:
>>>>>>>>>>
>>>>>>>>>> [user@host ~]$ mapred queue -list
>>>>>>>>>> 13/11/26 09:37:48 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
>>>>>>>>>> 13/11/26 09:37:48 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
>>>>>>>>>> ======================
>>>>>>>>>> Queue Name : exploration
>>>>>>>>>> Queue State : running
>>>>>>>>>> Scheduling Info : Capacity: 30.000002, MaximumCapacity: 1.0, CurrentCapacity: 0.0
>>>>>>>>>>     ======================
>>>>>>>>>>     Queue Name : a
>>>>>>>>>>     Queue State : running
>>>>>>>>>>     Scheduling Info : Capacity: 30.000002, MaximumCapacity: 1.0, CurrentCapacity: 0.0
>>>>>>>>>>     ======================
>>>>>>>>>>     Queue Name : b
>>>>>>>>>>     Queue State : running
>>>>>>>>>>     Scheduling Info : Capacity: 30.000002, MaximumCapacity: 1.0, CurrentCapacity: 0.0
>>>>>>>>>>     ======================
>>>>>>>>>>     Queue Name : c
>>>>>>>>>>     Queue State : running
>>>>>>>>>>     Scheduling Info : Capacity: 40.0, MaximumCapacity: 1.0, CurrentCapacity: 0.0
>>>>>>>>>> ======================
>>>>>>>>>> Queue Name : production
>>>>>>>>>> Queue State : running
>>>>>>>>>> Scheduling Info : Capacity: 70.0, MaximumCapacity: 1.0, CurrentCapacity: 0.0
>>>>>>>>>>
>>>>>>>>>> and I saw in the RM logs that the mapred user tries to submit to the
>>>>>>>>>> "default" queue; the logs are below:
>>>>>>>>>>
>>>>>>>>>> 2013-11-26 05:22:59,804 INFO org.apache.hadoop.yarn.server.resourcemanager.ClientRMService: Application with id 1 submitted by user mapred
>>>>>>>>>> 2013-11-26 05:22:59,808 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=mapred  IP=10.10.10.1  OPERATION=Submit Application Request  TARGET=ClientRMService  RESULT=SUCCESS  APPID=application_1385472086605_0001
>>>>>>>>>> 2013-11-26 05:22:59,828 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1385472086605_0001 State change from NEW to SUBMITTED
>>>>>>>>>> 2013-11-26 05:22:59,829 INFO org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: Registering appattempt_1385472086605_0001_000001
>>>>>>>>>> 2013-11-26 05:22:59,830 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1385472086605_0001_000001 State change from NEW to SUBMITTED
>>>>>>>>>> 2013-11-26 05:22:59,832 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1385472086605_0001_000001 State change from SUBMITTED to FAILED
>>>>>>>>>> 2013-11-26 05:22:59,836 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1385472086605_0001 State change from SUBMITTED to FAILED
>>>>>>>>>> 2013-11-26 05:22:59,837 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=mapred  OPERATION=Application Finished - Failed  TARGET=RMAppManager  RESULT=FAILURE  DESCRIPTION=App failed with state: FAILED  PERMISSIONS=Application appattempt_1385472086605_0001_000001 submitted by user mapred to unknown queue: default  APPID=application_1385472086605_0001
>>>>>>>>>> 2013-11-26 05:22:59,841 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing info for app: application_1385472086605_0001
>>>>>>>>>> 2013-11-26 05:22:59,842 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1385472086605_0001,name=select count(*) from crc_exchange_rate(Stage-1),user=mapred,queue=default,state=FAILED,trackingUrl=hostname:8088/proxy/application_1385472086605_0001/,appMasterHost=N/A,startTime=1385472179799,finishTime=1385472179836
>>>>>>>>>>
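
The decisive line in that log is the audit WARN: the application is "submitted by user mapred to unknown queue: default". Only production and exploration (with children a, b and c) are defined under root, so anything submitted without an explicit queue name lands on a non-existent "default" queue and fails regardless of ACLs. As a hedged sketch, either add a default queue to the hierarchy (list it in yarn.scheduler.capacity.root.queues and give it a capacity and ACLs) or have every job name an existing queue, for example for a plain MapReduce job:

hadoop jar hadoop-mapreduce-examples.jar wordcount -D mapreduce.job.queuename=production /in /out

(The jar name and the /in and /out paths here are placeholders.)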
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Tue, Nov 26, 2013 at 10:36 PM, Olivier Renault
<
>>>>>>>>>> orenault@hortonworks.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Could you confirm that the queues are running with:
>>>>>>>>>>> mapred queue -list
>>>>>>>>>>>
>>>>>>>>>>> Otherwise, what error do you get when submitting a job?
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Olivier
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 26 November 2013 15:33, Munna <munnavarsk@gmail.com>
wrote:
>>>>>>>>>>>
>>>>>>>>>>>> The configured queues below are showing, so why can't I run
>>>>>>>>>>>> the job?
>>>>>>>>>>>>
>>>>>>>>>>>> [root@host ~]# hadoop queue -showacls
>>>>>>>>>>>> DEPRECATED: Use of this script to execute mapred command is deprecated.
>>>>>>>>>>>> Instead use the mapred command for it.
>>>>>>>>>>>>
>>>>>>>>>>>> 13/11/26 05:25:43 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
>>>>>>>>>>>> 13/11/26 05:25:44 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
>>>>>>>>>>>> Queue acls for user :  root
>>>>>>>>>>>>
>>>>>>>>>>>> Queue  Operations
>>>>>>>>>>>> =====================
>>>>>>>>>>>> root  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
>>>>>>>>>>>> exploration  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
>>>>>>>>>>>> a  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
>>>>>>>>>>>> b  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
>>>>>>>>>>>> c  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
>>>>>>>>>>>> production  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Tue, Nov 26, 2013 at 8:05 PM, Olivier
Renault <
>>>>>>>>>>>> orenault@hortonworks.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Sorry, my mistake, it's:
>>>>>>>>>>>>> mapred queue -showacls
>>>>>>>>>>>>>
>>>>>>>>>>>>> Or you can also use hadoop queue -showacls as suggested by Jitendra.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Olivier
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 26 November 2013 14:17, Munna <munnavarsk@gmail.com>
wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Yes! I have executed the same command:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [user@host ~]$ yarn queue -showacls
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Exception in thread "main" java.lang.NoClassDefFoundError: queue
>>>>>>>>>>>>>> Caused by: java.lang.ClassNotFoundException: queue
>>>>>>>>>>>>>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>>>>>>>>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>>>>>>>>>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>>>>>>>>>>>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>>>>>>>>>>>>>>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>>>>>>>>>>>>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>>>>>>>>>>>>>> Could not find the main class: queue.  Program will exit.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Nov 26, 2013 at 7:25 PM,
Olivier Renault <
>>>>>>>>>>>>>> orenault@hortonworks.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> It should be:
>>>>>>>>>>>>>>> yarn queue -showacls
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Olivier
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 26 November 2013 13:28, Munna
<munnavarsk@gmail.com>wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Olivier,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thank you for your reply.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> As you said, I ran those commands and I am getting the
>>>>>>>>>>>>>>>> following error message for both of them.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [root@host ~]# sudo -u yarn yarn queue -showacls
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Exception in thread "main" java.lang.NoClassDefFoundError: queue
>>>>>>>>>>>>>>>> Caused by: java.lang.ClassNotFoundException: queue
>>>>>>>>>>>>>>>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>>>>>>>>>>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>>>>>>>>>>>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>>>>>>>>>>>>>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
>>>>>>>>>>>>>>>>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>>>>>>>>>>>>>>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
>>>>>>>>>>>>>>>> Could not find the main class: queue.  Program will exit.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>>> Munna
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Nov 26, 2013 at 4:10
PM, Olivier Renault <
>>>>>>>>>>>>>>>> orenault@hortonworks.com>
wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Could you maybe send us the output of:
>>>>>>>>>>>>>>>>>  - yarn queue -showacls
>>>>>>>>>>>>>>>>>  - yarn queue -list
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>> Olivier
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On 26 November 2013 05:53,
Munna <munnavarsk@gmail.com>wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I am working with the Capacity Scheduler on YARN and I have
>>>>>>>>>>>>>>>>>> configured different queues. I can see all the queues in the RM UI.
>>>>>>>>>>>>>>>>>> But when I start to run MR jobs with the configured user names
>>>>>>>>>>>>>>>>>> (yarn, mapred), the jobs do not run and stay suspended. If I set the
>>>>>>>>>>>>>>>>>> scheduler back to the default FIFO, everything works fine.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Can you please help me sort out this issue? The configuration
>>>>>>>>>>>>>>>>>> is given below.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.queues</name>
>>>>>>>>>>>>>>>>>>   <value>production,exploration</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.queues</name>
>>>>>>>>>>>>>>>>>>   <value>a,b,c</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.capacity</name>
>>>>>>>>>>>>>>>>>>   <value>100</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.production.capacity</name>
>>>>>>>>>>>>>>>>>>   <value>70</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.capacity</name>
>>>>>>>>>>>>>>>>>>   <value>30</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.a.capacity</name>
>>>>>>>>>>>>>>>>>>   <value>30</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.b.capacity</name>
>>>>>>>>>>>>>>>>>>   <value>30</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.c.capacity</name>
>>>>>>>>>>>>>>>>>>   <value>40</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.production.acl_submit_applications</name>
>>>>>>>>>>>>>>>>>>   <value>yarn,mapred</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.a.acl_submit_applications</name>
>>>>>>>>>>>>>>>>>>   <value>userb</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.b.acl_submit_applications</name>
>>>>>>>>>>>>>>>>>>   <value>userb</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>   <name>yarn.scheduler.capacity.root.exploration.c.acl_submit_applications</name>
>>>>>>>>>>>>>>>>>>   <value>userc</value>
>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>> </configuration>
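
A hedged note on applying and verifying a change like this: after editing capacity-scheduler.xml on the ResourceManager, the scheduler normally has to be asked to reload it, and the effective ACLs can then be checked as a specific user, for example:

yarn rmadmin -refreshQueues
sudo -u userb mapred queue -showacls

Once enforcement is working, userb should only be shown SUBMIT_APPLICATIONS for the queues it was granted, instead of on every queue as in the root output earlier in the thread.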
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>> *Regards*
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>  *Munna*
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> *Regards*
>>>>>>>>>>
>>>>>>>>>> *Munna*
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> *Regards*
>>>>>>>>>
>>>>>>>>> *Munna*
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Regards*
>>>>>>>
>>>>>>> *Munna*
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Regards*
>>>>>>
>>>>>> *Munna*
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>    * Olivier Renault *       Solution Engineer
>>>>> ------------------------------
>>>>>
>>>>>     Phone:        +44 7500 933 036
>>>>>   Email:      orenault@hortonworks.com
>>>>>   Website:   http://www.hortonworks.com/
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Regards*
>>>>
>>>> *Munna*
>>>>
>>>
>>
>>
>>
>> --
>> *Regards*
>>
>> *Munna*
>>
>
>
>
> --
>    * Olivier Renault *       Solution Engineer
> ------------------------------
>
>     Phone:        +44 7500 933 036
>   Email:      orenault@hortonworks.com
>   Website:   http://www.hortonworks.com/
>



-- 
*Regards*

*Munna*
