hadoop-common-user mailing list archives

From Phillip Wu <phillip...@unsw.edu.au>
Subject RE: maximum-am-resource-percent is insufficient to start a single application
Date Wed, 15 Jun 2016 06:42:28 GMT

Thanks for your email.

1.       I don’t think anything on the cluster is being used – see below.

I’m not sure how to get my “total cluster resource size” – please advise how to find this.
After doing the Hive insert I get this:
hduser@ip-10-118-112-182:/$ hadoop queue -info default -showJobs
16/06/10 02:24:49 INFO client.RMProxy: Connecting to ResourceManager at /
Queue Name : default
Queue State : running
Scheduling Info : Capacity: 100.0, MaximumCapacity: 100.0, CurrentCapacity: 0.0
Total jobs:1
JobId                   State  StartTime      UserName  Queue    Priority  UsedContainers  RsvdContainers  UsedMem  RsvdMem  NeededMem  AM info
job_1465523894946_0001  PREP   1465524072194  hduser    default  NORMAL    0               0               0M       0M       0M

hduser@ip-10-118-112-182:/$ mapred job -status  job_1465523894946_0001
Job: job_1465523894946_0001
Job File: /tmp/hadoop-yarn/staging/hduser/.staging/job_1465523894946_0001/job.xml
Job Tracking URL : http://localhost:8088/proxy/application_1465523894946_0001/
Uber job : false
Number of maps: 0
Number of reduces: 0
map() completion: 0.0
reduce() completion: 0.0
Job state: PREP
retired: false
reason for failure:
Counters: 0
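On the total cluster size: the ResourceManager's cluster-metrics REST endpoint reports it, and the same RM web UI that serves the tracking URL above (port 8088) exposes it. A minimal sketch of reading that endpoint; the JSON below is an illustrative sample response, not output from this cluster:

```shell
# Live query (hypothetical host; the RM web UI ran on port 8088 above):
#   curl -s http://localhost:8088/ws/v1/cluster/metrics
# An illustrative sample response, parsed with python3:
SAMPLE='{"clusterMetrics":{"totalMB":8192,"totalVirtualCores":8,"allocatedMB":0,"availableMB":8192}}'
echo "$SAMPLE" | python3 -c 'import sys, json
m = json.load(sys.stdin)["clusterMetrics"]
print("totalMB:", m["totalMB"], "availableMB:", m["availableMB"])'
```

`yarn node -list` gives a per-NodeManager view of the same numbers.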

2.       There are no other applications except I’m running zookeeper

3.       There is only one user

For reference, this seems to be the code generating the error message (from …yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java):
if (!Resources.lessThanOrEqual(
          resourceCalculator, lastClusterResource, userAmIfStarted,
          userAMLimit)) {
        if (getNumActiveApplications() < 1) {
          LOG.warn("maximum-am-resource-percent is insufficient to start a" +
            " single application in queue for user, it is likely set too low." +
            " skipping enforcement to allow at least one application to start");
        } else {
          LOG.info("not starting application as amIfStarted exceeds " +
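The check quoted above compares the AM resource that would be in use if this application started (userAmIfStarted) against the per-user AM limit derived from maximum-am-resource-percent. A rough sketch of that arithmetic, simplified to memory only; all numbers below are hypothetical, not taken from this cluster:

```shell
# Hypothetical cluster numbers (assumptions for illustration only).
TOTAL_CLUSTER_MB=8192        # total memory registered with the RM
MAX_AM_PERCENT=10            # maximum-am-resource-percent = 0.1
AM_CONTAINER_MB=2048         # memory requested for the AM container

# AM limit the scheduler derives from the percentage.
AM_LIMIT_MB=$(( TOTAL_CLUSTER_MB * MAX_AM_PERCENT / 100 ))
echo "AM limit: ${AM_LIMIT_MB}MB, AM container asks for: ${AM_CONTAINER_MB}MB"

# If the AM container alone exceeds the limit, the warning above fires.
if [ "$AM_CONTAINER_MB" -gt "$AM_LIMIT_MB" ]; then
  echo "AM would exceed limit: job stays in PREP"
fi
```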

Any ideas?

From: Sunil Govind [mailto:sunil.govind@gmail.com]
Sent: Wednesday, 15 June 2016 4:24 PM
To: Phillip Wu; user@hadoop.apache.org
Subject: Re: maximum-am-resource-percent is insufficient to start a single application

Hi Phillip

A higher maximum-am-resource-percent value (0–1) allows more resources for the ApplicationMaster containers of your YARN applications (MR jobs here), but the limit also depends on the capacity configured for the queue. You mentioned that there is only the default queue here, so that won't be a problem. A few questions:
    - How much is your total cluster resource size, and how much of the cluster's resources are in use now?
    - Is any other application running in the cluster, and is it taking the full cluster resources? This is a possibility, since you have now given the whole queue's capacity to AMs.
    - Do you have multiple users in your cluster who run applications other than this Hive job? If so, yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent will have an impact on the AM resource usage limit. I think you can double-check this.
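If the AM percentage does turn out to be the limiting factor, the usual fix is raising it in capacity-scheduler.xml and refreshing the queues. A minimal sketch for the single default queue in this thread; 0.5 is an arbitrary illustrative value, not a recommendation:

```xml
<!-- capacity-scheduler.xml: allow AMs up to 50% of resources (illustrative value). -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
<!-- Or override for one queue only: -->
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
```

After editing, `yarn rmadmin -refreshQueues` applies the change without restarting the ResourceManager.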

- Sunil

On Wed, Jun 15, 2016 at 8:47 AM Phillip Wu <phillip.wu@unsw.edu.au> wrote:

I'm new to Hadoop and Hive.

I'm using Hadoop 2.6.4 and Hive 2.0.1 (prebuilt binaries downloaded from the internet).
I can create a database and table in hive.

However when I try to insert a record into a previously created table I get:
"org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: maximum-am-resource-percent
is insufficient to start a single application in queue"


      Maximum percent of resources in the cluster which can be used to run
      application masters i.e. controls number of concurrent running

According to the documentation, this means I have allocated 100% of the cluster to my one and only default scheduler queue:
"yarn.scheduler.capacity.maximum-am-resource-percent / yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent
Maximum percent of resources in the cluster which can be used to run application masters - controls number of concurrent active applications. Limits on each queue are directly proportional to their queue capacities and user limits. Specified as a float - ie 0.5 = 50%. Default is 10%. This can be set for all queues with yarn.scheduler.capacity.maximum-am-resource-percent and can also be overridden on a per queue basis by setting

Can someone tell me how to fix this?