hadoop-hdfs-user mailing list archives

From zangxiangyu <zangxian...@qiyi.com>
Subject RE:
Date Thu, 30 May 2013 12:17:18 GMT
Hi, I suggest you always use

set mapred.job.queue.name=$QUEUE_NAME;

before your HQL. If you do not, the default pool will be used.


You can also change the queue and priority of a running job by hand at http://ip:port/scheduler
(the same address as the JobTracker home page),
if you use the fair scheduler and open the URL above.
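As a minimal sketch of the suggestion above: the queue can be set per Hive session before running any HQL. The queue name "etl" and the table "my_table" below are hypothetical; `mapred.fairscheduler.pool` is the fair scheduler's job-level pool property in MR1, and `mapred.job.queue.name` is the property the capacity scheduler reads.

```sql
-- Capacity scheduler: route this session's jobs to a named queue
-- ("etl" is a hypothetical queue name):
set mapred.job.queue.name=etl;

-- Fair scheduler: the pool can also be set directly:
set mapred.fairscheduler.pool=etl;

-- Any subsequent HQL in this session runs in that queue/pool:
SELECT count(*) FROM my_table;
```

These settings only affect the current session; other clients connecting as other users keep their own defaults.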


From: Job Thomas [mailto:jobt@suntecgroup.com] 
Sent: Thursday, May 30, 2013 7:49 PM
To: user@hadoop.apache.org
Importance: High


Hi All,

I am in a team developing with Hadoop and Hive.

We are using the fair scheduler.

But all Hive jobs go to the same pool, whose name is the username under which the Hive server runs.


Here is the situation:


My Hive server runs as the user 'hadoop'.

My Hive client program runs as the user 'abc'.

But all jobs in Hadoop end up in the pool named 'hadoop' (the username of the Hive server).


Because of this I am not getting equal resource sharing.


Can we submit a job to Hive specifying a pool name?


The same happens if I use the capacity scheduler: all jobs from the Hive client go to
the 'default' queue.


Thanks in advance.


Job M Thomas
