hadoop-user mailing list archives

From Kevin Buckley <kevin.buckley.ecs.vuw.ac...@gmail.com>
Subject Hadoop 2.8.0: Use of container-executor.cfg to restrict access to MapReduce jobs
Date Mon, 07 Aug 2017 02:17:37 GMT
Hi again

early on in my attempts to Kerberise our Hadoop instance, I had seen an
error message that suggested I needed to add a list of users who could
run jobs into the last line of Hadoop's container-executor.cfg,
for which the default content is

yarn.nodemanager.linux-container-executor.group=#configured value of yarn.nodemanager.linux-container-executor.group
banned.users=#comma separated list of users who can not run applications
min.user.id=1000#Prevent other super-users
allowed.system.users=##comma separated list of system users who CAN run applications

and after I had dropped min.user.id low enough to let the yarn user on
our systems run jobs, AND added a list of users with UIDs higher than
that, those other users were able to run jobs.
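For illustration, the edited file might then have looked something like this (the UID threshold and usernames here are hypothetical, not taken from the actual cluster):

```
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,mapred,bin
min.user.id=500
allowed.system.users=yarn
```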

I now came to test out removing a user from the "allowed" list and I
can't seem to prevent that user from running MapReduce jobs, no
matter which of the various daemons I stop and start, including
shutting down and restarting the whole thing.

Should I be reading the allowed.system.users list as a list of users
whose UIDs fall BELOW the min.user.id cutoff, rather than as an actual
"only users in this list may run jobs" list?
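If that reading is right, the effective check would behave roughly as sketched below. This is a model for discussion, not the actual container-executor source; the function name and defaults are illustrative:

```python
def may_run_job(user, uid, min_user_id=1000,
                allowed_system_users=("yarn",), banned_users=()):
    """Model of how container-executor might decide whether a user may
    launch containers: banned users are always refused; otherwise the
    user needs either a UID at or above min.user.id, or an entry in
    allowed.system.users."""
    if user in banned_users:
        return False
    return uid >= min_user_id or user in allowed_system_users

# An ordinary user above min.user.id runs regardless of the allowed list:
print(may_run_job("alice", uid=5000))   # True
# A system account below the cutoff needs an allowed.system.users entry:
print(may_run_job("yarn", uid=998))     # True
print(may_run_job("daemon", uid=2))     # False
```

Under this model, removing a user whose UID is above min.user.id from allowed.system.users would have no effect, which would explain the behaviour described above.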

Clearly, one can't run jobs if one doesn't have access to directories
to put data into, so that's a kind of "job control" ACL in itself, but I
was hoping that the underlying HDFS might contain a wider set of
users than those allowed to run jobs at any given time, in which case
altering the ability via the allowed.system.users list seemed a simple
way to achieve that.

Any clues/insight welcome,

Kevin M. Buckley

eScience Consultant
School of Engineering and Computer Science
Victoria University of Wellington
New Zealand

To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org
