From: "Aaron T. Myers" <atm@cloudera.com>
Date: Mon, 11 Jun 2012 16:15:32 -0700
Subject: Re: validating user IDs
To: mapreduce-dev@hadoop.apache.org

-hdfs-dev@ +mapreduce-dev@

Moving to the more-relevant mapreduce-dev.

--
Aaron T. Myers
Software Engineer, Cloudera

On Mon, Jun 11, 2012 at 4:12 PM, Alejandro Abdelnur wrote:

> Colin,
>
> Would it be possible, using some kind of cmake config magic, to set a
> macro to the current OS limit? Even if this means detecting the OS
> version and assuming its default limit.
>
> thx
>
> On Mon, Jun 11, 2012 at 3:57 PM, Colin McCabe wrote:
>
> > Hi all,
> >
> > I recently pulled the latest source and ran a full build. The
> > command line was this:
> >
> >   mvn compile -Pnative
> >
> > I was confronted with this:
> >
> >   [INFO] Requested user cmccabe has id 500, which is below the
> >   minimum allowed 1000
> >   [INFO] FAIL: test-container-executor
> >   [INFO] ================================================
> >   [INFO] 1 of 1 test failed
> >   [INFO] Please report to mapreduce-dev@hadoop.apache.org
> >   [INFO] ================================================
> >   [INFO] make[1]: *** [check-TESTS] Error 1
> >   [INFO] make[1]: Leaving directory
> >   `/home/cmccabe/hadoop4/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/target/native/container-executor'
> >
> > Needless to say, it didn't do much to improve my mood.
I was even
> > less happy when I discovered that -DskipTests has no effect on native
> > tests (they always run). See HADOOP-8480.
> >
> > Unfortunately, it seems like this problem is popping up more and more
> > in our native code. It first appeared in test-task-controller (see
> > MAPREDUCE-2376) and then later in test-container-executor
> > (HADOOP-8499). The basic problem seems to be the hardcoded assumption
> > that all user IDs below 1000 are system IDs.
> >
> > It is true that there are configuration files that can be changed to
> > alter the minimum user ID, but unfortunately these configuration
> > files are not used by the unit tests. So anyone developing on a
> > platform where the user IDs start at 500 is now a second-class
> > citizen, unable to run unit tests. This includes anyone running
> > Red Hat, MacOS, Fedora, etc.
> >
> > Personally, I can change my user ID. It's a time-consuming process,
> > because I need to re-uid all files, but I can do it. This luxury may
> > not be available to everyone, though-- developers who don't have
> > root on their machines, or who are using a pre-assigned user ID to
> > connect to NFS, come to mind.
> >
> > It's true that we could hack around this with environment variables.
> > It might even be possible to have Maven set these environment
> > variables automatically from the current user ID. However, the
> > larger question I have here is whether this UID validation scheme
> > even makes any sense. I have a user named "nobody" whose user ID is
> > 65534. Surely I should not be able to run map-reduce jobs as this
> > user? Yet, under the current system, I can do exactly that. The root
> > of the problem seems to be that there is both a default minimum and
> > a default maximum for "automatic" user IDs. This configuration seems
> > to be stored in /etc/login.defs.
> >
> > On my system, it has:
> >
> >   SYSTEM_UID_MIN    100
> >   SYSTEM_UID_MAX    499
> >   UID_MIN           500
> >   UID_MAX         60000
> >
> > So that means that anything over 60000 (like nobody) is not
> > considered a valid user ID for regular users. We could potentially
> > read this file (at least on Linux) and get more sensible defaults.
> >
> > I am also curious if we could simply check whether the user we're
> > trying to run the job as has a valid login shell. System users are
> > almost always set to have a login shell of /bin/false or
> > /sbin/nologin.
> >
> > Thoughts?
> > Colin
>
> --
> Alejandro
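[Editor's note: the two ideas Colin floats above -- reading UID_MIN from /etc/login.defs instead of hardcoding 1000, and rejecting users whose login shell is a nologin shell -- can be sketched as follows. The actual container-executor is written in C; this Python version is purely illustrative, the function names (`get_uid_min`, `is_valid_job_user`) and the exact list of nologin shells are this sketch's assumptions, not Hadoop code.]

```python
# Illustrative sketch, not Hadoop code: derive the minimum "real user"
# UID from /etc/login.defs and reject system accounts by login shell.
import pwd
import re

DEFAULT_UID_MIN = 1000  # fallback when /etc/login.defs is absent

def get_uid_min(login_defs="/etc/login.defs"):
    """Read UID_MIN from login.defs, falling back to a default."""
    try:
        with open(login_defs) as f:
            for line in f:
                # Anchored match, so SYSTEM_UID_MIN and comments are skipped.
                m = re.match(r"^UID_MIN\s+(\d+)", line)
                if m:
                    return int(m.group(1))
    except OSError:
        pass
    return DEFAULT_UID_MIN

# Shells conventionally assigned to accounts that must not log in.
NOLOGIN_SHELLS = {"/bin/false", "/sbin/nologin", "/usr/sbin/nologin"}

def is_valid_job_user(username):
    """Reject system accounts: unknown user, UID below the minimum,
    or a nologin shell (catches high-UID accounts such as nobody)."""
    try:
        pw = pwd.getpwnam(username)
    except KeyError:
        return False
    if pw.pw_uid < get_uid_min():
        return False
    return pw.pw_shell not in NOLOGIN_SHELLS
```

With this approach a box whose login.defs sets UID_MIN to 500 would accept cmccabe (uid 500), while nobody (uid 65534, shell /sbin/nologin) would still be rejected by the shell check rather than by an arbitrary upper bound.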