Subject: Re: JobTracker security
From: Serge Blazhievsky <hadoop.ca@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 26 Feb 2013 17:22:21 -0800

All right!

Thanks for the advice!


Serge

On Tue, Feb 26, 2013 at 4:57 PM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:
I mean the executable files, or even the entire hadoop directory.
People might still be able to install a local copy of hadoop,
configure it to point to the same trackers, and then do the kill, but
at least that will complicate things a bit.
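
For example, something along these lines might do it (just a sketch; the
/usr/local/hadoop path and the "hadoopusers" group are only placeholders
for whatever your installation uses):

  # put the hadoop installation under a restricted group
  chgrp -R hadoopusers /usr/local/hadoop
  # drop all "other" permissions so only members of that group
  # can read or execute the hadoop client binaries
  chmod -R o-rwx /usr/local/hadoop
  # user1 stays in the group; user2 is simply not added to it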

If user1 and user2 are in different groups as well, that might allow you
to block some of user2's actions against user1's processes. You should
also take a look at the "Security" chapter in "Hadoop: The Definitive
Guide" and at the hadoop-policy.xml file (I have never looked at this
file, so maybe it is not related at all).
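
If that file does turn out to be relevant: hadoop-policy.xml holds the
service-level authorization ACLs, which only take effect when
hadoop.security.authorization is set to true in core-site.xml. An entry
looks roughly like this (untested sketch; the user and group names below
are placeholders):

  <property>
    <name>security.job.submission.protocol.acl</name>
    <!-- comma-separated users, a space, then comma-separated groups -->
    <value>user1 hadoopusers</value>
  </property>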

2013/2/26 Serge Blazhievsky <hadoop.ca@gmail.com>:
> Hi Jean,
>
> Do you mean the input files for hadoop, or the hadoop directory?
>
> Serge
>
>
> On Tue, Feb 26, 2013 at 4:38 PM, Jean-Marc Spaggiari
> <jean-marc@spaggiari.org> wrote:
>>
>> Maybe restrict access to the hadoop file(s) to user1?
>>
>> 2013/2/26 Serge Blazhievsky <hadoop.ca@gmail.com>:
>> > I am trying not to use Kerberos...
>> >
>> > Is there another option?
>> >
>> > Thanks
>> > Serge
>> >
>> >
>> > On Tue, Feb 26, 2013 at 3:31 PM, Patai Sangbutsarakum
>> > <Patai.Sangbutsarakum@turn.com> wrote:
>> >>
>> >> Kerberos
>> >>
>> >> From: Serge Blazhievsky <hadoop.ca@gmail.com>
>> >> Reply-To: <user@hadoop.apache.org>
>> >> Date: Tue, 26 Feb 2013 15:29:08 -0800
>> >> To: <user@hadoop.apache.org>
>> >> Subject: JobTracker security
>> >>
>> >> Hi all,
>> >>
>> >> Is there a way to restrict job monitoring and management only to jobs
>> >> started by each individual user?
>> >>
>> >>
>> >> The basic scenario is:
>> >>
>> >> 1. Start a job under user1
>> >> 2. Log in as user2
>> >> 3. Run hadoop job -list to retrieve the job id
>> >> 4. Run hadoop job -kill job_id
>> >> 5. The job gets terminated... (sketched below)
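>> >>
>> >> A rough transcript of that sequence (the job id here is only an
>> >> illustration, not a real one):
>> >>
>> >>   # logged in as user2
>> >>   hadoop job -list
>> >>   hadoop job -kill job_201302261529_0001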
>> >>
>> >> Is there something that needs to be enabled to prevent that from
>> >> happening?
>> >>
>> >> Thanks
>> >> Serge
>> >
>> >
>
>
