hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: hbase client security (cluster is secure)
Date Sat, 09 Jun 2012 15:25:30 GMT
Hi again Tony,

Moving this to user@hbase.apache.org (bcc'd
common-user@hadoop.apache.org). Please use the right user group lists
for best responses. I've added you to CC in case you aren't subscribed
to the HBase user lists.

Can you share the whole error/stacktrace-if-any/logs you get at the
HMaster that says AccessControlException? Would be helpful to see what
particular class/operation logged it to help you specifically.

I have an instance of a 0.92-based cluster running after having
followed http://hbase.apache.org/book.html#zookeeper, and it seems to
work well enough with auth enabled.
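
As a side note, the client-side settings quoted below have to be mirrored on the server side, or the master will reject the connection as unauthenticated. A minimal server-side hbase-site.xml sketch for an 0.92 secure setup (the principal and keytab values are placeholders, not taken from this thread):

```xml
<!-- Server-side hbase-site.xml: must match the client's auth settings. -->
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
<!-- Placeholder principals/keytab paths; substitute your own realm and files. -->
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hbase.master.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>
```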

On Sat, Jun 9, 2012 at 3:41 AM, Tony Dean <Tony.Dean@sas.com> wrote:
> Hi all,
> I have created a hadoop/hbase/zookeeper cluster that is secured and verified.  Now a
> simple test is to connect an hbase client (e.g., shell) to see its behavior.
> Well, I get the following message on the hbase master: AccessControlException: authentication
> is required.
> Looking at the code, it appears that the client passed the "simple" authentication byte in
> the rpc header.  Why, I don't know.
> My client configuration is as follows:
> hbase-site.xml:
>   <property>
>      <name>hbase.security.authentication</name>
>      <value>kerberos</value>
>   </property>
>   <property>
>      <name>hbase.rpc.engine</name>
>      <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
>   </property>
> hbase-env.sh:
> export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"
> hbase.jaas:
> Client {
>   com.sun.security.auth.module.Krb5LoginModule required
>   useKeyTab=false
>   useTicketCache=true
>  };
> I issue kinit for the client I want to use.  Then invoke hbase shell.  I simply issue
> list and see the error on the server.
> Any ideas what I am doing wrong?
> Thanks so much!
> _____________________________________________
> From: Tony Dean
> Sent: Tuesday, June 05, 2012 5:41 PM
> To: common-user@hadoop.apache.org
> Subject: hadoop file permission 1.0.3 (security)
> Can someone detail the options that are available to set file permissions at the hadoop
> and OS level?  Here's what I have discovered thus far:
> dfs.permissions = true|false (works as advertised)
> dfs.supergroup = supergroup (works as advertised)
> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) - it appears
> to set the permissions for files created in hadoop fs (minus execute permission).
> Why was dfs.umask deprecated?  What's the difference between the two?
> dfs.datanode.data.dir.perm = perm (not sure this is working at all?)  I thought it was
> supposed to set permissions on blocks at the OS level.
> Are there any other file permission configuration properties?
> What I would really like to do is set data block file permissions at the OS level so that
> the blocks can be locked down from all users except super and supergroup, but still allow
> them to be accessed via the hadoop API as specified by hdfs permissions.  Is this possible?
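
On the dfs.umaskmode observation above: HDFS creates files with a base mode of 666 (no execute bit) and directories with 777, then masks off the umask bits, which is consistent with the "minus execute permission" behavior described. A quick shell sketch of the arithmetic (illustrative only; the 666/777 defaults are standard HDFS behavior, not quoted from this thread):

```shell
#!/bin/sh
# Shows how a umask of 022 yields file mode 644 and directory mode 755.
# HDFS base creation modes: 666 for files (no execute bit), 777 for directories.
umask_val=0022
file_mode=$(( 0666 & ~umask_val ))
dir_mode=$(( 0777 & ~umask_val ))
printf 'file: %03o dir: %03o\n' "$file_mode" "$dir_mode"
# prints: file: 644 dir: 755
```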
> Thanks.
> Tony Dean
> SAS Institute Inc.
> Senior Software Developer
> 919-531-6704

Harsh J
