Subject: Re: hbase client security (cluster is secure)
From: Andrew Purtell
To: user@hbase.apache.org
Cc: Harsh J
Date: Mon, 11 Jun 2012 10:27:30 -0700

This is a bit of an X-Y discussion. The error here is not ZooKeeper related in any way:

> 12/06/09 16:40:47 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required.
Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required

This says the Hadoop RPC client is configured for AuthMethod.SIMPLE, but what the HBase master wants is AuthMethod.KERBEROS. See http://hbase.apache.org/book/security.html and ensure the server-side and client-side configurations conform before proceeding further.

Separately,

> BTW: I knew that the hbase master authenticated to zookeeper via quorum ensemble in order to find where hadoop dfs lives, but I didn't realize the region servers and hbase clients also needed to authenticate to zookeeper. Explain?

The master and regionservers authenticate to ZooKeeper to access protected znodes that serve a variety of functions. If clients could access those znodes, they could subvert system functions, including security. So we typically set up an "hbase" service principal, and only that principal can access the protected znodes; thus all HBase daemons must authenticate with ZooKeeper using that principal. Clients do not need to authenticate with ZooKeeper: the znodes which clients must access are not sensitive and do not have restrictive ACLs. (Although clients wanting to take administrative action currently must; see https://issues.apache.org/jira/browse/HBASE-6068.)

On Sat, Jun 9, 2012 at 1:51 PM, Tony Dean wrote:
> Hi Harsh,
>
> Thanks for re-routing to HBase user-group. ;-)
>
> I followed the same steps as you, at least I tried to.
>
> My cluster appears to be working and I outlined my client configuration below.
>
> BTW: I knew that the hbase master authenticated to zookeeper via quorum ensemble in order to find where hadoop dfs lives, but I didn't realize the region servers and hbase clients also needed to authenticate to zookeeper. Explain?
>
> Anyway, here are the traces that I collected.
>
> HBase master:
>
> 12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: Will send token of size 50 from initSASLContext.
> 12/06/09 16:40:36 DEBUG security.HBaseSaslRpcClient: SASL client context established. Negotiated QoP: auth
> 12/06/09 16:40:47 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required
>         at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> 12/06/09 16:40:48 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required.
Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required
>         at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> 12/06/09 16:40:49 WARN ipc.HBaseServer: IPC Server listener on 60000: readAndProcess threw exception org.apache.hadoop.security.AccessControlException: Authentication is required. Count of bytes read: 0
> org.apache.hadoop.security.AccessControlException: Authentication is required
>         at org.apache.hadoop.hbase.ipc.SecureServer$SecureConnection.readAndProcess(SecureServer.java:414)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:703)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.doRunLoop(HBaseServer.java:495)
>         at org.apache.hadoop.hbase.ipc.HBaseServer$Listener$Reader.run(HBaseServer.java:470)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
>
> zookeeper:
>
> 12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Responding to client SASL token.
> 12/06/09 16:40:47 DEBUG server.ZooKeeperServer: Size of client SASL token: 67
> Krb5Context.unwrap: token=[60 41 06 09 2a 86 48 86 f7 12 01 02 02 02 01 11 00 ff ff ff ff 65 66 6b 58 fd f1 6b ec 27 53 22 23 5d 7b 03 33 0b e3 2d 7f d3 a9 13 62 01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d 01 ]
> Krb5Context.unwrap: data=[01 01 00 00 73 61 73 70 61 64 40 4e 41 2e 53 41 53 2e 43 4f 4d ]
> 12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Successfully authenticated client: authenticationID=saspad@NA.SAS.COM; authorizationID=saspad@NA.SAS.COM.
> 12/06/09 16:40:47 INFO auth.SaslServerCallbackHandler: Setting authorized ID: saspad
> 12/06/09 16:40:47 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: saspad
> 12/06/09 16:40:50 DEBUG server.FinalRequestProcessor: Processing request:: sessionid:0x137d2f4f3350005 type:ping cxid:0xfffffffffffffffe zxid:0xffffffffffff
>
> It looks like my client identity, "saspad", flowed across the wire successfully.
>
> Thanks again for taking a look at this.
>
> -----Original Message-----
> From: Harsh J [mailto:harsh@cloudera.com]
> Sent: Saturday, June 09, 2012 11:26 AM
> To: user@hbase.apache.org
> Cc: Tony Dean
> Subject: Re: hbase client security (cluster is secure)
>
> Hi again Tony,
>
> Moving this to user@hbase.apache.org (bcc'd common-user@hadoop.apache.org). Please use the right user group lists for best responses. I've added you to CC in case you aren't subscribed to the HBase user lists.
>
> Can you share the whole error/stacktrace-if-any/logs you get at the HMaster that says AccessControlException? It would be helpful to see what particular class/operation logged it, to help you specifically.
>
> I have an instance of a 0.92-based cluster running after having followed http://hbase.apache.org/book.html#zookeeper and https://ccp.cloudera.com/display/CDH4DOC/HBase+Security+Configuration and it seems to work well enough with auth enabled.
>
> On Sat, Jun 9, 2012 at 3:41 AM, Tony Dean wrote:
>> Hi all,
>>
>> I have created a hadoop/hbase/zookeeper cluster that is secured and verified. Now a simple test is to connect an hbase client (e.g., the shell) to see its behavior.
>>
>> Well, I get the following message on the hbase master: AccessControlException: authentication is required.
>>
>> Looking at the code it appears that the client passed the "simple" authentication byte in the rpc header. Why, I don't know.
>>
>> My client configuration is as follows:
>>
>> hbase-site.xml:
>> <property>
>>   <name>hbase.security.authentication</name>
>>   <value>kerberos</value>
>> </property>
>>
>> <property>
>>   <name>hbase.rpc.engine</name>
>>   <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
>> </property>
>>
>> hbase-env.sh:
>> export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/usr/local/hadoop/hbase/conf/hbase.jaas"
>>
>> hbase.jaas:
>> Client {
>>   com.sun.security.auth.module.Krb5LoginModule required
>>   useKeyTab=false
>>   useTicketCache=true;
>> };
>>
>> I issue kinit for the client I want to use. Then I invoke the hbase shell, simply issue list, and see the error on the server.
>>
>> Any ideas what I am doing wrong?
>>
>> Thanks so much!
>>
>> _____________________________________________
>> From: Tony Dean
>> Sent: Tuesday, June 05, 2012 5:41 PM
>> To: common-user@hadoop.apache.org
>> Subject: hadoop file permission 1.0.3 (security)
>>
>> Can someone detail the options that are available to set file permissions at the hadoop and os level?
>> Here's what I have discovered thus far:
>>
>> dfs.permissions = true|false (works as advertised)
>> dfs.supergroup = supergroup (works as advertised)
>> dfs.umaskmode = umask (I believe this should be used in lieu of dfs.umask) - it appears to set the permissions for files created in hadoop fs (minus execute permission).
>> Why was dfs.umask deprecated? What's the difference between the two?
>> dfs.datanode.data.dir.perm = perm (not sure this is working at all?) I thought it was supposed to set permissions on blks at the os level.
>>
>> Are there any other file permission configuration properties?
>>
>> What I would really like to do is set data blk file permissions at the os level so that the blocks can be locked down from all users except super and supergroup, but allow them to be accessed by the hadoop API as specified by hdfs permissions. Is this possible?
>>
>> Thanks.
>>
>> Tony Dean
>> SAS Institute Inc.
>> Senior Software Developer
>> 919-531-6704
>
> --
> Harsh J

--
Best regards,
   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein (via Tom White)
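[Archive note] The dfs.umaskmode behavior Tony describes ("minus execute permission") follows from how HDFS derives permissions for new entries: files start from a base of 666 (never executable), directories from 777, and the configured umask clears bits from that base. A minimal sketch of that arithmetic, not taken from the thread (the helper name `apply_umask` is illustrative only):

```python
# Sketch of HDFS-style permission derivation: new files start at 0o666
# (no execute bits), directories at 0o777; the configured umask
# (dfs.umaskmode, e.g. "022") clears those bits from the base.

def apply_umask(base: int, umask: int) -> int:
    """Effective permission bits after applying the umask."""
    return base & ~umask & 0o777

FILE_BASE = 0o666   # files are never created executable
DIR_BASE = 0o777

umask = 0o022
print(oct(apply_umask(FILE_BASE, umask)))  # 0o644 (rw-r--r--)
print(oct(apply_umask(DIR_BASE, umask)))   # 0o755 (rwxr-xr-x)
```

This is consistent with the observation above: even with a umask of 000, a new file gets at most 666, never the execute bits.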