hadoop-common-user mailing list archives

From Rick Hangartner <hangart...@strands.com>
Subject Re: Hadoop Permissions Question -> [Fwd: Hbase on hadoop]
Date Fri, 09 May 2008 18:51:55 GMT
Hi Nicholas,

I was the original poster of this question.  Thanks for your
response.  (And thanks for drawing attention to this, Stack.)

Am I missing something, or does the way hdfs derives privileges from
the Linux filesystem imply that the hbase master must run on the same
machine as hdfs (or some part of it?) if one wants to use the hdfs
permissions system, and that otherwise, for now, we must run with
permissions disabled?

Here's most of the full Java trace for the exception, which might help
in determining why superuser privilege is required to run HMaster.
The last six frames are elided as "... 6 more" (the JVM's standard
trimming of frames shared with the enclosing trace).  (This is from
the hbase log.)

Thanks for the help.

2008-05-08 10:13:28,670 ERROR org.apache.hadoop.hbase.HMaster: Can not start master
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
	at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3312)
	at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3346)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Superuser privilege is required
	at org.apache.hadoop.dfs.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:4020)
	at org.apache.hadoop.dfs.FSNamesystem.setSafeMode(FSNamesystem.java:3794)
	at org.apache.hadoop.dfs.NameNode.setSafeMode(NameNode.java:473)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

	at org.apache.hadoop.ipc.Client.call(Client.java:512)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
	at org.apache.hadoop.dfs.$Proxy0.setSafeMode(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
	at org.apache.hadoop.dfs.$Proxy0.setSafeMode(Unknown Source)
	at org.apache.hadoop.dfs.DFSClient.setSafeMode(DFSClient.java:486)
	at org.apache.hadoop.dfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:257)
	at org.apache.hadoop.hbase.HMaster.<init>(HMaster.java:893)
	at org.apache.hadoop.hbase.HMaster.<init>(HMaster.java:859)
	... 6 more
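
For what it's worth, the trace shows the failure coming out of the
HMaster constructor's call to DistributedFileSystem.setSafeMode().
Here's a minimal standalone probe that should reproduce the failing
call outside of HMaster (a sketch on our part; the class name is ours,
and we're assuming the master only queries safe mode with
SAFEMODE_GET):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.dfs.DistributedFileSystem;
import org.apache.hadoop.dfs.FSConstants;
import org.apache.hadoop.fs.FileSystem;

public class SafeModeProbe {
  public static void main(String[] args) throws Exception {
    // Picks up hadoop-site.xml from the classpath and connects as
    // whatever OS user runs this process ("hbase" in our case).
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    // On hadoop-0.16.3 this reaches FSNamesystem.setSafeMode(), which
    // calls checkSuperuserPrivilege() before acting on the request,
    // so even this read-only query throws for a non-superuser.
    boolean inSafeMode =
        dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
    System.out.println("in safe mode: " + inSafeMode);
  }
}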

On May 9, 2008, at 11:34 AM, s29752-hadoopuser@yahoo.com wrote:

> Hi Stack,
>
>> One question this raises is if the "hbase:hbase" user and group are  
>> being derived from the Linux file system user and group, or if they  
>> are the hdfs user and group?
> HDFS currently does not manage user and group information.  User and  
> group in HDFS are being derived from the underlying OS (Linux in  
> your case) user and group.
>
>> Otherwise, how can we indicate that "hbase" user is in the hdfs  
>> group "supergroup"?
> In Hadoop conf, the property dfs.permissions.supergroup specifies
> the super-user group; the default value is "supergroup".  The
> administrator should set this property to a dedicated group in the
> underlying OS for HDFS superusers.  For example, you could create a
> group "hdfs-superuser" in Linux, set dfs.permissions.supergroup to
> "hdfs-superuser" and add "hdfs-superuser" to hbase's group list.
> Then "hbase" becomes an HDFS superuser.
>
> I don't know why superuser privilege is required to run HMaster.  I
> might be able to tell if a complete stack trace is given.
>
> Nicholas
>
>
>
> ----- Original Message ----
> From: stack <stack@duboce.net>
> To: hadoop-user@lucene.apache.org
> Sent: Thursday, May 8, 2008 8:44:42 PM
> Subject: Hadoop Permissions Question -> [Fwd: Hbase on hadoop]
>
> Can someone familiar with permissions offer an opinion on the below?
> Thanks,
> St.Ack
> Hi,
>
> We have an issue with hbase on hadoop and file system permissions we
> hope someone already knows the answer to.  Our apologies if we missed
> that this issue has already been addressed on this list.
>
> We are running hbase-0.1.2 on top of hadoop-0.16.3 on a single
> machine and have observed this "feature".  We start the hbase daemons
> from their own "hbase" user account and the hadoop daemons from a
> separate "hadoop" user account.
>
> When we try to start up hbase, we see this error message in the log:
>
> 2008-05-06 12:09:02,845 ERROR org.apache.hadoop.hbase.HMaster: Can not start master
> java.lang.reflect.InvocationTargetException
>    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>    at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>    at org.apache.hadoop.hbase.HMaster.doMain(HMaster.java:3329)
>    at org.apache.hadoop.hbase.HMaster.main(HMaster.java:3363)
> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.fs.permission.AccessControlException: Superuser privilege is required
>         ... (etc)
>
> If we run hbase in the hadoop user account we don't have any problems.
>
> We think we've narrowed the issue down a bit from the debug logs.
>
> The method "FSNameSystem.checkPermission()" method is throwing the
> exception because the "PermissionChecker()" constructor is returning
> that the hbase user is not a superuser or in the same supergroup as
> hadoop.
>
>   private void checkSuperuserPrivilege() throws AccessControlException {
>     if (isPermissionEnabled) {
>       PermissionChecker pc = new PermissionChecker(
>           fsOwner.getUserName(), supergroup);
>       if (!pc.isSuper) {
>         throw new AccessControlException("Superuser privilege is required");
>       }
>     }
>   }
>
> If we look at the "PermissionChecker()" constructor we see that it is
> comparing the hdfs owner name (which should be "hadoop") and the hdfs
> file system owner's group ("supergroup") to the current user and
> groups.  The log seems to indicate that the user is "hbase" and that
> the groups for user "hbase" only include "hbase":
>   PermissionChecker(String fsOwner, String supergroup)
>       throws AccessControlException {
>     UserGroupInformation ugi = UserGroupInformation.getCurrentUGI();
>     if (LOG.isDebugEnabled()) {
>       LOG.debug("ugi=" + ugi);
>     }
>
>     if (ugi != null) {
>       user = ugi.getUserName();
>       groups.addAll(Arrays.asList(ugi.getGroupNames()));
>       isSuper = user.equals(fsOwner) || groups.contains(supergroup);
>     }
>     else {
>       throw new AccessControlException("ugi = null");
>     }
>   }
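>
> Plugging our values into that last assignment (user "hbase", groups
> [hbase], fsOwner "hadoop", supergroup "supergroup"), the check
> effectively evaluates as
>
>   isSuper = "hbase".equals("hadoop")                            // false
>             || Arrays.asList("hbase").contains("supergroup");   // false
>
> which is why the AccessControlException is thrown.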
>
> The current user and groups are derived from per-thread state:
>   private static final ThreadLocal<UserGroupInformation> currentUGI
>     = new ThreadLocal<UserGroupInformation>();
>
>   /** @return the {@link UserGroupInformation} for the current thread */
>   public static UserGroupInformation getCurrentUGI() {
>     return currentUGI.get();
>   }
>
> which we're hoping might be enough to illuminate the problem.
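>
> (Our understanding, and this is an assumption on our part, is that
> the client side fills in this UGI from the underlying OS account,
> i.e. effectively from the output of
>
>   $ whoami
>   hbase
>   $ groups
>   hbase
>
> so whatever Linux says about the "hbase" account is what the namenode
> ends up checking.)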
>
> One question this raises is if the "hbase:hbase" user and group are
> being derived from the Linux file system user and group, or if they
> are the hdfs user and group?
> Otherwise, how can we indicate that "hbase" user is in the hdfs group
> "supergroup"? Is there a parameter in a hadoop configuration file?
> Apparently setting the groups of the web server to include
> "supergroup" didn't have any effect, although perhaps that could be
> for some other reason?
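>
> One thing we plan to double-check on that point (a guess on our
> part): a running process only picks up supplementary group changes at
> start time, so after something like
>
>   $ sudo usermod -a -G supergroup hbase
>   $ id -Gn hbase
>   hbase supergroup
>
> the daemons still have to be restarted before HDFS sees the new
> group.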
>
> Thanks.

