Date: Wed, 17 Apr 2013 19:37:18 +0000 (UTC)
From: "Hudson (JIRA)"
To: notifications@accumulo.apache.org
Subject: [jira] [Commented] (ACCUMULO-1282) Monitor requires jumping through hadoop permissions hoops (and granting accumulo broad permissions)

    [ https://issues.apache.org/jira/browse/ACCUMULO-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13634340#comment-13634340 ]

Hudson commented on ACCUMULO-1282:
----------------------------------

Integrated in Accumulo-1.5 #82 (See [https://builds.apache.org/job/Accumulo-1.5/82/])
    ACCUMULO-1282 ignore permission problems getting disk usage information (Revision 1468995)

     Result = SUCCESS
ecn :
Files :
* /accumulo/branches/1.5/server/src/main/java/org/apache/accumulo/server/monitor/servlets/DefaultServlet.java


> Monitor requires jumping through hadoop permissions hoops (and granting accumulo broad permissions)
> ---------------------------------------------------------------------------------------------------
>
>                 Key: ACCUMULO-1282
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-1282
>             Project: Accumulo
>          Issue Type: Bug
>    Affects Versions: 1.4.0
>            Reporter: Michael Berman
>            Assignee: Eric Newton
>            Priority: Minor
>             Fix For: 1.5.0
>
>
> The monitor's master status box requires getContentSummary(new Path("/")) on HDFS (otherwise see stack trace below). There doesn't seem to be any way to grant this permission to a particular user or change the permissions on /, so for this to work, you either need to run accumulo and hdfs as the same user or add accumulo's user to hadoop's supergroup. Either way, this seems like it's granting unnecessarily broad permissions to accumulo. Is there some other way to get the disk usage information out of hadoop with normal user-level permissions?
> Stack trace running as separate users without special permissions:
> {code}
> 2013-04-16 20:34:57,770 [servlets.BasicServlet] DEBUG: org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=accumulo, access=READ_EXECUTE, inode="system":hadoop:supergroup:rwx------
> org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=accumulo, access=READ_EXECUTE, inode="system":hadoop:supergroup:rwx------
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>         at org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:1438)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:251)
>         at org.apache.accumulo.server.trace.TraceFileSystem.getContentSummary(TraceFileSystem.java:312)
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.doAccumuloTable(DefaultServlet.java:317)
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.pageBody(DefaultServlet.java:256)
>         at org.apache.accumulo.server.monitor.servlets.BasicServlet.doGet(BasicServlet.java:61)
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.doGet(DefaultServlet.java:157)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
>         at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>         at org.mortbay.jetty.Server.handle(Server.java:326)
>         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>         at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: Permission denied: user=accumulo, access=READ_EXECUTE, inode="system":hadoop:supergroup:rwx------
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:199)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkSubAccess(FSPermissionChecker.java:168)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:137)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5468)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:2225)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getContentSummary(NameNode.java:986)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>         at sun.proxy.$Proxy1.getContentSummary(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:616)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>         at sun.proxy.$Proxy1.getContentSummary(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:1436)
>         ... 22 more
> {code}
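
The commit referenced above (Revision 1468995 to DefaultServlet.java) takes the approach named in its message: the monitor simply tolerates a denied getContentSummary call instead of requiring superuser-level HDFS permissions. A minimal sketch of that idea follows; it is illustrative only, and the class name, helper method, and -1 sentinel are hypothetical rather than the exact code in the patch.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DiskUsageSketch {

  /**
   * Ask HDFS for the space used under a path, but treat a permission
   * failure as "unknown" rather than letting the exception break the
   * monitor page. Returns -1 when the summary cannot be read.
   */
  static long getDiskUsed(FileSystem fs, Path path) {
    try {
      ContentSummary summary = fs.getContentSummary(path);
      return summary.getSpaceConsumed();
    } catch (IOException ex) {
      // AccessControlException is an IOException: without superuser rights
      // or matching ownership on "/", getContentSummary is denied, so the
      // usage is reported as unavailable instead of failing the page.
      return -1;
    }
  }

  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    long used = getDiskUsed(fs, new Path("/"));
    System.out.println(used < 0 ? "Disk usage: unavailable" : "Disk usage: " + used + " bytes");
  }
}
{code}

With this kind of guard, the accumulo user no longer needs to be in hadoop's supergroup just to render the master status box; the monitor shows a placeholder when the summary is unreadable.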