accumulo-user mailing list archives

From Drew Farris <>
Subject Re: Resolving the "Permission Denied" Message On Accumulo Monitor Page
Date Tue, 23 Oct 2012 14:15:29 GMT
For what it's worth, I encountered this when trying to set up a system
where Accumulo is run by a user different from the one used to run
Hadoop.

This will likely become more prevalent as people move toward Hadoop
1+, where different users are used for HDFS and MapReduce -- the hadoop
user becomes a less obvious choice for running Accumulo.

In addition to the NameNode permission-denied message, it seems that
the monitor is unable to connect to the master when the accumulo user
is not in the Hadoop supergroup (in 1.4.1).

I observed the same error messages David recorded above, but didn't
see anything that seemed specific to the master issue.

I haven't had the chance to dig much further; has anyone looked into
this? Any thoughts on whether it might be possible for things to work
without having to add the accumulo user to the HDFS supergroup?

Perhaps a discussion of running Accumulo as a particular user could be
added to the installation manual; I don't think the current manual
covers anything related to user accounts at all.
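One alternative to the supergroup approach might be to give the accumulo user ownership of the HDFS tree the monitor reads. The following is only a sketch, not a confirmed fix: the `/accumulo` path is an assumption (check `instance.dfs.dir` in accumulo-site.xml), and the commands must be run as the HDFS superuser (typically the hadoop user here).

```shell
# Hypothetical sketch: instead of adding the accumulo user to the HDFS
# supergroup, make it the owner of Accumulo's directories in HDFS.
# Run as the HDFS superuser. The /accumulo path is an assumption --
# verify it against instance.dfs.dir in your accumulo-site.xml.
hadoop fs -chown -R accumulo /accumulo

# Optionally tighten the mode so only the accumulo user can read it,
# which also avoids relying on group/other permission bits.
hadoop fs -chmod -R 700 /accumulo
```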


On Thu, Jul 26, 2012 at 6:16 PM, David Medinets
<> wrote:
> On Mon, Jul 23, 2012 at 8:35 PM, Josh Elser <> wrote:
>> Out of curiosity, what is the actual exception/stack-trace printed in the
>> monitor's log?
> 26 21:43:37,848 [servlets.BasicServlet] DEBUG:
> Permission denied:
> user=accumulo, access=READ_EXECUTE,
> inode="system":hadoop:supergroup:rwx-wx-wx
> Permission denied:
> user=accumulo, access=READ_EXECUTE,
> inode="system":hadoop:supergroup:rwx-wx-wx
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
>         at java.lang.reflect.Constructor.newInstance(
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(
>         at org.apache.hadoop.hdfs.DFSClient.getContentSummary(
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(
>         at org.apache.accumulo.server.trace.TraceFileSystem.getContentSummary(
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.doAccumuloTable(
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.pageBody(
>         at org.apache.accumulo.server.monitor.servlets.BasicServlet.doGet(
>         at org.apache.accumulo.server.monitor.servlets.DefaultServlet.doGet(
>         at javax.servlet.http.HttpServlet.service(
>         at javax.servlet.http.HttpServlet.service(
>         at org.mortbay.jetty.servlet.ServletHolder.handle(
>         at org.mortbay.jetty.servlet.ServletHandler.handle(
>         at org.mortbay.jetty.servlet.SessionHandler.handle(
>         at org.mortbay.jetty.handler.ContextHandler.handle(
>         at org.mortbay.jetty.handler.ContextHandlerCollection.handle(
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(
>         at org.mortbay.jetty.Server.handle(
>         at org.mortbay.jetty.HttpConnection.handleRequest(
>         at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(
>         at org.mortbay.jetty.HttpParser.parseNext(
>         at org.mortbay.jetty.HttpParser.parseAvailable(
>         at org.mortbay.jetty.HttpConnection.handle(
>         at$
>         at org.mortbay.thread.QueuedThreadPool$
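The mode string in the log above explains the denial: `rwx-wx-wx` gives the owner (hadoop) full access, but the group (supergroup) and everyone else get only `-wx`, i.e. no read bit, so a READ_EXECUTE check fails for anyone but the owner. A simplified illustration of that check follows; this is not the actual HDFS code (real HDFS also has a superuser/supergroup bypass, modeled here only as a comment), just a sketch of the owner/group/other triad logic:

```python
def permission_bits(user, groups, owner, group, mode):
    """Pick the rwx triad that applies to `user` from a mode string
    like "rwx-wx-wx" (owner, group, other). Simplified: the real HDFS
    checker short-circuits for the superuser and supergroup members."""
    if user == owner:
        return mode[0:3]
    if group in groups:
        return mode[3:6]
    return mode[6:9]

def can_read_execute(bits):
    """READ_EXECUTE needs both the read and execute bits set."""
    return "r" in bits and "x" in bits

# The case from the log: user=accumulo, inode owned by hadoop:supergroup
# with mode rwx-wx-wx -> accumulo falls into "other" and gets "-wx",
# which lacks the read bit, hence "Permission denied".
bits = permission_bits("accumulo", ["accumulo"],
                       "hadoop", "supergroup", "rwx-wx-wx")
print(bits, can_read_execute(bits))
```

This also shows why adding accumulo to supergroup doesn't help on the bits alone (`-wx` for the group still lacks `r`); in practice it works only because HDFS skips the check entirely for supergroup members.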
