From: Mike Drob
Date: Tue, 18 Mar 2014 11:16:50 -0400
Subject: Re: Installing with Hadoop 2.2.0
To: user@accumulo.apache.org

Can you check a tablet server log? If the contents are too large for the
mailing list, then you can post them as a GitHub gist or paste, or whatever
your favourite tool is.

On Tue, Mar 18, 2014 at 11:04 AM, Benjamin Parrish <
benjamin.d.parrish@gmail.com> wrote:

> First off, are there specific ports that need to be opened up for
> Accumulo? I have Hadoop operating without any issues as a 5-node cluster.
> ZooKeeper seems to be operating with ports 2181, 3888, and 2888 open.
>
> Here is some data from trying to get everything started and getting into
> the shell. I excluded the bash portion as Eric suggested, because the
> mailing list rejected it for length and thought it was spam.
>
> bin/start-all.sh
>
> [root@hadoop-node-1 zookeeper]# bash -x
> /usr/local/accumulo/bin/start-all.sh
> Starting monitor on hadoop-node-1
> WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
> Starting tablet servers .......
> done
> Starting tablet server on hadoop-node-3
> Starting tablet server on hadoop-node-5
> Starting tablet server on hadoop-node-2
> Starting tablet server on hadoop-node-4
> WARN : Max files open on hadoop-node-3 is 1024, recommend 65536
> WARN : Max files open on hadoop-node-2 is 1024, recommend 65536
> WARN : Max files open on hadoop-node-5 is 1024, recommend 65536
> WARN : Max files open on hadoop-node-4 is 1024, recommend 65536
> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
> /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled
> stack guard. The VM will try to fix the stack guard now.
> It's highly recommended that you fix the library with 'execstack -c
> <libfile>', or link it with '-z noexecstack'.
> 2014-03-18 10:38:43,143 [util.NativeCodeLoader] WARN : Unable to load
> native-hadoop library for your platform... using builtin-java classes where
> applicable
> 2014-03-18 10:38:44,194 [server.Accumulo] INFO : Attempting to talk to
> zookeeper
> 2014-03-18 10:38:44,389 [server.Accumulo] INFO : Zookeeper connected and
> initialized, attemping to talk to HDFS
> 2014-03-18 10:38:44,558 [server.Accumulo] INFO : Connected to HDFS
> Starting master on hadoop-node-1
> WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
> Starting garbage collector on hadoop-node-1
> WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
> Starting tracer on hadoop-node-1
> WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
>
> starting shell as root...
>
> [root@hadoop-node-1 zookeeper]# bash -x /usr/local/accumulo/bin/accumulo
> shell -u root
> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
> /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled
> stack guard. The VM will try to fix the stack guard now.
> It's highly recommended that you fix the library with 'execstack -c
> <libfile>', or link it with '-z noexecstack'.
> 2014-03-18 10:38:56,002 [util.NativeCodeLoader] WARN : Unable to load
> native-hadoop library for your platform... using builtin-java classes where
> applicable
> Password: ****
> 2014-03-18 10:38:58,762 [impl.ServerClient] WARN : There are no tablet
> servers: check that zookeeper and accumulo are running.
>
> ... this is the point where it sits and acts like it doesn't do anything
>
> -- LOGS -- (most of this looks to be that I cannot connect to anything)
>
> here is the tail -f
> $ACCUMULO_HOME/logs/monitor_hadoop-node-1.local.debug.log
>
> 2014-03-18 10:42:54,617 [impl.ThriftScanner] DEBUG: Failed to locate
> tablet for table : !0 row : ~err_
> 2014-03-18 10:42:57,625 [monitor.Monitor] INFO : Failed to obtain problem
> reports
> java.lang.RuntimeException:
> org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>         at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
>         at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
>         at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
>         at org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:399)
>         at org.apache.accumulo.server.monitor.Monitor$1.run(Monitor.java:530)
>         at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by:
> org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>         at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:212)
>         at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
>         at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
>         ... 6 more
>
> here is the tail -f
> $ACCUMULO_HOME/logs/tracer_hadoop-node-1.local.debug.log
>
> 2014-03-18 10:47:44,759 [impl.ServerClient] DEBUG: ClientService request
> failed null, retrying ...
> org.apache.thrift.transport.TTransportException: Failed to connect to a
> server
>         at org.apache.accumulo.core.client.impl.ThriftTransportPool.getAnyTransport(ThriftTransportPool.java:455)
>         at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:154)
>         at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:128)
>         at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:123)
>         at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:105)
>         at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:71)
>         at org.apache.accumulo.core.client.impl.ConnectorImpl.<init>(ConnectorImpl.java:64)
>         at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:154)
>         at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:149)
>         at org.apache.accumulo.server.trace.TraceServer.<init>(TraceServer.java:200)
>         at org.apache.accumulo.server.trace.TraceServer.main(TraceServer.java:295)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.accumulo.start.Main$1.run(Main.java:103)
>         at java.lang.Thread.run(Thread.java:744)
>
>
> On Tue, Mar 18, 2014 at 9:37 AM, Eric Newton <eric.newton@gmail.com> wrote:
>
>> Can you post the exact error message you are seeing?
>>
>> Verify that your HADOOP_PREFIX and HADOOP_CONF_DIR are being set properly
>> in accumulo-site.xml.
>>
>> The output of:
>>
>> bash -x $ACCUMULO_HOME/bin/accumulo shell -u root
>>
>> would also help.
>>
>> It's going to be something simple.
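[Editor's note: Eric's suggestion to verify HADOOP_PREFIX and HADOOP_CONF_DIR can be scripted. A minimal sketch, to be run on each node; the `check_var` helper is illustrative, not an Accumulo tool:

```shell
# Sanity-check the environment variables Eric mentions. check_var is a
# hypothetical helper: it verifies the variable is set and points at a
# real directory, which is what the Accumulo scripts ultimately need.
check_var() {
  name="$1"
  eval "val=\${$name}"
  if [ -n "$val" ] && [ -d "$val" ]; then
    echo "$name=$val (ok)"
  else
    echo "$name is unset or not a directory"
  fi
}

check_var HADOOP_PREFIX
check_var HADOOP_CONF_DIR
check_var ZOOKEEPER_HOME
```

If any line reports "unset or not a directory", fix the variable in accumulo-env.sh (or accumulo-site.xml where referenced) before chasing the Thrift errors further.]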
>>
>>
>> On Tue, Mar 18, 2014 at 9:14 AM, Benjamin Parrish <
>> benjamin.d.parrish@gmail.com> wrote:
>>
>>> Looking to see if there was an answer to this issue or if you could
>>> point me in a direction or example that could lead to a solution.
>>>
>>>
>>> On Sun, Mar 16, 2014 at 9:52 PM, Benjamin Parrish <
>>> benjamin.d.parrish@gmail.com> wrote:
>>>
>>>> I am running Accumulo 1.5.1
>>>>
>>>> <?xml version="1.0" encoding="UTF-8"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <configuration>
>>>>   <!-- Put your site-specific accumulo configurations here. The
>>>>     available configuration values along with their defaults are
>>>>     documented in docs/config.html Unless you are simply testing at
>>>>     your workstation, you will most definitely need to change the
>>>>     three entries below. -->
>>>>
>>>>   <property>
>>>>     <name>instance.zookeeper.host</name>
>>>>     <value>hadoop-node-1:2181,hadoop-node-2:2181,hadoop-node-3:2181,hadoop-node-4:2181,hadoop-node-5:2181</value>
>>>>     <description>comma separated list of zookeeper servers</description>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>logger.dir.walog</name>
>>>>     <value>walogs</value>
>>>>     <description>The property only needs to be set if upgrading from
>>>>       1.4 which used to store write-ahead logs on the local
>>>>       filesystem. In 1.5 write-ahead logs are stored in DFS.  When 1.5
>>>>       is started for the first time it will copy any 1.4
>>>>       write ahead logs into DFS.  It is possible to specify a
>>>>       comma-separated list of directories.
>>>>     </description>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>instance.secret</name>
>>>>     <value></value>
>>>>     <description>A secret unique to a given instance that all servers
>>>>       must know in order to communicate with one another.
>>>>       Change it before initialization. To
>>>>       change it later use ./bin/accumulo
>>>>       org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd],
>>>>       and then update this file.
>>>>     </description>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>tserver.memory.maps.max</name>
>>>>     <value>1G</value>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>tserver.cache.data.size</name>
>>>>     <value>128M</value>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>tserver.cache.index.size</name>
>>>>     <value>128M</value>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>trace.token.property.password</name>
>>>>     <!-- change this to the root user's password, and/or change the user below -->
>>>>     <value></value>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>trace.user</name>
>>>>     <value>root</value>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>general.classpaths</name>
>>>>     <value>
>>>>       $HADOOP_PREFIX/share/hadoop/common/.*.jar,
>>>>       $HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,
>>>>       $HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,
>>>>       $HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,
>>>>       $HADOOP_PREFIX/share/hadoop/yarn/.*.jar,
>>>>       /usr/lib/hadoop/.*.jar,
>>>>       /usr/lib/hadoop/lib/.*.jar,
>>>>       /usr/lib/hadoop-hdfs/.*.jar,
>>>>       /usr/lib/hadoop-mapreduce/.*.jar,
>>>>       /usr/lib/hadoop-yarn/.*.jar,
>>>>       $ACCUMULO_HOME/server/target/classes/,
>>>>       $ACCUMULO_HOME/lib/accumulo-server.jar,
>>>>       $ACCUMULO_HOME/core/target/classes/,
>>>>       $ACCUMULO_HOME/lib/accumulo-core.jar,
>>>>       $ACCUMULO_HOME/start/target/classes/,
>>>>       $ACCUMULO_HOME/lib/accumulo-start.jar,
>>>>       $ACCUMULO_HOME/fate/target/classes/,
>>>>       $ACCUMULO_HOME/lib/accumulo-fate.jar,
>>>>       $ACCUMULO_HOME/proxy/target/classes/,
>>>>       $ACCUMULO_HOME/lib/accumulo-proxy.jar,
>>>>       $ACCUMULO_HOME/lib/[^.].*.jar,
>>>>       $ZOOKEEPER_HOME/zookeeper[^.].*.jar,
>>>>       $HADOOP_CONF_DIR,
>>>>       $HADOOP_PREFIX/[^.].*.jar,
>>>>       $HADOOP_PREFIX/lib/[^.].*.jar,
>>>>     </value>
>>>>     <description>Classpaths that accumulo checks for updates and class files.
>>>>       When using the Security Manager, please remove the ".../target/classes/" values.
>>>>     </description>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>> On Sun, Mar 16, 2014 at 9:06 PM, Josh Elser <josh.elser@gmail.com> wrote:
>>>>
>>>>> Posting your accumulo-site.xml (filtering out instance.secret and
>>>>> trace.password before you post) would also help us figure out what exactly
>>>>> is going on.
>>>>>
>>>>>
>>>>> On 3/16/14, 8:41 PM, Mike Drob wrote:
>>>>>
>>>>>> Which version of Accumulo are you using?
>>>>>>
>>>>>> You might be missing the hadoop libraries from your classpath.
>>>>>> For this, you would check your accumulo-site.xml and find the comment
>>>>>> about Hadoop 2 in the file.
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 16, 2014 at 8:28 PM, Benjamin Parrish <
>>>>>> benjamin.d.parrish@gmail.com> wrote:
>>>>>>
>>>>>>     I have a couple of issues when trying to use Accumulo on Hadoop
>>>>>>     2.2.0
>>>>>>
>>>>>>     1) I start with accumulo init and everything runs through just
>>>>>>     fine, but I can't find '/accumulo' using 'hadoop fs -ls /'
>>>>>>
>>>>>>     2) I try to run 'accumulo shell -u root' and it says that
>>>>>>     Hadoop and ZooKeeper are not started, but if I run 'jps' on each
>>>>>>     cluster node it shows all the necessary processes for both in the
>>>>>>     JVM.  Is there something I am missing?
>>>>>>
>>>>>>     --
>>>>>>     Benjamin D. Parrish
>>>>>>     H: 540-597-7860
>>>>
>>>>
>>>> --
>>>> Benjamin D. Parrish
>>>> H: 540-597-7860
>>>
>>>
>>> --
>>> Benjamin D. Parrish
>>> H: 540-597-7860
>
>
> --
> Benjamin D. Parrish
> H: 540-597-7860
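[Editor's note: the repeated "Max files open ... recommend 65536" warnings in the start-all output above refer to the per-user open-file limit. A sketch of one common, persistent fix; the limits.conf lines and the "accumulo" user name are illustrative (use whichever user actually runs the daemons):

```shell
# Show the current per-process open-file limit (the number the WARN reports).
ulimit -n

# A common persistent fix: append nofile entries to
# /etc/security/limits.conf, then log in again so PAM applies them.
# The 'accumulo' user name below is an example.
cat <<'EOF'
accumulo  soft  nofile  65536
accumulo  hard  nofile  65536
EOF
```

Init scripts started at boot may need the limit raised elsewhere (e.g. in the service's own environment), since limits.conf only applies to PAM login sessions.]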
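[Editor's note: on Benjamin's opening question about ports: besides ZooKeeper's 2181/2888/3888, Accumulo 1.5's client ports default to 9997 (tablet server), 9999 (master), and 50095 (monitor), all overridable in accumulo-site.xml. A hedged sketch for probing them from another node; the hostnames are the ones from this thread, and `nc` must be installed:

```shell
# Probe the assumed-default Accumulo 1.5 client ports. If your
# accumulo-site.xml overrides *.port.client properties, adjust to match.
probe() {
  host="$1"; port="$2"; label="$3"
  if nc -z -w 2 "$host" "$port" 2>/dev/null; then
    echo "$label ($host:$port) reachable"
  else
    echo "$label ($host:$port) NOT reachable"
  fi
}

probe hadoop-node-1 9999  "master"
probe hadoop-node-1 50095 "monitor"
probe hadoop-node-2 9997  "tserver"
probe hadoop-node-1 2181  "zookeeper"
```

If a tserver port is unreachable from the master node, the "There are no tablet servers" warning above is exactly what you would expect to see, so checking firewall rules on each node is a reasonable next step.]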