Subject: Re: Installing with Hadoop 2.2.0
From: Benjamin Parrish <benjamin.d.parrish@gmail.com>
To: user@accumulo.apache.org
Date: Tue, 18 Mar 2014 12:07:55 -0400

On all 5 nodes there is a conf/accumulo-site.xml with the same values for all 5.

Someone is going to kill me when they find out that I left off a bracket or semicolon somewhere...

On Tue, Mar 18, 2014 at 12:02 PM, Josh Elser wrote:
> No, running `accumulo init` on a single host is sufficient.
>
> Is accumulo-site.xml consistent across all machines?
>
> On 3/18/14, 11:57 AM, Benjamin Parrish wrote:
>> So here is the error from the tablet server...
>>
>> 2014-03-18 10:38:43,456 [client.ZooKeeperInstance] ERROR: unable obtain
>> instance id at /accumulo/instance_id
>> 2014-03-18 10:38:43,456 [tabletserver.TabletServer] ERROR: Uncaught
>> exception in TabletServer.main, exiting
>> java.lang.RuntimeException: Accumulo not initialized, there is no
>> instance id at /accumulo/instance_id
>>         at org.apache.accumulo.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(ZooKeeperInstance.java:295)
>>         at org.apache.accumulo.server.client.HdfsZooInstance._getInstanceID(HdfsZooInstance.java:126)
>>         at org.apache.accumulo.server.client.HdfsZooInstance.getInstanceID(HdfsZooInstance.java:119)
>>         at org.apache.accumulo.server.conf.ZooConfiguration.getInstance(ZooConfiguration.java:55)
>>         at org.apache.accumulo.server.conf.ServerConfiguration.getZooConfiguration(ServerConfiguration.java:50)
>>         at org.apache.accumulo.server.conf.ServerConfiguration.getConfiguration(ServerConfiguration.java:104)
>>         at org.apache.accumulo.server.Accumulo.init(Accumulo.java:98)
>>         at org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3249)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:606)
>>         at org.apache.accumulo.start.Main$1.run(Main.java:103)
>>         at java.lang.Thread.run(Thread.java:744)
>>
>> Do I need to run bin/accumulo init on every box in the cluster?
>>
>> On Tue, Mar 18, 2014 at 11:19 AM, Eric Newton <eric.newton@gmail.com> wrote:
>>
>>     Port numbers (for 1.5+)
>>
>>     4560  Accumulo monitor (for centralized log display)
>>     9997  Tablet Server
>>     9999  Master Server
>>     12234 Accumulo Tracer
>>     50091 Accumulo GC
>>     50095 Accumulo HTTP monitor
>>
>>     On Tue, Mar 18, 2014 at 11:04 AM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
>>
>>         First off, are there specific ports that need to be opened up
>>         for Accumulo? I have Hadoop operating without any issues as a 5
>>         node cluster. ZooKeeper seems to be operating with ports 2181,
>>         3888, and 2888 open.
>>
>>         Here is some data from trying to get everything started and
>>         getting into the shell. I left out the bash trace output Eric
>>         suggested because the mailing list rejected it for length,
>>         thinking it was spam.
>>
>>         bin/start-all.sh
>>
>>         [root@hadoop-node-1 zookeeper]# bash -x /usr/local/accumulo/bin/start-all.sh
>>         Starting monitor on hadoop-node-1
>>         WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
>>         Starting tablet servers ....... done
>>         Starting tablet server on hadoop-node-3
>>         Starting tablet server on hadoop-node-5
>>         Starting tablet server on hadoop-node-2
>>         Starting tablet server on hadoop-node-4
>>         WARN : Max files open on hadoop-node-3 is 1024, recommend 65536
>>         WARN : Max files open on hadoop-node-2 is 1024, recommend 65536
>>         WARN : Max files open on hadoop-node-5 is 1024, recommend 65536
>>         WARN : Max files open on hadoop-node-4 is 1024, recommend 65536
>>         Java HotSpot(TM) 64-Bit Server VM warning: You have loaded
>>         library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which
>>         might have disabled stack guard. The VM will try to fix the
>>         stack guard now.
>>         It's highly recommended that you fix the library with 'execstack
>>         -c <libfile>', or link it with '-z noexecstack'.
>>         2014-03-18 10:38:43,143 [util.NativeCodeLoader] WARN : Unable to
>>         load native-hadoop library for your platform... using
>>         builtin-java classes where applicable
>>         2014-03-18 10:38:44,194 [server.Accumulo] INFO : Attempting to
>>         talk to zookeeper
>>         2014-03-18 10:38:44,389 [server.Accumulo] INFO : Zookeeper
>>         connected and initialized, attemping to talk to HDFS
>>         2014-03-18 10:38:44,558 [server.Accumulo] INFO : Connected to HDFS
>>         Starting master on hadoop-node-1
>>         WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
>>         Starting garbage collector on hadoop-node-1
>>         WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
>>         Starting tracer on hadoop-node-1
>>         WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
>>
>>         Starting shell as root...
>>
>>         [root@hadoop-node-1 zookeeper]# bash -x /usr/local/accumulo/bin/accumulo shell -u root
>>         Java HotSpot(TM) 64-Bit Server VM warning: You have loaded
>>         library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which
>>         might have disabled stack guard. The VM will try to fix the
>>         stack guard now.
>>         It's highly recommended that you fix the library with 'execstack
>>         -c <libfile>', or link it with '-z noexecstack'.
>>         2014-03-18 10:38:56,002 [util.NativeCodeLoader] WARN : Unable to
>>         load native-hadoop library for your platform... using
>>         builtin-java classes where applicable
>>         Password: ****
>>         2014-03-18 10:38:58,762 [impl.ServerClient] WARN : There are no
>>         tablet servers: check that zookeeper and accumulo are running.
>>
>>         ... this is the point where it sits and acts like it doesn't do
>>         anything
>>
>>         -- LOGS -- (most of this looks to be that I cannot connect to
>>         anything)
>>
>>         Here is the tail -f $ACCUMULO_HOME/logs/monitor_hadoop-node-1.local.debug.log
>>
>>         2014-03-18 10:42:54,617 [impl.ThriftScanner] DEBUG: Failed to
>>         locate tablet for table : !0 row : ~err_
>>         2014-03-18 10:42:57,625 [monitor.Monitor] INFO : Failed to
>>         obtain problem reports
>>         java.lang.RuntimeException: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>>                 at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
>>                 at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
>>                 at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
>>                 at org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:399)
>>                 at org.apache.accumulo.server.monitor.Monitor$1.run(Monitor.java:530)
>>                 at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>                 at java.lang.Thread.run(Thread.java:744)
>>         Caused by: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
>>                 at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:212)
>>                 at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
>>                 at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
>>                 ... 6 more
>>
>>         Here is the tail -f $ACCUMULO_HOME/logs/tracer_hadoop-node-1.local.debug.log
>>
>>         2014-03-18 10:47:44,759 [impl.ServerClient] DEBUG: ClientService
>>         request failed null, retrying ...
>>         org.apache.thrift.transport.TTransportException: Failed to
>>         connect to a server
>>                 at org.apache.accumulo.core.client.impl.ThriftTransportPool.getAnyTransport(ThriftTransportPool.java:455)
>>                 at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:154)
>>                 at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:128)
>>                 at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:123)
>>                 at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:105)
>>                 at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:71)
>>                 at org.apache.accumulo.core.client.impl.ConnectorImpl.<init>(ConnectorImpl.java:64)
>>                 at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:154)
>>                 at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:149)
>>                 at org.apache.accumulo.server.trace.TraceServer.<init>(TraceServer.java:200)
>>                 at org.apache.accumulo.server.trace.TraceServer.main(TraceServer.java:295)
>>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>                 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>                 at java.lang.reflect.Method.invoke(Method.java:606)
>>                 at org.apache.accumulo.start.Main$1.run(Main.java:103)
>>                 at java.lang.Thread.run(Thread.java:744)
>>
>>         On Tue, Mar 18, 2014 at 9:37 AM, Eric Newton <eric.newton@gmail.com> wrote:
>>
>>             Can you post the exact error message you are seeing?
>>             Verify that your HADOOP_PREFIX and HADOOP_CONF_DIR are being
>>             set properly in accumulo-site.xml.
>>
>>             The output of:
>>
>>                 bash -x $ACCUMULO_HOME/bin/accumulo shell -u root
>>
>>             would also help.
>>
>>             It's going to be something simple.
>>
>>             On Tue, Mar 18, 2014 at 9:14 AM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
>>
>>                 Looking to see if there was an answer to this issue or
>>                 if you could point me in a direction or example that
>>                 could lead to a solution.
>>
>>                 On Sun, Mar 16, 2014 at 9:52 PM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
>>
>>                     I am running Accumulo 1.5.1
>>
>>                     <?xml version="1.0" encoding="UTF-8"?>
>>                     <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>>                     <configuration>
>>                       <property>
>>                         <name>instance.zookeeper.host</name>
>>                         <value>hadoop-node-1:2181,hadoop-node-2:2181,hadoop-node-3:2181,hadoop-node-4:2181,hadoop-node-5:2181</value>
>>                         <description>comma separated list of zookeeper
>>                         servers</description>
>>                       </property>
>>
>>                       <property>
>>                         <name>logger.dir.walog</name>
>>                         <value>walogs</value>
>>                         <description>The property only needs to be set
>>                         if upgrading from 1.4 which used to store
>>                         write-ahead logs on the local filesystem. In 1.5
>>                         write-ahead logs are stored in DFS. When 1.5 is
>>                         started for the first time it will copy any 1.4
>>                         write ahead logs into DFS. It is possible to
>>                         specify a comma-separated list of directories.
>>                         </description>
>>                       </property>
>>
>>                       <property>
>>                         <name>instance.secret</name>
>>                         <value></value>
>>                         <description>A secret unique to a given
>>                         instance that all servers must know in order to
>>                         communicate with one another. Change it before
>>                         initialization. To change it later use
>>                         ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret
>>                         --old [oldpasswd] --new [newpasswd], and then
>>                         update this file.
>>                         </description>
>>                       </property>
>>
>>                       <property>
>>                         <name>tserver.memory.maps.max</name>
>>                         <value>1G</value>
>>                       </property>
>>
>>                       <property>
>>                         <name>tserver.cache.data.size</name>
>>                         <value>128M</value>
>>                       </property>
>>
>>                       <property>
>>                         <name>tserver.cache.index.size</name>
>>                         <value>128M</value>
>>                       </property>
>>
>>                       <property>
>>                         <name>trace.token.property.password</name>
>>                         <value></value>
>>                       </property>
>>
>>                       <property>
>>                         <name>trace.user</name>
>>                         <value>root</value>
>>                       </property>
>>
>>                       <property>
>>                         <name>general.classpaths</name>
>>                         <value>
>>                           $HADOOP_PREFIX/share/hadoop/common/.*.jar,
>>                           $HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,
>>                           $HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,
>>                           $HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,
>>                           $HADOOP_PREFIX/share/hadoop/yarn/.*.jar,
>>                           /usr/lib/hadoop/.*.jar,
>>                           /usr/lib/hadoop/lib/.*.jar,
>>                           /usr/lib/hadoop-hdfs/.*.jar,
>>                           /usr/lib/hadoop-mapreduce/.*.jar,
>>                           /usr/lib/hadoop-yarn/.*.jar,
>>                           $ACCUMULO_HOME/server/target/classes/,
>>                           $ACCUMULO_HOME/lib/accumulo-server.jar,
>>                           $ACCUMULO_HOME/core/target/classes/,
>>                           $ACCUMULO_HOME/lib/accumulo-core.jar,
>>                           $ACCUMULO_HOME/start/target/classes/,
>>                           $ACCUMULO_HOME/lib/accumulo-start.jar,
>>                           $ACCUMULO_HOME/fate/target/classes/,
>>                           $ACCUMULO_HOME/lib/accumulo-fate.jar,
>>                           $ACCUMULO_HOME/proxy/target/classes/,
>>                           $ACCUMULO_HOME/lib/accumulo-proxy.jar,
>>                           $ACCUMULO_HOME/lib/[^.].*.jar,
>>                           $ZOOKEEPER_HOME/zookeeper[^.].*.jar,
>>                           $HADOOP_CONF_DIR,
>>                           $HADOOP_PREFIX/[^.].*.jar,
>>                           $HADOOP_PREFIX/lib/[^.].*.jar,
>>                         </value>
>>                         <description>Classpaths that accumulo checks
>>                         for updates and class files. When using the
>>                         Security Manager, please remove the
>>                         ".../target/classes/" values.
>>                         </description>
>>                       </property>
>>                     </configuration>
>>
>>                     On Sun, Mar 16, 2014 at 9:06 PM, Josh Elser <josh.elser@gmail.com> wrote:
>>
>>                         Posting your accumulo-site.xml (filtering out
>>                         instance.secret and trace.password before you
>>                         post) would also help us figure out what exactly
>>                         is going on.
>>
>>                         On 3/16/14, 8:41 PM, Mike Drob wrote:
>>
>>                             Which version of Accumulo are you using?
>>
>>                             You might be missing the hadoop libraries
>>                             from your classpath. For this, you would
>>                             check your accumulo-site.xml and find the
>>                             comment about Hadoop 2 in the file.
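[Editor's note] Mike's pointer above, checking accumulo-site.xml for the Hadoop 2 classpath entries, can be done mechanically rather than by eye. A throwaway Python sketch; the inline XML is a made-up stand-in for conf/accumulo-site.xml, and `classpath_entries`/`has_hadoop2_entries` are hypothetical helpers, not Accumulo APIs:

```python
import xml.etree.ElementTree as ET

# Made-up sample standing in for conf/accumulo-site.xml; the property
# layout mirrors the file posted earlier in the thread.
SITE_XML = """<configuration>
  <property>
    <name>general.classpaths</name>
    <value>
      $HADOOP_PREFIX/share/hadoop/common/.*.jar,
      $HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,
      $ACCUMULO_HOME/lib/accumulo-core.jar,
    </value>
  </property>
</configuration>"""

def classpath_entries(xml_text):
    """Return the general.classpaths entries as a list of strings."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "general.classpaths":
            value = prop.findtext("value") or ""
            return [e.strip() for e in value.split(",") if e.strip()]
    return []

def has_hadoop2_entries(entries):
    # Hadoop 2 ships its jars under share/hadoop/..., unlike the flat
    # Hadoop 1 layout, so at least one such entry should be present.
    return any("share/hadoop/" in e for e in entries)

entries = classpath_entries(SITE_XML)
print(has_hadoop2_entries(entries))  # True for the sample above
```

Run against the real file, a False here would mean the tablet servers cannot see the Hadoop 2 jars at all.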
>>                             On Sun, Mar 16, 2014 at 8:28 PM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
>>
>>                                 I have a couple of issues when trying
>>                                 to use Accumulo on Hadoop 2.2.0
>>
>>                                 1) I start with accumulo init and
>>                                 everything runs through just fine,
>>                                 but I cannot find '/accumulo' using
>>                                 'hadoop fs -ls /'
>>
>>                                 2) I try to run 'accumulo shell -u
>>                                 root' and it says that Hadoop and
>>                                 ZooKeeper are not started, but if I run
>>                                 'jps' on each cluster node it shows all
>>                                 the necessary processes for both in the
>>                                 JVM. Is there something I am missing?
>>
>>                                 --
>>                                 Benjamin D. Parrish
>>                                 H: 540-597-7860

--
Benjamin D. Parrish
H: 540-597-7860
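[Editor's note] Josh's question, whether accumulo-site.xml is byte-for-byte identical on every node, is easy to answer mechanically by hashing each node's copy. A minimal Python sketch, assuming the per-node copies have already been fetched locally (the node*.xml files below are made-up stand-ins; in practice you would pull each node's conf/accumulo-site.xml with scp or rsync first):

```python
import hashlib
import tempfile
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's bytes; identical digests mean identical configs."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def consistent(paths):
    """True when every file in `paths` has the same digest."""
    return len({file_digest(p) for p in paths}) == 1

# Demo with throwaway files standing in for each node's copy of the config.
tmp = Path(tempfile.mkdtemp())
(tmp / "node1.xml").write_text("<configuration/>")
(tmp / "node2.xml").write_text("<configuration/>")
(tmp / "node3.xml").write_text("<configuration>\n</configuration>")

print(consistent([tmp / "node1.xml", tmp / "node2.xml"]))  # True
print(consistent([tmp / "node1.xml", tmp / "node3.xml"]))  # False
```

Note the hash catches exactly the "left off a bracket somewhere" case: even a one-character difference between nodes shows up as a digest mismatch.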
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<value>1G</valu= e>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</property>

=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<property>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<name>tserver.cach= e.data.size</name>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<value>128M</va= lue>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</property>

=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<property>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<name>tserver.cach= e.index.size</name>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<value>128M</va= lue>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</property>

=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<property>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<name>trace.token.= property.password</name>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<!-- change this to t= he root user's password,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 and/or change the user below -->=
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<value></value&= gt;
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</property>

=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<property>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<name>trace.user&l= t;/name>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<value>root</va= lue>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</property>

=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<property>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<name>general.clas= spaths</name>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<value>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_PREFIX/share= /hadoop/common/.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_PREFIX/share= /hadoop/common/lib/.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_PREFIX/share= /hadoop/hdfs/.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_PREFIX/share= /hadoop/mapreduce/.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_PREFIX/share= /hadoop/yarn/.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0/usr/lib/hadoop/.*.j= ar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0/usr/lib/hadoop/lib/= .*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0/usr/lib/hadoop-hdfs= /.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0/usr/lib/hadoop-mapr= educe/.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0/usr/lib/hadoop-yarn= /.*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/serve= r/target/classes/,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/lib/a= ccumulo-server.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/core/= target/classes/,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/lib/a= ccumulo-core.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/start= /target/classes/,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/lib/a= ccumulo-start.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/fate/= target/classes/,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/lib/a= ccumulo-fate.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/proxy= /target/classes/,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/lib/a= ccumulo-proxy.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ACCUMULO_HOME/lib/[= ^.].*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$ZOOKEEPER_HOME/zook= eeper[^.].*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_CONF_DIR, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_PREFIX/[^.].= *.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0$HADOOP_PREFIX/lib/[= ^.].*.jar,
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</value>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0<description>Class= paths that accumulo checks
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 for updates and class files.
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0When using the Secur= ity Manager, please
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 remove the ".../target/classes= /" values.
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</description>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0</property>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 </configuration>
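One thing I notice about the config above: nothing in it pins the servers to the same HDFS that `accumulo init` wrote to, so if a tserver doesn't pick up HADOOP_CONF_DIR it can look at the wrong filesystem and then complain that there is no instance id. As I understand it, on 1.5 you can make the filesystem explicit with instance.dfs.uri. A sketch only, with an assumed namenode address; use whatever fs.defaultFS says in your core-site.xml:

```xml
<!-- Sketch: points every Accumulo process at the same HDFS that
     `accumulo init` used. hdfs://hadoop-node-1:9000 is an assumed
     namenode host:port, not taken from this cluster's config. -->
<property>
  <name>instance.dfs.uri</name>
  <value>hdfs://hadoop-node-1:9000</value>
</property>
```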


On Sun, Mar 16, 2014 at 9:06 PM, Josh Elser <josh.elser@gmail.com> wrote:

> Posting your accumulo-site.xml (filtering out instance.secret and
> trace.password before you post) would also help us figure out what
> exactly is going on.


> On 3/16/14, 8:41 PM, Mike Drob wrote:
>
>> Which version of Accumulo are you using?
>>
>> You might be missing the hadoop libraries from your classpath. For
>> this, you would check your accumulo-site.xml and find the comment
>> about Hadoop 2 in the file.


>> On Sun, Mar 16, 2014 at 8:28 PM, Benjamin Parrish
>> <benjamin.d.parrish@gmail.com> wrote:

>>> I have a couple of issues when trying to use Accumulo on Hadoop 2.2.0:
>>>
>>> 1) I start with accumulo init and everything runs through just fine,
>>> but I can find '/accumulo' using 'hadoop fs -ls /'
>>>
>>> 2) I try to run 'accumulo shell -u root' and it says that Hadoop and
>>> ZooKeeper are not started, but if I run 'jps' on each cluster node it
>>> shows all the necessary processes for both in the JVM. Is there
>>> something I am missing?

>>> --
>>> Benjamin D. Parrish
>>> H: 540-597-7860
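Coming back to Mike's classpath pointer: one subtlety that is easy to miss is that, as I understand it, the general.classpaths entries like `.*.jar` and `[^.].*.jar` are matched as regular expressions against file names, not expanded as shell globs. A throwaway sketch below, with grep -E standing in for Accumulo's matcher; the temp directory and jar name are invented for the demo:

```shell
# Throwaway demo: grep -E stands in for Accumulo's regex matching of
# general.classpaths entries. Directory and file names are invented.
demo=$(mktemp -d)
touch "$demo/hadoop-common-2.2.0.jar" "$demo/README.txt"
# A ".jar"-anchored regex keeps only the jars, as the classpath entries intend:
ls "$demo" | grep -E '^.*\.jar$'   # -> hadoop-common-2.2.0.jar
rm -rf "$demo"
```

So a directory entry that accidentally loses its pattern, or a pattern that matches nothing, silently contributes no jars, which looks exactly like "missing the hadoop libraries."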














--
Benjamin D. Parrish
H: 540-597-7860