Port numbers (for 1.5+):

4560  Accumulo monitor (for centralized log display)
9997  Tablet Server
9999  Master Server
12234 Accumulo Tracer
50091 Accumulo GC
50095 Accumulo HTTP monitor

On Tue, Mar 18, 2014 at 11:04 AM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
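[If a firewall sits between the nodes, the port list above is what has to pass. A minimal sketch that only prints iptables rules for review, on the assumption that iptables is the firewall in use here; run the output as root on each node once checked:]

```shell
# Print (not execute) iptables rules for Accumulo's default ports,
# taken from the list above. Review, then pipe to `sudo sh` per node.
ports="4560 9997 9999 12234 50091 50095"
for p in $ports; do
  echo "iptables -I INPUT -p tcp --dport $p -j ACCEPT"
done
```

[The ZooKeeper ports (2181, 2888, 3888) are already reported open below, so they are left out.]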
First off, are there specific ports that need to be opened up for Accumulo? I have Hadoop operating without any issues as a 5-node cluster. ZooKeeper seems to be operating with ports 2181, 3888, and 2888 open.

Here is some data from trying to get everything started and getting into the shell. I excluded the bash -x trace portion as Eric suggested, because the mailing list rejected it for length, thinking it was spam.

bin/start-all.sh

[root@hadoop-node-1 zookeeper]# bash -x /usr/local/accumulo/bin/start-all.sh
Starting monitor on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
Starting tablet servers ....... done
Starting tablet server on hadoop-node-3
Starting tablet server on hadoop-node-5
Starting tablet server on hadoop-node-2
Starting tablet server on hadoop-node-4
WARN : Max files open on hadoop-node-3 is 1024, recommend 65536
WARN : Max files open on hadoop-node-2 is 1024, recommend 65536
WARN : Max files open on hadoop-node-5 is 1024, recommend 65536
WARN : Max files open on hadoop-node-4 is 1024, recommend 65536
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2014-03-18 10:38:43,143 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-03-18 10:38:44,194 [server.Accumulo] INFO : Attempting to talk to zookeeper
2014-03-18 10:38:44,389 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
2014-03-18 10:38:44,558 [server.Accumulo] INFO : Connected to HDFS
Starting master on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
Starting garbage collector on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
Starting tracer on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536

starting shell as root...

[root@hadoop-node-1 zookeeper]# bash -x /usr/local/accumulo/bin/accumulo shell -u root
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2014-03-18 10:38:56,002 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Password: ****
2014-03-18 10:38:58,762 [impl.ServerClient] WARN : There are no tablet servers: check that zookeeper and accumulo are running.
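[The "There are no tablet servers" warning above means the client reached ZooKeeper but found no live tablet servers. A quick way to separate "port blocked" from "tserver never registered" is to probe port 9997 (the 1.5 default tserver port) on each worker from the node running the shell. A sketch using bash's /dev/tcp; the hostnames are from this thread, and that 9997 is the port in use is an assumption from the default config:]

```shell
# Probe the tablet server port on each worker from the shell node.
# "closed/unreachable" points at a firewall rule or a tserver that
# never bound its port.
results=""
for host in hadoop-node-2 hadoop-node-3 hadoop-node-4 hadoop-node-5; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/9997" 2>/dev/null; then
    status="open"
  else
    status="closed/unreachable"
  fi
  echo "$host:9997 $status"
  results="$results $host=$status"
done
```

[If the ports probe open but the shell still hangs, the problem is more likely registration in ZooKeeper than the network.]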
This is the point where it sits and acts like it doesn't do anything.

-- LOGS -- (most of this looks to be that I cannot connect to anything)

Here is the tail -f $ACCUMULO_HOME/logs/monitor_hadoop-node-1.local.debug.log:

2014-03-18 10:42:54,617 [impl.ThriftScanner] DEBUG: Failed to locate tablet for table : !0 row : ~err_
2014-03-18 10:42:57,625 [monitor.Monitor] INFO : Failed to obtain problem reports
java.lang.RuntimeException: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
	at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
	at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
	at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
	at org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:399)
	at org.apache.accumulo.server.monitor.Monitor$1.run(Monitor.java:530)
	at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
	at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
	at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:212)
	at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
	at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
	... 6 more

Here is the tail -f $ACCUMULO_HOME/logs/tracer_hadoop-node-1.local.debug.log:

2014-03-18 10:47:44,759 [impl.ServerClient] DEBUG: ClientService request failed null, retrying ...
org.apache.thrift.transport.TTransportException: Failed to connect to a server
	at org.apache.accumulo.core.client.impl.ThriftTransportPool.getAnyTransport(ThriftTransportPool.java:455)
	at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:154)
	at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:128)
	at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:123)
	at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:105)
	at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:71)
	at org.apache.accumulo.core.client.impl.ConnectorImpl.<init>(ConnectorImpl.java:64)
	at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:154)
	at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:149)
	at org.apache.accumulo.server.trace.TraceServer.<init>(TraceServer.java:200)
	at org.apache.accumulo.server.trace.TraceServer.main(TraceServer.java:295)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.accumulo.start.Main$1.run(Main.java:103)
	at java.lang.Thread.run(Thread.java:744)

On Tue, Mar 18, 2014 at 9:37 AM, Eric Newton <eric.newton@gmail.com> wrote:
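[Not the connection failure itself, but worth clearing while debugging: the repeated "Max files open ... is 1024, recommend 65536" warnings in the startup output above. A sketch of the usual fix, which only prints the limits.conf lines to append; that the processes run as root (as in the prompts above) and that /etc/security/limits.conf is the right file on this distribution are assumptions:]

```shell
# Print the nofile limit lines to append (as root) to
# /etc/security/limits.conf on every node, for the user that runs
# the Accumulo processes. Takes effect on next login; verify with
# `ulimit -n`.
user=root
for kind in soft hard; do
  printf '%s  %s  nofile  65536\n' "$user" "$kind"
done
```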
Can you post the exact error message you are seeing?

Verify that your HADOOP_PREFIX and HADOOP_CONF_DIR are being set properly in accumulo-site.xml.

The output of:

bash -x $ACCUMULO_HOME/bin/accumulo shell -u root

would also help. It's going to be something simple.

On Tue, Mar 18, 2014 at 9:14 AM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
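[A quick way to check the variables Eric mentions, plus the others the 1.5 start scripts lean on (which extra variables matter is my assumption), on the node where the shell hangs:]

```shell
# Report whether each env var Accumulo's scripts rely on is set and
# points at a real directory.
for v in HADOOP_PREFIX HADOOP_CONF_DIR ZOOKEEPER_HOME ACCUMULO_HOME JAVA_HOME; do
  val=$(eval echo "\$$v")
  if [ -z "$val" ]; then
    echo "$v: UNSET"
  elif [ -d "$val" ]; then
    echo "$v=$val (ok)"
  else
    echo "$v=$val (directory missing)"
  fi
done
```

[Remember these need to be set in the environment of the init scripts and remote (SSH) sessions, not just the interactive login shell.]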
Looking to see if there was an answer to this issue, or if you could point me in a direction or example that could lead to a solution.

On Sun, Mar 16, 2014 at 9:52 PM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
I am running Accumulo 1.5.1

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <!-- Put your site-specific accumulo configurations here. The available
       configuration values along with their defaults are documented in
       docs/config.html. Unless you are simply testing at your workstation,
       you will most definitely need to change the three entries below. -->

  <property>
    <name>instance.zookeeper.host</name>
    <value>hadoop-node-1:2181,hadoop-node-2:2181,hadoop-node-3:2181,hadoop-node-4:2181,hadoop-node-5:2181</value>
    <description>comma separated list of zookeeper servers</description>
  </property>

  <property>
    <name>logger.dir.walog</name>
    <value>walogs</value>
    <description>This property only needs to be set if upgrading from 1.4,
      which used to store write-ahead logs on the local filesystem. In 1.5,
      write-ahead logs are stored in DFS. When 1.5 is started for the first
      time it will copy any 1.4 write-ahead logs into DFS. It is possible
      to specify a comma-separated list of directories.</description>
  </property>

  <property>
    <name>instance.secret</name>
    <value></value>
    <description>A secret unique to a given instance that all servers must
      know in order to communicate with one another. Change it before
      initialization. To change it later, use
      ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd],
      and then update this file.</description>
  </property>

  <property>
    <name>tserver.memory.maps.max</name>
    <value>1G</value>
  </property>

  <property>
    <name>tserver.cache.data.size</name>
    <value>128M</value>
  </property>
  <property>
    <name>tserver.cache.index.size</name>
    <value>128M</value>
  </property>

  <property>
    <name>trace.token.property.password</name>
    <!-- change this to the root user's password, and/or change the user below -->
    <value></value>
  </property>

  <property>
    <name>trace.user</name>
    <value>root</value>
  </property>

  <property>
    <name>general.classpaths</name>
    <value>
      $HADOOP_PREFIX/share/hadoop/common/.*.jar,
      $HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,
      $HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,
      $HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,
      $HADOOP_PREFIX/share/hadoop/yarn/.*.jar,
      /usr/lib/hadoop/.*.jar,
      /usr/lib/hadoop/lib/.*.jar,
      /usr/lib/hadoop-hdfs/.*.jar,
      /usr/lib/hadoop-mapreduce/.*.jar,
      /usr/lib/hadoop-yarn/.*.jar,
      $ACCUMULO_HOME/server/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-server.jar,
      $ACCUMULO_HOME/core/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-core.jar,
      $ACCUMULO_HOME/start/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-start.jar,
      $ACCUMULO_HOME/fate/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-fate.jar,
      $ACCUMULO_HOME/proxy/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-proxy.jar,
      $ACCUMULO_HOME/lib/[^.].*.jar,
      $ZOOKEEPER_HOME/zookeeper[^.].*.jar,
      $HADOOP_CONF_DIR,
      $HADOOP_PREFIX/[^.].*.jar,
      $HADOOP_PREFIX/lib/[^.].*.jar,
    </value>
    <description>Classpaths that accumulo checks for updates and class files.
      When using the Security Manager, please remove the ".../target/classes/"
      values.</description>
  </property>
</configuration>

On Sun, Mar 16, 2014 at 9:06 PM, Josh Elser <josh.elser@gmail.com> wrote:
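[Given the instance.zookeeper.host quorum in the config above, one more low-level check that fits these symptoms: ask each ZooKeeper server whether it is actually serving, using ZooKeeper's four-letter "ruok" command (a healthy server answers "imok"). A sketch using bash's /dev/tcp; the hostnames are from the config:]

```shell
# Send "ruok" to each ZooKeeper server in the quorum; a healthy
# server replies "imok", anything else (or silence) is a problem.
for host in hadoop-node-1 hadoop-node-2 hadoop-node-3 hadoop-node-4 hadoop-node-5; do
  resp=$(timeout 2 bash -c "exec 3<>/dev/tcp/$host/2181; echo ruok >&3; cat <&3" 2>/dev/null)
  echo "$host: ${resp:-no response}"
done
```

[Only a majority of the quorum needs to answer for ZooKeeper to serve, but in a 5-node setup like this all five should.]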
Posting your accumulo-site.xml (filtering out instance.secret and trace.password before you post) would also help us figure out what exactly is going on.
On 3/16/14, 8:41 PM, Mike Drob wrote:
Which version of Accumulo are you using?
You might be missing the hadoop libraries from your classpath. For this,
you would check your accumulo-site.xml and find the comment about Hadoop
2 in the file.
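[Following Mike's point, the Hadoop-2 jars reach Accumulo through the general.classpaths entries in accumulo-site.xml. A quick existence check for the Hadoop-2 style layout those entries expect; that HADOOP_PREFIX is the relevant root is an assumption from the stock config:]

```shell
# Check that each Hadoop-2 jar directory referenced by
# general.classpaths actually contains jars; an empty directory means
# that classpath entry matches nothing.
for d in common common/lib hdfs mapreduce yarn; do
  dir="$HADOOP_PREFIX/share/hadoop/$d"
  if ls "$dir"/*.jar >/dev/null 2>&1; then
    echo "$dir: jars found"
  else
    echo "$dir: NO JARS (classpath entry will match nothing)"
  fi
done
```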
On Sun, Mar 16, 2014 at 8:28 PM, Benjamin Parrish <benjamin.d.parrish@gmail.com> wrote:
I have a couple of issues when trying to use Accumulo on Hadoop 2.2.0
1) I start with accumulo init and everything runs through just fine,
but I can't find '/accumulo' using 'hadoop fs -ls /'
2) I try to run 'accumulo shell -u root' and it says that
Hadoop and ZooKeeper are not started, but if I run 'jps' on each
cluster node it shows all the necessary processes for both in the
JVM. Is there something I am missing?
--
Benjamin D. Parrish
H: 540-597-7860