accumulo-user mailing list archives

From Josh Elser <josh.el...@gmail.com>
Subject Re: Tablet server and tracer could not get started since they cannot see initialized accumulo instance
Date Wed, 19 Nov 2014 18:37:50 GMT
Sure thing. This showed me what I thought might be the case.

In your accumulo-env.sh, the `test -z ... && export` line is (likely) 
setting an incorrect HADOOP_CONF_DIR. Note that you have the Hadoop-2 
HADOOP_CONF_DIR location uncommented and the Hadoop-1 location commented 
out, while the example configuration files for 1.6.1 assume you're 
running a Hadoop-2 release.

I'm guessing that you have the location properly configured in your 
local environment, which is why `accumulo init` found HDFS correctly. 
When start-all.sh SSH's to the other nodes to start them, 
HADOOP_CONF_DIR is not present in that environment, so accumulo-env.sh 
sets the wrong location; the server processes then can't find HDFS and 
fall back to the local filesystem.
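
To make that failure mode concrete, here's a minimal sketch (paths taken 
from the accumulo-env.sh you posted; the `unset` stands in for the bare 
environment a non-interactive SSH command typically gets -- this is a 
local simulation, not the real remote node):

```shell
# Simulate the worker node: HADOOP_CONF_DIR arrives unset over SSH,
# so the test -z guard in accumulo-env.sh exports the Hadoop-2 default,
# a directory that doesn't exist on a Hadoop 1.2.1 install.
unset HADOOP_CONF_DIR
HADOOP_PREFIX=/opt/hadoop-1.2.1   # value from the posted accumulo-env.sh
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
echo "$HADOOP_CONF_DIR"   # ends up pointing at the (nonexistent) Hadoop-2 layout
```

With no usable core-site.xml on the classpath, the servers default to 
the local filesystem, which matches the file:/accumulo/instance_id 
errors below.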

Try changing accumulo-env.sh on all the nodes in your cluster and then 
re-run start-all.sh.
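
For a Hadoop 1.2.1 install, that stanza would look something like this 
(a sketch based on the file you posted, just swapping which line is 
commented out; the `unset` is only there so the snippet stands alone):

```shell
unset HADOOP_CONF_DIR                 # only for this standalone illustration
HADOOP_PREFIX=/opt/hadoop-1.2.1       # set earlier in accumulo-env.sh
# hadoop-1.2 (active: Hadoop 1.x keeps its config in $HADOOP_PREFIX/conf):
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
# hadoop-2.0 (commented out):
# test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
echo "$HADOOP_CONF_DIR"
```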

Salih Kardan wrote:
> Hi Josh,
>
> Thanks for the prompt response; it is great for Accumulo users to
> have such a responsive community.
>
> I have reproduced the problem and collected the logs you asked for.
>
> Here is the output of the "accumulo init" command:
>
>     2014-11-19 20:14:30,583 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
>     2014-11-19 20:14:30,584 [init.Initialize] INFO : Hadoop Filesystem is hdfs://hadoop35:8020
>     2014-11-19 20:14:30,585 [init.Initialize] INFO : Accumulo data dirs are [hdfs://hadoop35:8020/accumulo]
>     2014-11-19 20:14:30,585 [init.Initialize] INFO : Zookeeper server is hadoop33:2181
>     2014-11-19 20:14:30,585 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
>
>     Warning!!! Your instance secret is still set to the default, this is not secure. We highly recommend you change it.
>
>     You can change the instance secret in accumulo by using:
>         bin/accumulo org.apache.accumulo.server.util.ChangeSecret oldPassword newPassword.
>     You will also need to edit your secret in your configuration file by adding the property instance.secret to your conf/accumulo-site.xml.
>     Without this accumulo will not operate correctly
>     Instance name : test
>     Instance name "test" exists. Delete existing entry from zookeeper? [Y/N] : Y
>     Enter initial password for root (this may not be applicable for your security setup): ****
>     Confirm initial password for root: ****
>     2014-11-19 20:14:38,774 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on system reset or power loss
>     2014-11-19 20:14:41,128 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>     2014-11-19 20:14:41,214 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthorizor
>     2014-11-19 20:14:41,216 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKAuthenticator
>     2014-11-19 20:14:41,217 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.security.handler.ZKPermHandler
>
>
>
> Also here is the accumulo-env.sh:
>
>
>     #! /usr/bin/env bash
>
>     if [ -z "$HADOOP_HOME" ]
>     then
>         test -z "$HADOOP_PREFIX" && export HADOOP_PREFIX=/opt/hadoop-1.2.1
>     else
>         HADOOP_PREFIX="$HADOOP_HOME"
>         unset HADOOP_HOME
>     fi
>
>     # hadoop-1.2:
>     # test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
>     # hadoop-2.0:
>     test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
>
>     test -z "$JAVA_HOME" && export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
>     test -z "$ZOOKEEPER_HOME" && export ZOOKEEPER_HOME=/opt/zookeeper-3.4.5
>     test -z "$ACCUMULO_LOG_DIR" && export ACCUMULO_LOG_DIR=/opt/accumulo-1.6.1/log
>     if [ -f ${ACCUMULO_CONF_DIR}/accumulo.policy ]
>     then
>         POLICY="-Djava.security.manager -Djava.security.policy=${ACCUMULO_CONF_DIR}/accumulo.policy"
>     fi
>     test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx128m -Xms128m"
>     test -z "$ACCUMULO_MASTER_OPTS" && export ACCUMULO_MASTER_OPTS="${POLICY} -Xmx128m -Xms128m"
>     test -z "$ACCUMULO_MONITOR_OPTS" && export ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx64m -Xms64m"
>     test -z "$ACCUMULO_GC_OPTS" && export ACCUMULO_GC_OPTS="-Xmx64m -Xms64m"
>     test -z "$ACCUMULO_GENERAL_OPTS" && export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true"
>     test -z "$ACCUMULO_OTHER_OPTS" && export ACCUMULO_OTHER_OPTS="-Xmx128m -Xms64m"
>     # what to do when the JVM runs out of heap memory
>     export ACCUMULO_KILL_CMD='kill -9 %p'
>
>     ### Optionally look for hadoop and accumulo native libraries for your
>     ### platform in additional directories. (Use DYLD_LIBRARY_PATH on Mac OS X.)
>     ### May not be necessary for Hadoop 2.x or using an RPM that installs to
>     ### the correct system library directory.
>     # export LD_LIBRARY_PATH=${HADOOP_PREFIX}/lib/native/${PLATFORM}:${LD_LIBRARY_PATH}
>
>     # Should the monitor bind to all network interfaces -- default: false
>     # export ACCUMULO_MONITOR_BIND_ALL="true"
>
>
>
> And finally, the output of the "accumulo classpath" command:
>
>     Level 1: Java System Classloader (loads Java system resources) URL classpath items are:
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/sunpkcs11.jar
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/localedata.jar
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/sunjce_provider.jar
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/java-atk-wrapper.jar
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/pulse-java.jar
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/zipfs.jar
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/dnsns.jar
>     file:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/libatk-wrapper.so
>
>     Level 2: Java Classloader (loads everything defined by java classpath) URL classpath items are:
>     file:/opt/accumulo-1.6.1/conf/
>     file:/opt/accumulo-1.6.1/lib/accumulo-start.jar
>     file:/opt/hadoop-1.2.1/lib/log4j-1.2.15.jar
>
>     Level 3: Accumulo Classloader (loads everything defined by general.classpaths) URL classpath items are:
>     file:/opt/accumulo-1.6.1/lib/accumulo-core.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-start.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-fate.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-proxy.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-test.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-monitor.jar
>     file:/opt/accumulo-1.6.1/lib/jcommander.jar
>     file:/opt/accumulo-1.6.1/lib/jetty-server.jar
>     file:/opt/accumulo-1.6.1/lib/slf4j-log4j12.jar
>     file:/opt/accumulo-1.6.1/lib/jetty-util.jar
>     file:/opt/accumulo-1.6.1/lib/jetty-servlet.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-server-base.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-minicluster.jar
>     file:/opt/accumulo-1.6.1/lib/javax.servlet-api.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-gc.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-fate.jar
>     file:/opt/accumulo-1.6.1/lib/commons-math.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-core.jar
>     file:/opt/accumulo-1.6.1/lib/jline.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-trace.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-start.jar
>     file:/opt/accumulo-1.6.1/lib/gson.jar
>     file:/opt/accumulo-1.6.1/lib/jetty-http.jar
>     file:/opt/accumulo-1.6.1/lib/commons-vfs2.jar
>     file:/opt/accumulo-1.6.1/lib/libthrift.jar
>     file:/opt/accumulo-1.6.1/lib/jetty-security.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-tserver.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-tracer.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-examples-simple.jar
>     file:/opt/accumulo-1.6.1/lib/slf4j-api.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-master.jar
>     file:/opt/accumulo-1.6.1/lib/guava.jar
>     file:/opt/accumulo-1.6.1/lib/jetty-io.jar
>     file:/opt/accumulo-1.6.1/lib/jetty-continuation.jar
>     file:/opt/accumulo-1.6.1/lib/accumulo-proxy.jar
>     file:/opt/zookeeper-3.4.5/zookeeper-3.4.5.jar
>     file:/etc/hadoop/
>     file:/opt/hadoop-1.2.1/hadoop-test-1.2.1.jar
>     file:/opt/hadoop-1.2.1/hadoop-ant-1.2.1.jar
>     file:/opt/hadoop-1.2.1/hadoop-core-1.2.1.jar
>     file:/opt/hadoop-1.2.1/hadoop-minicluster-1.2.1.jar
>     file:/opt/hadoop-1.2.1/hadoop-tools-1.2.1.jar
>     file:/opt/hadoop-1.2.1/hadoop-examples-1.2.1.jar
>     file:/opt/hadoop-1.2.1/hadoop-client-1.2.1.jar
>     file:/opt/hadoop-1.2.1/lib/hadoop-fairscheduler-1.2.1.jar
>     file:/opt/hadoop-1.2.1/lib/commons-collections-3.2.1.jar
>     file:/opt/hadoop-1.2.1/lib/hadoop-thriftfs-1.2.1.jar
>     file:/opt/hadoop-1.2.1/lib/commons-configuration-1.6.jar
>     file:/opt/hadoop-1.2.1/lib/commons-logging-api-1.0.4.jar
>     file:/opt/hadoop-1.2.1/lib/commons-httpclient-3.0.1.jar
>     file:/opt/hadoop-1.2.1/lib/oro-2.0.8.jar
>     file:/opt/hadoop-1.2.1/lib/log4j-1.2.15.jar
>     file:/opt/hadoop-1.2.1/lib/commons-io-2.1.jar
>     file:/opt/hadoop-1.2.1/lib/commons-net-3.1.jar
>     file:/opt/hadoop-1.2.1/lib/xmlenc-0.52.jar
>     file:/opt/hadoop-1.2.1/lib/commons-el-1.0.jar
>     file:/opt/hadoop-1.2.1/lib/aspectjtools-1.6.11.jar
>     file:/opt/hadoop-1.2.1/lib/jdeb-0.8.jar
>     file:/opt/hadoop-1.2.1/lib/hsqldb-1.8.0.10.jar
>     file:/opt/hadoop-1.2.1/lib/jersey-core-1.8.jar
>     file:/opt/hadoop-1.2.1/lib/commons-logging-1.1.1.jar
>     file:/opt/hadoop-1.2.1/lib/commons-beanutils-core-1.8.0.jar
>     file:/opt/hadoop-1.2.1/lib/jsch-0.1.42.jar
>     file:/opt/hadoop-1.2.1/lib/commons-math-2.1.jar
>     file:/opt/hadoop-1.2.1/lib/commons-daemon-1.0.1.jar
>     file:/opt/hadoop-1.2.1/lib/kfs-0.2.2.jar
>     file:/opt/hadoop-1.2.1/lib/core-3.1.1.jar
>     file:/opt/hadoop-1.2.1/lib/jersey-json-1.8.jar
>     file:/opt/hadoop-1.2.1/lib/commons-lang-2.4.jar
>     file:/opt/hadoop-1.2.1/lib/aspectjrt-1.6.11.jar
>     file:/opt/hadoop-1.2.1/lib/jasper-runtime-5.5.12.jar
>     file:/opt/hadoop-1.2.1/lib/jetty-6.1.26.jar
>     file:/opt/hadoop-1.2.1/lib/servlet-api-2.5-20081211.jar
>     file:/opt/hadoop-1.2.1/lib/jackson-mapper-asl-1.8.8.jar
>     file:/opt/hadoop-1.2.1/lib/commons-digester-1.8.jar
>     file:/opt/hadoop-1.2.1/lib/hadoop-capacity-scheduler-1.2.1.jar
>     file:/opt/hadoop-1.2.1/lib/asm-3.2.jar
>     file:/opt/hadoop-1.2.1/lib/jersey-server-1.8.jar
>     file:/opt/hadoop-1.2.1/lib/jackson-core-asl-1.8.8.jar
>     file:/opt/hadoop-1.2.1/lib/commons-beanutils-1.7.0.jar
>     file:/opt/hadoop-1.2.1/lib/jasper-compiler-5.5.12.jar
>     file:/opt/hadoop-1.2.1/lib/jetty-util-6.1.26.jar
>     file:/opt/hadoop-1.2.1/lib/commons-cli-1.2.jar
>     file:/opt/hadoop-1.2.1/lib/commons-codec-1.4.jar
>     file:/opt/hadoop-1.2.1/lib/jets3t-0.6.1.jar
>     file:/opt/hadoop-1.2.1/lib/junit-4.5.jar
>     file:/opt/hadoop-1.2.1/lib/mockito-all-1.8.5.jar
>
>     Level 4: Accumulo Dynamic Classloader (loads everything defined by general.dynamic.classpaths) VFS classpaths items are:
>
>
> Hope this helps you understand what is going on in my setup.
> Thanks,
> Salih
>
> Salih Kardan
>
> On Wed, Nov 19, 2014 at 5:34 PM, Josh Elser <josh.elser@gmail.com> wrote:
>
>     Hi Salih,
>
>     It looks like the Accumulo processes are trying to communicate with
>     the local filesystem instead of HDFS. Do you still have the output
>     from `accumulo init`?
>
>     Also, is it possible to share your accumulo-env.sh,
>     accumulo-site.xml and the output from `accumulo classpath`?
>
>     Thanks.
>
>     Salih Kardan wrote:
>
>         Hello everyone,
>
>         Sorry, in the previous mail I forgot to fill out the subject
>         line; please ignore that mail.
>
>         Currently I am testing Accumulo 1.6.1 with Hadoop 1.2.1 and
>         Zookeeper 3.4.5 on a 4-node Hadoop cluster. I gave a role to
>         each node in the cluster; assume my Hadoop nodes are named
>         hadoop1, hadoop2, hadoop3 and hadoop4.
>
>         The Accumulo roles are distributed across the Hadoop nodes:
>         hadoop1 = accumulo master (a single zookeeper instance is also
>         running on this node)
>         hadoop2 = gc
>         hadoop3 = monitor
>         hadoop4 = tablet server + tracer
>
>         After I initialized Accumulo with the "accumulo init" command,
>         I invoked the start-all.sh script from the master machine. All
>         services except the tracer and the tablet server started.
>         However, on the tablet server machine I see these logs:
>
>         2014-11-19 11:20:32,327 [zookeeper.ZooUtil] ERROR: unable obtain instance id at file:/accumulo/instance_id
>         2014-11-19 11:20:32,328 [tserver.TabletServer] ERROR: Uncaught exception in TabletServer.main, exiting
>         java.lang.RuntimeException: Accumulo not initialized, there is no instance id at file:/accumulo/instance_id
>               at org.apache.accumulo.core.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:62)
>               at org.apache.accumulo.server.client.HdfsZooInstance._getInstanceID(HdfsZooInstance.java:132)
>               at org.apache.accumulo.server.client.HdfsZooInstance.getInstanceID(HdfsZooInstance.java:116)
>               at org.apache.accumulo.server.conf.ServerConfigurationFactory.<init>(ServerConfigurationFactory.java:113)
>               at org.apache.accumulo.server.conf.ServerConfiguration.<init>(ServerConfiguration.java:79)
>               at org.apache.accumulo.tserver.TabletServer.main(TabletServer.java:3668)
>               at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>               at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>               at java.lang.reflect.Method.invoke(Method.java:606)
>               at org.apache.accumulo.start.Main$1.run(Main.java:141)
>               at java.lang.Thread.run(Thread.java:745)
>         And the logs I see in the tracer log file:
>
>         Thread "tracer" died Accumulo not initialized, there is no instance id at file:/accumulo/instance_id
>         java.lang.RuntimeException: Accumulo not initialized, there is no instance id at file:/accumulo/instance_id
>               at org.apache.accumulo.core.zookeeper.ZooUtil.getInstanceIDFromHdfs(ZooUtil.java:62)
>               at org.apache.accumulo.server.client.HdfsZooInstance._getInstanceID(HdfsZooInstance.java:132)
>               at org.apache.accumulo.server.client.HdfsZooInstance.getInstanceID(HdfsZooInstance.java:116)
>               at org.apache.accumulo.server.conf.ServerConfigurationFactory.<init>(ServerConfigurationFactory.java:113)
>               at org.apache.accumulo.server.conf.ServerConfiguration.<init>(ServerConfiguration.java:79)
>               at org.apache.accumulo.tracer.TraceServer.main(TraceServer.java:290)
>               at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>               at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>               at java.lang.reflect.Method.invoke(Method.java:606)
>               at org.apache.accumulo.start.Main$1.run(Main.java:141)
>               at java.lang.Thread.run(Thread.java:745)
>
>
>         I checked HDFS and it seems Accumulo is initialized. Here is
>         the output of the "hadoop dfs -ls /accumulo/instance_id" command:
>
>         Found 1 items
>         -rw-r--r--   1 root supergroup          0 2014-11-19 11:19 /accumulo/instance_id/268acc40-e20b-4a35-8d8a-0e46e7859a0d
>
>         I googled the problem; some comments stated that it may occur
>         due to missing Hadoop libs in the classpath, but I checked the
>         classpath with the "accumulo classpath" command and it also
>         seems correct: both the Hadoop and ZooKeeper libs are listed.
>
>
>         Then I tried a single-node Accumulo installation; that way all
>         services, including the tablet server, seem to work.
>         What could the problem be when I use multiple nodes? Any help
>         is appreciated.
>
>         Thanks
>
>
