accumulo-user mailing list archives

From "Ott, Charlie H." <CHARLES.H....@leidos.com>
Subject RE: Installing with Hadoop 2.2.0
Date Wed, 19 Mar 2014 13:59:28 GMT
Benjamin,

It may be better to step back for a second and make sure you have the Hadoop environment set
up correctly.  You are very close, but it seems there is just an issue with the Accumulo
classpath or your environment variables.

In regard to ensuring ZooKeeper is working, you can use the command-line interface script
"zkCli.sh" to connect.  However, you need to specify which server ZooKeeper is running on.

# zkCli.sh -server <yourZooServer>

My setup is a bit different, but if you connect you should be able to run ls / and see which
znodes exist in ZooKeeper.  If accumulo init worked correctly, you should see something like
this when you connect and run ls /accumulo:


[root@1620-Megatron bin]# ./zookeeper-client -server 1620-Megatron
Connecting to 1620-Megatron
2014-03-19 09:50:57,286 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.5-cdh4.5.0--1, built on 11/20/2013 22:29 GMT
2014-03-19 09:50:57,289 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=1620-Megatron
2014-03-19 09:50:57,290 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.7.0_45
2014-03-19 09:50:57,290 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2014-03-19 09:50:57,291 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.7.0_45/jre
2014-03-19 09:50:57,291 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../build/classes:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../build/lib/*.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../lib/netty-3.2.2.Final.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../lib/log4j-1.2.15.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../lib/jline-0.9.94.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../zookeeper-3.4.5-cdh4.5.0.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/zookeeper/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/etc/zookeeper/conf:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/zookeeper/zookeeper-3.4.5-cdh4.5.0.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/zookeeper/zookeeper.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/zookeeper/lib/log4j-1.2.15.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/zookeeper/lib/jline-0.9.94.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/zookeeper/lib/slf4j-api-1.6.1.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/zookeeper/lib/netty-3.2.2.Final.jar:/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin/../lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar
2014-03-19 09:50:57,292 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2014-03-19 09:50:57,292 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2014-03-19 09:50:57,293 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2014-03-19 09:50:57,293 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2014-03-19 09:50:57,294 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2014-03-19 09:50:57,294 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64
2014-03-19 09:50:57,295 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2014-03-19 09:50:57,295 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2014-03-19 09:50:57,295 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/bin
2014-03-19 09:50:57,297 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=1620-Megatron sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@40e51e67
Welcome to ZooKeeper!
2014-03-19 09:50:57,319 [myid:] - INFO  [main-SendThread(1620-Megatron:2181):ClientCnxn$SendThread@966] - Opening socket connection to server 1620-Megatron/10.35.56.87:2181. Will not attempt to authenticate using SASL (unknown error)
2014-03-19 09:50:57,324 [myid:] - INFO  [main-SendThread(1620-Megatron:2181):ClientCnxn$SendThread@849] - Socket connection established to 1620-Megatron/10.35.56.87:2181, initiating session
JLine support is enabled
[zk: 1620-Megatron(CONNECTING) 0] 2014-03-19 09:50:57,360 [myid:] - INFO  [main-SendThread(1620-Megatron:2181):ClientCnxn$SendThread@1207] - Session establishment complete on server 1620-Megatron/10.35.56.87:2181, sessionid = 0x244d57a1c511db9, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: 1620-Megatron(CONNECTED) 0] ls /
[accumulo, zookeeper]
[zk: 1620-Megatron(CONNECTED) 1] ls /accumulo
[a85286bf-031c-4e24-9b47-f6aca34401b8, a531e027-6154-47ec-8b0e-eebf20a9a902, instances, ec960eed-e05f-448c-b417-74620a242764, 1b491f7e-d2a2-4353-a691-fdcab06592bd, 675bbabc-0e0d-4ae1-9de0-0dba3cd4d1f0]

If you are able to see that there is an /accumulo node, then your accumulo init probably
worked fine.  Next, make sure accumulo-env.sh defines the correct paths to the Hadoop
settings.  Remember that the slaves running tablet servers must be set up the same way, and
ensure passwordless SSH is working for the account you use to run start-all.sh.

Hope this helps.

From: user-return-3911-CHARLES.H.OTT=leidos.com@accumulo.apache.org
On Behalf Of Benjamin Parrish
Sent: Wednesday, March 19, 2014 9:28 AM
To: user@accumulo.apache.org
Subject: Re: Installing with Hadoop 2.2.0

So, I am back to no clue now...

On Wed, Mar 19, 2014 at 9:13 AM, Josh Elser <josh.elser@gmail.com> wrote:

I think by default zkCli.sh will just try to connect to localhost. You can change this by
providing the quorum string to the script with the -server option.
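
For example, the quorum string can be assembled from the server.N lines of zoo.cfg. This is a sketch, not part of the original thread: the hostnames below match the zoo.cfg quoted later in this message, the client port is assumed to be 2181, and a temporary copy of the config is used for illustration.

```shell
# Write a sample zoo.cfg fragment (hostnames as in the config quoted in this thread)
cat > /tmp/zoo.cfg <<'EOF'
clientPort=2181
server.1=hadoop-node-1:2888:3888
server.2=hadoop-node-2:2888:3888
EOF

# Turn the server.N entries into a comma-separated host:clientPort quorum string
QUORUM=$(awk -F'[=:]' '/^server\./ {printf "%s%s:2181", sep, $2; sep=","}' /tmp/zoo.cfg)
echo "$QUORUM"   # hadoop-node-1:2181,hadoop-node-2:2181

# Then connect with:
# zkCli.sh -server "$QUORUM"
```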
On Mar 19, 2014 8:29 AM, "Benjamin Parrish" <benjamin.d.parrish@gmail.com> wrote:
I adjusted accumulo-env.sh to have hard coded values as seen below.

Are there any logs that could shed some light on this issue?

If it also helps, I am using CentOS 6.5, Hadoop 2.2.0, and ZooKeeper 3.4.6.

I also ran across this, which didn't look right:

Welcome to ZooKeeper!
2014-03-19 08:25:53,479 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-03-19 08:25:53,483 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@852] - Socket connection established to localhost/127.0.0.1:2181, initiating session
JLine support is enabled
[zk: localhost:2181(CONNECTING) 0] 2014-03-19 08:25:53,523 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x144da4e00d90000, negotiated timeout = 30000

Should ZooKeeper be trying to hit localhost/127.0.0.1?

My zoo.cfg looks like this:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
clientPort=2181
server.1=hadoop-node-1:2888:3888
server.2=hadoop-node-2:2888:3888
server.3=hadoop-node-3:2888:3888
server.4=hadoop-node-4:2888:3888
server.5=hadoop-node-5:2888:3888
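
One thing worth checking with a multi-server zoo.cfg like this: each server also needs a myid file in its dataDir whose contents match its server.N number, or the ensemble won't form. A sketch, using a stand-in directory in place of the dataDir above:

```shell
# Stand-in for dataDir=/usr/local/zookeeper/data from the zoo.cfg above
ZK_DATA_DIR=/tmp/zookeeper-data-example
mkdir -p "$ZK_DATA_DIR"

# On hadoop-node-2 (server.2 in zoo.cfg), myid must contain just "2"
echo 2 > "$ZK_DATA_DIR/myid"
cat "$ZK_DATA_DIR/myid"   # 2
```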

#! /usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

###
### Configure these environment variables to point to your local installations.
###
### The functional tests require conditional values, so keep this style:
###
### test -z "$JAVA_HOME" && export JAVA_HOME=/usr/local/lib/jdk-1.6.0
###
###
### Note that the -Xmx -Xms settings below require substantial free memory:
### you may want to use smaller values, especially when running everything
### on a single machine.
###
if [ -z "$HADOOP_HOME" ]
then
   test -z "$HADOOP_PREFIX"      && export HADOOP_PREFIX=/usr/local/hadoop
else
   HADOOP_PREFIX="$HADOOP_HOME"
   unset HADOOP_HOME
fi
# test -z "$HADOOP_CONF_DIR"       && export HADOOP_CONF_DIR="/usr/local/hadoop/conf"
# hadoop-2.0:
test -z "$HADOOP_CONF_DIR"     && export HADOOP_CONF_DIR="/usr/local/hadoop/etc/hadoop"

test -z "$JAVA_HOME"             && export JAVA_HOME=/usr/lib/jvm/jdk1.7.0
test -z "$ZOOKEEPER_HOME"        && export ZOOKEEPER_HOME=/usr/local/zookeeper
test -z "$ACCUMULO_LOG_DIR"      && export ACCUMULO_LOG_DIR=/usr/local/accumulo/logs
if [ -f /usr/local/accumulo/conf/accumulo.policy ]
then
   POLICY="-Djava.security.manager -Djava.security.policy=/usr/local/accumulo/conf/accumulo.policy"
fi
test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx1g -Xms1g -XX:NewSize=500m -XX:MaxNewSize=500m "
test -z "$ACCUMULO_MASTER_OPTS"  && export ACCUMULO_MASTER_OPTS="${POLICY} -Xmx1g -Xms1g"
test -z "$ACCUMULO_MONITOR_OPTS" && export ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx1g -Xms256m"
test -z "$ACCUMULO_GC_OPTS"      && export ACCUMULO_GC_OPTS="-Xmx256m -Xms256m"
test -z "$ACCUMULO_GENERAL_OPTS" && export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true"
test -z "$ACCUMULO_OTHER_OPTS"   && export ACCUMULO_OTHER_OPTS="-Xmx1g -Xms256m"
# what to do when the JVM runs out of heap memory
export ACCUMULO_KILL_CMD='kill -9 %p'

# Should the monitor bind to all network interfaces -- default: false
# export ACCUMULO_MONITOR_BIND_ALL="true"
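
A side note on the test -z "$VAR" && export VAR=... lines in this file: they only apply the default when the variable is unset or empty, so anything already exported in the shell profile wins. A minimal sketch of that behavior (the /opt/hadoop path is just an illustrative value, not from this thread):

```shell
# Default applied when the variable is empty or unset
unset HADOOP_PREFIX
test -z "$HADOOP_PREFIX" && export HADOOP_PREFIX=/usr/local/hadoop
echo "$HADOOP_PREFIX"   # /usr/local/hadoop

# An existing value is left alone
export HADOOP_PREFIX=/opt/hadoop
test -z "$HADOOP_PREFIX" && export HADOOP_PREFIX=/usr/local/hadoop
echo "$HADOOP_PREFIX"   # /opt/hadoop
```

This is why a wrong value exported in ~/.bash_profile can override what accumulo-env.sh tries to set.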

On Tue, Mar 18, 2014 at 8:58 PM, Sean Busbey <busbey+lists@cloudera.com> wrote:

On Mar 18, 2014 7:51 PM, "Benjamin Parrish" <benjamin.d.parrish@gmail.com> wrote:
>
> HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop is set in all ~/.bash_profile files as needed.
>
>

Can you add to the gist the output of running

$> find $HADOOP_CONF_DIR

as the user who runs the tablet server, on the same host where you ran the classpath command?

-Sean



--
Benjamin D. Parrish
H: 540-597-7860



