incubator-accumulo-user mailing list archives

From Clint Green <clintonrgr...@gmail.com>
Subject Re: Request Apache Accumulo error help using Hadoop and Zookeeper in a test environment
Date Wed, 21 Dec 2011 02:27:24 GMT
Do you have Accumulo masters and slaves files pointing to localhost?
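A quick sketch of that check (the conf path is an assumption; on the box in this thread it would be /home/hadoop/accumulo/conf). For a single-host setup, conf/masters and conf/slaves should each contain just "localhost" so bin/start-all.sh launches everything locally:

```shell
# Sketch only -- ACCUMULO_CONF_DIR is an assumption; point it at your
# real conf directory (it falls back to a temp dir for illustration).
CONF_DIR=${ACCUMULO_CONF_DIR:-$(mktemp -d)}
# Single-host setup: master and slave roles all run on localhost.
echo localhost > "$CONF_DIR/masters"
echo localhost > "$CONF_DIR/slaves"
# Verify both files name only localhost.
cat "$CONF_DIR/masters" "$CONF_DIR/slaves"
```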
On Dec 20, 2011 8:22 PM, "Eric Newton" <eric.newton@gmail.com> wrote:

> The client can't find servers in zookeeper... clients are trying to talk
> to the tablet servers, so check the tablet server logs for errors:
>
> $ tail -n 100 -f /home/hadoop/accumulo/logs/tserver*.log
>
> This is a single-host set-up, right?
>
> Make sure /home/hadoop/walogs exists.
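
(A sketch of that check -- the path comes from logger.dir.walog in the accumulo-site.xml below; the temp-dir stand-in is just so the snippet runs anywhere:)

```shell
# Sketch: the directory named by logger.dir.walog must exist on the
# local filesystem before the tablet servers start.  Demonstrated on a
# temp dir; on the real host it would simply be:
#   mkdir -p /home/hadoop/walogs
WALOG_DIR=$(mktemp -d)/walogs   # stand-in for /home/hadoop/walogs
mkdir -p "$WALOG_DIR"
# Confirm it exists and is writable by the user running the tservers.
ls -ld "$WALOG_DIR"
```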
>
> -Eric
>
> On Tue, Dec 20, 2011 at 7:36 PM, Rob Burkhard <rob@robburkhard.com> wrote:
>
>>  I am setting up a test environment using Apache Accumulo 1.4.0,
>>  Hadoop 0.20.2 and ZooKeeper 3.3.3. Hadoop and ZooKeeper work
>>  great together, but when I start the Accumulo shell using the
>>  procedures on Apache Incubator, Accumulo complains:
>>
>>
>>  18 12:44:38,746 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>  18 12:44:38,846 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>  18 12:44:38,947 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>  18 12:44:39,048 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>  18 12:44:39,148 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>  18 12:44:39,249 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>  18 12:44:39,350 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>  18 12:44:39,450 [impl.ServerClient] WARN : Failed to find an available
>>  server in the list of servers: []
>>
>>
>>  I have followed the instructions on accumulo-incubator carefully and
>>  built this several times on CentOS and Ubuntu with the same results.
>>
>>  I have also manipulated the memory settings with no change in
>>  performance. Please see my Accumulo, Hadoop and Zoo configuration files
>>  below.
>>  Any help would be appreciated. This is driving me crazy :)
>>
>>
>>
>>  cat zoo/conf/zoo.cfg
>>  # The number of milliseconds of each tick
>>  tickTime=2000
>>  # The number of ticks that the initial
>>  # synchronization phase can take
>>  initLimit=10
>>  # The number of ticks that can pass between
>>  # sending a request and getting an acknowledgement
>>  syncLimit=5
>>  # the directory where the snapshot is stored.
>>  dataDir=/home/hadoop/zoo/dataDir
>>  # the port at which the clients will connect
>>  clientPort=2181
>>  maxClientCnxns=100
>>
>>
>>  cat hadoop/conf/core-site.xml
>>  <?xml version="1.0"?>
>>  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>>  <!-- Put site-specific property overrides in this file. -->
>>
>>  <configuration>
>>  <property>
>>  <name>fs.default.name</name>
>>  <value>hdfs://localhost:9000</value>
>>  </property>
>>  </configuration>
>>
>>
>>
>>  cat hadoop/conf/hadoop-env.sh
>>  # Set Hadoop-specific environment variables here.
>>
>>  # The only required environment variable is JAVA_HOME. All others are
>>  # optional. When running a distributed configuration it is best to
>>  # set JAVA_HOME in this file, so that it is correctly defined on
>>  # remote nodes.
>>
>>  # The java implementation to use. Required.
>>  export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
>>
>>  # Extra Java CLASSPATH elements. Optional.
>>  # export HADOOP_CLASSPATH=
>>
>>  # The maximum amount of heap to use, in MB. Default is 1000.
>>  # export HADOOP_HEAPSIZE=2000
>>
>>  # Extra Java runtime options. Empty by default.
>>  # export HADOOP_OPTS=-server
>>
>>  # Command specific options appended to HADOOP_OPTS when specified
>>  export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
>>  export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
>>  export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
>>  export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
>>  export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
>>  # export HADOOP_TASKTRACKER_OPTS=
>>  # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
>>  # export HADOOP_CLIENT_OPTS
>>
>>  # Extra ssh options. Empty by default.
>>  # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
>>
>>  # Where log files are stored. $HADOOP_HOME/logs by default.
>>  # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
>>
>>  # File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
>>  # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
>>
>>  # host:path where hadoop code should be rsync'd from. Unset by default.
>>  # export HADOOP_MASTER=master:/home/$USER/src/hadoop
>>
>>  # Seconds to sleep between slave commands. Unset by default. This
>>  # can be useful in large clusters, where, e.g., slave rsyncs can
>>  # otherwise arrive faster than the master can service them.
>>  # export HADOOP_SLAVE_SLEEP=0.1
>>
>>  # The directory where pid files are stored. /tmp by default.
>>  # export HADOOP_PID_DIR=/var/hadoop/pids
>>
>>  # A string representing this instance of hadoop. $USER by default.
>>  # export HADOOP_IDENT_STRING=$USER
>>
>>  # The scheduling priority for daemon processes. See 'man nice'.
>>  # export HADOOP_NICENESS=10
>>
>>
>>  accumulo-env.sh
>>  #! /usr/bin/env bash
>>
>>  # Licensed to the Apache Software Foundation (ASF) under one or more
>>  # contributor license agreements. See the NOTICE file distributed with
>>  # this work for additional information regarding copyright ownership.
>>  # The ASF licenses this file to You under the Apache License, Version 2.0
>>  # (the "License"); you may not use this file except in compliance with
>>  # the License. You may obtain a copy of the License at
>>  #
>>  # http://www.apache.org/licenses/LICENSE-2.0
>>  #
>>  # Unless required by applicable law or agreed to in writing, software
>>  # distributed under the License is distributed on an "AS IS" BASIS,
>>  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>  # See the License for the specific language governing permissions and
>>  # limitations under the License.
>>
>>  ###
>>  ### Configure these environment variables to point to your local installations.
>>  ###
>>  ### The functional tests require conditional values, so keep this style:
>>  ###
>>  ### test -z "$JAVA_HOME" && export JAVA_HOME=/usr/local/lib/jdk-1.6.0
>>  ###
>>  ###
>>  ### Note that the -Xmx -Xms settings below require substantial free memory:
>>  ### you may want to use smaller values, especially when running everything
>>  ### on a single machine.
>>  ###
>>
>>  test -z "$JAVA_HOME" && export JAVA_HOME=/usr/lib/jvm/java-6-sun
>>  test -z "$HADOOP_HOME" && export HADOOP_HOME=/home/hadoop/hadoop-0.20.2
>>  test -z "$ACCUMULO_LOG_DIR" && export ACCUMULO_LOG_DIR=/home/hadoop/accumulo/logs
>>  test -z "$ZOOKEEPER_HOME" && export ZOOKEEPER_HOME=/home/hadoop/zoo/
>>  if [ -f ${ACCUMULO_HOME}/conf/accumulo.policy ]
>>  then
>>  POLICY="-Djava.security.manager -Djava.security.policy=${ACCUMULO_HOME}/conf/accumulo.policy"
>>  fi
>>  test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx128m -Xms128m -Xss128k"
>>  test -z "$ACCUMULO_MASTER_OPTS" && export ACCUMULO_MASTER_OPTS="${POLICY} -Xmx128m -Xms128m"
>>  test -z "$ACCUMULO_MONITOR_OPTS" && export ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx128m -Xms128m"
>>  test -z "$ACCUMULO_GC_OPTS" && export ACCUMULO_GC_OPTS="-Xmx128m -Xms128m"
>>  test -z "$ACCUMULO_LOGGER_OPTS" && export ACCUMULO_LOGGER_OPTS="-Xmx128m -Xms128m"
>>  test -z "$ACCUMULO_GENERAL_OPTS" && export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75"
>>  test -z "$ACCUMULO_OTHER_OPTS" && export ACCUMULO_OTHER_OPTS="-Xmx128m -Xms128m"
>>  export ACCUMULO_LOG_HOST=`(grep -v '^#' $ACCUMULO_HOME/conf/masters ; echo localhost ) 2>/dev/null | head -1`
>>
>>  accumulo-site.xml
>>  <?xml version="1.0" encoding="UTF-8"?>
>>  <!--
>>  Licensed to the Apache Software Foundation (ASF) under one or more
>>  contributor license agreements. See the NOTICE file distributed with
>>  this work for additional information regarding copyright ownership.
>>  The ASF licenses this file to You under the Apache License, Version 2.0
>>  (the "License"); you may not use this file except in compliance with
>>  the License. You may obtain a copy of the License at
>>
>>  http://www.apache.org/licenses/LICENSE-2.0
>>
>>  Unless required by applicable law or agreed to in writing, software
>>  distributed under the License is distributed on an "AS IS" BASIS,
>>  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>  See the License for the specific language governing permissions and
>>  limitations under the License.
>>  -->
>>  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>>  <configuration>
>>  <!--
>>  Put your site-specific accumulo configurations here.
>>
>>  The available configuration values along with their defaults
>>  are documented in docs/config.html
>>
>>  Unless you are simply testing at your workstation, you will most
>>  definitely need to change the three entries below.
>>  -->
>>  <property>
>>  <name>instance.zookeeper.host</name>
>>  <value>localhost:2181</value>
>>  <description>list of zookeeper servers</description>
>>  </property>
>>  <property>
>>  <name>logger.dir.walog</name>
>>  <value>/home/hadoop/walogs</value>
>>  <description>local directory for write ahead logs</description>
>>  </property>
>>
>>  <property>
>>  <name>instance.secret</name>
>>  <value>cloud</value>
>>  <description>A secret unique to a given instance that all servers
>>  must know in order to communicate with one another.
>>  Change it before initialization. To change it later
>>  use ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret
>>  [oldpasswd] [newpasswd],
>>  and then update this file.
>>  </description>
>>  </property>
>>
>>  <property>
>>  <name>tserver.memory.maps.max</name>
>>  <value>128M</value>
>>  </property>
>>
>>  <property>
>>  <name>tserver.cache.data.size</name>
>>  <value>50M</value>
>>  </property>
>>
>>  <property>
>>  <name>tserver.cache.index.size</name>
>>  <value>128M</value>
>>  </property>
>>
>>  <property>
>>  <name>general.classpaths</name>
>>  <value>
>>  $ACCUMULO_HOME/src/server/target/classes/,
>>  $ACCUMULO_HOME/src/core/target/classes/,
>>  $ACCUMULO_HOME/src/start/target/classes/,
>>  $ACCUMULO_HOME/src/examples/target/classes/,
>>  $ACCUMULO_HOME/lib/[^.].$ACCUMULO_VERSION.jar,
>>  $ACCUMULO_HOME/lib/[^.].*.jar,
>>  $ZOOKEEPER_HOME/[^.].*.jar,
>>  $HADOOP_HOME/conf,
>>  $HADOOP_HOME/[^.].*.jar,
>>  $HADOOP_HOME/lib/[^.].*.jar,
>>  </value>
>>  <description>Classpaths that accumulo checks for updates and class files.
>>  When using the Security Manager, please remove the
>>  ".../target/classes/" values.
>>  </description>
>>  </property>
>>
>>  </configuration>
>>
>>
>
