accumulo-user mailing list archives

From Benjamin Parrish <benjamin.d.parr...@gmail.com>
Subject Re: Installing with Hadoop 2.2.0
Date Mon, 17 Mar 2014 01:52:08 GMT
I am running Accumulo 1.5.1

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <!-- Put your site-specific accumulo configurations here. The available
    configuration values, along with their defaults, are documented in
    docs/config.html. Unless you are simply testing at your workstation,
    you will most definitely need to change the three entries below. -->

  <property>
    <name>instance.zookeeper.host</name>
    <value>hadoop-node-1:2181,hadoop-node-2:2181,hadoop-node-3:2181,hadoop-node-4:2181,hadoop-node-5:2181</value>
    <description>comma separated list of zookeeper servers</description>
  </property>

  <property>
    <name>logger.dir.walog</name>
    <value>walogs</value>
    <description>The property only needs to be set if upgrading from 1.4,
      which used to store write-ahead logs on the local filesystem. In 1.5,
      write-ahead logs are stored in DFS. When 1.5 is started for the first
      time it will copy any 1.4 write-ahead logs into DFS. It is possible to
      specify a comma-separated list of directories.
    </description>
  </property>

  <property>
    <name>instance.secret</name>
    <value></value>
    <description>A secret unique to a given instance that all servers must
      know in order to communicate with one another. Change it before
      initialization. To change it later use
      ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd],
      and then update this file.
    </description>
  </property>

  <property>
    <name>tserver.memory.maps.max</name>
    <value>1G</value>
  </property>

  <property>
    <name>tserver.cache.data.size</name>
    <value>128M</value>
  </property>

  <property>
    <name>tserver.cache.index.size</name>
    <value>128M</value>
  </property>

  <property>
    <name>trace.token.property.password</name>
    <!-- change this to the root user's password, and/or change the user below -->
    <value></value>
  </property>

  <property>
    <name>trace.user</name>
    <value>root</value>
  </property>

  <property>
    <name>general.classpaths</name>
    <value>
      $HADOOP_PREFIX/share/hadoop/common/.*.jar,
      $HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,
      $HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,
      $HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,
      $HADOOP_PREFIX/share/hadoop/yarn/.*.jar,
      /usr/lib/hadoop/.*.jar,
      /usr/lib/hadoop/lib/.*.jar,
      /usr/lib/hadoop-hdfs/.*.jar,
      /usr/lib/hadoop-mapreduce/.*.jar,
      /usr/lib/hadoop-yarn/.*.jar,
      $ACCUMULO_HOME/server/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-server.jar,
      $ACCUMULO_HOME/core/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-core.jar,
      $ACCUMULO_HOME/start/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-start.jar,
      $ACCUMULO_HOME/fate/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-fate.jar,
      $ACCUMULO_HOME/proxy/target/classes/,
      $ACCUMULO_HOME/lib/accumulo-proxy.jar,
      $ACCUMULO_HOME/lib/[^.].*.jar,
      $ZOOKEEPER_HOME/zookeeper[^.].*.jar,
      $HADOOP_CONF_DIR,
      $HADOOP_PREFIX/[^.].*.jar,
      $HADOOP_PREFIX/lib/[^.].*.jar,
    </value>
    <description>Classpaths that accumulo checks for updates and class files.
      When using the Security Manager, please remove the
      ".../target/classes/" values.
    </description>
  </property>
</configuration>
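
As a side note on the instance.secret property above: its description already spells out the rotation command. Below is only a minimal shell sketch of that workflow, assuming ACCUMULO_HOME points at the 1.5.1 install and that the bracketed values are placeholders rather than literal arguments.

# Sketch only: set a non-empty instance.secret in conf/accumulo-site.xml
# *before* initializing the instance.
cd "$ACCUMULO_HOME"

# Initialize once the secret is in place (prompts for instance name and root password).
./bin/accumulo init

# Rotate the secret later, exactly as the property description says;
# [oldpasswd] and [newpasswd] are placeholders, not literal values.
./bin/accumulo org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd]

# Then update instance.secret in conf/accumulo-site.xml on every node and restart.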
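
Since the suggestion quoted below concerns the Hadoop 2 entries in general.classpaths, one quick sanity check (a sketch, assuming HADOOP_PREFIX and ZOOKEEPER_HOME are exported the same way accumulo-env.sh sees them) is to confirm those directories actually contain jars on each node, and then to look at the classpath Accumulo itself resolves.

# Sketch: count jars in the Hadoop 2 directories referenced by general.classpaths.
for d in \
    "$HADOOP_PREFIX/share/hadoop/common" \
    "$HADOOP_PREFIX/share/hadoop/common/lib" \
    "$HADOOP_PREFIX/share/hadoop/hdfs" \
    "$HADOOP_PREFIX/share/hadoop/mapreduce" \
    "$HADOOP_PREFIX/share/hadoop/yarn" \
    "$ZOOKEEPER_HOME"; do
  printf '%s: %s jar(s)\n' "$d" "$(ls "$d"/*.jar 2>/dev/null | wc -l)"
done

# What Accumulo actually puts on its classpath from general.classpaths:
./bin/accumulo classpath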


On Sun, Mar 16, 2014 at 9:06 PM, Josh Elser <josh.elser@gmail.com> wrote:

> Posting your accumulo-site.xml (filtering out instance.secret and
> trace.password before you post) would also help us figure out what exactly
> is going on.
>
>
> On 3/16/14, 8:41 PM, Mike Drob wrote:
>
>> Which version of Accumulo are you using?
>>
>> You might be missing the hadoop libraries from your classpath. For this,
>> you would check your accumulo-site.xml and find the comment about Hadoop
>> 2 in the file.
>>
>>
>> On Sun, Mar 16, 2014 at 8:28 PM, Benjamin Parrish
>> <benjamin.d.parrish@gmail.com> wrote:
>>
>>     I have a couple of issues when trying to use Accumulo on Hadoop 2.2.0
>>
>>     1) I start with 'accumulo init' and everything runs through just fine,
>>     but I can't find '/accumulo' using 'hadoop fs -ls /'
>>
>>     2) I try to run 'accumulo shell -u root' and it says that Hadoop and
>>     ZooKeeper are not started, but if I run 'jps' on each cluster node it
>>     shows all the necessary processes for both in the JVM. Is there
>>     something I am missing?
>>
>>     --
>>     Benjamin D. Parrish
>>     H: 540-597-7860
>>
>>
>>
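
For the two symptoms quoted above (no /accumulo visible after init, and the shell claiming Hadoop and ZooKeeper are down), a few quick checks from one of the nodes may help narrow things down. This is only a sketch; the hostname and port are taken from the instance.zookeeper.host value in the config above, not verified against the actual cluster.

# 1) Did init write into the DFS this config points at?
hadoop fs -ls /             # /accumulo should appear after a successful init
hadoop fs -ls /accumulo     # should list instance_id, version, tables, etc.

# 2) Is ZooKeeper answering on the configured address? "imok" means the server is up.
echo ruok | nc hadoop-node-1 2181

# 3) Is HDFS itself healthy?
hdfs dfsadmin -report | head -20

# If these all look fine but 'accumulo shell -u root' still reports Hadoop and
# ZooKeeper as down, the Hadoop 2 classpath block in accumulo-site.xml (per the
# suggestion quoted above) is the usual place to look.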


-- 
Benjamin D. Parrish
H: 540-597-7860
