incubator-accumulo-user mailing list archives

From: Scott Roberts <sco...@jhu.edu>
Subject: Re: Accumulo 1.3.5 configuration issues with pre-existing Hadoop Rocks+ cluster
Date: Mon, 13 Feb 2012 05:24:04 GMT
John,

Thanks for the quick reply, especially so late on a Sunday night! I was able to resolve
the issue by running the following on each compute node:

cd /opt/apache/hadoop/conf/apache-mr
ln -s /opt/apache/hadoop/bin
ln -s /opt/apache/hadoop/lib
ln -s mapreduce conf
for i in `ls /opt/apache/hadoop/*.jar`; do ln -s $i; done
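
For anyone else on a Rocks+ cluster: this just makes the apache-mr config directory look
like the single $HADOOP_HOME layout that Accumulo's start scripts expect, with bin, lib,
conf, and the Hadoop jars all reachable from one place. The result looks roughly like
this (ls output trimmed; the exact jar names will differ per install):

$ ls -l /opt/apache/hadoop/conf/apache-mr
bin -> /opt/apache/hadoop/bin
conf -> mapreduce
lib -> /opt/apache/hadoop/lib
hadoop-*.jar -> /opt/apache/hadoop/hadoop-*.jar   (one symlink per jar)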

WRT the slave nodes, the messages were always the same, except that this time the tablet
servers actually started on the compute nodes. E.g.:

Starting tablet servers and loggers ...... done
Starting logger on compute-0-2
Starting tablet server on compute-0-1
Starting tablet server on compute-0-0
Starting tablet server on compute-0-2
Starting logger on compute-0-1
Starting logger on compute-0-0
Starting master on frontend
Starting garbage collector on frontend
Starting monitor on frontend
Starting tracer on frontend

Cheers.


> 
> We do not leverage the HADOOP_CONF_DIR for our scripts, but that's definitely something
> we should look into. We currently just expect $HADOOP_HOME, grab the libs out of that
> directory, and grab the config files from $HADOOP_HOME/conf. So copying your directory
> shouldn't be necessary, but you may have to put a symlink in place. I am creating a ticket
> so that we run better with Hadoop installations that are not isolated to a single directory.
> 
> As for the slave nodes, you should see messages when you run start-all.sh for each service
> it starts. If you are not seeing it attempt to start tservers/loggers on your slave nodes,
> check the slaves file in $ACCUMULO_HOME/conf. If you are seeing messages about those
> services, go to one of those nodes and check the log files (start with .out and .err) to
> see if there is any more information there. If you hit an error similar to your earlier
> error about the HDFS home, make sure the HADOOP_HOME changes you made are in effect on
> every node. As mentioned above, that's how we resolve our classpaths, and we are currently
> designed with this expectation. If you can't figure out the error, let us know. And if it
> does work, let us know anyway so we know what should be made clearer.
> 
> John
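
P.S. For anyone finding this thread later: the slaves file John mentions is just a list of
hostnames, one per line. On this cluster it lists the same compute nodes that show up in
the start-all.sh output above:

$ cat $ACCUMULO_HOME/conf/slaves
compute-0-0
compute-0-1
compute-0-2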

