incubator-accumulo-user mailing list archives

From John W Vines <john.w.vi...@ugov.gov>
Subject Re: Accumulo 1.3.5 configuration issues with pre-existing Hadoop Rocks+ cluster
Date Mon, 13 Feb 2012 19:14:50 GMT
So looking at what you did, I have a question about how Rocks+ Hadoop works. Did you set HADOOP_HOME
yourself, or did Rocks+ set HADOOP_HOME to /opt/apache/hadoop/conf/apache-mr for you? Because looking at
the symlinks you put in (as well as the Rocks+ RPMs), HADOOP_HOME should have just been /opt/apache/hadoop.
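
As a quick sanity check (the paths here assume the standard Rocks+ layout), running
something like this on one of the nodes should show which of the two it is:

  echo $HADOOP_HOME
  ls -ld /opt/apache/hadoop /opt/apache/hadoop/conf/apache-mr

If HADOOP_HOME points at the conf/apache-mr directory rather than /opt/apache/hadoop,
that would explain why you needed the symlinks at all.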


John

----- Original Message -----
| From: "Scott Roberts" <scotty@jhu.edu>
| To: "<accumulo-user@incubator.apache.org>" <accumulo-user@incubator.apache.org>
| Sent: Monday, February 13, 2012 12:24:04 AM
| Subject: Re: Accumulo 1.3.5 configuration issues with pre-existing Hadoop Rocks+ cluster
| John,
| 
| Thanks for the quick reply, especially so late on a Sunday night! I
| was able to resolve the issue by running these commands on each
| compute node:
| 
| cd /opt/apache/hadoop/conf/apache-mr
| ln -s /opt/apache/hadoop/bin
| ln -s /opt/apache/hadoop/lib
| ln -s mapreduce conf
| for i in /opt/apache/hadoop/*.jar; do ln -s $i; done
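| 
| For anyone hitting the same thing, a quick way to confirm the links
| took effect afterwards (just a sanity check):
| 
|   ls -l /opt/apache/hadoop/conf/apache-mr | grep '^l'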
| 
| WRT the slave nodes, the messages were always the same, except this
| time the tablet servers actually started on the compute nodes. E.g.:
| 
| Starting tablet servers and loggers ...... done
| Starting logger on compute-0-2
| Starting tablet server on compute-0-1
| Starting tablet server on compute-0-0
| Starting tablet server on compute-0-2
| Starting logger on compute-0-1
| Starting logger on compute-0-0
| Starting master on frontend
| Starting garbage collector on frontend
| Starting monitor on frontend
| Starting tracer on frontend
| 
| Cheers.
| 
| 
| >
| > We do not leverage HADOOP_CONF_DIR in our scripts, but that's
| > definitely something we should look into. We currently just expect
| > $HADOOP_HOME to be set, grab the libs out of that directory, and
| > grab the config files from $HADOOP_HOME/conf. So copying your
| > directory shouldn't be necessary, but you may have to put a symlink
| > in place. I am creating a ticket to make us work better with Hadoop
| > installations that are not isolated to a single directory.
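| >
| > In rough terms it's something like this (a sketch of the idea, not
| > the exact script; the names are illustrative):
| >
| >   # start from the config dir, then append every jar we find
| >   CLASSPATH="$HADOOP_HOME/conf"
| >   for jar in "$HADOOP_HOME"/*.jar "$HADOOP_HOME"/lib/*.jar; do
| >     CLASSPATH="$CLASSPATH:$jar"
| >   done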
| >
| > As for the slave nodes, you should see messages when you run
| > start-all.sh for each service it starts. If you are not seeing it
| > attempt to start tservers/loggers on your slave nodes, check the
| > slaves file in $ACCUMULO_HOME/conf. If you are seeing messages about
| > those services, go to one of those nodes and check the log files
| > (start with the .out and .err files) to see if there's any more
| > information there. If you hit an error similar to your earlier one
| > WRT the HDFS home, then make sure the HADOOP_HOME changes you made
| > are in effect on every node. As mentioned above, that's how we
| > resolve our classpaths, and we are currently designed with that
| > expectation. If you can't figure out the error, let us know. And if
| > it does work, let us know anyway so we know what should be made
| > clearer.
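| >
| > For example (paths assume a default layout; adjust to your install):
| >
| >   # on the master: is every compute node listed?
| >   cat $ACCUMULO_HOME/conf/slaves
| >   # on a slave: look at the newest log files first
| >   ls -lt $ACCUMULO_HOME/logs | head
| >   tail $ACCUMULO_HOME/logs/*.out $ACCUMULO_HOME/logs/*.err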
| >
| > John
