hbase-user mailing list archives

From Michel Segel <michael_se...@hotmail.com>
Subject Re: Hadoop not working after replacing hadoop-core.jar with hadoop-core-append.jar
Date Fri, 10 Jun 2011 11:12:20 GMT
Just a simple suggestion...
When you install Hadoop and HBase, you may want to go into /usr/lib/hadoop and create a symbolic
link called hadoop.jar that points to the current hadoop jar. Do the same for hbase.
Then point all of your references — scripts and environment variables — at these symlinks as needed.
Makes life a lot easier.
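A minimal sketch of the symlink approach. The jar filenames and versions below are hypothetical, and a scratch directory stands in for /usr/lib/hadoop so the example is self-contained; on a real node you would run the `ln -s` against the actual install directory.

```shell
# Scratch directory standing in for /usr/lib/hadoop (hypothetical layout).
HADOOP_HOME=$(mktemp -d)

# Pretend this is the append-branch jar you built or installed
# (the version string here is an assumption, not a required name).
touch "$HADOOP_HOME/hadoop-core-0.20-append.jar"

# Create a stable name that scripts can reference regardless of version.
ln -s "$HADOOP_HOME/hadoop-core-0.20-append.jar" "$HADOOP_HOME/hadoop.jar"

# Scripts and environment variables then refer only to the stable name:
export HADOOP_JAR="$HADOOP_HOME/hadoop.jar"
ls -l "$HADOOP_JAR"
```

When you swap in a new jar, you re-point the one symlink instead of editing every script and config that names the versioned file; the same pattern applies to the hbase jar.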

Sent from a remote device. Please excuse any typos...

Mike Segel

On Jun 9, 2011, at 6:40 PM, Stack <stack@duboce.net> wrote:

> On Tue, Jun 7, 2011 at 2:32 PM, Stack <stack@duboce.net> wrote:
>> On Mon, Jun 6, 2011 at 10:37 PM, Mike Spreitzer <mspreitz@us.ibm.com> wrote:
>>> So my
>>> suggestion is to be unequivocal about it: when running distributed, always
>>> build your own Hadoop and put its -core JAR into your HBase installation
>>> (or use Cloudera, which has done this for you).  Also: explicitly explain
>>> how the file has to be named (there is a strict naming requirement so that
>>> the launching scripts work, right?).
> I made this change,
> http://svn.apache.org/viewvc?view=revision&revision=1134129 , and
> removed the section where we talk of copying the hbase hadoop jar
> across a cluster.  I notice that in Michael Noll's blog he talks of
> renaming branch-0.20.append jar as Andy Zhong points out so will leave
> it at that (unless others have improvements).
> St.Ack
