hbase-dev mailing list archives

From Andrew Purtell <apurt...@apache.org>
Subject Re: hbase packaging
Date Fri, 31 Jul 2009 20:50:20 GMT
The RPM packaging I've done is Cloudera specific. HBase depends on Hadoop services as specified
by Cloudera's sysvinit service definitions. Also, the file locations are distro specific:

  - Config in /etc/<package>-<base-version>/conf (symlink)

  - Configuration alternatives as /etc/<package>-<base-version>/conf.<config-name>

  - Everything in /usr/lib/<package>-<base-version>

  - Symlinks in /usr/bin, etc.

(<base-version> is 0.20 for 0.20.0 etc.)

Config packages use 'alternatives' to manage the symlinks under their
/etc/ trees, so our package for them does the same. There is an expectation
that we provide a "pseudo distributed" config package, so we do.
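As a sketch of what that looks like on the installed system (the link group
name, paths, and priority here are assumptions for illustration, not the
actual Cloudera package contents):

```shell
# Hypothetical sketch: a config package registering its conf directory
# as an 'alternatives' candidate for the /etc/<package>/conf symlink
alternatives --install /etc/hbase-0.20/conf hbase-0.20-conf \
    /etc/hbase-0.20/conf.pseudo 30

# Show which alternative currently backs the conf symlink
alternatives --display hbase-0.20-conf
```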

The rpmbuild process applies diffs to the plain source (the diffs include
wholesale utility scripts for the build process), runs 'ant package', then
executes the build helper scripts to move files around for packaging.
Packages for the different configs are produced as well. The base package
also includes sysvinit scripts, e.g. for /sbin/service hbase-master start,
/sbin/service hbase-regionserver start, /sbin/service hbase-thrift start, etc.
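The build mechanics described above could be sketched roughly as follows
(the diff and helper-script names are placeholders I made up, not the
actual Cloudera build files):

```shell
# Hedged sketch of the packaging build steps described above
tar xzf hbase-0.20.0.tar.gz
cd hbase-0.20.0
patch -p1 < ../hbase-packaging.diff   # apply the packaging diffs
ant package                           # plain upstream build
../build-helpers/stage-files.sh       # move files into the RPM layout
```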

We will need to write a small amount of documentation describing how
to make a non-pseudo config for HBase -- trivial thanks to Nitay's work
plugging ZooKeeper into hbase-site.
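For reference, pointing HBase at an external ZooKeeper ensemble is a couple
of properties in hbase-site.xml (the hostnames below are placeholders):

```xml
<!-- Sketch: fully distributed mode with an external ZK quorum -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>
```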

A lot of what Cloudera does is not necessary if we just choose to install
HBase into a vanilla filesystem layout under /opt/hbase-<base-version> or similar.
Making an RPM is pretty trivial. We can make up our own
spec file to include in the HBase tarball. One could then build RPMs out of
it by doing 'rpmbuild -tb hbase-0.20.0.tar.gz'. However, without Hadoop
distro support, having some random RPM or DEB of HBase is not really
that helpful. For example, how does one specify that HBase daemons
which may be started via sysvinit depend on DFS service?
Or some external ZK service? We can determine if ZK is not started. We can determine if DFS
is not started... maybe... if the hadoop script is on the path. But we can't trigger automatic
starting of dependent services. So it falls to the admin to do all of that
work. Might as well just release tarballs.
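The best an init script could do is probe, not start, the dependencies --
something along these lines (the ZK host is a placeholder, and the probes
assume the hadoop script and nc are on the path):

```shell
#!/bin/sh
# Hedged sketch of a pre-start probe an HBase init script could run.
# It can detect that HDFS or ZooKeeper is down, but cannot start them.
if ! hadoop fs -test -d / >/dev/null 2>&1; then
    echo "HDFS does not appear to be up; start it before HBase" >&2
    exit 1
fi

# Probe an external ZooKeeper with its four-letter 'ruok' command
if [ "$(echo ruok | nc zk1.example.com 2181)" != "imok" ]; then
    echo "ZooKeeper did not answer 'ruok'; start it before HBase" >&2
    exit 1
fi
```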

  - Andy

From: stack <stack@duboce.net>
To: hbase-dev@hadoop.apache.org
Sent: Friday, July 31, 2009 1:00:17 PM
Subject: hbase packaging

I know Andrew is working on rpm/deb packaging of hbase for possible
inclusion by Cloudera.  I was wondering if it'd be possible to use the
rpms/debs independently or do they have some Cloudera dependencies built in?
