hadoop-common-user mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: Best CFM Engine for Hadoop
Date Thu, 10 Dec 2009 14:59:08 GMT
Edward Capriolo wrote:
> system to ignore this file.
> 
> So now that I am done complaining, what do I think you should do?
> 
> 1 clearly document your install process
> 2 make your install process fully scriptable
> --or--
> 3 roll your own RPMs (or debs, tars, etc.) for everything not in someone else's RPM
> 4 run one nightly backup for each server class
> 5 keep your config files under revision control
> 6 (optionally) use tripwire/MD5 checks only to detect unauthorized changes [sketch below]
> 
> Anyway, the long and short of my point: get something that works the
> way you want it to. Look out for systems that offer you "new" and
> "exciting" ways to do things that only take 10 seconds, like editing
> /etc/fstab or installing an RPM.
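
A minimal sketch of item 6 above, assuming an md5sum-style manifest (one
"digest  path" line per tracked file) and nothing beyond the standard JDK;
the class name and manifest layout are illustrative only:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

public class ConfigIntegrityCheck {

    /** MD5 of a file's contents as a lowercase hex string. */
    static String md5Of(String path) throws IOException, NoSuchAlgorithmException {
        byte[] data = Files.readAllBytes(Paths.get(path));
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // args[0] is the manifest: one "<md5-hex>  <path>" line per tracked
        // file, e.g. written by md5sum when the config was last deployed.
        List<String> manifest = Files.readAllLines(Paths.get(args[0]));
        for (String line : manifest) {
            if (line.trim().isEmpty()) continue;
            String[] parts = line.trim().split("\\s+", 2);
            String expected = parts[0];
            String path = parts[1];
            if (!expected.equalsIgnoreCase(md5Of(path))) {
                System.out.println("CHANGED: " + path);
            }
        }
    }
}

Run it nightly from cron and mail yourself the output: anything it prints
is a config file that no longer matches what was recorded at deployment
time.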

RPMs are not actually that bad for getting stuff out, especially if you 
can do PXE/kickstart installs and bring up machines from scratch. One 
problem: if you push out configuration that way, you have to rebuild and 
push out RPMs for every config change.
Other problems:
* it's possible for different RPMs to claim ownership of the same 
files, and much confusion arises
* the RPM dependency model doesn't work that well with Java. I say 
that as someone who has outstanding disputes with the JPackage scheme, 
and who also knows that the maven/ivy dependency model is flawed too 
(how do you declare, in any of these tools, that you want "an XML parser 
with XSD validation" without saying which one? See the sketch after 
this list.)
* spec files are painful to work with, and so is their build and test 
process. You do have a test process, right?
* The way RPMs upgrade is brain-dead: they install the new stuff, then 
decide whether or not to uninstall the old stuff, which makes it very 
hard to do upgrades that change the directory structure.
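
To make the parser example concrete, a rough sketch of that capability-style
request at the Java API level, assuming only the standard javax.xml classes
(the class name below is made up for illustration): JAXP lets the running JVM
pick some implementation that can do XSD validation without the caller naming
one.

import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.validation.SchemaFactory;

public class XsdParserLookup {
    public static void main(String[] args) {
        // Ask the JVM for *some* implementation that supports W3C XML Schema
        // validation, without naming Xerces or any other parser.
        SchemaFactory schemas =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);

        DocumentBuilderFactory parsers = DocumentBuilderFactory.newInstance();
        parsers.setNamespaceAware(true);

        // Print whichever implementations the factory lookup actually chose.
        System.out.println(schemas.getClass().getName());
        System.out.println(parsers.getClass().getName());
    }
}

That capability-level request ("something that validates against XSD") is the
one the bullet above says you can't express in a spec file or an ivy.xml,
which have to name a concrete artifact instead.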

-Steve

