hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-3873) Mavenize Hadoop Snappy JAR/SOs project dependencies
Date Thu, 02 Jun 2011 04:28:47 GMT

    [ https://issues.apache.org/jira/browse/HBASE-3873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13042601#comment-13042601 ]

stack commented on HBASE-3873:


This looks great.

I tried it on a fresh machine where I had to set up a build environment.  I installed snappy.
 I then moved into hadoop-snappy and tried running mvn package.  It seems like I have to run
the command as root?  Is that so for you?  I then got stuck here:

stack@sv4borg231:~/hadoop-snappy-read-only$ sudo ~/bin/mvn/bin/mvn package
Warning: JAVA_HOME environment variable is not set.
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Hadoop Snappy
[INFO]    task-segment: [package]
[INFO] ------------------------------------------------------------------------
[INFO] [resources:resources {execution: default-resources}]
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/stack/hadoop-snappy-read-only/src/main/resources
[INFO] [compiler:compile {execution: default-compile}]
[INFO] Nothing to compile - all classes are up to date
[INFO] [antrun:run {execution: compile}]
[INFO] Executing tasks



     [exec] Can't exec "libtoolize": No such file or directory at /usr/bin/autoreconf line
     [exec] Use of uninitialized value $libtoolize in pattern match (m//) at /usr/bin/autoreconf line 188.
     [exec] Can't exec "aclocal": No such file or directory at /usr/share/autoconf/Autom4te/FileUtils.pm line 326.
     [exec] autoreconf: failed to run aclocal: No such file or directory

This seems to be saying that I should have run configure here in hadoop-snappy-read-only
first?  Or that I'm running 'mvn package' when I should have done something else first?
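For what it's worth, the "Can't exec" lines above look like the GNU autotools are simply not installed on this box.  A quick check I could run (this diagnostic is my own sketch, not from the hadoop-snappy docs; the apt-get package names assume Debian/Ubuntu):

```shell
# The build's antrun step invokes autoreconf, which in turn needs these tools.
# Print which ones are present and which are missing.
for tool in libtoolize aclocal autoconf automake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool -- on Debian/Ubuntu try: sudo apt-get install libtool automake autoconf"
  fi
done
```

If any of these come back missing, installing them and re-running 'mvn package' would be the first thing to try.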

I'm asking because I want to copy your instructions above into our manual here: http://hbase.apache.org/book/snappy.compression.html
(Maybe you have suggestions on what to add there?)

Thanks for the nice work Alejandro.

> Mavenize Hadoop Snappy JAR/SOs project dependencies
> ---------------------------------------------------
>                 Key: HBASE-3873
>                 URL: https://issues.apache.org/jira/browse/HBASE-3873
>             Project: HBase
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 0.90.2
>         Environment: Linux
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>              Labels: build
>         Attachments: HBASE-3873.patch
> (This JIRA builds on HBASE-3691)
> I'm working on simplifying how Hadoop Snappy is used from other Maven-based projects.
> The idea is that the hadoop-snappy JAR and the SOs (snappy and hadoop-snappy) would be
> picked up from a Maven repository (like any other dependency). SO files would be picked
> up based on the architecture where the build is running (32 or 64 bits).
> For HBase this would remove the need to manually copy the snappy JAR and SOs (snappy and
> hadoop-snappy) into HADOOP_HOME/lib or HBASE_HOME/lib; hadoop-snappy would be handled as
> a regular Maven dependency (with a trick for the SO files).
> The changes would affect only the pom.xml and would live in a 'snappy' profile, thus
> requiring the '-Dsnappy' option in Maven invocations to trigger the inclusion of the
> snappy JAR and SOs.
> Because hadoop-snappy (JAR and SOs) is not currently available in public Maven repos,
> until that happens, HBase developers would have to check out and 'mvn install'
> hadoop-snappy, which is (IMO) simpler than what would have to be done once HBASE-3691 is
> committed.
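
For reference, a property-activated profile of the kind the description mentions looks roughly like the stanza below.  This is a minimal sketch of the Maven mechanism only; the dependency coordinates and contents are illustrative guesses, not what the attached patch actually contains:

```xml
<!-- Activated with: mvn package -Dsnappy -->
<profiles>
  <profile>
    <id>snappy</id>
    <activation>
      <property>
        <name>snappy</name>
      </property>
    </activation>
    <dependencies>
      <!-- hadoop-snappy JAR; these coordinates are hypothetical -->
      <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-snappy</artifactId>
        <version>0.0.1-SNAPSHOT</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```

Without '-Dsnappy' on the command line the profile stays inactive, so the default build is unaffected.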

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
