accumulo-dev mailing list archives

From Christopher <ctubb...@apache.org>
Subject Re: 1.5 - how to build rpm; cdh3u4;
Date Wed, 19 Jun 2013 14:03:51 GMT
Not to the extent that I've just stated, but the instructions for
rebuilding everything are somewhat self-documented in the
assemble/build.sh convenience script.
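
A minimal sketch of how one might consult that script from the root of a
source checkout (the path comes from the message above; its comments and
options may differ between branches):

# skim the header comments, where the self-documentation lives
head -n 40 assemble/build.sh
# or read the whole script to see how it drives the Maven profiles
less assemble/build.sh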

There is a ticket open to improve the README in 1.6 (ACCUMULO-1515)
and the discussion has centered around including instructions for
building.

--
Christopher L Tubbs II
http://gravatar.com/ctubbsii


On Wed, Jun 19, 2013 at 8:40 AM, Rob Tallis <robtallis@gmail.com> wrote:
> Thanks Christopher, that worked.
> Is any of this documented anywhere?
>
>
> On 17 June 2013 02:10, Christopher <ctubbsii@apache.org> wrote:
>
>> Since 1.5 was released, the RPM now expects at least one additional
>> profile to be active: the thrift profile. During the review of the 1.5
>> release candidates, it was decided that the Thrift bindings for several
>> languages to the new proxy feature should be delivered with the proxy.
>>
>> The correct command for building the entire RPM for 1.5 would be
>> (minimally, if we skip tests):
>> mvn package -DskipTests -P thrift,native,rpm
>>
>> Typically, one would also activate the seal-jars profile and the docs
>> profile, as well as build the aggregate javadocs for packaging with
>> the monitor:
>> mvn clean compile javadoc:aggregate package -DskipTests -P docs,seal-jars,thrift,native,rpm
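>>
>> If the build succeeds, the rpm-maven-plugin writes the RPMs under a
>> target directory (the exact subdirectory can vary with the plugin
>> version); one way to locate them from the source root:
>>
>> find . -name '*.rpm' -path '*/target/*'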
>>
>> Also, don't expect trunk to work the same way. ACCUMULO-210 is going
>> to result in changes to the way we build RPMs. Even if we make an
>> effort to continue to support building the monolithic RPM, there's no
>> guarantee that the maven profile prerequisites won't change, due to
>> other improvements in the build. For instance, the docs directory is
>> now a proper maven module and there are likely going to be changes due
>> to the discussion of consolidating documentation.
>>
>> --
>> Christopher L Tubbs II
>> http://gravatar.com/ctubbsii
>>
>>
>> On Sun, Jun 16, 2013 at 9:16 AM, Rob Tallis <robtallis@gmail.com> wrote:
>> > Dragging the rpm question up again, the instruction to create an rpm from
>> > source was *mvn clean package -P native,rpm*
>> >
>> > From a fresh clone, on both trunk and 1.5 I get:
>> >
>> > *[ERROR] Failed to execute goal
>> > org.codehaus.mojo:rpm-maven-plugin:2.1-alpha-2:attached-rpm (build-bin-rpm)
>> > on project accumulo: Unable to copy files for packaging: You must set at
>> > least one file. -> [Help 1]*
>> >
>> > and I can't decipher the build setup to figure this out. What am I doing
>> > wrong?
>> >
>> > Thanks, Rob
>> >
>> >
>> > On 15 May 2013 19:06, Rob Tallis <robtallis@gmail.com> wrote:
>> >
>> >> That sorted it, thanks.
>> >>
>> >>
>> >> On 15 May 2013 18:11, John Vines <vines@apache.org> wrote:
>> >>
>> >>> In the example files, specifically accumulo-env.sh, there are two
>> >>> commented lines after HADOOP_CONF_DIR is set, I believe. Make sure that
>> >>> you comment out the old one and uncomment the one after the hadoop2
>> >>> comment.
>> >>>
>> >>> This is necessary because Accumulo puts the Hadoop conf dir on the
>> >>> classpath in order to load core-site.xml, which has the HDFS namenode
>> >>> config. By default, this is file:///, so if it's not there it's going to
>> >>> default to the local file system. A quick way to validate is to run
>> >>> bin/accumulo classpath and then look to see if the conf dir (I don't
>> >>> recall what it is for CDH4) is there.
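>> >>>
>> >>> A minimal sketch of what that edit typically looks like in
>> >>> conf/accumulo-env.sh (exact lines and paths depend on which example
>> >>> file you copied and on where your Hadoop config actually lives):
>> >>>
>> >>> # hadoop 1 default -- comment this out:
>> >>> # export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
>> >>> # hadoop 2 / CDH4 -- uncomment and adjust to your install:
>> >>> export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
>> >>>
>> >>> # then verify that the conf dir shows up on Accumulo's classpath:
>> >>> bin/accumulo classpath | grep -F "$HADOOP_CONF_DIR"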
>> >>>
>> >>>
>> >>> On Wed, May 15, 2013 at 4:06 AM, Rob Tallis <robtallis@gmail.com> wrote:
>> >>>
>> >>> > I've given up on cdh3 then. I'm trying to get 1.5 and/or trunk going
>> >>> > on cdh4.2.1 on a small hadoop cluster installed via cloudera manager.
>> >>> > I built the tar specifying -Dhadoop.profile=2.0 -Dhadoop.version=2.0.0-cdh4.2.1.
>> >>> > I've edited accumulo-site to add $HADOOP_PREFIX/client/.*.jar to the
>> >>> > classpath. This lets me init and start the processes but I've got the
>> >>> > problem of the instance information being stored on local disk rather
>> >>> > than on hdfs. (unable to obtain instance id at /accumulo/instance_id)
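>> >>> >
>> >>> > A quick way to check which filesystem init actually wrote to (the
>> >>> > path is taken from the error above; the commands assume a Hadoop 2
>> >>> > client on the PATH):
>> >>> >
>> >>> > ls /accumulo/instance_id              # present on local disk => init fell back to file:///
>> >>> > hdfs dfs -ls /accumulo/instance_id    # where it should live once core-site.xml is on the classpath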
>> >>> >
>> >>> > I can see references to this problem elsewhere but I can't figure
>> >>> > out what I'm doing wrong. Something wrong with my environment when I
>> >>> > init, I guess? (tbh it's the first time I've tried a cluster install
>> >>> > over a standalone so it might not have anything to do with the
>> >>> > versions I'm trying)
>> >>> >
>> >>> > Rob
>> >>> >
>> >>> >
>> >>> >
>> >>> > On 13 May 2013 12:24, Rob Tallis <robtallis@gmail.com> wrote:
>> >>> >
>> >>> > > Perfect, thanks for the help
>> >>> > >
>> >>> > >
>> >>> > > On 11 May 2013 08:37, John Vines <jvines@gmail.com> wrote:
>> >>> > >
>> >>> > >> It also appears that CDH3u* does not have commons-collections or
>> >>> > >> commons-configuration included, so you will need to manually add
>> >>> > >> those jars to the classpath, either in accumulo lib or hadoop lib.
>> >>> > >> Without these files, tserver and master will not start.
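>> >>> > >>
>> >>> > >> For example (jar names and versions are illustrative; use whatever
>> >>> > >> your distribution or Maven repository provides):
>> >>> > >>
>> >>> > >> cp commons-collections-3.2.1.jar commons-configuration-1.6.jar $ACCUMULO_HOME/lib/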
>> >>> > >>
>> >>> > >>
>> >>> > >> On Fri, May 10, 2013 at 11:14 AM, Josh Elser <josh.elser@gmail.com> wrote:
>> >>> > >>
>> >>> > >> > FWIW, if you don't run -DskipTests, you will get some failures
>> >>> > >> > on some of the newer MiniAccumuloCluster tests.
>> >>> > >> >
>> >>> > >> > testPerTableClasspath(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)
>> >>> > >> > test(org.apache.accumulo.server.mini.MiniAccumuloClusterTest)
>> >>> > >> >
>> >>> > >> > Just about the same thing as we were seeing on
>> >>> > >> > https://issues.apache.org/jira/browse/ACCUMULO-837.
>> >>> > >> > My guess would be that we're including the wrong test dependency.
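>> >>> > >> >
>> >>> > >> > One way to reproduce just those failures, using standard Surefire
>> >>> > >> > options and the class name quoted above, would be:
>> >>> > >> >
>> >>> > >> > mvn test -Dtest=MiniAccumuloClusterTest -DfailIfNoTests=false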
>> >>> > >> >
>> >>> > >> > On 5/10/13 3:33 AM, Rob Tallis wrote:
>> >>> > >> >
>> >>> > >> >> For info, changing it to cdh3u5, it *does* work:
>> >>> > >> >>
>> >>> > >> >> mvn clean package -P assemble -DskipTests -Dhadoop.version=0.20.2-cdh3u5 -Dzookeeper.version=3.3.5-cdh3u5
>> >>> > >> >>
>> >>> > >> >
>> >>> > >> >
>> >>> > >>
>> >>> > >>
>> >>> > >> --
>> >>> > >> Cheers
>> >>> > >> ~John
>> >>> > >>
>> >>> > >
>> >>> > >
>> >>> >
>> >>>
>> >>
>> >>
>>
