mahout-dev mailing list archives

From: Dmitriy Lyubimov <dlie...@gmail.com>
Subject: Re: Upgrade to Spark 1.1.0?
Date: Mon, 20 Oct 2014 21:08:27 GMT
On Mon, Oct 20, 2014 at 1:51 PM, Pat Ferrel <pat@occamsmachete.com> wrote:

> Is anyone else nervous about ignoring this issue, or about relying on
> non-build (hand-run), test-driven transitive-dependency checking? I hope
> someone else will chime in.
>
> As to running unit tests on a TEST_MASTER, I’ll look into it. Can we set up
> the build machine to do this? I’d feel better about eyeballing deps if we
> could have a TEST_MASTER run automatically during builds at Apache. Maybe
> the regular unit tests are OK for building locally ourselves.
>
> >
> > On Oct 20, 2014, at 12:23 PM, Dmitriy Lyubimov <dlieu.7@gmail.com> wrote:
> >
> > On Mon, Oct 20, 2014 at 11:44 AM, Pat Ferrel <pat@occamsmachete.com> wrote:
> >
> >> Maybe a more fundamental issue is that we don’t know for sure whether we
> >> have missing classes or not. The job.jar at least used the pom
> >> dependencies to guarantee every needed class was present. So the job.jar
> >> seems to solve the problem but may ship some unnecessary duplicate code,
> >> right?
> >>
> >
> > No, as I wrote, Spark doesn't work with the job.jar format. Neither, as
> > it turns out, does more recent Hadoop MR, btw.
>
> Not speaking literally of the format. Spark understands jars, and Maven can
> build one from the transitive dependencies.
>
> >
> > Yes, this is A LOT of duplicate code (it will normally take MINUTES to
> > start up tasks with all of it, just on copy time). This is absolutely not
> > the way to go with this.
> >
>
> Lack of guarantee to load seems like a bigger problem than startup time.
> Clearly we can’t just ignore this.
>

Nope. Given the highly iterative nature and dynamic task allocation in this
environment, one is looking at effects similar to MapReduce's. This is not
the only reason why I never go to MR anymore, but it's one of the main ones.

How about an experiment: why don't you create an assembly that copies ALL
transitive dependencies into one folder, and then try to broadcast it from a
single point (the front end) to, well... let's start with 20 machines. (Of
course we ideally want to get into the 10^3..10^4 range -- but why bother if
we can't do it for 20.)

Or, heck, let's simply try to parallel-copy it 20 times between two machines
that are not collocated on the same subnet.
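
For concreteness, here is a minimal sketch (in Scala, against the stock Spark
API) of what shipping such an assembly folder amounts to. The folder path is
a hypothetical example; sc.addJar is what serves each jar from the driver and
copies it to every executor:

    import java.io.File
    import org.apache.spark.SparkContext

    // Hypothetical folder into which a maven assembly has copied ALL
    // transitive dependencies of the job.
    val depsDir = new File("/tmp/mahout-deps")

    def shipAllDeps(sc: SparkContext): Unit = {
      // Every jar registered here is copied to each executor before tasks
      // run -- that copy time is exactly what this experiment would measure.
      val jars = Option(depsDir.listFiles()).getOrElse(Array.empty[File])
      for (jar <- jars if jar.getName.endsWith(".jar"))
        sc.addJar(jar.getAbsolutePath)
    }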


> >
> >> There may be any number of bugs waiting for the time we try running on a
> >> node machine that doesn’t have some class in its classpath.
> >
> >
> > No. Assuming any given method is tested on all its execution paths, there
> > will be no bugs. Bugs of that sort will only appear if the user is using
> > algebra directly and calls something that is not on the path from inside
> > a closure. In that case our answer is the same as for the solver
> > methodology developers -- use a customized SparkConf while creating the
> > context to include the stuff you really want.
> >
> > Another right answer to this is that we should probably provide a
> > reasonable toolset here. For example, all the stats stuff found in R base
> > and the R stat packages, so the user is not compelled to go non-native.
> >
> >
>
> Huh? This is not true. The one I ran into was found by calling something
> in math from something in math-scala. It led outside, and you can encounter
> such things even in algebra. In fact you have no idea whether these
> problems exist except for the fact that you have used it a lot personally.
>


You ran into it with your own code, code that had never existed before.

But there's a difference between released Mahout code (which is what you are
working on) and user code. Released code must run through remote tests, as
you suggested, and thus guarantee there are no such problems in post-release
code.

For users, we can only provide a way for them to load the stuff they decide
to use. We don't have a priori knowledge of what they will use. It is the
same thing that Spark does, and the same thing that MR does, isn't it?
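
As a sketch, that user-side knob might look like the following (the extra
jar paths are hypothetical user dependencies, not anything Mahout ships):

    import org.apache.spark.{SparkConf, SparkContext}

    // The user names exactly the extra jars their closures call into;
    // nothing is guessed by the framework. Paths are made-up examples.
    val conf = new SparkConf()
      .setMaster("spark://master:7077")
      .setAppName("my-mahout-app")
      .setJars(Seq(
        "/opt/app/lib/commons-math3-3.2.jar", // e.g. stats called in a closure
        "/opt/app/lib/my-solvers.jar"))       // the user's own solver code

    val sc = new SparkContext(conf)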

Of course Mahout should rigorously drop the stuff it doesn't load from the
Scala scope. No argument about that. In fact, that's what I suggested as the
#1 solution. But there's nothing much to do here except go dependency
cleansing through the math and spark code. Part of the reason there's so
much is that newer modules still bring in everything from mrLegacy.

You are right in saying it is hard to guess which other dependencies in the
util/legacy code are actually used, but that's not a justification for a
brute-force "copy them all" approach that virtually guarantees reintroducing
one of the foremost legacy issues this work was intended to address.
