mahout-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Commented] (MAHOUT-1636) Class dependencies for the spark module are put in a job.jar, which is very inefficient
Date Tue, 30 Dec 2014 02:22:13 GMT


ASF GitHub Bot commented on MAHOUT-1636:

Github user pferrel commented on a diff in the pull request:
    --- Diff: spark/src/main/assembly/dependencies.xml ---
    @@ -38,9 +38,34 @@
    +        <!-- MAHOUT-1636 -->
    +        <!-- exclude any artifacts that are already provided by the Spark
    +        environment or are present in a dependent module but not used by spark drivers -->
    +        <exclude>org.apache.spark:spark-core_${scala.major}</exclude>
    +        <exclude>org.scala-lang:scala-library</exclude>
    +        <exclude>jackson-core-asl</exclude>
    +        <exclude>jackson-mapper-asl</exclude>
    +        <exclude>xstream</exclude>
    +        <exclude>lucene-core</exclude>
    +        <exclude>lucene-analyzers-common</exclude>
    --- End diff --
    @dlyubimov open to that argument, but maybe you can explain. This approach lets us add
dependencies by putting them in any dependent module pom without also touching the assembly
xml. Doing it with includes means we have to remember to add each dependency to the module
pom as well as to this assembly every time. Also, the excludes will only ever be things already
in the environment, which I would think change seldom, even with new versions.
    The reverse argument is also true: if we add new parts of spark or scala, we'll have to
add them to the excludes since they are already in the environment. I'm not sure what those
would be, but maybe you have some examples.
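For context, the exclude-based dependencySet under discussion would sit in a descriptor roughly like the following. This is a hedged sketch: only the two fully qualified excludes are taken from the quoted diff, and the surrounding elements (`unpack`, `useTransitiveDependencies`, output directory) are assumptions, not the actual contents of dependencies.xml.

```xml
<!-- Sketch of an exclude-based dependencySet for a job.jar assembly.
     Everything except the two <exclude> coordinates is illustrative. -->
<dependencySet>
  <outputDirectory>/</outputDirectory>
  <unpack>true</unpack>
  <useTransitiveDependencies>true</useTransitiveDependencies>
  <excludes>
    <!-- already on the classpath of the Spark environment -->
    <exclude>org.apache.spark:spark-core_${scala.major}</exclude>
    <exclude>org.scala-lang:scala-library</exclude>
  </excludes>
</dependencySet>
```

The trade-off debated above is visible here: new module dependencies flow into the jar automatically, while anything the environment already supplies must be listed explicitly.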

> Class dependencies for the spark module are put in a job.jar, which is very inefficient
> ---------------------------------------------------------------------------------------
>                 Key: MAHOUT-1636
>                 URL:
>             Project: Mahout
>          Issue Type: Bug
>          Components: spark
>    Affects Versions: 1.0-snapshot
>            Reporter: Pat Ferrel
>            Assignee: Ted Dunning
>             Fix For: 1.0-snapshot
> Using a maven plugin and an assembly job.xml, a job.jar is created with all dependencies,
including transitive ones. This job.jar is in mahout/spark/target and is included in the classpath
when a Spark job is run. This allows dependency classes to be found at runtime, but the job.jar
includes a great deal of things that are not needed and are duplicates of classes found in the
main mrlegacy job.jar. If the job.jar is removed, drivers will not find needed classes. A better
way needs to be implemented for including class dependencies.
> I'm not sure what that better way is, so I am leaving the assembly alone for now. Whoever
picks up this Jira will have to remove it after deciding on a better method.
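One commonly used alternative to assembly excludes, sketched here only as an illustration and not as the method this ticket settled on, is to mark environment-provided artifacts with `provided` scope in the module pom; a dependencySet restricted to `runtime` scope then skips them without any explicit exclude list. The `${spark.version}` property is a hypothetical placeholder.

```xml
<!-- Hypothetical fragment for spark/pom.xml: provided scope keeps the
     artifact available at compile time but out of runtime-scoped assemblies. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_${scala.major}</artifactId>
  <version>${spark.version}</version>
  <scope>provided</scope>
</dependency>
```

The matching assembly descriptor would then declare `<scope>runtime</scope>` on its dependencySet, so the exclude list reduces to whatever cannot be expressed through scopes alone.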

This message was sent by Atlassian JIRA
