mahout-user mailing list archives

From Dmitriy Lyubimov <dlie...@gmail.com>
Subject Re: [DISCUSS] Naming convention for multiple spark/scala combos
Date Fri, 07 Jul 2017 17:24:51 GMT
It would seem the 2nd option is preferable, if doable. Any option that has the most
desirable combinations prebuilt is preferable, I guess. Spark itself also
releases tons of Hadoop-profile binary variations, so I don't have to build
one myself.

On Fri, Jul 7, 2017 at 8:57 AM, Trevor Grant <trevor.d.grant@gmail.com>
wrote:

> Hey all,
>
> Working on releasing 0.13.1 with multiple spark/scala combos.
>
> AFAIK, there is no 'standard' for releasing against multiple Spark versions
> (but I may be wrong; I don't claim expertise here).
>
> One approach is to simply release binaries only for:
> Spark-1.6 + Scala 2.10
> Spark-2.1 + Scala 2.11
>
> OR
>
> We could do like dl4j
>
> org.apache.mahout:mahout-spark_2.10:0.13.1_spark_1
> org.apache.mahout:mahout-spark_2.11:0.13.1_spark_1
>
> org.apache.mahout:mahout-spark_2.10:0.13.1_spark_2
> org.apache.mahout:mahout-spark_2.11:0.13.1_spark_2
>
> OR
>
> some other option I don't know of.
>

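[Editorial note for readers of the archive: under the dl4j-style scheme quoted above, a downstream user on Spark 2.x with Scala 2.11 would declare the dependency roughly as follows. This is a sketch only; the exact coordinate strings shown (artifact suffix and version qualifier) mirror the proposal in the thread and were not finalized at the time of this message.]

```xml
<!-- Hypothetical dependency declaration for the dl4j-style naming proposal:
     Scala binary version in the artifactId, Spark major version in the version. -->
<dependency>
  <groupId>org.apache.mahout</groupId>
  <artifactId>mahout-spark_2.11</artifactId>
  <version>0.13.1_spark_2</version>
</dependency>
```

A user on Spark 1.x with Scala 2.10 would instead pick `mahout-spark_2.10` with version `0.13.1_spark_1`. The upside of this scheme is that every supported combination has a prebuilt binary; the downside is that the Spark version lives in the version string, where dependency-management tooling cannot resolve it automatically.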