mahout-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAHOUT-1464) Cooccurrence Analysis on Spark
Date Sun, 15 Jun 2014 01:47:02 GMT

    [ https://issues.apache.org/jira/browse/MAHOUT-1464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14031751#comment-14031751 ]

ASF GitHub Bot commented on MAHOUT-1464:
----------------------------------------

Github user tdunning commented on a diff in the pull request:

    https://github.com/apache/mahout/pull/18#discussion_r13783816
  
    --- Diff: math-scala/src/main/scala/org/apache/mahout/math/scalabindings/MatrixOps.scala
---
    @@ -188,8 +188,8 @@ object MatrixOps {
         def apply(f: Vector): Double = f.sum
       }
     
    -  private def vectorCountFunc = new VectorFunction {
    -    def apply(f: Vector): Double = f.aggregate(Functions.PLUS, Functions.greater(0))
    +  private def vectorCountNonZeroElementsFunc = new VectorFunction {
    +    def apply(f: Vector): Double = f.aggregate(Functions.PLUS, Functions.notEqual(0))
    --- End diff --
    
    The issue I have is with the rowAggregation and columnAggregation API.  It enforces row-by-row
evaluation.  A map-reduce API could evaluate in many different orders, could iterate by rows
or by columns for either aggregation, and wouldn't require a custom VectorFunction for simple
aggregations.
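To illustrate the point being made, here is a minimal sketch of a map-reduce style aggregation; `aggregateAll` is a hypothetical name used only for this example, not an existing MatrixOps method, and the matrix is a plain `Array[Array[Double]]` rather than a Mahout Matrix.

```scala
// Hedged sketch of the map-reduce style aggregation the comment argues for.
// The implementation is free to traverse by rows, by columns, or in parallel;
// the caller supplies only an element-wise map and an associative reduce.
object AggregateSketch {
  // Aggregate over all elements: map each element, then combine with reduce.
  def aggregateAll(m: Array[Array[Double]])(map: Double => Double)
                  (reduce: (Double, Double) => Double): Double =
    m.flatten.map(map).reduce(reduce)

  // Counting nonzero elements falls out without a custom VectorFunction:
  def countNonZero(m: Array[Array[Double]]): Double =
    aggregateAll(m)(x => if (x != 0.0) 1.0 else 0.0)(_ + _)
}
```

With such an API the `vectorCountNonZeroElementsFunc` in the diff above reduces to a one-liner at the call site.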


> Cooccurrence Analysis on Spark
> ------------------------------
>
>                 Key: MAHOUT-1464
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1464
>             Project: Mahout
>          Issue Type: Improvement
>          Components: Collaborative Filtering
>         Environment: hadoop, spark
>            Reporter: Pat Ferrel
>            Assignee: Pat Ferrel
>             Fix For: 1.0
>
>         Attachments: MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch, MAHOUT-1464.patch,
MAHOUT-1464.patch, MAHOUT-1464.patch, run-spark-xrsj.sh
>
>
> Create a version of Cooccurrence Analysis (RowSimilarityJob with LLR) that runs on Spark.
> This should be compatible with the Mahout Spark DRM DSL so a DRM can be used as input.
> Ideally this would extend to cover MAHOUT-1422. This cross-cooccurrence has several applications,
> including cross-action recommendations.
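The LLR scoring the issue refers to is Dunning's log-likelihood ratio over a 2x2 cooccurrence contingency table. A minimal Scala transcription of that score follows; the object and method names here are illustrative, not the actual math-scala API.

```scala
// Hedged sketch: Dunning's log-likelihood ratio (LLR), the similarity score
// used by RowSimilarityJob, transcribed to Scala for illustration.
object Llr {
  // x * ln(x), defined as 0 at x = 0
  private def xLogX(x: Long): Double = if (x == 0) 0.0 else x * math.log(x)

  // Unnormalized Shannon entropy of a set of counts
  private def entropy(counts: Long*): Double =
    xLogX(counts.sum) - counts.map(xLogX).sum

  // 2x2 contingency table: k11 = both events cooccur,
  // k12/k21 = exactly one occurs, k22 = neither occurs
  def logLikelihoodRatio(k11: Long, k12: Long, k21: Long, k22: Long): Double = {
    val rowEntropy    = entropy(k11 + k12, k21 + k22)
    val columnEntropy = entropy(k11 + k21, k12 + k22)
    val matrixEntropy = entropy(k11, k12, k21, k22)
    // Guard against tiny negative values from floating-point rounding
    if (rowEntropy + columnEntropy < matrixEntropy) 0.0
    else 2.0 * (rowEntropy + columnEntropy - matrixEntropy)
  }
}
```

Independent counts score zero, while strongly associated counts score high, which is why LLR is a robust filter for spurious cooccurrences in sparse interaction data.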



--
This message was sent by Atlassian JIRA
(v6.2#6252)
