mahout-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAHOUT-1615) SparkEngine drmFromHDFS returning the same Key for all Key,Vec Pairs for Text-Keyed SequenceFiles
Date Sun, 14 Sep 2014 21:01:33 GMT

    [ https://issues.apache.org/jira/browse/MAHOUT-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133379#comment-14133379 ]

ASF GitHub Bot commented on MAHOUT-1615:
----------------------------------------

GitHub user andrewpalumbo opened a pull request:

    https://github.com/apache/mahout/pull/52

    MAHOUT-1615: drmFromHDFS returning the same Key for all Key,Vec Pairs for Text-Keyed SequenceFiles

    SparkContext.sequenceFile(...) will yield the same key per partition for Text-keyed
    SequenceFiles if a new copy of the key is not created when mapping to an RDD.  This
    patch checks for Text keys and creates a copy of each key where necessary.
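
    A minimal sketch of that approach (not the exact patch; it assumes a SparkContext
    sc, an input path, and a partition hint parMin as used in SparkEngine.drmFromHDFS):

    {code}
    import org.apache.hadoop.io.{Text, Writable}
    import org.apache.mahout.math.VectorWritable

    // The sequence-file reader reuses a single Writable instance for the key,
    // so Text keys must be deep-copied before the pair is kept in the RDD.
    val rdd = sc.sequenceFile(path, classOf[Writable], classOf[VectorWritable], minPartitions = parMin)
      .map { case (key, vec) =>
        val ownKey: Writable = key match {
          case t: Text => new Text(t)   // copy the reused Text instance
          case other   => other         // non-Text keys are left as-is in this sketch
        }
        (ownKey, vec.get())
      }
    {code}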

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/andrewpalumbo/mahout MAHOUT-1615

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/mahout/pull/52.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #52
    
----
commit 6adb01ee53ce591962b97a4ed474c111635f7c47
Author: Andrew Palumbo <ap.dev@outlook.com>
Date:   2014-09-14T20:50:50Z

    Create copy of Key for Text Keys

----


> SparkEngine drmFromHDFS returning the same Key for all Key,Vec Pairs for Text-Keyed SequenceFiles
> -------------------------------------------------------------------------------------------------
>
>                 Key: MAHOUT-1615
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1615
>             Project: Mahout
>          Issue Type: Bug
>            Reporter: Andrew Palumbo
>             Fix For: 1.0
>
>
> When reading seq2sparse output of the form <Text,VectorWritable> from HDFS in the spark-shell,
> SparkEngine's drmFromHDFS method creates RDDs with the same key for all pairs:
> {code}
> mahout> val drmTFIDF= drmFromHDFS( path = "/tmp/mahout-work-andy/20news-test-vectors/part-r-00000")
> {code}
> Has keys:
> {...} 
>     key: /talk.religion.misc/84570
>     key: /talk.religion.misc/84570
>     key: /talk.religion.misc/84570
> {...}
> for the entire set.  This is the last Key in the set.
> The problem can be traced to the first line of drmFromHDFS(...) in SparkEngine.scala:
>
> {code}
> val rdd = sc.sequenceFile(path, classOf[Writable], classOf[VectorWritable], minPartitions = parMin)
>   // Get rid of VectorWritable
>   .map(t => (t._1, t._2.get()))
> {code}
> which gives the same key for all t._1.
>   
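
The root cause is that Hadoop's SequenceFile record reader reuses a single Text instance
for every key it delivers, so keeping the reference without copying leaves every pair
pointing at the object that holds the last key read.  A simplified stand-alone sketch of
that reuse behaviour (illustration only, not the actual reader code):

{code}
import org.apache.hadoop.io.Text

// One Text instance standing in for the reader's reused key object.
val reused = new Text()
val keys = Seq("doc-1", "doc-2", "doc-3").map { s =>
  reused.set(s)   // the "reader" overwrites the same object for each record
  reused          // the reference is kept without copying
}
println(keys.map(_.toString))   // List(doc-3, doc-3, doc-3) -- all show the last key read
{code}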



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
