spark-issues mailing list archives

From "Joseph K. Bradley (JIRA)" <>
Subject [jira] [Updated] (SPARK-5905) Improve RowMatrix user guide and doc.
Date Thu, 18 Jun 2015 22:04:01 GMT


Joseph K. Bradley updated SPARK-5905:
    Target Version/s: 1.4.1, 1.5.0  (was: 1.4.0)

> Improve RowMatrix user guide and doc.
> -------------------------------------
>                 Key: SPARK-5905
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation, MLlib
>    Affects Versions: 1.3.0
>            Reporter: Xiangrui Meng
>            Priority: Minor
> From mbofb's comment in PR
> {code}
> The description of RowMatrix.computeSVD and mllib-dimensionality-reduction.html should
be more precise/explicit about the m x n matrix. From the current description I would conclude
that n refers to the rows. According to
this way of describing a matrix is used only in particular domains. As a reader interested
in applying SVD, I would prefer the more common m x n convention of rows x columns (e.g.
) which is also used in (and also within
the ARPACK manual:
> “
> N Integer. (INPUT) - Dimension of the eigenproblem. 
> NEV Integer. (INPUT) - Number of eigenvalues of OP to be computed. 0 < NEV < N.

> NCV Integer. (INPUT) - Number of columns of the matrix V (less than or equal to N).
> “
> ).
> description of RowMatrix.computeSVD and mllib-dimensionality-reduction.html:
> "We assume n is smaller than m." Is this just a recommendation or a hard requirement?
This condition does not seem to be checked and does not raise an IllegalArgumentException: the processing
finishes even though the vectors have a higher dimension than the number of vectors.
> description of RowMatrix.computePrincipalComponents, or RowMatrix in general:
> I got an exception:
> java.lang.IllegalArgumentException: Argument with more than 65535 cols: 7949273
> at org.apache.spark.mllib.linalg.distributed.RowMatrix.checkNumColumns(RowMatrix.scala:131)
> at org.apache.spark.mllib.linalg.distributed.RowMatrix.computeCovariance(RowMatrix.scala:318)
> at org.apache.spark.mllib.linalg.distributed.RowMatrix.computePrincipalComponents(RowMatrix.scala:373)
> It would be nice if this 65535-column restriction were stated in the doc (if it still applies
in 1.3).
> {code}
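To make the rows x columns convention from the quoted comment concrete, here is a minimal NumPy sketch (not Spark code; `RowMatrix` names and limits are as reported above, and the `k` truncation here only mimics what `RowMatrix.computeSVD(k)` returns). With the common m x n convention, m counts rows (observations) and n counts columns (features), and the "n is smaller than m" assumption means a tall-skinny matrix:

```python
# Hedged NumPy sketch of the m x n (rows x columns) SVD convention.
# Not Spark code; shapes mirror what RowMatrix.computeSVD(k) is documented
# to produce: U is m x k, s has k singular values, V is n x k.
import numpy as np

m, n = 100, 5  # m rows (observations), n columns (features): tall-skinny, n < m
A = np.random.default_rng(0).normal(size=(m, n))

k = 3  # number of singular values to keep, analogous to computeSVD(k)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, s_k, V_k = U[:, :k], s[:k], Vt[:k].T

print(U_k.shape)  # (100, 3): m x k
print(s_k.shape)  # (3,):    k singular values
print(V_k.shape)  # (5, 3):  n x k
```

Reading the shapes this way makes it unambiguous that n is the column count, which is the clarification the comment asks the Spark docs to adopt.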

This message was sent by Atlassian JIRA
