spark-reviews mailing list archives

From jkbradley <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-1405] [mllib] Latent Dirichlet Allocati...
Date Thu, 15 Jan 2015 19:34:27 GMT
Github user jkbradley commented on the pull request:

    https://github.com/apache/spark/pull/4047#issuecomment-70146798
  
    @EntilZha  Here’s a sketch of my plan.
    
    Datasets:
    * UCI ML Repository data (also used by Asuncion et al., 2009):
      * KOS
      * NIPS
      * NYTimes
      * PubMed (full)
    * Wikipedia?
    
    Data preparation:
    * Converting to bags of words:
      * UCI datasets are given as word counts already.
      * Wikipedia dump is text.
        * I use the SimpleTokenizer in the LDAExample, which sets term = word and only accepts alphabetic characters.
        * Use the stopword list from @dlwh, located at [https://github.com/dlwh/spark/feature/lda]
        * No stemming
    * Choosing vocab: for each vocabSize setting, I take the vocabSize most frequent terms. (A rough sketch of the whole preparation step is below.)
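
    A minimal sketch of this preparation step, in case it helps. It is not the actual LDAExample code: the tokenization and doc-ID assignment are simplified, and prepareCorpus is just an illustrative helper name.

    ```scala
    import org.apache.spark.SparkContext._
    import org.apache.spark.mllib.linalg.{Vector, Vectors}
    import org.apache.spark.rdd.RDD

    // term = word, alphabetic tokens only, stopwords removed, vocab = top-vocabSize terms.
    def prepareCorpus(
        docs: RDD[String],
        stopwords: Set[String],
        vocabSize: Int): (RDD[(Long, Vector)], Array[String]) = {
      val tokenized: RDD[Seq[String]] = docs.map { text =>
        text.toLowerCase.split("\\s+")
          .filter(t => t.nonEmpty && t.forall(_.isLetter) && !stopwords.contains(t))
          .toSeq
      }
      // The vocabSize most frequent terms across the corpus.
      val vocab: Array[String] = tokenized
        .flatMap(tokens => tokens.map(term => (term, 1L)))
        .reduceByKey(_ + _)
        .sortBy(_._2, ascending = false)
        .map(_._1)
        .take(vocabSize)
      val vocabIndex = vocab.zipWithIndex.toMap
      // One sparse bag-of-words vector per document, keyed by a generated doc ID.
      val bagsOfWords = tokenized.zipWithIndex().map { case (tokens, docId) =>
        val counts = tokens.flatMap(vocabIndex.get)
          .groupBy(identity)
          .map { case (termIndex, occurrences) => (termIndex, occurrences.size.toDouble) }
        (docId, Vectors.sparse(vocabSize, counts.toSeq))
      }
      (bagsOfWords, vocab)
    }
    ```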
    
    Scaling tests *(doing these first)*, varying each of the following (a rough timing loop is sketched after the list):
    * corpus size
    * vocabSize
    * k
    * numIterations
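
    For the timing runs, roughly this kind of loop, assuming the builder-style setters in this PR (setK, setMaxIterations). The grid values here are placeholders, not the actual settings I'll use:

    ```scala
    import org.apache.spark.mllib.clustering.LDA
    import org.apache.spark.mllib.linalg.Vector
    import org.apache.spark.rdd.RDD

    // corpus: the bag-of-words vectors from the preparation sketch above.
    def timeScalingRuns(corpus: RDD[(Long, Vector)]): Unit = {
      for (k <- Seq(10, 20, 50); maxIterations <- Seq(10, 50, 100)) {  // placeholder grids
        val start = System.nanoTime()
        new LDA().setK(k).setMaxIterations(maxIterations).run(corpus)
        val elapsedSec = (System.nanoTime() - start) / 1e9
        println(s"k=$k maxIterations=$maxIterations trainTimeSec=$elapsedSec")
      }
    }
    ```

    Corpus size and vocabSize get varied by re-running the preparation step on a subsampled corpus or with a different vocabSize, so they don't show up in this loop.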
    
    Accuracy tests: *(doing these second)*
    * Train on the full datasets.
    * Tune hyperparameters via grid search, following Asuncion et al. (2009), Section 4.1 (rough sketch after this list).
    * Can hopefully compare with their results in Fig. 5.
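
    For the grid search, something along these lines. I'm assuming the Dirichlet hyperparameter setters end up as setDocConcentration (document-topic prior) and setTopicConcentration (topic-word prior) and that the trained model exposes logLikelihood; the grid values below are placeholders rather than the Asuncion et al. grid, and training log-likelihood stands in for whatever evaluation metric we settle on:

    ```scala
    import org.apache.spark.mllib.clustering.{DistributedLDAModel, LDA}
    import org.apache.spark.mllib.linalg.Vector
    import org.apache.spark.rdd.RDD

    // corpus: the bag-of-words vectors from the preparation sketch above.
    def gridSearch(
        corpus: RDD[(Long, Vector)],
        k: Int,
        maxIterations: Int): ((Double, Double), Double) = {
      val results = for {
        alpha <- Seq(0.01, 0.1, 1.0)  // placeholder docConcentration grid
        beta <- Seq(0.01, 0.1, 1.0)   // placeholder topicConcentration grid
      } yield {
        val model = new LDA()
          .setK(k)
          .setDocConcentration(alpha)
          .setTopicConcentration(beta)
          .setMaxIterations(maxIterations)
          .run(corpus)
          .asInstanceOf[DistributedLDAModel]  // EM training; cast in case run() is typed as the base LDAModel
        ((alpha, beta), model.logLikelihood)
      }
      // Hyperparameter pair with the best (training) log-likelihood.
      results.maxBy(_._2)
    }
    ```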
    
    These tests will run on a 16-node EC2 cluster of r3.2xlarge instances.

