From: meng@apache.org
To: commits@spark.apache.org
Subject: git commit: [SPARK-3838][examples][mllib][python] Word2Vec example in python
Date: Sat, 1 Nov 2014 01:33:27 +0000 (UTC)

Repository: spark
Updated Branches:
  refs/heads/master 62d01d255 -> e07fb6a41


[SPARK-3838][examples][mllib][python] Word2Vec example in python

This pull request refers to issue: https://issues.apache.org/jira/browse/SPARK-3838

Python example for word2vec

mengxr

Author: Anant

Closes #2952 from anantasty/SPARK-3838 and squashes the following commits:

87bd723 [Anant] remove stop line
4bd439e [Anant] Changes as per code review. Fized error in word2vec python example, simplified example in docs.
3d3c9ee [Anant] Added empty line after python imports
0c90c31 [Anant] Fixed erroneous code. I was still treating each line to be a single word instead of 16 words
ee4f5f6 [Anant] Fixes from code review comments
c637bcf [Anant] Added word2vec python example to docs
269f31f [Anant] added example in docs
c015b14 [Anant] Added python example for word2vec


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e07fb6a4
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e07fb6a4
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e07fb6a4

Branch: refs/heads/master
Commit: e07fb6a41ee949f8dba44d5a3b6c0615f27f0eaf
Parents: 62d01d2
Author: Anant
Authored: Fri Oct 31 18:33:19 2014 -0700
Committer: Xiangrui Meng
Committed: Fri Oct 31 18:33:19 2014 -0700

----------------------------------------------------------------------
 docs/mllib-feature-extraction.md           | 17 +++++++++
 examples/src/main/python/mllib/word2vec.py | 50 +++++++++++++++++++++++++
 2 files changed, 67 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/e07fb6a4/docs/mllib-feature-extraction.md
----------------------------------------------------------------------
diff --git a/docs/mllib-feature-extraction.md b/docs/mllib-feature-extraction.md
index 886d71d..197bc77 100644
--- a/docs/mllib-feature-extraction.md
+++ b/docs/mllib-feature-extraction.md
@@ -203,6 +203,23 @@ for((synonym, cosineSimilarity) <- synonyms) {
 }
 {% endhighlight %}
+
+{% highlight python %}
+from pyspark import SparkContext
+from pyspark.mllib.feature import Word2Vec
+
+sc = SparkContext(appName='Word2Vec')
+inp = sc.textFile("text8_lines").map(lambda row: row.split(" "))
+
+word2vec = Word2Vec()
+model = word2vec.fit(inp)
+
+synonyms = model.findSynonyms('china', 40)
+
+for word, cosine_distance in synonyms:
+    print "{}: {}".format(word, cosine_distance)
+{% endhighlight %}
+
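
The documentation snippet above relies on Word2Vec's default settings. A minimal sketch of how the same model could be tuned before fitting is shown below; the setters (setVectorSize, setSeed) are those exposed by MLlib's Python Word2Vec class, while the parameter values are arbitrary and not part of this commit:

    from pyspark import SparkContext
    from pyspark.mllib.feature import Word2Vec

    sc = SparkContext(appName='Word2Vec')
    inp = sc.textFile("text8_lines").map(lambda row: row.split(" "))

    # Illustrative settings; the documented example keeps the defaults
    # (vector size 100, random seed). Each setter returns the Word2Vec
    # object, so calls can be chained.
    word2vec = Word2Vec().setVectorSize(200).setSeed(42L)
    model = word2vec.fit(inp)

    for word, cosine_similarity in model.findSynonyms('china', 5):
        print "{}: {}".format(word, cosine_similarity)

    sc.stop()

Larger vector sizes generally give richer embeddings at the cost of training time and memory, and fixing the seed makes the random initialization repeatable.
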
 ## StandardScaler


http://git-wip-us.apache.org/repos/asf/spark/blob/e07fb6a4/examples/src/main/python/mllib/word2vec.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/mllib/word2vec.py b/examples/src/main/python/mllib/word2vec.py
new file mode 100644
index 0000000..99fef42
--- /dev/null
+++ b/examples/src/main/python/mllib/word2vec.py
@@ -0,0 +1,50 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# This example uses the text8 file from http://mattmahoney.net/dc/text8.zip
+# The file was downloaded, unzipped and split into multiple lines using
+#
+# wget http://mattmahoney.net/dc/text8.zip
+# unzip text8.zip
+# grep -o -E '\w+(\W+\w+){0,15}' text8 > text8_lines
+# This was done so that the example can be run in local mode
+
+
+import sys
+
+from pyspark import SparkContext
+from pyspark.mllib.feature import Word2Vec
+
+USAGE = ("bin/spark-submit --driver-memory 4g "
+         "examples/src/main/python/mllib/word2vec.py text8_lines")
+
+if __name__ == "__main__":
+    if len(sys.argv) < 2:
+        print USAGE
+        sys.exit("Argument for file not provided")
+    file_path = sys.argv[1]
+    sc = SparkContext(appName='Word2Vec')
+    inp = sc.textFile(file_path).map(lambda row: row.split(" "))
+
+    word2vec = Word2Vec()
+    model = word2vec.fit(inp)
+
+    synonyms = model.findSynonyms('china', 40)
+
+    for word, cosine_distance in synonyms:
+        print "{}: {}".format(word, cosine_distance)
+    sc.stop()
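
Both the new documentation snippet and the example script print the score returned by findSynonyms; the Scala example earlier on the same doc page names this value cosineSimilarity, and the Python variable cosine_distance holds the same score. As a minimal, illustrative sketch (not part of this commit), a cosine similarity between two learned word vectors can also be computed directly via Word2VecModel.transform, assuming both query words occur in the text8 vocabulary ('india' below is an arbitrary choice):

    from numpy import dot
    from numpy.linalg import norm

    from pyspark import SparkContext
    from pyspark.mllib.feature import Word2Vec

    sc = SparkContext(appName='Word2Vec')
    inp = sc.textFile("text8_lines").map(lambda row: row.split(" "))
    model = Word2Vec().fit(inp)

    # transform() returns the learned vector for an in-vocabulary word
    # (it raises an error for words the model has never seen).
    v1 = model.transform('china').toArray()
    v2 = model.transform('india').toArray()  # arbitrary second word
    print "cosine similarity: {}".format(dot(v1, v2) / (norm(v1) * norm(v2)))

    sc.stop()
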